Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!rochester!ritcv!cci632!rb
From: rb@cci632.UUCP (Rex Ballard)
Newsgroups: comp.sys.misc
Subject: Re: Why a Micro is not as powerful as a Vax
Message-ID: <805@cci632.UUCP>
Date: Tue, 6-Jan-87 12:48:52 EST
Article-I.D.: cci632.805
Posted: Tue Jan  6 12:48:52 1987
Date-Received: Tue, 6-Jan-87 22:43:03 EST
References: <984@hounx.UUCP> <2880@rsch.WISC.EDU> <1611@hoptoad.uucp> <1920@alvin.mcnc.UUCP>
Reply-To: rb@ccird2.UUCP (Rex B)
Distribution: world
Organization: CCI, Communications Systems Division, Rochester, NY
Lines: 56
Keywords: micro vax 750
Summary: Why it is and is not.

This debate is hardly new.  One of the key points in determining
which is "better" for a specific set of needs is to determine
what is actually needed.

First, a mini or mainframe has access to, and needs, more storage.
If you wanted to access 4 gigabytes of data, application code, and
tools, a mini or mainframe is probably a good idea.  Even with
CPU speed being divided among 100 or more users, it is likely,
especially with "text only" processing, that the CPU won't be
that heavily loaded, but rather that the drives will be "crunching
away".

On the other hand, if you want to do bit-mapped manipulations of
graphics, windows, what-you-see-is-what-you-get editing, and similar
loads that require a great deal of CPU overhead dedicated to one
user, it's probably a good idea to incorporate a micro into the
user level interface.

In spite of the CPU benchmarks, the figures are very misleading.
A VAX, for example, runs its main CPU at about the same speed as
the Atari ST, yet when it is connected to 100 or so VT-100s or
Tektronix terminals, an additional effective 10 to 100 MIPS is
being used in a "distributed functionality" mode.  In many cases
the "VT-100" isn't a terminal at all, but rather a "terminal
emulator", often running at speeds of up to 1 MIPS.
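
Just to put numbers on that, here is a quick back-of-the-envelope
sum in C.  The 1 MIPS per emulator and the terminal count are just
the rough figures above, not measurements:

    #include <stdio.h>

    /* Rough aggregate-MIPS estimate for a host plus its terminal
     * emulators.  The numbers are the guesses from the text above,
     * not benchmark results.
     */
    int main(void)
    {
        double host_mips     = 1.0;   /* VAX-class main CPU             */
        double emulator_mips = 1.0;   /* one micro emulating a "VT-100" */
        int    emulators     = 100;   /* terminals hung off the host    */

        double aggregate = host_mips + emulators * emulator_mips;

        printf("aggregate compute: about %.0f MIPS\n", aggregate);
        printf("share at the user end: %.0f%%\n",
               100.0 * (emulators * emulator_mips) / aggregate);
        return 0;
    }

Run that and you get about 101 MIPS, with 99% of it sitting on
desks instead of in the machine room.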

Mainframe people like to think of micros/terminals/emulators/...
as "dumb tubes" and attempt to do as much of the "intelligent work"
as possible in the host.  Micro people tend to think of servers,
telecommunications services, videotex,... as "dumb disks" and
attempt to do the "intelligent work" in the micro.

Slowly, the interconnections between host and micro are becoming more
sophisticated.  Interfaces like X Windows and various "remote file
systems" are blending the two into a tighter, more efficient
partnership.  As this occurs, both "micro" and "mainframe" become
more productive, with the mainframe handling more users and more
storage, and the micros handling more complex presentation.
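
The "remote file system" half of that blend is easy to caricature
in code.  Below is a minimal sketch in C; the block_request layout
and the read_block() routine are invented for illustration and
don't correspond to any particular product's protocol:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request a micro might send to a host-side file
     * server; the field layout is invented for illustration only.
     */
    struct block_request {
        char path[64];          /* file on the host               */
        long block;             /* which 1K block the micro wants */
    };

    #define BLOCKSIZE 1024

    /* Stand-in for "send the request over the wire, wait for the
     * data".  A real remote file system would make a network call;
     * here we just fake a block of text so the sketch runs.
     */
    static long read_block(struct block_request *req, char *buf)
    {
        sprintf(buf, "<contents of %s, block %ld>", req->path, req->block);
        return (long)strlen(buf);
    }

    int main(void)
    {
        struct block_request req;
        char buf[BLOCKSIZE];
        long n;

        strcpy(req.path, "/usr/doc/paper.nr");
        req.block = 3;

        n = read_block(&req, buf);
        printf("got %ld bytes: %s\n", n, buf);
        return 0;
    }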

Perhaps in a few years we'll start seeing integration of host and
micro become so tight that systems such as a VAX 8600 cluster or a
6/32-FT will be running as many as 1000 users and 10 or 20 intelligent
disk drives (built-in caching, i-node searching, directory
traversal...), developing performance numbers measured in BIPS
(billions of instructions per second).
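
What "intelligent" means here is that the drive answers questions
the host kernel currently answers for itself, such as turning a
pathname into an i-node.  A tiny sketch of the idea follows;
drive_lookup() is a hypothetical firmware call, not any real
drive's interface:

    #include <stdio.h>

    /* Today the host burns CPU walking a path one directory at a
     * time.  An "intelligent" drive could take the whole path and
     * hand back the i-node number itself.  drive_lookup() is a
     * hypothetical stand-in for that firmware call.
     */
    static unsigned long drive_lookup(const char *path)
    {
        /* Fake answer so the sketch runs: pretend the drive hashed
         * the path to an i-node number. */
        unsigned long h = 0;
        while (*path)
            h = h * 31 + *path++;
        return h % 65536;
    }

    int main(void)
    {
        const char *path = "/usr/src/cmd/ed.c";
        printf("%s -> i-node %lu (found by the drive, not the host)\n",
               path, drive_lookup(path));
        return 0;
    }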

A good example of such integration would be a simple editor.  The
host would effectively be running a "line editor" like ed, the
"disk drive" would be inserting and deleting blocks of the file,
and the micro would be handling font presentation and converting
the visual interaction into "ed" and/or NROFF-type commands sent
to the host.  With proper load balancing, it would be possible to
reach speeds well into the 2 BIPS region.
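
Here is a sketch of the micro's half of that editor.  The
send_to_host() routine and the exact division of labor are
assumptions for illustration; a real front end would write these
commands down a serial line or network connection instead of
printing them:

    #include <stdio.h>

    /* The micro watches what the user does to the bitmap display
     * and boils it down to line-oriented "ed" commands for the
     * host.  send_to_host() is a stand-in for writing down the
     * wire; here it just prints what would be sent.
     */
    static void send_to_host(const char *cmd)
    {
        printf("to host: %s", cmd);
    }

    /* User deleted a line on the screen. */
    static void user_deleted_line(int lineno)
    {
        char cmd[32];
        sprintf(cmd, "%dd\n", lineno);
        send_to_host(cmd);
    }

    /* User typed new text after a given line. */
    static void user_inserted_text(int after_line, const char *text)
    {
        char cmd[256];
        sprintf(cmd, "%da\n%s\n.\n", after_line, text);
        send_to_host(cmd);
    }

    int main(void)
    {
        user_deleted_line(5);
        user_inserted_text(5, "The quick brown fox.");
        return 0;
    }

The host never sees a keystroke or a font; it just sees the same
commands ed has always seen.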

Anybody wanna buy a used crystal ball?
Rex B.