Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site ames.UUCP
Path: utzoo!linus!decvax!genrad!mit-eddie!godot!harvard!seismo!hao!ames!eugene
From: eugene@ames.UUCP (Eugene Miya)
Newsgroups: net.micro,net.college
Subject: Re: Overloaded Computing Systems
Message-ID: <731@ames.UUCP>
Date: Fri, 28-Dec-84 13:05:43 EST
Article-I.D.: ames.731
Posted: Fri Dec 28 13:05:43 1984
Date-Received: Sat, 29-Dec-84 23:14:45 EST
References: <471@mako.UUCP> <271@ahuta.UUCP>
Organization: NASA-Ames Research Center, Mtn. View, CA
Lines: 89

> CC:         dmt
> REFERENCES:  <471@mako.UUCP>
> 
> >So keep the user efficiency / machine efficiency ratio in mind,
> >and use the appropriate tools for the task, but let's not
> >go off half cocked eliminating useful tools just because
> >they aren't as efficient for the computer as some other tool.
> 
> I echo the sentiment, and would like to suggest that the technology
> is reaching a point where we can match tool efficiency to people
> efficiency. While raw mode is a problem for expensive machines
> which need to be shared to pay for themselves, there obviously
> are small, cheap chunks of compute power that you wouldn't mind
> burdening with keystroke-catching. (See included portion of
> original posting below.)
> We ought to be using the terminal (workstation, PC, etc.) to handle
> the user interface, and save the shared resource to deal with
> transactions (probably more complex than single lines).
> 
> >We have terminals with 32 bit processors and memory measured
> >in megabytes (not computers, not workstations,  t e r m i n a l s ),
> >and there still aren't enough cycles.  There will *never* be
> >enough cycles. (one of those Murphy's laws things)
> 				Dave Tutelman

As one of the people who restarted this discussion, I feel I have to
respond to this one.  First, I agree with the human-to-machine-cycles
analogy, and like everybody else on the net I will push for distributing
functions (e.g., into a smarter terminal).

I began with a basic thesis that today's supercomputers can be tomorrow's
micros.  I justified that historically with the first supercomputers,
ENIAC and so on: we now have many times that power sitting on
our desks [like the Mac I type this on].

Interaction cost:
I sit and use a Cray interactively on occasion.  It's really nice.
I ponder what it would be like to have one sitting on my desk in a small box.
But one thing you have to keep aware of: if you think it's nice,
100 other users will think it's nice too, and you may have defeated the
purpose of the Cray to begin with [my management speaking].
Some people talk about the concept of 'process servers': like file servers,
except they are fast compute machines on a net.  The problem is
we don't have very good models of distributed or parallel computing.
This goes for smart terminals too [how do you distribute function?].
Will programming my Cray from a distributed system start with my opening
a socket(2)?  Don't sacrifice too many Cray cycles to character
interrupts.
Problem: how do you program these distributed systems of the future?

Computer Architectures:
Computers will always have features which are not utilized near 100% of
the time.  That does not make them inefficient.  One significant problem
with existing computer architectures is the architecture itself.  Suppose
you wish to time(1) [in the sense of the Unix command] a process.  What you
end up doing is using the machine's own cycles to accomplish the measurement
[the Hawthorne effect].  I have been doing research on multiprocessors
and have seen that this is a common problem.
The CMU builders of C.mmp and Cm* discovered this to be a problem,
and they plan to rectify it with their next multiprocessor.

In another area related to architecture:
I would almost bet [not quite] that specialized architectural features
such as vector registers will appear on micros [if we call them that]
in 20 years as 'standard equipment'.

In yet another area related to architecture:
On another level, consider what percentage of an instruction set gets
used frequently.  Will we see machines with instruction sets in the 1000s?
I think not.  We are beginning to reach conceptual limits, like trying
to shrink our keyboards on the same scale we are shrinking chips :-).
RISCs are really getting popular.  Maybe systems have to get big first
before we can refine them and make them small?

If we are to have Crays-on-a-desk, we are going to have to clean up the
'sloppy' portions of micros to make them faster machines.  Micros in the past
and to this day get away with a lot because of their 'size.'
Steve Lundstrom, now at Stanford, commented once that supercomputing is
the only area left in computer science where we still count cycles.
The important thing to keep in mind: balance the human and machine
cycles. [is this an extension of "balance mind and body?" :-)]

Lots of problems for PhD theses.....


--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,vortex}!ames!aurora!eugene
  emiya@ames-vmsb.ARPA