Path: utzoo!attcan!uunet!cs.utexas.edu!mailrus!ames!ames.arc.nasa.gov!lamaster
From: lamaster@ames.arc.nasa.gov (Hugh LaMaster)
Newsgroups: comp.arch
Subject: Re: *big iron*
Message-ID: <32636@ames.arc.nasa.gov>
Date: 27 Sep 89 19:09:43 GMT
References: <21962@cup.portal.com> <1989Sep12.031453.22947@wolves.uucp> <32512@ames.arc.nasa.gov> <42229@sgi.sgi.com>
Sender: usenet@ames.arc.nasa.gov
Organization: NASA - Ames Research Center
Lines: 27

In article <42229@sgi.sgi.com> markb@denali.sgi.com (Mark Bradley) writes:

>I can't yet publish our IPI numbers due to signed non-disclosure, but suffice
>it to say that it would not make sense to go to a completely different controller
>and drive technology for anything less than VERY LARGE performance wins or
>phenomenal cost savings....

You might, however, be able to say what architectural features of your system
and the controller contributed to those numbers.  For example, is there
anything about cache, memory, etc. that helps a lot?  What controller features
are needed?  Which ones are bad?

>maximum throughput.  This is something many companies seem to miss the boat
>in doing.  Clearly I am somewhat biased, but the numbers don't lie (see below

I agree.  *Big Iron* machines have been able to provide sustained sequential
reads at 70% of theoretical channel/disk speed on multiple channels,
while leaving 70% of CPU time in user state for other CPU-bound jobs,
for at least the past 10 years.  Many of today's workstations have CPUs as
fast as those machines had then, but, needless to say, the I/O hasn't been
there.  I am glad to see that this is getting a lot more attention in industry
now.
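
To put some flesh on that metric, here is a rough sketch, in C, of the kind
of measurement I mean: time large sequential read()s against a file or raw
device and report the sustained rate as a fraction of an assumed theoretical
channel rate.  The device path, transfer size, and the 10 MB/s channel figure
below are placeholders, not numbers for any particular machine.

/* seqread.c - crude sustained sequential read measurement (sketch only).
 * The transfer size and assumed channel rate are placeholders; substitute
 * whatever fits your hardware.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define XFER         (256 * 1024)   /* bytes per read()                  */
#define NREADS       4096           /* at most NREADS * XFER bytes total */
#define CHANNEL_RATE 1.0e7          /* assumed 10 MB/s theoretical rate  */

int main(int argc, char **argv)
{
    static char buf[XFER];          /* static: keep it off the stack */
    struct timeval t0, t1;
    double secs, rate;
    long   i, total = 0;
    int    fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-or-raw-device>\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0) {
        perror("open");
        return 1;
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < NREADS; i++) {
        long n = read(fd, buf, XFER);
        if (n <= 0)                 /* EOF or error: stop timing here */
            break;
        total += n;
    }
    gettimeofday(&t1, NULL);
    close(fd);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1.0e6;
    rate = total / secs;
    printf("read %ld bytes in %.2f s: %.2f MB/s sustained, %.0f%% of channel\n",
           total, secs, rate / 1.0e6, 100.0 * rate / CHANNEL_RATE);
    return 0;
}

Point it at a raw device, or at a scratch file much larger than main memory
so the file cache doesn't inflate the numbers, and run a CPU-bound job
alongside it if you want to see how much user CPU time the I/O leaves over.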

  Hugh LaMaster, m/s 233-9,  UUCP ames!lamaster
  NASA Ames Research Center  ARPA lamaster@ames.arc.nasa.gov
  Moffett Field, CA 94035     
  Phone:  (415)694-6117