Path: utzoo!utgpu!attcan!uunet!husc6!spdcc!ima!haddock!suitti
From: suitti@haddock.ISC.COM (Steve Uitti)
Newsgroups: comp.arch
Subject: Re: Balanced system - a tentative definition
Message-ID: <6082@haddock.ISC.COM>
Date: 12 Aug 88 16:08:40 GMT
References: <794@cernvax.UUCP>
Reply-To: suitti@haddock.ima.isc.com (Steve Uitti)
Distribution: comp.arch
Organization: Interactive Systems, Boston
Lines: 41

In article <794@cernvax.UUCP> hjm@cernvax.UUCP () writes:
>"A balanced system is one where an improvement in the performance
>of  any single part would not increase the overall performance of
>the system, and where the degrading of any single part would  de-
>crease the overall performance."
>
>The explanation of this definition is that every part of the sys-
>tem  is  going as fast as it can, and that no one part is holding
>up the process.  Consequently, if any one  part  slows  down,  it
>would drag the rest of the system down with it.

	I would first add to the definition that the parts of the
system are subsystems that act independently of each other, in that
they can perform work at the same time (pure overlap).  If one looks
at the RAM system (with its bandwidth and latency) and the CPU (with
its instruction speed), then in a "balanced" system the RAM bandwidth
should be saturated while the CPU is never stalled (under whatever
conditions the rest of the system imposes).
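	The "pure overlap" idea can be put in a toy model: if two
subsystems run fully in parallel, overall time is the slower one's
time, and the system is balanced exactly when the two are equal.  A
minimal sketch (all numbers are hypothetical illustrations, not
measurements):

```python
# Toy model of two fully overlapped subsystems (CPU and RAM).
# With pure overlap, the slower subsystem's time dominates.

def overall_time(cpu_time, ram_time):
    """Total time under full overlap: the slower part sets the pace."""
    return max(cpu_time, ram_time)

def is_balanced(cpu_time, ram_time):
    """Balanced per the quoted definition: speeding up either part
    alone gains nothing, and slowing either part alone costs time."""
    return cpu_time == ram_time

# Unbalanced: the CPU stalls on RAM, so a faster CPU buys nothing.
assert overall_time(5, 8) == 8
assert overall_time(3, 8) == 8   # CPU 40% faster, same overall time

# Balanced: degrading either part hurts.
assert is_balanced(4, 4)
assert overall_time(4, 5) > overall_time(4, 4)
```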

	It is a nice definition, but it doesn't do much for the real
world.

	In real life I generally consider the CPU/RAM part as one half
of the system, and disk I/O as the other half (and I ignore other
kinds of I/O, unless the application being considered requires them).
Usually the parameter that can be changed is how much RAM the system
has, and this determines how much paging will happen.  I will contend
that a system that does not have to page (or swap) is, in real life,
always faster than one which does, even if the disk bandwidth never
comes close to saturation, and even if disk I/O can take place while
the CPU is doing useful work (perhaps more often the case than not).
The problem is that disk I/O itself requires CPU time: CPU is needed
to manage RAM resources, to manage the disk paging area, to perform
context switches, etc.  My experience with real (UNIX) systems is
that a faster CPU performs I/O quicker than a slower one, even to the
same very slow disks (floppies).  Very sad.
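	A back-of-envelope model shows why the non-paging system wins
even when the disk is far from saturated: each page fault also steals
CPU time for the bookkeeping above, and that cost survives even
perfect CPU/disk overlap.  A sketch, with hypothetical per-fault
costs:

```python
# Toy model: paging adds CPU overhead per fault, so a non-paging run
# beats a paging run even with the disk under half utilized.
# All numbers below are hypothetical, chosen only for illustration.

def run_time(compute_s, faults, cpu_per_fault_s, disk_per_fault_s):
    cpu_s = compute_s + faults * cpu_per_fault_s  # fault handling steals CPU
    disk_s = faults * disk_per_fault_s
    # Assume perfect CPU/disk overlap; the CPU-side fault cost remains.
    return max(cpu_s, disk_s)

no_paging = run_time(compute_s=10.0, faults=0,
                     cpu_per_fault_s=0.002, disk_per_fault_s=0.01)
paging = run_time(compute_s=10.0, faults=500,
                  cpu_per_fault_s=0.002, disk_per_fault_s=0.01)

# Disk is busy only 5s of an 11s run (under half saturated), yet the
# paging run is still slower, purely from CPU-side fault overhead.
assert no_paging == 10.0
assert paging > no_paging
```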

	The interdependencies between RAM bandwidth and CPU speed in
the presence of a cache have enough complications to raise similar
problems.  I'm sure someone out there can elaborate.

	Stephen