Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!mailrus!purdue!haven!grebyn!ckp
From: ckp@grebyn.com (Checkpoint Technologies)
Newsgroups: comp.sys.amiga
Subject: Re: GVP controller
Message-ID: <12263@grebyn.com>
Date: 11 Aug 89 17:40:00 GMT
References: <8908072207.AA14796@jade.berkeley.edu> <12254@grebyn.com> <501@tardis.Tymnet.COM>
Reply-To: ckp@grebyn.UUCP (Checkpoint Technologies)
Organization: Grebyn Corp., Vienna, VA, USA
Lines: 57

In article <501@tardis.Tymnet.COM> jms@tardis.Tymnet.COM (Joe Smith) writes:
>
>What about DMA overhead?  It takes some bus cycles to start DMA (the device
>has to get a Bus Grant signal), and non-zero time to stop DMA (release the
>bus).  If you have an extremely fast disk, the controller could grab the
>bus, squirt all the data into RAM, and beat the pants off any controller that
>uses the CPU to copy data.  But, unfortunately, most disks don't have data
>transfer rates that match the Amiga's raw bus speed.

	Getting a DMA device on and off the bus is a lot less expensive
than you make it sound - more like a clock cycle or two, not several
*bus* cycles (which are at least 4 clock cycles each). And certainly
nobody would build a DMA controller that would hog the bus for the
duration of a transfer, given how slowly hard disks actually deliver
data compared to the bus.
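	Just to put rough numbers on it, here's a quick back-of-the-
envelope calculation. The clock and cycle figures below are for a stock
68000 Amiga; the disk rate is a ballpark guess for a 5 Mbit/s
ST506-class drive, not a measurement of any particular controller.

/* Rough illustrative figures only, not measurements. */
#include <stdio.h>

int main(void)
{
    double clock_hz      = 7160000.0;  /* 68000 clock, approx.            */
    double clocks_per_bc = 4.0;        /* minimum clocks per bus cycle    */
    double bytes_per_bc  = 2.0;        /* 16-bit data bus                 */
    double bus_peak      = clock_hz / clocks_per_bc * bytes_per_bc;

    double disk_rate     = 625000.0;   /* ~5 Mbit/s off the disk, bytes/s */

    printf("peak bus bandwidth : %.0f bytes/s\n", bus_peak);
    printf("disk transfer rate : %.0f bytes/s\n", disk_rate);
    printf("bus time eaten by disk DMA: %.1f%%\n",
           100.0 * disk_rate / bus_peak);
    return 0;
}

	Even at the raw ST506 bit rate the disk wants less than a fifth
of the bus, so short bursts leave plenty of cycles for the CPU.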

>
>So, what do you do with DMA and a slower disk?
>  A) Transfer the data as a single DMA request, locking the CPU out until
>     the last byte is transferred.  (slows down or stops CPU intensive tasks)
>  B) Transfer one word at a time, allowing the CPU to run between consecutive
>     pairs of bytes from the disk.  (CPU eaten up by DMA requests and grants)
>  C) Transfer data to cache on controller, let CPU copy to RAM via a program
>     loop (read data into CPU, write from CPU, fetch opcodes, branch)
>  D) Transfer data to cache on controller, let DMA hardware copy complete
>     buffer of data to RAM at full bus speed using a single DMA request.
>
	Well, as it happens, the Commodore A2090(a) has a 64 byte FIFO.
This is not a whole-sector buffer, but the principle is the same - disk
data enters the FIFO, then is transferred to memory in short DMA
bursts. The HardFrame DMA controller also has a FIFO buffer; I don't
know exactly how large, but the idea is the same. And this is faster
than transferring the entire sector into the buffer first and then into
RAM, because the disk-to-FIFO and FIFO-to-RAM transfers overlap.
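	To see why the overlap wins, here's a toy calculation. The FIFO
size, disk rate, and burst rate below are just illustrative numbers,
not a description of what the A2090 or HardFrame actually do inside.

/* Toy model: a 512 byte sector arrives at disk speed while DMA drains
 * the FIFO to RAM in bursts at bus speed.  With overlap, the host side
 * hides behind the disk transfer and only the final burst is left over
 * at the end; store-and-forward pays both costs one after the other.
 */
#include <stdio.h>

int main(void)
{
    double sector    = 512.0;        /* bytes per sector                   */
    double fifo      = 64.0;         /* FIFO depth in bytes                */
    double disk_rate = 625000.0;     /* bytes/s off the disk (ballpark)    */
    double bus_rate  = 3580000.0;    /* bytes/s burst DMA to RAM (approx.) */

    double t_disk = sector / disk_rate;  /* time to pull in one sector     */
    double t_bus  = sector / bus_rate;   /* time to DMA one sector to RAM  */

    double whole_sector_first = t_disk + t_bus;
    double overlapped         = t_disk + fifo / bus_rate;  /* last burst   */

    printf("whole sector buffered first: %.0f us\n",
           1e6 * whole_sector_first);
    printf("64 byte FIFO, overlapped   : %.0f us\n", 1e6 * overlapped);
    return 0;
}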

	Another wrinkle is that a SCSI disk drive *must* contain its
own sector buffer. This is because the SCSI controller must read the
entire sector locally to perform ECC error correction, and only then
present the corrected data to the host. So it's not unreasonable to
expect the data presented across the SCSI interface to move much
faster than it comes off the disk media.
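	As a rough illustration (again, made-up but plausible rates, not
the specs of any particular drive):

/* The sector trickles off the media at the head rate, gets ECC-checked
 * in the drive's own buffer, and is then burst across the SCSI bus at
 * whatever rate the interface can manage.
 */
#include <stdio.h>

int main(void)
{
    double sector     = 512.0;
    double media_rate = 625000.0;    /* bytes/s off the platter (ballpark) */
    double scsi_rate  = 1500000.0;   /* bytes/s across the SCSI bus        */

    printf("time off the media   : %.0f us\n", 1e6 * sector / media_rate);
    printf("time across SCSI bus : %.0f us\n", 1e6 * sector / scsi_rate);
    return 0;
}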

>Option D is the least taxing on the CPU, and is the most expensive.

	But the final argument comes down to CPU performance
degradation. You say yourself that option D imposes the least CPU
overhead; well, it happens to be the option all DMA controllers provide
(but with a FIFO rather than a cache). If you look, the DMA controllers
are not that much more expensive than the non-DMA controllers (the
A2090 is an exception, but it also offers an ST506 interface, which
accounts for the extra cost.)
 
-- 
First comes the logo: C H E C K P O I N T  T E C H N O L O G I E S      / /  
                                                                    \\ / /    
Then, the disclaimer:  All expressed opinions are, indeed, opinions. \  / o
Now for the witty part:    I'm pink, therefore, I'm spam!             \/