Path: utzoo!utgpu!watmath!att!tut.cis.ohio-state.edu!cs.utexas.edu!uunet!iconsys!caeco!jose!i-core!geo-works!bryan
From: bryan@geo-works.UUCP (Bryan Ford)
Newsgroups: comp.sys.amiga
Subject: Software-only network protocol standards, Again
Message-ID: <2109.AA2109@geo-works>
Date: 11 Aug 89 18:48:49 GMT
References: <1789.AA1789@geo-works> <21047@cup.portal.com>
Followup-To: comp.sys.amiga
Lines: 32

In article <21047@cup.portal.com>, Thad P Floryan writes:
>Re: Bryan Ford's comments about UNIX "requiring" at least a 68020 ...
>
>Besides the Sun-2 (with a 68010 and custom MMU), we find:
>
>[several Unix machines with custom MMU's]

As I mentioned before, it was a bad assumption, I stand corrected, and I'm
sorry if it caused confusion.  However, this doesn't seem to have much to
do with my original question, about (relatively) high-speed networking in
software.  (By "relatively," I mean something that takes a lot of the Amiga's
CPU time.)  I've gotten more replies on my mostly unrelated folly than on
the networking question itself.

Now, another question.  Which method would take the least CPU time (so the
local user doesn't 'lose' the system too much):  transmitting a whole packet
at the highest speed the hardware can handle (i.e.  280,000 bps),
busy-waiting with multitasking disabled between characters, or transmitting
at a much slower speed and taking an interrupt for each character?  And
which would provide better sustained throughput?
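To put rough numbers on it:  at 280,000 bps with 10 bits per character
(start + 8 data + stop), a character goes out about every 36 microseconds,
so busy-waiting a 512-byte packet onto the wire ties up the CPU for roughly
18 ms.  At, say, 57,600 bps with an interrupt per character, you take about
5,760 interrupts a second, each paying handler entry/exit overhead on the
68000.  Here's a sketch in C of what I mean by the two approaches.  The
register names, addresses, and bit mask below are made-up placeholders (not
the real custom-chip registers), and Disable()/Enable() are the usual
exec.library calls:

    /* Sketch only:  SER_DATA, SER_STATUS, and TX_EMPTY are hypothetical
     * placeholders, *not* the real Amiga custom-chip registers.
     */
    #define SER_DATA   (*(volatile unsigned short *)0xC0DE00)
    #define SER_STATUS (*(volatile unsigned short *)0xC0DE02)
    #define TX_EMPTY   0x2000

    extern void Disable(), Enable();    /* exec.library */

    /* Approach 1:  blast a whole packet at full hardware speed,
     * busy-waiting between characters with multitasking shut off.
     * Throughput is maximal, but the machine is dead to the local
     * user for the whole packet (~18 ms for 512 bytes at 280 kbps).
     */
    void send_packet_polled(buf, len)
    unsigned char *buf;
    int len;
    {
        Disable();
        while (len-- > 0) {
            while (!(SER_STATUS & TX_EMPTY))
                ;                       /* spin until transmitter free */
            SER_DATA = *buf++;
        }
        Enable();
    }

    /* Approach 2:  slower line speed, one interrupt per character.
     * The CPU is free between characters, but every byte costs one
     * interrupt's worth of save/dispatch/restore overhead.
     */
    static unsigned char *tx_ptr;
    static int tx_left;

    void tx_char_interrupt()            /* hooked up by hypothetical code */
    {
        if (tx_left > 0) {
            SER_DATA = *tx_ptr++;
            tx_left--;
        }
    }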

Thanks for your help,

				Bryan

--

     _______________________________________
   _/   Bryan Ford - bryan@geo-works.uucp   \_
 _/  ..!utah-cs!caeco!i-core!geo-works!bryan  \_
/ ..!uunet!iconsys!caeco!i-core!geo-works!bryan \
\_____________Author: Chroma Paint______________/