Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!rutgers!labrea!glacier!jbn
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Newsgroups: comp.unix.wizards
Subject: Re: 4.3 BSD networking
Message-ID: <17140@glacier.STANFORD.EDU>
Date: Sun, 26-Jul-87 14:24:30 EDT
Article-I.D.: glacier.17140
Posted: Sun Jul 26 14:24:30 1987
Date-Received: Sun, 26-Jul-87 21:27:55 EDT
References: <8479@brl-adm.ARPA>
Reply-To: jbn@glacier.UUCP (John B. Nagle)
Organization: Stanford University
Lines: 83

In article <8479@brl-adm.ARPA> cpw%sneezy@LANL.GOV (C. Philip Wood) writes:
>
> case 1.  The first case came to light when I discovered most of the mbufs
>          were linked on a tcp reassembly queue for some telnet connection
>          from a VMS system over MILNET.  Each mbuf had one character in it.
>          With a receive window of 4K you can run out of mbufs pretty easily.

This is the old "tinygram problem", and it appears in many old TCP
implementations, including 4.2BSD.  I devised a theoretical solution to this
problem years ago (see RFC 896, Jan. 1984), and Mike Karels put it in 4.3BSD.
But there are still a lot of broken TCP implementations around, especially
ones derived from 4.2.  Ordinarily the tinygram problem results only in
wasted bandwidth, but crashing the system is unreasonable.

The receiver could protect itself against this situation by limiting the
number of mbufs on the reassembly queue to 1 + (window / max segment size).
A sender with the tinygram problem fixed will not exceed this limit.  When
the limit is reached, drop something, preferably the packet with the largest
sequence number in the window.  This prevents buffer exhaustion due to
out-of-order tinygrams.  It is also worth examining the TCP sequence numbers
in the queued mbufs to see whether there are duplicates; if many packets are
duplicated, the other end has a broken retransmission algorithm.  (Sketches
of the send-side rule and of this receiver-side check appear below.)

> case 2.  The second case resulted from sending lots of udp packets of
>          trash over an ethernet and swamping the udp queue.

The system crashed just because of a transient packet overload?  Strange.

>
> case 3.  The last case I investigated resulted from many domain name udp
>          packets queueing up on the udp queue.  Similar to case 2, but in
>          this case the packets were 'legitimate'.
>

One problem with a shared dynamic resource such as mbufs is that, for the
system to work reliably, either every requestor of the resource must be able
to tolerate a rejected request, or every requestor must be held to a quota
that prevents hogging.  Given the way 4.3BSD works, the first approach
appears to be only partially implemented.  When out of mbufs, one can
discard incoming packets, of course, but this can be regarded only as an
emergency measure.  Waiting for an mbuf, on the other hand, introduces the
possibility of deadlock.

> AS I SEE IT
>
> The above points to two related items:
>
> 1. The 4.3 BSD kernel must be made more robust, to avoid being corrupted
>    by rude hosts.  Does anyone have ideas on how to identify resource hogs?
>    What to do when you find one?
>
> 2. Once a misbehaving host has been identified, who is it we contact
>    to get the problem fixed in a timely fashion?  Where is it written
>    down who to contact when XYZ vendor's ABC-n/XXX, running the zzzOS
>    operating system, is doing something wrong, and it is located 2527
>    miles away in a vault operated by QRS, International?  Should this be
>    part of the registration process for a particular domain?  Is it
>    already?
>

The system manager for each host known to the NIC is in the "whois"
database.
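Roughly, the send-side rule from RFC 896 comes down to the test below.  This
is only a sketch, not 4.3BSD source; the structure and the name "ok_to_send"
are invented for illustration, and a real implementation also has to consider
the offered window, the PUSH flag, and urgent data.

    #include <stdio.h>

    /* Sketch of the RFC 896 send decision; invented names, not kernel code. */
    struct conn {
        unsigned long snd_una;      /* oldest unacknowledged sequence number */
        unsigned long snd_nxt;      /* next sequence number to be sent */
        unsigned long pending;      /* bytes queued by the user, not yet sent */
        unsigned long maxseg;       /* maximum segment size */
    };

    /* Return nonzero if new data should be transmitted now. */
    static int
    ok_to_send(const struct conn *c)
    {
        if (c->pending >= c->maxseg)
            return 1;               /* a full segment is always worth sending */
        if (c->snd_una == c->snd_nxt)
            return 1;               /* nothing outstanding: even a tinygram may go */
        return 0;                   /* otherwise hold the data and coalesce it */
    }

    int
    main(void)
    {
        struct conn c = { 1000, 1000, 1, 512 };     /* idle, one byte queued */

        printf("idle, 1 byte pending:        %d\n", ok_to_send(&c));
        c.snd_nxt = 1001;                           /* one byte in flight, unacked */
        printf("1 byte unacked, 1 pending:   %d\n", ok_to_send(&c));
        c.pending = 512;                            /* a full segment accumulated */
        printf("1 byte unacked, MSS pending: %d\n", ok_to_send(&c));
        return 0;
    }

The three lines printed are 1, 0, and 1: an idle connection may send even a
single byte, a connection with unacknowledged data holds further tinygrams
until the ACK arrives, and a full segment always goes out.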
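The receiver-side check can be sketched the same way.  Again, this is
invented code rather than kernel source -- a simple linked list stands in for
the mbuf chain, and the 512-byte maximum segment size in the example is an
assumption.  The policy is the one described above: cap the reassembly queue
at 1 + (window / max segment size), discard whichever held segment is
furthest ahead in sequence space once the cap is reached, and count duplicate
sequence numbers as a symptom of a broken retransmitter at the far end.

    #include <stdio.h>
    #include <stdlib.h>

    struct seg {                    /* one out-of-order segment held for reassembly */
        unsigned long seq;          /* starting sequence number */
        int len;                    /* bytes of data */
        struct seg *next;           /* singly linked, kept in sequence order */
    };

    struct reass_q {
        struct seg *head;
        int count;                  /* segments currently queued */
        int dupes;                  /* duplicate sequence numbers seen */
        unsigned long rcv_wnd;      /* advertised receive window, in bytes */
        unsigned long maxseg;       /* maximum segment size for the connection */
    };

    /* Upper bound from the text: a sender with the tinygram fix never needs more. */
    static int
    reass_limit(const struct reass_q *q)
    {
        return 1 + (int)(q->rcv_wnd / q->maxseg);
    }

    /* Queue an out-of-order segment; return 1 if it was kept, 0 if discarded. */
    static int
    reass_enqueue(struct reass_q *q, unsigned long seq, int len)
    {
        struct seg **pp, *s, *high;

        /* Duplicates are a symptom of a broken retransmission algorithm. */
        for (s = q->head; s != NULL; s = s->next)
            if (s->seq == seq)
                q->dupes++;

        if (q->count >= reass_limit(q)) {
            /* Full: keep the lowest sequence numbers, shed the highest. */
            high = q->head;
            for (s = q->head; s != NULL; s = s->next)
                if (s->seq > high->seq)
                    high = s;
            if (high == NULL || seq >= high->seq)
                return 0;                    /* the incoming segment is the victim */
            for (pp = &q->head; *pp != high; pp = &(*pp)->next)
                ;
            *pp = high->next;                /* unlink and free the old highest */
            free(high);
            q->count--;
        }

        if ((s = malloc(sizeof(*s))) == NULL)
            return 0;
        s->seq = seq;
        s->len = len;
        for (pp = &q->head; *pp != NULL && (*pp)->seq < seq; pp = &(*pp)->next)
            ;
        s->next = *pp;                       /* insert in sequence order */
        *pp = s;
        q->count++;
        return 1;
    }

    int
    main(void)
    {
        struct reass_q q = { NULL, 0, 0, 4096, 512 };  /* 4K window, 512-byte MSS */
        unsigned long seq;

        /* A tinygram flood arriving highest-first, one byte per segment. */
        for (seq = 1200; seq >= 1001; seq--)
            reass_enqueue(&q, seq, 1);

        printf("limit %d, queued %d, lowest seq %lu, duplicates %d\n",
               reass_limit(&q), q.count, q.head->seq, q.dupes);
        return 0;
    }

With the 4K window from case 1 and an assumed 512-byte maximum segment size,
the cap works out to nine segments, so a one-character-per-mbuf flood can tie
up at most nine mbufs per connection instead of exhausting the pool.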
When I was faced with the problem of dealing with faulty hosts, I used to
send letters along the lines of "your MILNET host is causing network
interference due to noncompliance with MIL-STD-1778 (Transmission Control
Protocol) para 9.2.5.5; see attached annotated packet trace; copy to DCA
code 252," and followed this up with a phone call.  After about a year of
nagging, most of the worst offenders were fixed.  Now that decent TCP
implementations exist for most iron, it is usually sufficient to get sites
upgraded to a current revision of the network software for their machine,
so it is easier than it used to be to get these problems fixed.

The stock 4.3BSD kernel doesn't log much useful data to help in this task,
and it is a real question whether this sort of test instrumentation belongs
in a production system.  I once put heavy logging into a 4.1BSD system using
3COM's old TCP and found it immensely useful, but one shouldn't generalize
from that.

This is really a subject for the TCP-IP list.

					John Nagle