Path: utzoo!mnetor!uunet!husc6!hao!ames!ucbcad!ucbvax!UMN-REI-UC.ARPA!slevy
From: slevy@UMN-REI-UC.ARPA (Stuart Levy)
Newsgroups: comp.protocols.tcp-ip
Subject: Determining gateway buffer capacity
Message-ID: <8712110823.AA05126@uc.msc.umn.edu>
Date: 11 Dec 87 08:23:21 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Organization: The ARPA Internet
Lines: 29

Some recent messages on this list described efforts to define a scheme for
hosts to find a network path's maximum preferred message size, e.g. the
largest datagram which could be sent without some gateway fragmenting it.

Has anyone considered having gateways give advice to hosts on how -much- data
they should send, either in terms of total amount of outstanding data (which
TCPs could use to limit windows) or data flow rate (which, say, NETBLTs could
use to set rate parameters)?

This seems like a natural function to piggyback onto a max-message-size probe.
It suffers from some of the same limitations, namely that there may be
multiple paths with different behavior.

A gateway giving buffering advice would have to make some assumption about
what fraction of its capacity (or its links' capacity) should be available
to a given session.  It might make a static decision ("I expect to have
<= 10 active connections passing through me at once, so I advise everybody
to use no more than 1/10th of my capacity") or a dynamic one ("Things are
getting too crowded, so I'll tell everyone to send no more than 1000 bytes
per second").

If implemented as an IP option it could be hung on Source Quenches to give
some crude quantitative information.  Capacity assumptions could be wrong,
but at least they would prevent hosts with 32K TCP windows from swamping
gateways with 20K of buffer space.

Is anything along these lines being discussed?  (Or has it already been
discussed and abandoned?)

    Stuart Levy, Minnesota Supercomputer Center
    slevy@uc.msc.umn.edu, (612) 626-0211
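
[Editor's sketch]  Purely to make the static/dynamic advice idea above
concrete, here is a rough C sketch of how a gateway might compute such
advice and how a sending TCP might apply it.  The structure, field names,
and numbers (EXPECTED_CONNS, a 56 kbit/s link, 20K of buffers) are all
hypothetical assumptions for illustration; they do not correspond to any
existing IP option or gateway implementation.

/*
 * Illustrative only: hypothetical advice a gateway might attach to a
 * max-message-size probe reply or a Source Quench.
 */
#include <stdio.h>

struct gw_advice {
    unsigned long max_outstanding;   /* bytes in flight the gateway will buffer per session */
    unsigned long max_rate;          /* bytes per second it is willing to carry per session */
};

/*
 * Static policy: assume at most EXPECTED_CONNS sessions share the gateway,
 * so each is advised to use an equal share of buffer and link capacity.
 */
#define EXPECTED_CONNS 10

struct gw_advice
static_advice(unsigned long buffer_bytes, unsigned long link_bytes_per_sec)
{
    struct gw_advice a;
    a.max_outstanding = buffer_bytes / EXPECTED_CONNS;
    a.max_rate        = link_bytes_per_sec / EXPECTED_CONNS;
    return a;
}

/*
 * Dynamic policy: divide by the sessions actually seen lately, and squeeze
 * the advised rate harder once the buffer pool is more than half full.
 */
struct gw_advice
dynamic_advice(unsigned long buffer_bytes, unsigned long buffer_in_use,
               unsigned long link_bytes_per_sec, unsigned int active_conns)
{
    struct gw_advice a;
    unsigned long free_bytes = buffer_bytes - buffer_in_use;

    if (active_conns == 0)
        active_conns = 1;
    a.max_outstanding = free_bytes / active_conns;
    a.max_rate        = link_bytes_per_sec / active_conns;
    if (buffer_in_use > buffer_bytes / 2)    /* "things are getting crowded" */
        a.max_rate /= 2;
    return a;
}

/* What a sending TCP might do with the advice: clamp its usable window. */
unsigned long
clamp_window(unsigned long offered_window, struct gw_advice a)
{
    return offered_window < a.max_outstanding ? offered_window
                                              : a.max_outstanding;
}

int
main()
{
    /* A gateway with 20K of buffers facing a host offering a 32K window. */
    struct gw_advice a = static_advice(20480UL, 56000UL / 8);

    printf("advised outstanding: %lu bytes, rate: %lu bytes/sec\n",
           a.max_outstanding, a.max_rate);
    printf("32K window clamped to %lu bytes\n", clamp_window(32768UL, a));
    return 0;
}

With these assumed numbers the 32K window is clamped to 2048 bytes, which
is the point of the original example: even a crude, possibly wrong capacity
assumption keeps a big-windowed host from swamping a small-buffered gateway.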