Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!mailrus!ames!lll-tis!helios.ee.lbl.gov!lace.lbl.gov!dagg
From: dagg@lace.lbl.gov (Darren Griffiths)
Newsgroups: comp.protocols.tcp-ip
Subject: Re: Multiple TCP/IP servers on one host
Message-ID: <985@helios.ee.lbl.gov>
Date: 21 Sep 88 05:52:49 GMT
References: <8809151736.AA01161@fji.isi.edu>
Sender: usenet@helios.ee.lbl.gov
Reply-To: dagg@lbl.gov (Darren Griffiths)
Organization: Lawrence Berkeley Laboratory, Berkeley
Lines: 69

In article <8809151736.AA01161@fji.isi.edu> prue@VENERA.ISI.EDU writes:
>>> 1) If the second path was 25% of the speed of the first path then 25% of
>>> the packets could be sent that way. [...]
>>> if the two end sides were running the Van Jacobson/Mike Karels code
>>> I believe this wouldn't be too much of a problem. [...]
>>
>>The first thing (splitting load among routes) would screw up the
>>Jacobson/Karels improved TCP completely.  They get a big win by
>>estimating the variance of the round trip time; using alternating
>>routes for different packets would drive this variance way up, causing
>>the timeout to be set high, causing long stoppages on lost packets.
>
>I disagree.  The first path is four times as fast but has four times as
>many packets.  The link delay is only 1/4 that of the second line, but the
>queuing delay is four times as great.  The variation of the delivery times
>for the five packets would be less than using a single line.  As the queue
>sizes go up, the variation in the network delay goes up.
>
>I do however agree with your other point: type-of-service routing could
>put the second path to very good use.

The first time I thought out loud about routing through two different paths
I did mention that it might mess up the Van/Mike TCP code.  Fortunately this
was at the SIGCOMM conference at Stanford and Van wasn't far away.  He
quickly set me straight, and with a little extra thought it is fairly
obvious that the code would stabilize around the ACKs of the faster line.
(A rough sketch of the mean/variance estimator being argued about is
appended below.)

A couple of people have mentioned that instead of looking at well-known
ports to decide whether or not to use the fast line, the TOS field should
be used.  I agree that this would be much better, but when I watched a
couple of packets go by on the local LBL net (I didn't look too long), I
found that very few things actually touch that field.

Another possibility would be for the router to figure out which of its
links is the fastest: if one line is 25% the speed of a second line, it
could simply send 25% of the packets down the slow connection and the rest
along the fast connection (perhaps using a random number to decide which
line each packet goes along; see the sketch below).  I seem to remember
that someone wrote a paper on this, but I can't remember who off the top of
my head.

It would be interesting to try different routing algorithms and see which
is the most efficient, with efficiency defined by the throughput perceived
by the user as well as the number of packets getting through.  I can think
of a few methods that could be tried:

  1 - Leave things the way they are, and just take the fastest path.
  2 - Choose the path depending on the TOS field, and hope someone uses it.
  3 - Choose the path based on the port (TELNETs go the fast way, SMTP goes
      the slow way).
  4 - Send a percentage of the packets down one path and the rest down
      another.

Can anyone think of any others?  I've appended rough sketches of what 2, 3
and 4 might look like.  Perhaps when I have some free time I'll do a couple
of tests and see what I come up with.
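For anyone who hasn't looked at the Van/Mike estimator, here is a minimal
sketch of the sort of mean/variance calculation being argued about above.
It is not the actual 4.3BSD code: it is in floating point for readability,
the gains of 1/8 and 1/4 and the deviation multiplier are just the commonly
quoted values, and the name update_rto is mine.  Feeding it samples that
alternate between a fast and a slow path shows how the deviation term drags
the retransmit timeout up.

#include <stdio.h>
#include <math.h>

static double srtt;          /* smoothed round-trip time estimate  */
static double rttvar;        /* smoothed mean deviation of the RTT */
static int    have_sample;   /* has the estimator been primed yet? */

/* Fold one RTT measurement (in seconds) into the estimator and
 * return the retransmit timeout derived from it. */
double update_rto(double rtt)
{
    double err;

    if (!have_sample) {      /* prime the estimator from the first sample */
        srtt = rtt;
        rttvar = rtt / 2.0;
        have_sample = 1;
    }
    err = rtt - srtt;
    srtt += err / 8.0;                    /* gain of 1/8 on the mean      */
    rttvar += (fabs(err) - rttvar) / 4.0; /* gain of 1/4 on the deviation */
    return srtt + 4.0 * rttvar;           /* timeout tracks the variance  */
}

int main(void)
{
    /* Samples alternating between a 100 ms path and a 400 ms path:
     * watch the timeout climb well above either round-trip time.   */
    double samples[] = { 0.1, 0.4, 0.1, 0.4, 0.1, 0.4, 0.1, 0.4 };
    int i;

    for (i = 0; i < 8; i++)
        printf("rtt = %.2f   rto = %.2f\n", samples[i], update_rto(samples[i]));
    return 0;
}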
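To make methods 2 and 3 a little more concrete, here is the kind of
per-packet decision the router could make.  The constants follow RFC 791
(the low-delay TOS bit) and the well-known TELNET and SMTP port numbers,
but the mapping itself, and the name choose_path_by_type, are only
illustrative.

#include <stdio.h>

#define FAST_PATH 0
#define SLOW_PATH 1

#define TOS_LOWDELAY 0x10    /* "minimize delay" bit in the IP TOS octet */
#define PORT_TELNET  23      /* well-known TELNET port */
#define PORT_SMTP    25      /* well-known SMTP port   */

/* Pick a line for one packet from its TOS octet and TCP destination
 * port.  TOS wins if anyone bothered to set it; otherwise fall back
 * on well-known ports, and default to the fastest path (method 1).  */
int choose_path_by_type(unsigned char tos, unsigned short dport)
{
    if (tos & TOS_LOWDELAY)
        return FAST_PATH;    /* method 2: honour the TOS field        */
    if (dport == PORT_TELNET)
        return FAST_PATH;    /* method 3: interactive traffic -> fast */
    if (dport == PORT_SMTP)
        return SLOW_PATH;    /* method 3: bulk mail -> slow           */
    return FAST_PATH;
}

int main(void)
{
    printf("TELNET packet -> %s\n",
           choose_path_by_type(0, PORT_TELNET) == FAST_PATH ? "fast" : "slow");
    printf("SMTP packet   -> %s\n",
           choose_path_by_type(0, PORT_SMTP) == FAST_PATH ? "fast" : "slow");
    return 0;
}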
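And method 4, the random-number idea from above, in the same vein.  The 4:1
weighting matches the 25% example; rand() is fine for a quick experiment,
though a real gateway would probably want something cheaper.

#include <stdio.h>
#include <stdlib.h>

#define FAST_PATH 0
#define SLOW_PATH 1

/* Return SLOW_PATH with probability slow_bw / (fast_bw + slow_bw), so
 * each line carries packets in proportion to its relative speed.      */
int choose_path_by_weight(int fast_bw, int slow_bw)
{
    return (rand() % (fast_bw + slow_bw) < slow_bw) ? SLOW_PATH : FAST_PATH;
}

int main(void)
{
    int counts[2] = { 0, 0 };
    int i;

    srand(1);
    for (i = 0; i < 100000; i++)
        counts[choose_path_by_weight(4, 1)]++;  /* fast line is 4x the slow one */

    printf("fast: %d   slow: %d\n", counts[FAST_PATH], counts[SLOW_PATH]);
    return 0;
}

One obvious refinement would be to hash on the source/destination pair
instead of rolling a die for every packet, so that a given conversation
sticks to one line and its packets don't get reordered.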
Cheers,
  --darren

-------------------------------------------------------------------------------
 Darren Griffiths                                                  DAGG@LBL.GOV
 Lawrence Berkeley Labs
 Information and Computing Sciences Division