Path: utzoo!attcan!uunet!husc6!rutgers!paul.rutgers.edu!aramis.rutgers.edu!athos.rutgers.edu!hedrick
From: hedrick@athos.rutgers.edu (Charles Hedrick)
Newsgroups: comp.protocols.tcp-ip
Subject: Re: telnet SUPRESS-TELNET option...
Message-ID: 
Date: 17 Jul 88 15:53:41 GMT
References: <12413914739.7.BILLW@MATHOM.CISCO.COM> <12414589612.8.OP.WILLETT@SCIENCE.UTAH.EDU>
Organization: Rutgers Univ., New Brunswick, N.J.
Lines: 33

The reason the search for IAC's is a performance issue is that without
it one need not look at individual characters.  Billw works for cisco.
They make terminal servers.  Their terminal servers are not bound by
performance limitations in the Berkeley implementation.  I don't know
how far Billw has gone in optimization, but in principle, they could
get the Ethernet interface to DMA a packet into memory, do a bit of
header processing, and then hand the packet to a DMA serial output
device, without ever looking at the characters at all.  Obviously this
isn't an issue for echoing single characters.  But for screen
refreshes, output on graphics terminals, output to printers, etc., a
reasonable TCP should be able to produce full packets.  Even for the
Berkeley code, processing characters individually could matter.  It is
true that with a straightforward implementation, there are a number
of context swaps.  But for large amounts of output data, you should be
able to get the whole screen refresh, or at least a substantial
portion of it, in one activation of telnetd.  In that case, the
efficiency with which it can process the characters may matter.  We
did see noticeable differences in CPU time used by telnetd vs. rlogind
before we put all of that stuff in the kernel.  In principle, rlogind
can simply do a read from the pty into a buffer and a write from the
same buffer to the network, where telnetd must look at the characters.
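To make that concrete, here is a rough sketch of the two inner loops
(my own illustration, not the Berkeley source; the names, buffer sizes,
and missing error checking are all made up):

	#define IAC	0377
	int ptyfd, netfd, n;
	unsigned char buf[1024], out[2048], *cp, *op;

	/* rlogind: never has to look at the data */
	n = read(ptyfd, buf, sizeof(buf));
	(void) write(netfd, buf, n);

	/* telnetd: must scan so a literal IAC gets doubled on output */
	n = read(ptyfd, buf, sizeof(buf));
	for (cp = buf, op = out; cp < buf + n; cp++) {
		if (*cp == IAC)
			*op++ = IAC;
		*op++ = *cp;
	}
	(void) write(netfd, out, op - out);

(out is twice the size of buf so that the worst case, a buffer full of
IAC's, still fits.)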

I'm not saying whether I think it's a good idea to have an option that
disables IAC checking.  But I can certainly see why Bill believes
there are performance implications.  My guess is that a carefully
tuned implementation can minimize those implications.  (e.g. one could
use something like index(buffer,IAC) to see whether there is an IAC
present, and then work at tuning index in assembly language.)  It's
always a matter of judgement as to whether it is a better idea for the
protocol design to encourage that kind of trickiness or not.  The
tradeoff of course is that to prevent it we complicate the protocol,
and make it likely that implementors won't bother to tune the case
where IAC's are left enabled.
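
For what it's worth, the fast path I have in mind would look roughly
like this (again only a sketch under my own assumptions; memchr() here
is the length-bounded relative of index(), which matters because the
data can contain NULs, and the scan itself is what you would hand-tune
in assembler):

	#include <string.h>

	#define IAC	0377

	/* Ship n bytes from buf to the net, scanning for IAC only once
	   in the common case.  Names are illustrative, not telnetd's
	   own; out must be at least twice as big as the largest buf. */
	netwrite(netfd, buf, n)
		int netfd, n;
		unsigned char *buf;
	{
		unsigned char out[2048], *cp, *op;

		if (memchr((char *) buf, IAC, n) == 0) {
			/* common case: no IAC anywhere, send it untouched */
			(void) write(netfd, buf, n);
			return;
		}
		/* rare case: copy byte by byte, doubling each IAC */
		for (cp = buf, op = out; cp < buf + n; cp++) {
			if (*cp == IAC)
				*op++ = IAC;
			*op++ = *cp;
		}
		(void) write(netfd, out, op - out);
	}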