Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!rutgers!aramis.rutgers.edu!athos.rutgers.edu!hedrick
From: hedrick@athos.rutgers.edu (Charles Hedrick)
Newsgroups: comp.dcom.lans
Subject: Re: Terminal servers over ethernet?
Message-ID: 
Date: 5 Jul 88 23:50:56 GMT
References: <320@ucrmath.UUCP>  <3960@saturn.ucsc.edu>  <9816@e.ms.uky.edu> <23612@bu-cs.BU.EDU>
Organization: Rutgers Univ., New Brunswick, N.J.
Lines: 88

There are flow control issues with using terminal servers.  I'm not
sure whether the message I'm following up was unclear or I'm just
misreading it, but at any rate, let me describe our experience.

First, we normally run our terminals without depending upon flow
control.  There are various reasons for this, of which network
terminal servers are only one.  (The original reason is that if you
make the terminal generate xoff, Emacs users are constantly getting
spurious search commands.)  This is done by putting appropriate
padding entries into the termcap terminal descriptions.  If your
terminal's control codes are properly padded, they will behave the
same on direct lines and on terminal servers.
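For concreteness, a padded termcap fragment might look like the one
below (the numbers are illustrative; the right values depend on the
terminal and the line speed).  A leading number in a capability is
padding in milliseconds, a trailing "*" makes it proportional to the
number of lines affected, and the library emits that many fill
characters (normally NULs) after the sequence:

    vt100ish|padded vt100-style terminal:\
        :al=5*\E[L:cl=50\E[H\E[J:cd=50\E[J:

With enough fill the terminal never falls behind, so it never needs
to send xoff in the first place.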

The cases where flow control is needed are generally driving printers
or similar devices.  In that case, there is a potential problem,
because the buffering delays introduce enough sloppiness in timing
that you can't depend upon the host to respond to an xoff fast enough.
Note that packet loss and network contention are not the issue.  What
causes the delay in response to xoff is the fact that network software
tends to put lots of data in one packet, and to use multiple
buffering.  The result is that when the host senses an xoff, there is
a lot of data already in its own output buffers, in packets on the
network, in gateways, in terminal servers, etc.  Even if the host
stops sending data immediately in response to xoff, you'll see as much
as several Kbytes of data that was already output.  The simplest
solution to this is to set things up so that the terminal server
itself processes the xoff.  In that case, things stop and start
immediately, and in general work just like a hardwired line.  This
is a complete solution for printers and most other similar devices.
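To make "the terminal server itself processes the xoff" concrete,
here is a minimal sketch in C (made-up helper names, nobody's actual
server code): the server stops draining its buffer to the serial line
the instant the device says stop, with no network round trip in the
loop.

    #define XOFF 0x13               /* ^S */
    #define XON  0x11               /* ^Q */

    static int stopped;             /* nonzero after the device sent XOFF */

    extern int  next_buffered_char(void);   /* -1 when buffer is empty */
    extern void put_serial(int c);          /* one char to the serial line */

    /* one character read back from the attached device */
    void from_device(int c)
    {
        if (c == XOFF)
            stopped = 1;            /* takes effect immediately */
        else if (c == XON)
            stopped = 0;
        /* anything else would be forwarded to the host as input */
    }

    /* called from the server's main loop */
    void drain_to_device(void)
    {
        int c;

        while (!stopped && (c = next_buffered_char()) != -1)
            put_serial(c);
    }

The host's kilobytes in flight simply back up in the server's buffer
instead of landing on the printer.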

The hard case is flow control for connections with humans on them.
Note that we are talking only about the case where a human wants to
stop output so it doesn't scroll off the screen.  We are not talking
about terminals that are too slow to keep up with output, since we've
already solved that by padding.  If you are willing to have ^S always
stop output, there is no problem.  Just have the terminal server do it
locally, and everything will work like a hardwired line.  However,
there are programs like Emacs that expect to use ^S as a search
command.  So there are two possibilities.  (We support both.)  One is
to use a protocol like rlogin, where the terminal server is told
by the host when ^S should stop output and when it should act like
a normal character.  The other is to find some character other than
^S that isn't essential to Emacs or whatever similar applications
your site runs.  The terminal servers are set up to pause locally
whenever they get that character.  We use ^\ by default.   (The
user can change it.)
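As a sketch of how both policies can share one input path, assume a
per-session flag that an rlogin-style control message from the host
turns on and off (the names here, and treating the local stop
character as a toggle, are illustrative, not any particular server's
behavior):

    #define CTRL(x) ((x) & 037)

    static int flow_local = 1;          /* host says ^S/^Q are flow control */
    static int stop_char = CTRL('\\');  /* our default local stop: ^\ */
    static int paused;                  /* output to the terminal is held */

    extern void send_to_host(int c);

    /* one character typed by the user */
    void from_keyboard(int c)
    {
        if (flow_local && c == CTRL('S')) { paused = 1; return; }
        if (flow_local && c == CTRL('Q')) { paused = 0; return; }
        if (c == stop_char)               { paused = !paused; return; }
        send_to_host(c);    /* Emacs gets its ^S once the host has
                               cleared flow_local over the connection */
    }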

I would not expect SLIP to be very different in overhead from normal
terminal server traffic.  It might be slightly slower, but in practice
I suspect differences are more in the quality of the implementation
(i.e. in how much optimization has been done) than due to inherent
differences in the CPU requirements.

The other difference in feel from a terminal server is also related
to buffering on the network: ^C, ^O, etc.  Most operating systems
have control characters that tell them to start discarding output.
For the same reason that the host can't immediately stop output when
you type ^S, it can't immediately suspend output when you type ^O or
^C: a lot of it is already in flight.  Both telnet and rlogin allow a
host to send an "out of band" message to the terminal server telling
it to discard any data that is already in the pipeline.  When this is
implemented (it was not in 4.3 telnetd), you may see a few extra
lines of output after a ^C, but little more than that.  Without this
mechanism, output can go on for pages after you have told it to
stop.
Even with the out of band support, users notice the extra few lines of
output, but generally it doesn't bother people.
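For the record, the telnet form of that "out of band" message is the
Synch: the host discards what it has queued and sends IAC DM with the
mark in the TCP urgent data, and the server discards everything it
reads up to the mark.  A rough host-side sketch (error handling and
the local queue flush left out):

    #include <sys/types.h>
    #include <sys/socket.h>

    #define IAC 0377        /* telnet "interpret as command" */
    #define DM  0362        /* telnet Data Mark */

    /* fd is the telnet connection; call after e.g. a ^O or ^C */
    void send_synch(int fd)
    {
        static unsigned char synch[] = { IAC, DM };

        /* (the host's own queued output has just been discarded) */
        send(fd, (char *)synch, sizeof synch, MSG_OOB);
    }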

It's important to bring these issues out because managers should know
that switching from hardwired lines to terminal servers will not be
totally transparent.  In order to get good results, your host telnet
software must be up to date.  And even then, users will notice some
slight changes in the way things behave, mostly in stopping or
suspending output.  When people said that a terminal server felt just
like a local terminal, I think what they were referring to was
response time.  That much is true.  With a properly engineered
network, you will not notice a slowdown in echoing.  On the other
hand, it does increase the number of things on which you are
depending.  Not only must the host be working, but the terminal server
must be, and the network must be free of short-circuits, broadcast
storms, etc.  Inevitably there will be times when this is not the
case, and you will see misbehaviors due to the network or terminal
server.  However serial interfaces are among the more failure-prone
pieces of most hosts, so the overall reliability may not be decreased.
(This is particularly true in multi-vendor situations.  I think our
overall quality of service to dialup users is better with terminal
servers than in the days when we had many different machines, each
with its peculiarities in modem control.)  However, you should at
least think about the implications before making any such change.