Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version nyu B notes v1.5 12/10/84; site csd2.UUCP
Path: utzoo!linus!philabs!cmcl2!csd2!martillo
From: martillo@csd2.UUCP (Joachim Martillo)
Newsgroups: net.lan
Subject: Re: socket library under System V?
Message-ID: <3070003@csd2.UUCP>
Date: Tue, 20-Aug-85 18:55:00 EDT
Article-I.D.: csd2.3070003
Posted: Tue Aug 20 18:55:00 1985
Date-Received: Fri, 23-Aug-85 20:05:50 EDT
References: <284@SCIRTP.UUCP>
Organization: New York University
Lines: 122

/* csd2:net.lan / robert@cheviot.uucp (Robert Stroud) /  9:37 am  Aug 15, 1985 */
>David Hinnant (dfh@SCIRTP.UUCP) asked about library implementations
>of the 4.2 socket interface.

>Joachim Martillo (martillo@csd2.UUCP) replied and argued that the socket
>interface gave a uniform approach to ipc whilst the library approach was
>inflexible and inefficient because of all the protocol dependent code
>which got linked into the user program. (See <3070002@csd2.UUCP> for the
>original article).

This was not the only reason.  The inflexibility arises because the
library is being supplied for a specific protocol, in this case
TCP/IP.  In the Berkeley universe a socket is not simply a construct
for TCP/IP communication but a generalized communication mechanism.  I
might not want to run TCP/IP but rather ChaosNet or something else.

Further, I pointed out that routing is quite a problem for the
library/driver approach.  I also see a lot of problems with address
resolution.  I suspect the library/driver approach works best with a
small network where all hosts are almost always up, where routing is
static, and where address resolution is handled via static tables
maintained in files on all hosts.

Even if such a setup is sufficient for a site's needs to start with,
I suspect the users would eventually find it limiting.

>I always thought that a library implementation of sockets simply mapped 
>calls like socket, bind and send more or less directly into open,
>ioctl and write. 

This is my impression as well.


>		  I don't see why you can't keep all the protocol dependent
>code inside the kernel. 

This goes beyond the library/driver approach and would not be possible
for someone running Xenix on an AT, because Microsoft does not provide
source.  But for the sake of argument, assume the software suppliers
were nice, friendly people, and consider the pain of opening up an
Ethernet connection to a remote host using TCP/IP, assuming all the
virtual circuit protocol is handled in the kernel.

First  we  open  up /dev/ethernet for  reading  and  writing  and then
perform  necessary  ioctl's   to get  a unique virtual   circuit  port
allocated to our process.

If we want to communicate on a well-known port on the foreign machine,
we use a library routine to get the foreign host addr from the foreign
host name.  Now what do we do with this addr + port?  In 4.2 we  would
do  a connect but  here we  now have  to resolve the  foreign ethernet
address.  This is easy if we have static tables and our hardware never
breaks down.  Now we have to do some  routing calculations  if we have
any but the  simplest network.  This  could not be handled within  the
current formalism because  this  is a network  topological problem and
not a protocol problem.   Now  we  could put  some  address resolution
protocol routines in the kernel and  run routing  daemons but  then we
have begun to reinvent a large part of 4.2 ipc.

I suppose some fancy ioctl's could be invented to take care of getting
the proper address and  routing info to the network  protocol routines
in the kernel but this is not the normal use of ioctl which is used to
pass control info to the driver for talking to the hardware interface.
The address and routing data is not meant for the hardware.  I suppose
you could at this point invent a  bunch of protocol  pseudodevices but
this  strikes  me as much  more complicated than the current  Berkeley
socket interface.

Well, now after doing all the fancy ioctl's on /dev/ethernet  and on a
bunch of gross pseudodevices we are ready to write our first  message.
Now suppose  we are going to  use  this virtual  circuit   to set up a
telnet session.  Here comes another horde of pseudodevices!  This is
all just too   complicated.  The formalism  of read/write/ioctl  which
works well for tty's, lp's, and disk controllers just  is not flexible
enough and was never meant  to handle devices like  networks which are
"open" on the other side.

>			 Is it really that difficult to bend the socket
>interface to fit the conventional device driver interface? If it is a
>little awkward, then all the more reason to hide the grotty details in
>a library, but why go to the trouble of introducing a new set of system
>calls when the old ones are more or less adequate?? 

The old closed system calls are not adequate for open systems.

>I'm not necessarily suggesting that the socket abstraction is a bad one, 
>but does it have to be in the kernel? We all use the  library
>and that's not part of the kernel...!

>Please don't flame me about this - it's a serious question and I would
>appreciate some discussion of the issues involved. It has been suggested that
>the 8th Edition concept of a Stream can be used to implement sockets, 
>presumably through the ordinary open/read/write/ioctl special device 
>interface. Would anyone care to expand on this?

I am not so sure the Edition 8 formalism is all that different from
Berkeley's formalism.  Looking at pg 1901 of the October 1984 AT&T
Bell Laboratories Technical Journal, I would not be surprised if there
are system calls for talking to proto/out and proto/in modules which
would perform the Edition 8 stream version of a connect.

If you look at pg 1906 of the same article, the diagram is
suspiciously like the Berkeley client/server model.  The user process
is the remote user application talking to the remote pty.  The PT
looks like the server which sends messages to the local machine's
client.  I think Ritchie may have generalized this so that the server
can easily be local, remote, or divided.  Berkeley assumes the server
is remote.  With black magic, you can put the server on the local
machine talking to the device driver and have the remote process be
the client.  The X window system from MIT does this for communication
with a VS100.  The server might actually be built into the formalism
in some basic sort of way, although a built-in server might not be a
good idea if it is too inflexible or if it forces more context
switches between user and kernel processes.

>One of the systems I use, (a Perq running PNX), provides both a datagram
>and transport service on an Ethernet in a conventional way without sockets
>so it can be done!

But I have the impression that Edition 8 takes basically the
equivalents of socket, bind, connect and listen as fundamental and
then builds open and ioctl, with some extra pseudodevices, on top of
them.