Path: utzoo!mnetor!uunet!lll-winken!lll-tis!ames!nrl-cmf!mailrus!rutgers!aramis.rutgers.edu!athos.rutgers.edu!hedrick
From: hedrick@athos.rutgers.edu (Charles Hedrick)
Newsgroups: comp.sys.ibm.pc
Subject: Re: PC AT clones, MSDOS and OS2 - request for info.
Message-ID: 
Date: 11 May 88 05:52:08 GMT
References: <23861@ucbvax.BERKELEY.EDU>
Organization: Rutgers Univ., New Brunswick, N.J.
Lines: 169
To: brand@janus.berkeley.edu


>This brought up the more fundamental question of the difference between
>the 8086, 80286 and 80386. I had thought that the main difference was
>that the 80286 supported true multitasking while the '386 supported that
>as well as true multiuser capability. If that is the case, why would
>the make of clone matter? Would someone on the net enlighten me?

I'm certainly no expert on Intel architecture, but my impression is
that things are not so clear-cut as this.  The main difference
between the 8086 and the 286 (aside from things that simply make the
286 faster) seems to be the memory mapping.  However, the 286 is
still a 16-bit machine, i.e. the registers hold only 16-bit
quantities, and you are limited to 64K of contiguous memory (though
compilers provide various kinds of emulation of larger address
spaces, the various "memory models").  The 386 has memory mapping
similar to the 286's, but it is a 32-bit machine.  Thus there are
32-bit registers and, more importantly, the memory mapping supports
contiguous segments of memory larger than 64K.  There is also
hardware support for a "virtual machine" mode that allows Unix and
MS-DOS to run at the same time in a clean way.
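
To make the register-width point concrete, here is a minimal sketch
in C.  The "typical" outputs in the comments are assumptions about
ordinary compilers of each kind (say, a small-model 16-bit MS-DOS
compiler versus a 32-bit 386 compiler), not guarantees about any
particular product.

    #include <stdio.h>

    int main(void)
    {
        printf("int     : %u bytes\n", (unsigned)sizeof(int));
        printf("long    : %u bytes\n", (unsigned)sizeof(long));
        printf("pointer : %u bytes\n", (unsigned)sizeof(char *));
        /* Typical 16-bit compiler (small model): 2, 4, 2
           Typical 32-bit 386 compiler:           4, 4, 4 */
        return 0;
    }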

In the end you can do anything with any chip: in a theoretical sense
you can show that they are all equivalent, and Unix -- which handles
both multiple processes and multiple users -- has been implemented on
all of them.  However, some things are so difficult that nobody is
really going to bother doing them.

It seems that the lack of memory mapping on the 8086 makes it
unlikely that any new multi-process operating systems will be written
for it.  Not that doing so is impossible, though in my view a secure
multi-user system probably can't be done without protected memory.
But the 286 and 386 are enough better for this task that no one will
bother with the 8086.  Actually, the 286 and 386 memory mapping
schemes are fairly similar.  I don't think that saying the 286 can do
multi-tasking but not multi-user makes sense, except in the sense
that a 286 may not have enough CPU power to really support more than
one user.  The 286 and 386 seem very similar in their ability to
support advanced operating systems.  (This is no doubt why OS/2 is
being done for both.)

The main advantages of the 386 are the 32-bit registers and
instructions, and a contiguous address space larger than 64K.
Again, anything that can be done on the 386 can be done on the 286
eventually.  But it's enough easier to do big spreadsheets, editor
buffers (Gnu Emacs appears to be impractical to port to a 286), etc.,
that you're going to start seeing larger applications written to
support only the 386.  Programmers have enough trouble making things
work that they simply aren't going to go to extra trouble to tune
their programs to work on 16-bit machines very much longer.  This is
particularly true in the Unix community.  Most new Unix software these
days assumes a 32-bit machine.  Not that it couldn't be written for a
16-bit machine.  Often it could.  But taking software written on a
32-bit machine and converting it is at best a continual headache and
at worst very nearly impractical.  All that new stuff you're hearing
about from ATT and Sun will be available on the 386.  It's unclear to
me whether they will ever get around to doing it for the 286.  Under
OS/2 maybe this won't be the case.  386's are still new enough in the
MS-DOS community that most software is still written assuming 16-bit
machines.  But I have to believe that as 386's spread the MS-DOS and
OS/2 folks will eventually catch the 32-bit disease as well.

The reason that not all AT compatibles will run OS/2, by the way, has
nothing to do with the chip.  The same would be true of 386 machines,
except that since the 386 is fairly new, most machines using it are
designed with OS/2 in mind (or so the designers claim -- one wonders
how many of them will *really* run it).  The real problem is with
display adapters, I/O controllers, memory, etc.  MS-DOS uses the BIOS
to hide hardware dependencies.  So you can build a weird machine, and
as long as you put enough cruft into the BIOS to handle the hardware,
MS-DOS will never know.  Unfortunately, the original BIOS design isn't
well suited to a multi-process system.  So OS/2 (like Unix) bypasses
the BIOS and deals directly with the hardware.  (Of course they could
have designed a new BIOS that could support OS/2.  But since the BIOS
is in ROM, that would be a logistical nightmare.)  Thus if you have
hardware that is different from IBM's, the BIOS can no longer make up
for it.  It will still be possible to replace the low-level parts of
OS/2 with special software tuned for your machine's hardware, just as
the BIOS is.  The concern is that not all vendors may do that (indeed
they may not all be around to do it), and also that users may be
displeased at having to get a different version of OS/2 for each
machine from its own vendor, rather than being able to run the
standard IBM software on all their machines.
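
To show what "bypassing the BIOS" means in practice, here is a rough
sketch, assuming a real-mode MS-DOS C compiler whose <dos.h> provides
int86() and far pointers (the old Microsoft and Borland compilers
did; the details vary by vendor).  The first routine goes through the
BIOS and works on whatever display the BIOS knows about; the second
pokes the color text buffer directly, which is what an OS that
bypasses the BIOS must in effect do for itself.

    #include <dos.h>

    /* Through the BIOS: INT 10h, function 0Eh (teletype output). */
    void bios_putc(char c)
    {
        union REGS r;
        r.h.ah = 0x0E;              /* teletype output            */
        r.h.al = c;                 /* character to print         */
        r.h.bh = 0;                 /* display page 0             */
        int86(0x10, &r, &r);
    }

    /* Around the BIOS: write straight into the color text-mode
       frame buffer at segment B800h.  Fast, but it only works if
       the adapter really is where we assume it is.               */
    void direct_putc(int row, int col, char c)
    {
        char far *screen = (char far *) 0xB8000000L;  /* B800:0000 */
        screen[(row * 80 + col) * 2]     = c;         /* character */
        screen[(row * 80 + col) * 2 + 1] = 0x07;      /* attribute */
    }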

To be more specific, here's what the 386 buys you for an advanced
operating system (Unix, and OS/2 if you choose to classify it as an
advanced operating system):

  - 32-bit registers and instructions.  This is most important for
	Unix, where most software was originally written for VAXes
	and other 32-bit machines, in C.  Many big Unix programs
	would require a fair amount of work to run on a 16-bit
	machine.  Compilers could compensate for this.  It's perfectly
	possible to declare a 32-bit integer in C on the 8086 and 286.
	The compiler automatically deals with the low-order and
	high-order halves (see the first sketch after this list).
	Clearly one could write a compiler where this was the default,
	and you'd then have compatibility with the VAX etc.  However
	the reports are that the performance penalty is unacceptable.
	So far I've talked only about porting Unix code from the VAX,
	where the problem comes from the laziness of Unix programmers
	in declaring variables.  To the extent that you have
	applications that genuinely need data wider than 16 bits, you
	are of course going to have the same performance problem
	whether you are using Unix or not and whether you're lazy or
	not.  This obviously depends upon the application, but there
	are many applications for which moving from 16 to 32 bits
	makes a significant performance difference.

  - large contiguous address space.  On the 8086 and 286, you can
	only increment a pointer through an array that is 64K bytes
	long.  You can have larger data structures, but you have to
	adjust a separate segment register at least once every
	64K bytes.  This isn't always crucial.  Originally I thought
	a reasonable Lisp would be impossible to do for an 8086 or
	286, but on further thought it's clear that it is not a
	problem, because no single Lisp object would need to be
	bigger than 64K.  But big Fortran arrays are painful.
	Of course compilers can compensate for this also, and
	some do (that's what the huge model is about; see the second
	sketch after this list).  But again, the performance penalty
	is heavy, and the official ATT 286 Unix does not support the
	huge model.  (It may be that Xenix does, however.)  It appears
	that the huge model is harder to implement in protected mode
	than in real mode, by the way, which may explain why it is
	fairly common in MS-DOS C compilers but not in System V.
	Again, at the bare minimum, the 8086 and 286 cause portability
	problems for code that is written on machines without these
	constraints.  For example, Gnu Emacs reads your whole file
	into a contiguous buffer.  Unless you want to be limited to
	editing 64K files, you need a machine that can treat a large
	area of memory as a contiguous array.  As far as we can
	determine, no existing combination of operating system and
	compiler will allow Gnu Emacs to be ported to an 8086 or 286.
	Of course it could have been written to break your file into
	smaller pieces and use a linked list.  But who is going to
	adopt more complicated programming practices just to fit old
	chips?  Programming is already hard enough without operating
	with one hand tied behind your back.
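
Here is the first sketch promised above: what it looks like to ask a
16-bit chip for 32-bit arithmetic in C.  The comments describe the
general shape of the code an 8086 or 286 compiler has to emit, not
the output of any particular compiler.

    long total;              /* 32 bits, even on an 8086 or 286    */
    int  count;              /* 16 bits, the chip's natural size   */

    void tally(long amount)
    {
        total += amount;     /* roughly ADD (low half) followed by
                                ADC (high half): two 16-bit
                                operations plus register shuffling,
                                on every single access             */
        count += 1;          /* a single 16-bit ADD; cheap         */
    }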

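And the second sketch: getting past 64K with the non-standard "far"
and "huge" pointer keywords of the MS-DOS C compilers (Microsoft's
and Borland's both provide something along these lines; the exact
semantics here are an assumption about those compilers, not part of
the C language).

    /* With a plain far pointer, arithmetic only touches the 16-bit
       offset, so walking past 64K silently wraps around within the
       same segment.                                               */
    long sum_far(unsigned char far *p, long n)
    {
        long s = 0, i;
        for (i = 0; i < n; i++)
            s += p[i];       /* goes wrong once n exceeds 65536    */
        return s;
    }

    /* A huge pointer is renormalized as it is advanced, so it can
       march straight through an object bigger than 64K -- at the
       cost of extra code on every access, which is the performance
       penalty mentioned above.                                    */
    long sum_huge(unsigned char huge *p, long n)
    {
        long s = 0, i;
        for (i = 0; i < n; i++)
            s += p[i];       /* correct, but noticeably slower     */
        return s;
    }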

Here's my sense of what is unreasonable on various
types of machines:
  8086 - addressing more than 1M of memory, except via bank-switching.
	This makes a "big" OS such as the more recent versions of Unix
	or OS/2 impractical.  Not impossible.  If there were some
	law of nature that made better machines impossible to build,
	I'm sure there would be 10Mbyte 8086's running huge OS's, with
	the compilers hiding the bank-switching.  But with the 286 and
	386 available, that's never going to happen.  The other thing
	that makes multi-process and multi-user systems problematic
	(and really there isn't much distinction: in both cases you
	want clean protection of multiple address spaces) is that
	there is no memory-mapping hardware.  That could be supplied
	externally, as it is for all 680x0's below the 68030, but
	nobody seems to do that at the low end, and nobody is going
	to build a high-end 8086.