Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!mailrus!uwmcsd1!marque!uunet!sugar!peter
From: peter@sugar.uu.net (Peter da Silva)
Newsgroups: comp.sys.atari.st
Subject: Re: Another great quote from Mr. Good
Message-ID: <2474@sugar.uu.net>
Date: 14 Aug 88 13:07:26 GMT
References: <3308@druhi.ATT.COM> <1104@atari.UUCP> <364@bdt.UUCP> <1045@scolex>
Distribution: comp
Organization: Sugar Land Unix - Houston, TX
Lines: 61

In article <1045@scolex>, kurth@sco.COM (Kurt Hutchison) writes:
> I for one agree with the argument that releasing a new OS that breaks
> old software is a bad idea.  Theological arguments about the proper
> behavior of programs always take a back seat to compatibility concerns.

And this is why operating systems have a limited life. Eventually the bugs
left in to keep old software from breaking accumulate to the point where
that software isn't worth keeping. This is one of the reasons UNIX has so
far been so successful as a third-party O/S: with no binary standard,
there's been no old software to keep running.

Now that Xenix and COFF have become binary standards, I expect this to change.
In fact, it's beginning to... look how big the new SV/Xenix/BSD merge is
going to be. Look how big SV already is.

> Remember the original PC clone wars?  I worked for a hardware company then
> that made a PC clone which they thought was "better" than the PC.  It didn't
> sell at all because it wasn't exactly compatible, you had to buy a special
> version of DOS for it and "Ill-behaved" programs didn't run.

But when *IBM* came out with a better (and slightly non-standard) PC, it
sold just fine, despite the fact that ill-behaved programs didn't run on
it. They've done it twice now, first with the AT and now with the PS/2
line.

> Ill-behaved
> programs are the rule rather than the exception, most programs that perform
> really well were Ill-Behaved.

This is because the operating system didn't support things the programs
needed, like fast text. I thought one of the reasons for going with GEM
was to keep stuff like this from happening.

There's nothing about your cheap Malloc that required programs to make
assumptions about how memory was allocated. It could have been used like
"sbrk" in UNIX as the low-level allocator... just split memory up into 7
or 8 chunks, and Malloc them whenever the existing malloc pool got filled
up.
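
Something like this sketch is all it would take. It's untested, and
"os_malloc" here is a made-up stand-in for the low-level OS call (a real
allocator would keep a free list instead of throwing away the tail of
each chunk):

    #include <stddef.h>

    extern void *os_malloc(long n);   /* stand-in: few, large blocks */

    #define CHUNK 65536L

    static char *pool;                /* current chunk */
    static long  left;                /* bytes left in it */

    /* Grab a big chunk from the OS only when the pool runs dry,
       the way a UNIX malloc calls sbrk. */
    void *my_malloc(long n)
    {
        n = (n + 3) & ~3L;            /* keep everything word-aligned */
        if (n > left) {
            if (n > CHUNK || (pool = os_malloc(CHUNK)) == 0)
                return 0;             /* too big, or out of chunks */
            left = CHUNK;
        }
        pool += n;
        left -= n;
        return pool - n;
    }

The program never sees more than a handful of os_malloc blocks, so it
can't go making assumptions about how the OS hands them out.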

This reminds me of the Bourne shell on UNIX... it assumed that you could
always restart instructions after a segmentation violation, so it didn't
bother allocating memory until it got a SIGSEGV. It broke when UNIX was
ported to the 68000, which didn't save enough state on a fault to restart
the instruction. A very close parallel: it was making assumptions about
how memory management worked. But since this wasn't just a case of binary
compatibility, it was fixed (the alternative would have been to put two
68000s in every UNIX box, and use one to restart instructions...).
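
For the curious, the trick looks roughly like this. This is a sketch,
not the real shell source, and it leans on sigaction() so the handler
stays installed; the whole thing stands or falls on the hardware
re-running the faulting instruction when the handler returns:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK 4096

    /* Extend the data segment a chunk at a time, but only when a
       reference actually runs off the end of it. */
    static void grow(int sig)
    {
        (void)sig;
        if (sbrk(CHUNK) == (void *)-1)
            _exit(1);                 /* really out of memory */
    }

    int main(void)
    {
        struct sigaction sa;
        char *p;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = grow;
        sigaction(SIGSEGV, &sa, 0);

        p = sbrk(0);                  /* current end of data segment */
        p[2 * CHUNK] = 'x';           /* faults; grow() runs, the store
                                         restarts, and after a couple
                                         of rounds it succeeds */
        return p[2 * CHUNK] != 'x';
    }

On a 68000 the processor didn't stack enough state to restart that
store, which is exactly why the trick fell over.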

> While compatibility is not a universal truth, compatibility between OS
> releases is a good thing.  Would you be willing to wait six months for
> new releases of all of your software so that it would run again?

People have done this time and time again. They just kept using the old
version of the operating system or computer while waiting for the new one
to get up to speed.

> Kurt Hutchison - The Santa Cruz Operation - Software Engineer

How did you people get the Bourne shell working right on the '86?
-- 
		Peter da Silva  `-_-'  peter@sugar.uu.net
		 Have you hugged  U  your wolf today?