Standard, What standard??? [message #112732]
Message-ID: <143@idmi-cc.UUCP>
Date: Wed, 30-Jan-85 14:46:30 EST
Article-I.D.: idmi-cc.143
Posted: Wed Jan 30 14:46:30 1985
Date-Received: Sat, 2-Feb-85 14:03:24 EST
Distribution: net
Organization: I.D.M.I., Arlington, Va.
Lines: 38
Xref: watmath net.flame:8169 net.micro:9238 net.micro.pc:3251


	This is sort of a flame, though it's also an honest attempt
to get some support, one way or the other, for an argument on this
issue that has been going around my office for over a year.
	My problem is that every time IBM announces a 'new' product for
one of its PCs there is a flurry of activity in the media that centers
around the question "Is IBM defining a new standard in ...(whatever)?".
Time used to be that setting a standard referred to either a) a standard
of excellence or b) a standard for 'state of the art' (i.e., a
breakthrough of some sort).  My understanding of the IBM line of PCs is:
a) none of them have ever used anything but already existing technology,
b) none of them have been any more reliable than all but the worst of
their competitors, c) some of them (the PCjr for instance) have been a
good deal worse than other similarly priced products, and finally,
d) rarely has any of the software with IBM's name on it been 'state of
the art' (let alone a breakthrough on the 'leading edge').  The fact is
I can't recall ever having seen anything come from IBM that hadn't
already been available in equal or better form from somewhere else for
less money.
	Don't get me wrong, I like IBM.  I think they have had a big hand
in the popularization of computers.  I just don't think they have set
any standards in the computer world.  I am tired of hearing the
question, over and over again, "Is IBM setting a standard in ...?" when
what really is being asked is "Is IBM doing another thing in a mediocre
way that everyone else will be forced to accept, emulate, or improve on
because of the general public's low level of knowledge concerning
computers?"
	It is my feeling that to say IBM sets standards in the computer
industry is like saying Betty Crocker sets the standards for fine French
pastry.

-----
"This message will self destruct in 5 seconds"

The views expressed herein are probably not worth much to anyone and
therefore should not be mistaken to represent I.D.M.I., its officers,
or any other people heretofore or hereafter associated with said company.

			Andrew R. Scholnick
			Information Design and Management Inc., Alexandria, Va.
		{rlgvax,prcrs}!idmi-cc!andrew
Re: Standard, What standard??? [message #114129 is a reply to message #112732]
Message-ID: <810@sjuvax.UUCP>
Date: Wed, 6-Feb-85 03:57:15 EST
Article-I.D.: sjuvax.810
Posted: Wed Feb  6 03:57:15 1985
Date-Received: Fri, 8-Feb-85 00:47:35 EST
References: <143@idmi-cc.UUCP>
Distribution: net
Organization: Saint Josephs Univ. Phila., Pa.
Lines: 17
Xref: watmath net.flame:8233 net.micro:9287 net.micro.pc:3280

[Aren't you hungry...?

 >  My understanding of the IBM line of PCs is: a) none of them
 >  have ever used anything but already existing technology.
 >  
 >  			Andrew R. Scholnick
 >  			Information Design and Management Inc., Alexandria, Va.
 >  		{rlgvax,prcrs}!idmi-cc!andrew

Not only that, but they didn't even try to use it innovatively, nor even
efficiently.  The same is true of the PC-AT.

Segmented Architecture... AAAAAAAARRRRRRRRRRRRRRGGGGGGGGGGHHHHHHHH!

Jonathan S. Shapiro
Haverford College
..!allegra!sjuvax!jss
Re: Standard, What standard??? [message #114137 is a reply to message #112732]
Message-ID: <4814@ukc.UUCP>
Date: Fri, 1-Feb-85 17:45:20 EST
Article-I.D.: ukc.4814
Posted: Fri Feb  1 17:45:20 1985
Date-Received: Sat, 9-Feb-85 06:01:30 EST
References: <143@idmi-cc.UUCP>
Reply-To: mf1@ukc.UUCP (Michael Fischer)
Distribution: net
Organization: Computing Laboratory, U of Kent at Canterbury, UK
Lines: 13
Xref: watmath net.flame:8246 net.micro:9301 net.micro.pc:3287
Summary: 

<>
I have to agree that IBM seems to have introduced a conservative effect
into the marketplace, while at the same time increasing the popularity
of the micro.  I went to a technologically isolated area in Asia for two
years.  When I returned I noted many changes; the traditional culture
shock.  One thing that hadn't changed in those two years was micros.  I
came back expecting great marvels, and obsolescence of my own knowledge,
and instead found marketing expansion, but few new ideas.  I have very
mixed feelings about this, and must admit some disappointment.

Michael Fischer vax135!ukc!mf1
Re: Standard, What standard??? [message #115476 is a reply to message #112732]
Originally posted by: david@daisy.UUCP (David Schachter)
Message-ID: <56@daisy.UUCP>
Date: Wed, 20-Feb-85 23:19:22 EST
Article-I.D.: daisy.56
Posted: Wed Feb 20 23:19:22 1985
Date-Received: Wed, 27-Feb-85 09:19:59 EST
References: <143@idmi-cc.UUCP> <810@sjuvax.UUCP>
Reply-To: david@daisy.UUCP (David Schachter)
Distribution: net
Organization: Daisy Systems Corp., Mountain View, Ca
Lines: 30
Xref: utcs net.flame:8170 net.micro:9086 net.micro.pc:3298
Summary: 


Mr. Shapiro writes that IBM doesn't use new technology innovatively or
efficiently.  He closes with "Segmented Architecture...
AAAAAAAARRRRRRRRRRRRRRGGGGGGGGGGHHHHHHHH!"  I beg to differ.

The circuitry coupling the PC-AT bus to the PC-XT bus (to remain compatible)
is neither simple nor brilliant.  But it does accomplish the presumed design
goal: get the job done cheaply.  In this case, efficiency with respect to
cost.  The base concept, that of connecting the old and new busses to remain
compatible, is innovative, at least mildly.  Most companies would simply
say "sorry folks but this is a new generation.  Throw out your old hardware."
IBM didn't.  (They did the same thing with the IBM 360/370/30xy mainframes.
DEC did the same with the VAX.  Intel did the same with the 80x86.)  Note
that I am referring to hardware compatibility: the hardware interface to the
outside world is retained even though the hardware guts are radically dif-
ferent.  Compared with the rest of the micro-world, IBM's approach is
innovative.

Finally, although I am not a fan of segmentation a la Intel, I am compelled
to point out that my company has done quite a lot within the Intel archi-
tecture.  Our experience in writing complex Computer-Aided-Engineering
programs is that if you need segments > 64kB, you probably don't know
what you are doing: there exists a better algorithm to do what you want to
do.  This is not always true but it is true often enough that the Intel
architecture doesn't cause us much pain.  In summary, while segmentation
looks bad, it really doesn't hurt too much.
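
For readers who have never fought it, the architecture at issue works
like this: the 8086 forms a 20-bit physical address by shifting a 16-bit
segment value left four bits and adding a 16-bit offset, so no single
segment can span more than 64K.  A minimal C sketch of that computation
(illustrative only, not from any of these posts):

    /* 8086 real-mode addressing: physical = segment * 16 + offset.
       The offset is 16 bits, so one segment covers at most 65536 bytes. */
    #include <stdio.h>

    static unsigned long phys_addr(unsigned seg, unsigned off)
    {
        return ((unsigned long)(seg & 0xFFFFu) << 4) + (off & 0xFFFFu);
    }

    int main(void)
    {
        printf("%05lX\n", phys_addr(0xB800, 0x0000)); /* B8000          */
        printf("%05lX\n", phys_addr(0x1234, 0xFFFF)); /* 2233F: seg end */
        return 0;
    }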

(I have no connection with Intel or its competitors.  Nobody likes me.
The opinions expressed herein are mine, not those of my company or its
employees.)  {Perfection is temporary.}
Re: Standard, What standard??? [message #115480 is a reply to message #112732]
Message-ID: <287@cmu-cs-k.ARPA>
Date: Mon, 25-Feb-85 18:45:13 EST
Article-I.D.: cmu-cs-k.287
Posted: Mon Feb 25 18:45:13 1985
Date-Received: Fri, 1-Mar-85 07:38:20 EST
References: <143@idmi-cc.UUCP> <810@sjuvax.UUCP>, <56@daisy.UUCP>
Organization: Carnegie-Mellon University, CS/RI
Lines: 21
Xref: watmath net.micro:9532 net.micro.pc:3414


Defending the 80x86's segmented architecture, Mr. Schachter writes:

 >  Finally, although I am not a fan of segmentation a la Intel, I am compelled
 >  to point out that my company has done quite a lot within the Intel archi-
 >  tecture.  Our experience in writing complex Computer-Aided-Engineering
 >  programs is that if you need segments >  64kB, you probably don't know
 >  what you are doing: there exists a better algorithm to do what you want to
 >  do.  This is not always true but it is true often enough that the Intel
 >  architecture doesn't cause us much pain.  In summary, while segmentation
 >  looks bad, it really doesn't hurt too much.

Using said software (on Daisy's 80286-based CAD workstations), I find
the opposite to be true: segmented addressing with a 16-bit limit is a
royal pain in the neck!  I ran into quite a few problems that were
directly related to the fact that some table in some data structure had
to fit into 64K bytes.  While the CAD software itself is reasonable, I
wished more than once that they had used a 68K or 16/32K processor.
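
The sort of limit Mr. Nowatzyk describes falls straight out of the
16-bit offset: any one table must fit in 64K bytes, so its capacity is
capped by its record size.  A hypothetical C sketch (the record layout
is invented purely for illustration):

    /* With 16-bit offsets, a single table lives in one 64K segment,
       so it can hold at most 65536 / sizeof(record) entries. */
    #include <stdio.h>

    struct node {                   /* hypothetical 8-byte CAD record */
        unsigned short id, x, y, flags;
    };

    int main(void)
    {
        printf("max entries per segment: %lu\n",
               (unsigned long)(65536UL / sizeof(struct node)));
        return 0;
    }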

  Andreas.Nowatzyk              ARPA:   agn@cmu-cs-k.ARPA
                              USENET:   ...!seismo!cmu-cs-k!agn
Re: Standard, What standard??? [message #117293 is a reply to message #112732]
Originally posted by: david@daisy.UUCP (David Schachter)
Message-ID: <77@daisy.UUCP>
Date: Wed, 27-Feb-85 22:49:19 EST
Article-I.D.: daisy.77
Posted: Wed Feb 27 22:49:19 1985
Date-Received: Mon, 4-Mar-85 04:58:03 EST
References: <143@idmi-cc.UUCP> <810@sjuvax.UUCP> <56@daisy.UUCP> <287@cmu-cs-k.ARPA>
Reply-To: david@daisy.UUCP (David Schachter)
Organization: Daisy Systems Corp., Mountain View, Ca
Lines: 28
Xref: watmath net.micro:9575 net.micro.pc:3430
Summary: 


Mr. Nowatzyk of Carnegie Mellon states that 64K segmentation limits have
caused him problems in using 80286 software on our workstations.  If he is
running very large designs through our older software, this can happen.
This has been corrected in newer releases in those places where it has
caused problems.  (When we designed the software, we designed it with what
we thought were generous safety margins.  Our customers promptly used the
increased efficiency of computer aided engineering to do much larger designs
than before!  Parkinson's law strikes again.)

All of the newer software, particularly in the physical layout tools and
in the hardware accelerator realm, has taken advantage of what we learned
in doing the older software.  (That's what I meant in my earlier posting
when I used the term "experience.")  We learned, in short, how to design
our code to run in the manner intended by the designers of the CPU.  If
you want to get maximum performance on a CPU you didn't design, this is
always a requirement, be it an NS32000, an MC68000, an 80286, or a PDP-8.

In our experience writing CAE software, in the rare cases where 64K segmentation
is a problem, it usually means that we don't know what we are doing yet.  There
is almost always a better algorithm that we haven't discovered yet, one which
uses smaller data structures >faster<.

Large address spaces are convenient.  They are not essential.  Moreover, their
convenience can rob you of the incentive to get maximum performance.  The
Intel architecture is a dark cloud with a silver lining: the need to keep within
the small address space frequently causes us to find solutions that are smaller
and faster, helping us meet our performance goals.
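
One concrete way that "silver lining" shows up is in shrinking the links
of a data structure.  A hedged C sketch (an invented example, not
Daisy's actual code) of trading 4-byte pointers for 2-byte indices,
which both shrinks each cell and keeps the whole pool inside a single
segment:

    /* Cells chained by 16-bit indices into a static pool instead of
       by pointers: 4 bytes per cell, 16000 * 4 = 64000 bytes < 64K,
       and every access stays within cheap 16-bit address arithmetic. */
    #define NIL   0xFFFFu
    #define CELLS 16000

    struct cell { unsigned short next; short val; };
    static struct cell pool[CELLS];

    static long sum_list(unsigned short head)
    {
        long s = 0;
        unsigned short i;
        for (i = head; i != NIL; i = pool[i].next)
            s += pool[i].val;
        return s;
    }
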
Re: Standard, What standard??? [message #117300 is a reply to message #112732]
Message-ID: <1068@watdcsu.UUCP>
Date: Mon, 4-Mar-85 13:59:57 EST
Article-I.D.: watdcsu.1068
Posted: Mon Mar  4 13:59:57 1985
Date-Received: Tue, 5-Mar-85 01:53:04 EST
References: <143@idmi-cc.UUCP> <810@sjuvax.UUCP> <56@daisy.UUCP> <287@cmu-cs-k.ARPA> <77@daisy.UUCP>
Reply-To: rsellens@watdcsu.UUCP (Rick Sellens - Mech. Eng.)
Organization: U of Waterloo, Ontario
Lines: 53
Xref: watmath net.micro:9586 net.micro.pc:3437
Summary: 

In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
 > 
 > In our experience writing CAE software, in the rare cases where 64K segmentation
 > is a problem, it usually means that we don't know what we are doing yet.  There
 > is almost always a better algorithm that we haven't discovered yet, one which
 > uses smaller data structures >faster<.
 > 
 > Large address spaces are convenient.  They are not essential.  Moreover, their
 > convenience can rob you of the incentive to get maximum performance.  The
 > Intel architecture is a dark cloud with a silver lining: the need to keep within
 > the small address space frequently causes us to find solutions that are smaller
 > and faster, helping us meet our performance goals.


I understand this to mean that it is desirable to have arbitrary restrictions
imposed on your software development by a hardware design. (By arbitrary I
mean that the restriction, in this case 64K addressable by 16 bits, has
nothing to do with the application, but is dictated by the hardware.)

I submit that:
    1. Small efficient algorithms can be implemented with equal ease in 
       any address space larger than the algorithm.
    2. Larger algorithms are often difficult to implement in small address
       spaces.
    3. Larger address spaces require larger addresses, which in turn may
       give larger overheads in the address arithmetic.

On this basis I feel that the only good thing about the 64K maximum
segment size is that it keeps address arithmetic within a segment down
to the 16-bit capabilities of the 8088/8086 processors.  Offsetting this
advantage is the sometimes significant disadvantage that larger
algorithms and data structures are difficult to implement.  With the
coming of 32-bit systems at relatively low prices, the advantage of a
small maximum segment size will go away.
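
The address-arithmetic overhead in point 3 is concrete on the 8086:
stepping a 16-bit near pointer is a single ADD, while stepping a full
segment:offset address means 32-bit arithmetic on a 16-bit ALU (an ADD
plus an ADC for the carry) and a segment-register load before the next
access.  A rough C sketch of the two costs (illustrative only):

    /* Within one segment: a single 16-bit add per step. */
    unsigned short next_near(unsigned short off) { return off + 2u; }

    /* Across segments: double-width arithmetic on a 16-bit machine,
       i.e. add-with-carry, plus reloading a segment register on use. */
    unsigned long next_far(unsigned long addr) { return addr + 2ul; }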

In any case, there are only two valid incentives for increasing the speed
of a piece of software.  The first is the price/performance incentive.
Faster software *may* mean a significant reduction in hardware cost.
Without this reduction in hardware cost there is no incentive to achieve
"maximum performance" except where there is the second: the need to
accomplish a task in some fixed amount of real time.  Interactive tasks
need to move quickly enough to keep up with the user.  Real time tasks
like data acquisition need to keep up with the real world.  In these
cases there is still some limit beyond which further improvement in
speed gives no improvement in the true performance of the task.

I hate to hear restrictive hardware designs defended as "good in themselves".
Hardware restrictions will always be with us, but they are never desirable.


Rick Sellens
UUCP:  watmath!watdcsu!rsellens
CSNET: rsellens%watdcsu@waterloo.csnet
ARPA:  rsellens%watdcsu%waterloo.csnet@csnet-relay.arpa
Re: Standard, What standard??? [message #118573 is a reply to message #112732]
Originally posted by: david@daisy.UUCP (David Schachter)
Article-I.D.: daisy.87
Posted: Sun Mar 10 01:11:16 1985
Date-Received: Wed, 20-Mar-85 04:59:11 EST
References: <143@idmi-cc.UUCP> <810@sjuvax.UUCP> <56@daisy.UUCP> <287@cmu-cs-k.ARPA> <77@daisy.UUCP> <1068@watdcsu.UUCP>
Reply-To: david@daisy.UUCP (David Schachter)
Organization: Daisy Systems Corp., Mountain View, Ca
Lines: 63
Xref: watmath net.micro:9756 net.micro.pc:3522

In article <1068@watdcsu.UUCP>, Rick Sellens writes:
 > In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
 >> 
 >> In our experience writing CAE software, in the rare cases where 64K
 >> segmentation is a problem, it usually means that we don't know what
 >> we are doing yet.  There is almost always a better algorithm that we
 >> haven't discovered yet, one which uses smaller data structures >faster<.
 >> 
 >> Large address spaces are convenient.  They are not essential.  Moreover, their
 >> convenience can rob you of the incentive to get maximum performance.  The
 >> Intel architecture is a dark cloud with a silver lining: the need to keep 
 >> within the small address space frequently causes us to find solutions that are
 >> smaller and faster, helping us meet our performance goals.
 > 
 > 
 > I understand this to mean that it is desirable to have arbitrary restrictions
 > imposed on your software development by a hardware design. (By arbitrary I
 > mean that the restriction, in this case 64K addressable by 16 bits, has
 > nothing to do with the application, but is dictated by the hardware.)
... (omitted text) ...
 > 
 > I hate to hear restrictive hardware designs defended as "good in themselves".
 > Hardware restrictions will always be with us, but they are never desirable.

Bosh and twaddle, Mr. Sellens.  Normally, I would assume that my posting was
unclear.  In this case, I believe it was clear and you mis-interpreted it.
Small address spaces are not good, in and of themselves.  But they force you
to find smaller algorithms which often run faster as well.  I don't know why
smaller algorithms >tend< to run faster; I'm not a philosopher.

In the applications I write, CAE programming, the small address space of the
miserable Intel architecture does not often cause pains.  When it does, it
is usually because the algorithm stinks.  The effort to find an algorithm
which uses less space often produces, as a nice side effect, a program that
runs faster.

Mr. Sellens claims that 'fast enough' is often sufficient, and he would
be correct if he were talking about a single-job CPU.  But in the real
world, systems frequently run multiple jobs.  Any spare cycles left over
by a program that runs 'too fast' are available for other programs.

The Intel architecture provides the ability to write very fast programs.  It
provides the ability to write very small programs.  If you want to provide the
best price-performance ratio for your customers, the Intel architecture can be
a good choice.  If your only goal is to get something out the door, other
architectures are better.

Mr. Sellens also states that with the coming availability of 32 bit micro-
processors, the speed advantage of a processor that uses 16 bits as the
native object size will disappear.  (The argument is that if you have a
16 bit bus, you don't want to deal with 32 bit quantities when 16 bits will
do.)  Mr. Sellens is right.  SOME DAY, 32 bit machines will be available
in production quantity.  But they are not available now.  Our customers
don't want to wait a year or two.  They want solutions now.

Architectural chauvinism helps no one.  I don't like the Intel
architecture.  But it is not the swamp that others make it out to be.

[The opinions expressed above are my own and not necessarily those of Daisy
Systems Corporation, its employees, or subsidiaries.  If anyone else would
like these opinions, they are available for $40 each, $75 for two.]
{Eight foot four, mouth that roars, pitch storm troopers out the door, has
anybody seen my Wookie?}