Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site watdcsu.UUCP
Path: utzoo!watmath!watdcsu!rsellens
From: rsellens@watdcsu.UUCP (Rick Sellens - Mech. Eng.)
Newsgroups: net.micro,net.micro.pc
Subject: Re: Standard, What standard???
Message-ID: <1068@watdcsu.UUCP>
Date: Mon, 4-Mar-85 13:59:57 EST
Article-I.D.: watdcsu.1068
Posted: Mon Mar  4 13:59:57 1985
Date-Received: Tue, 5-Mar-85 01:53:04 EST
References: <143@idmi-cc.UUCP> <810@sjuvax.UUCP> <56@daisy.UUCP> <287@cmu-cs-k.ARPA> <77@daisy.UUCP>
Reply-To: rsellens@watdcsu.UUCP (Rick Sellens - Mech. Eng.)
Organization: U of Waterloo, Ontario
Lines: 53
Xref: watmath net.micro:9586 net.micro.pc:3437
Summary: 

In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
>
>In our experience writing CAE software, in the rare cases where 64K segmentation
>is a problem, it usually means that we don't know what we are doing yet.  There
>is almost always a better algorithm that we haven't discovered yet, one which
>uses smaller data structures >faster<.
>
>Large address spaces are convenient.  They are not essential.  Moreover, their
>convenience can rob you of the incentive to get maximum performance.  The
>Intel architecture is a dark cloud with a silver lining: the need to keep within
>the small address space frequently causes us to find solutions that are smaller
>and faster, helping us meet our performance goals.


I understand this to mean that it is desirable to have arbitrary restrictions
imposed on your software development by a hardware design. (By arbitrary I
mean that the restriction, in this case 64K addressable by 16 bits, has
nothing to do with the application, but is dictated by the hardware.)

I submit that:
    1. Small efficient algorithms can be implemented with equal ease in 
       any address space larger than the algorithm.
    2. Larger algorithms are often difficult to implement in small address
       spaces.
    3. Larger address spaces require larger addresses, which in turn may
       impose greater overhead in the address arithmetic, as the sketches
       below illustrate.
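
As a concrete illustration of point 3, here is a minimal sketch in C (the
names are mine, nothing from the article above). Within a single 64K
segment the address of an element is one 16-bit multiply and add, which is
exactly the arithmetic the 8088/8086 does natively:

    /* Address of 2-byte element i within one 64K segment: a single
       16-bit multiply and add, wrapping at 64K by definition. */
    unsigned short elem_offset(unsigned short base, unsigned short i)
    {
        return base + 2 * i;
    }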

On this basis I feel that the only good thing about the 64K maximum segment
size is that it keeps address arithmetic within a segment down to the 16-bit
capabilities of the 8088/8086 processors. Offsetting this advantage is the
sometimes significant disadvantage that larger algorithms and data structures
are difficult to implement. With the coming of 32-bit systems at relatively
low prices, the advantage of a small maximum segment size will go away.
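
By way of contrast, here is a rough sketch (again in C, with invented names)
of the arithmetic needed once a data structure outgrows one segment: every
step must carry a segment:offset pair and renormalize it, much as the "huge"
pointer schemes in 8086 C compilers did, while on a flat 32-bit machine the
same step is a single add.

    /* A segment:offset pair; on the 8086 the 20-bit physical
       address is segment * 16 + offset. */
    struct far_ptr { unsigned short seg, off; };

    /* Advance a far pointer by 'bytes', renormalizing so the offset
       stays small -- an extra shift, add, and mask on every step.
       On a flat 32-bit machine the same operation is one add. */
    struct far_ptr far_advance(struct far_ptr p, unsigned long bytes)
    {
        unsigned long linear = ((unsigned long)p.seg << 4) + p.off + bytes;

        p.seg = (unsigned short)(linear >> 4);
        p.off = (unsigned short)(linear & 0xF);
        return p;
    }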

In any case, there are only two valid incentives for increasing the speed of
a piece of software. The first is price/performance: faster software *may*
mean a significant reduction in hardware cost. The second is the need to
accomplish a task in some fixed amount of real time: interactive tasks need
to move quickly enough to keep up with the user, and real time tasks like
data acquisition need to keep up with the real world. Without one of these
incentives there is no reason to chase "maximum performance", and even in
these cases there is some limit beyond which further improvement in speed
gives no improvement in the true performance of the task.

I hate to hear restrictive hardware designs defended as "good in themselves".
Hardware restrictions will always be with us, but they are never desirable.


Rick Sellens
UUCP:  watmath!watdcsu!rsellens
CSNET: rsellens%watdcsu@waterloo.csnet
ARPA:  rsellens%watdcsu%waterloo.csnet@csnet-relay.arpa