Micros as number crunchers [message #393521] Sat, 18 April 2020 08:59
Originally posted by: Thomas Koenig

[F'up]

Wow.

Seems like somebody actually took the time and effort to do
some assembly version of Linpack routines on a few micros (C64,
BBC Micro, plus a few others) and see how fast they are, both in
their native Basic dialects and hand-coded assembly which used
the native floating point format.

http://eprints.maths.manchester.ac.uk/2029/1/Binder1.pdf

One thing that's also interesting from this is that the floating
point format of these machines was actually not bad - a 40 bit
word using a 32 bit mantissa actually gives much better roundoff
results than today's 32 bit single precision real variables.
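
To put numbers on that - a quick sketch in C, where the 32-bit mantissa
is the figure quoted above and IEEE single's 24 bits includes the
hidden bit:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Unit roundoff is about 2^-t for a t-bit binary significand. */
    double u_c64  = ldexp(1.0, -32);  /* 40-bit format, 32-bit mantissa  */
    double u_ieee = ldexp(1.0, -24);  /* IEEE binary32, incl. hidden bit */
    printf("C64-style float: u = %.3g (~%.1f decimal digits)\n",
           u_c64, -log10(u_c64));
    printf("IEEE binary32  : u = %.3g (~%.1f decimal digits)\n",
           u_ieee, -log10(u_ieee));
    return 0;
}

That works out to roughly 9.6 decimal digits against 7.2.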

Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
Re: Micros as number crunchers [message #393522 is a reply to message #393521] Sat, 18 April 2020 09:02
Originally posted by: Thomas Koenig

Thomas Koenig <tkoenig@netcologne.de> schrieb:
> Also interesting is the fact that a C64 was around a factor of 2000
> slower than a VAX 11/780 (with the C64 using optimized assembly).

Correction: That number was for BASIC. For optimized assembly,
it was around 330 times slower.
Re: Micros as number crunchers [message #393524 is a reply to message #393521] Sat, 18 April 2020 10:07
Originally posted by: J. Clarke

On Sat, 18 Apr 2020 12:59:47 -0000 (UTC), Thomas Koenig
<tkoenig@netcologne.de> wrote:

> [F'up]
>
> Wow.
>
> Seems like somebody actually took the time and effort to do
> some assembly version of Linpack routines on a few micros (C64,
> BBC Micro, plus a few others) and see how fast they are, both in
> their native Basic dialects and hand-coded assembly which used
> the native floating point format.
>
> http://eprints.maths.manchester.ac.uk/2029/1/Binder1.pdf
>
> One thing that's also interesting from this is that the floating
> point format of these machines was actually not bad - a 40 bit
> word using a 32 bit mantissa actually gives much better roundoff
> results than today's 32 bit single precision real variables.
>
> Also interesting is the fact that a C64 was around a factor of 2000
> slower than a VAX 11/780 (with the C64 using optimized assembly).
> Which computer gave you more flops for the buck I don't know
> because I don't know the price of a VAX at the time :-)

Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
Re: Micros as number crunchers [message #393527 is a reply to message #393521] Sat, 18 April 2020 13:16
Originally posted by: Ron Shepard

On 4/18/20 7:59 AM, Thomas Koenig wrote:
> Also interesting is the fact that a C64 was around a factor of 2000
> slower than a VAX 11/780 (with the C64 using optimized assembly).
> Which computer gave you more flops for the buck I don't know
> because I don't know the price of a VAX at the time :-)

This was an interesting time for microcomputers and numerical computing.
In 1985, there were several VAX models available, 780, 750, 730, and I
think the microvaxes were becoming available about that time. The
microvaxes cost about $10k and were maybe 2x or 3x slower than the
flagship 780 model, which cost about $100k-$200k.

I think a C64 cost about $500 at that time, so it was maybe 20x cheaper
than a microvax and ran the linpack code about 1000x slower. Maybe I'm
off a factor of two here or there, but those should be roughly correct.
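
A back-of-the-envelope check in C, using only the round numbers above
(prices and speed ratios as given, all subject to the same factor of
two):

#include <stdio.h>

int main(void) {
    /* Round numbers from the text above; all approximate. */
    double c64_price = 500.0, uvax_price = 10e3, vax780_price = 150e3;
    double uvax_speed   = 1.0;              /* take the microvax as the unit  */
    double c64_speed    = uvax_speed/1000;  /* ~1000x slower than a microvax  */
    double vax780_speed = uvax_speed*2.5;   /* microvax ~2-3x slower than 780 */
    double unit = uvax_speed / uvax_price;  /* microvax flops per buck        */

    printf("flops per buck (microvax = 1.0):\n");
    printf("  C64      %.3f\n", (c64_speed / c64_price) / unit);
    printf("  VAX 780  %.3f\n", (vax780_speed / vax780_price) / unit);
    return 0;
}

By those figures the microvax comes out about 50x ahead of the C64 and
about 6x ahead of the 780 on flops per buck.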

Of course there were other major differences than just the floating
point performance. The C64 had 16-bit addressing, the VAX had 32-bit
addressing, you could communicate over Ethernet with the VAX, you could
put "large" 300MB disk drives on the VAX, and so on.

Basically what happened was that the minicomputers in the late 1970s and
early 1980s downsized faster than the microcomputers upsized, so they
eventually just got squeezed out of the market regarding floating point
computation.

In the early 1980s you could also buy FPS attached processors for the
VAX. The ones I used cost about $100k and ran about 50x the speed for
the linpack benchmark. These were word addressable 64-bit floating point
machines (physical memory, not virtual, no time sharing). The cross
compiler ran on the VAX front end, then offloaded the execution onto the
array processor. This was probably the most cost effective way to
compute in the early 1980s.

Then in the late 1980s came all of the RISC microprocessors (MIPS, SPARC,
RS6000, etc.) which reduced computing costs by about another factor of
10. These were ganged together to make parallel machines, and that was
probably the most cost effective way to compute throughout the 1990s.

Then by the early 2000s, Intel/AMD microcomputers began to catch up in
speed, and they were ganged together to form parallel machines. That is
pretty much where we are today, 20 years later, with the twist that now
there are multiple cores per chip, and they have vector engines and
"graphical" coprocessors to offload the floating point computations.

The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing. They downsized,
taking their PC and Macintosh applications with them, and eventually
just displaced the microprocessors. These are mostly just appliances
now, not programming devices.

That is my perspective of what happened to the microprocessor computing
effort in the late 1970s. Some of the problems it encountered were
technical, and some were based on market forces.

BTW, the author Nicholas Higham is still today one of the most respected
numerical analysts.

https://en.wikipedia.org/wiki/Nicholas_Higham

$.02 -Ron Shepard
Re: Micros as number crunchers [message #393528 is a reply to message #393524] Sat, 18 April 2020 13:30
Quadibloc
On Saturday, April 18, 2020 at 8:07:54 AM UTC-6, J. Clarke wrote:

> Just a note but any decent machine today has 64-bit floating point,
> and any Intel later than the 386 has 80-bit available.

There _was_ the 486 SX which skipped the hardware floating-point.

John Savard
Re: Micros as number crunchers [message #393529 is a reply to message #393528] Sat, 18 April 2020 14:10
Gordon Henderson
In article <d5103995-2455-4830-be6a-fffb308cdabd@googlegroups.com>,
Quadibloc <jsavard@ecn.ab.ca> wrote:
> On Saturday, April 18, 2020 at 8:07:54 AM UTC-6, J. Clarke wrote:
>
>> Just a note but any decent machine today has 64-bit floating point,
>> and any Intel later than the 386 has 80-bit available.
>
> There _was_ the 486 SX which skipped the hardware floating-point.

SX -> Sux, DX -> Delux, as I recall..

-Gordon
Re: Micros as number crunchers [message #393532 is a reply to message #393529] Sat, 18 April 2020 14:51
Originally posted by: J. Clarke

On Sat, 18 Apr 2020 18:10:44 -0000 (UTC), Gordon Henderson
<gordon+usenet@drogon.net> wrote:

> In article <d5103995-2455-4830-be6a-fffb308cdabd@googlegroups.com>,
> Quadibloc <jsavard@ecn.ab.ca> wrote:
>> On Saturday, April 18, 2020 at 8:07:54 AM UTC-6, J. Clarke wrote:
>>
>>> Just a note but any decent machine today has 64-bit floating point,
>>> and any Intel later than the 386 has 80-bit available.
>>
>> There _was_ the 486 SX which skipped the hardware floating-point.
>
> SX -> Sux, DX -> Delux, as I recall..

IIRC that was an effort to make lemonade--they had a run that had a
defect in the floating point so they cut whatever they needed to in
order to turn it off and stamped them "SX". Then it turned out that
there was a market for them so they started making them without the
floating point--IIRC the major market was laptops where every little
bit of power reduction helped.
Re: Micros as number crunchers [message #393538 is a reply to message #393527] Sat, 18 April 2020 16:01
scott
Ron Shepard <nospam@nowhere.org> writes:

> The RISC processors are still all around us, in our phones, cars,
> tablets, and so on, but they aren't doing floating point science, they
> are doing mostly networking and signal processing.

Two points:

1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,

https://developer.arm.com/docs/101726/latest/explore-the-scalable-vector-extension-sve/what-is-the-scalable-vector-extension

multiple privilege levels, scalable vector lengths up to 2048 bits, and
a reference manual containing 8128 pages (just for the architecture,
instruction set, and registers). Then there are tens of thousands
of additional pages of documentation for the I/OMMU, Interrupt Controller,
CoreSight (external debug/trace), et alia.
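
To make "scalable" concrete: the same loop runs unchanged at any
hardware vector length. A sketch using the ACLE SVE intrinsics
(assuming an SVE-capable toolchain, e.g. gcc -march=armv8-a+sve;
intrinsic names per Arm's ACLE):

#include <arm_sve.h>
#include <stdint.h>

/* y[i] += a*x[i], written without knowing the vector length: the same
   binary runs with 128-bit vectors or 2048-bit ones. */
void saxpy(float a, const float *x, float *y, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {        /* lanes of 32 bits  */
        svbool_t p = svwhilelt_b32_s64(i, n);          /* mask off the tail */
        svfloat32_t vx = svld1_f32(p, x + i);
        svfloat32_t vy = svld1_f32(p, y + i);
        vy = svmla_f32_x(p, vy, vx, svdup_n_f32(a));   /* vy + vx*a */
        svst1_f32(p, y + i, vy);
    }
}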

2) The latest ARMv8 processors from Marvell are used in supercomputers:

https://www.top500.org/system/179565
https://www.marvell.com/products/server-processors/thunderx2-arm-processors.html
https://www.hpcwire.com/2020/03/17/marvell-talks-up-thunderx3-and-arm-server-roadmap/

Fujitsu also has an ARM-based supercomputer:

https://www.nextplatform.com/2019/11/22/arm-supercomputer-captures-the-energy-efficiency-crown/
Re: Micros as number crunchers [message #393546 is a reply to message #393538] Sat, 18 April 2020 20:55
Peter Flass
Scott Lurndal <scott@slp53.sl.home> wrote:
> Ron Shepard <nospam@nowhere.org> writes:
>
>> The RISC processors are still all around us, in our phones, cars,
>> tablets, and so on, but they aren't doing floating point science, they
>> are doing mostly networking and signal processing.
>
> Two points:
>
> 1) I think the term RISC can no longer be applied to ARM processors;
> the latest ARMv8 processors have thousands of instructions, up to
> three distinct instruction sets,

So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?

--
Pete
Re: what's a RISC, Micros as number crunchers [message #393551 is a reply to message #393546] Sat, 18 April 2020 22:25
John Levine
In article <747758917.608935650.195739.peter_flass-yahoo.com@news.eternal-september.org>,
Peter Flass <peter_flass@yahoo.com> wrote:
> So what’s left in the RISC space (commercially)? Is RISC another good idea
> that flopped?

RISC was and is a perfectly good idea, to take stuff out of the
hardware that software can do better. Hardware is a lot more capable
than it was in the 1980s so stuff like simple instruction decoding
that mattered then doesn't now.

Sun released the SPARC design as the open source OpenSPARC in the
late 2000s. I gather it's still used in embedded designs.

MIPS processors are common in routers and switches.



--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: Micros as number crunchers [message #393552 is a reply to message #393546] Sat, 18 April 2020 23:41
Originally posted by: J. Clarke

On Sat, 18 Apr 2020 17:55:45 -0700, Peter Flass
<peter_flass@yahoo.com> wrote:

> Scott Lurndal <scott@slp53.sl.home> wrote:
>> Ron Shepard <nospam@nowhere.org> writes:
>>
>>> The RISC processors are still all around us, in our phones, cars,
>>> tablets, and so on, but they aren't doing floating point science, they
>>> are doing mostly networking and signal processing.
>>
>> Two points:
>>
>> 1) I think the term RISC can no longer be applied to ARM processors;
>> the latest ARMv8 processors have thousands of instructions, up to
>> three distinct instruction sets,
>
> So what’s left in the RISC space (commercially)? Is RISC another good idea
> that flopped?

There's ARM (Acorn RISC Machine), which is doing very well in cell
phones--they're also used in the Raspberry Pi.
Re: Micros as number crunchers [message #393554 is a reply to message #393546] Sun, 19 April 2020 03:00
Robin Vowels
On Sunday, April 19, 2020 at 10:55:46 AM UTC+10, Peter Flass wrote:
> Scott Lurndal <s....@slp53.sl.home> wrote:
>> Ron Shepard <nospam@nowhere.org> writes:
>>
>>> The RISC processors are still all around us, in our phones, cars,
>>> tablets, and so on, but they aren't doing floating point science, they
>>> are doing mostly networking and signal processing.
>>
>> Two points:
>>
>> 1) I think the term RISC can no longer be applied to ARM processors;
>> the latest ARMv8 processors have thousands of instructions, up to
>> three distinct instruction sets,
>
> So what’s left in the RISC space (commercially)? Is RISC another good idea
> that flopped?

RISC had its adherents.
However, creating a machine whose instructions did only very
basic stuff meant that you needed a high-speed channel supplying
the instructions to the CPU to be executed. At the same time,
the instructions being executed made memory references that
competed for access to memory. That's a sort of bottleneck.

On the other hand, instructions that do a lot of work
reduce the rate at which instructions need to be fed
from memory.

Taking the IBM System z as an example,
a load instruction to a register may require an index.
Given that the index is already held in a register,
the contents need to be multiplied by 2, or 4, or 8 [shifted]
and then used by the Load instruction.
Thus:
LR 3,2 copy the index from register 2.
SLL 3,2 a shift of 2 places left multiplies by 4.
L 5,0(3,6) An indexed load gets the value.

If the Load instruction achieved the shift of 2 places
as part of its execution, you need only write
L 5,0(2,6)

which saves loading and executing 2 instructions.
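
In C the scaling is implicit in the subscript; whether it costs extra
instructions is the ISA's problem. A sketch (compilers emit the
shift+load pair above when the load has no scaled-index form, and fold
it into one instruction on ISAs that do, such as x86 or ARM):

#include <stdint.h>

/* What the LR/SLL/L sequence computes: scale the index by the element
   size (4 bytes -> shift left 2), then do the indexed load. */
int32_t load_indexed(const int32_t *base, int32_t idx) {
    return base[idx];
}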
Re: Micros as number crunchers [message #393555 is a reply to message #393546] Sun, 19 April 2020 03:37
Quadibloc
On Saturday, April 18, 2020 at 6:55:46 PM UTC-6, Peter Flass wrote:
> Scott Lurndal <scott@slp53.sl.home> wrote:

>> 1) I think the term RISC can no longer be applied to ARM processors;
>> the latest ARMv8 processors have thousands of instructions, up to
>> three distinct instruction sets,

> So what’s left in the RISC space (commercially)? Is RISC another good idea
> that flopped?

It all depends on what you call RISC.

If by RISC you mean - all the instructions are exactly 32 bits long, only load
and store instructions reference memory - SPARC, PowerPC, MIPS, and the wildly
successful ARM are all RISC.

But _originally_, when the idea of RISC was first presented, one of the features
included in its definition was that *every instruction would execute in one
cycle*. This would kind of put paid to hardware divide, let alone hardware
floating-point.

That part flopped, but then I don't think it was a good idea in the first place.

So RISC was an idea... the good parts of which have succeeded. RISC is basically
the new standard; x86 and zArchitecture are still around today representing
CISC, but they're legacy architectures; new commercial designs are almost always
RISC.

Modern VLIW designs are RISC-like in many respects; RISC-V has variable-length
instructions, which means it isn't quite RISC, but given its name, it's still
intended to be mostly RISC-like.

John Savard
Re: Micros as number crunchers [message #393556 is a reply to message #393546] Sun, 19 April 2020 04:31
Jorgen Grahn
On Sun, 2020-04-19, Peter Flass wrote:
> Scott Lurndal <scott@slp53.sl.home> wrote:
>> Ron Shepard <nospam@nowhere.org> writes:
>>
>>> The RISC processors are still all around us, in our phones, cars,
>>> tablets, and so on, but they aren't doing floating point science, they
>>> are doing mostly networking and signal processing.
>>
>> Two points:
>>
>> 1) I think the term RISC can no longer be applied to ARM processors;
>> the latest ARMv8 processors have thousands of instructions, up to
>> three distinct instruction sets,
>
> So what’s left in the RISC space (commercially)? Is RISC another
> good idea that flopped?

I seem to see a trend of people assuming everything is Intel x86 with
features like little-endianness, unaligned accesses which work most of
the time, and tolerant SMP designs. Also it seems to me ARM tries to
emulate these properties.

Which leaves me a bit bitter, because I learned to write valid C code
when targeting intolerant processors like PowerPC and SPARC.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Re: Micros as number crunchers [message #393559 is a reply to message #393546] Sun, 19 April 2020 05:37
Originally posted by: Thomas Koenig

Peter Flass <peter_flass@yahoo.com> schrieb:

> So what’s left in the RISC space (commercially)? Is RISC another good idea
> that flopped?

The POWER architecture is still pretty RISCy, is being opened up
and is still very much in commercial use.

Apparently, they have an extremely large memory bandwidth for graphics
cards, which makes them good for supercomputers.

You can even buy a desktop PC or a mainboard, and not from IBM :-)
For people who are concerned about things like the Intel Management
Engine, that one is completely open source.
Re: Micros as number crunchers [message #393561 is a reply to message #393524] Sun, 19 April 2020 06:46
Originally posted by: Thomas Koenig

J Clarke <jclarke.873638@gmail.com> schrieb:

>> Also interesting is the fact that a C64 was around a factor of 2000
>> slower than a VAX 11/780 (with the C64 using optimized assembly).
>> Which computer gave you more flops for the buck I don't know
>> because I don't know the price of a VAX at the time :-)

> Just a note but any decent machine today has 64-bit floating point,
> and any Intel later than the 386 has 80-bit available.

Sure.

A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
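
A two-line illustration of the kind of roundoff that forces the move
to double (a sketch in C; the effect is a property of the format, not
the language):

#include <stdio.h>

int main(void) {
    /* 1.0 is under half an ulp of 1.0e8 in single precision (the ulp
       there is 8), so it is rounded clean away. */
    float  f = (1.0e8f + 1.0f) - 1.0e8f;
    double d = (1.0e8  + 1.0 ) - 1.0e8;
    printf("float : %g\n", f);   /* prints 0 */
    printf("double: %g\n", d);   /* prints 1 */
    return 0;
}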

There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.

Why are eight-bit bytes so common today?
Re: Micros as number crunchers [message #393562 is a reply to message #393561] Sun, 19 April 2020 08:14
Originally posted by: J. Clarke

On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
<tkoenig@netcologne.de> wrote:

> J Clarke <jclarke.873638@gmail.com> schrieb:
>
>>> Also interesting is the fact that a C64 was around a factor of 2000
>>> slower than a VAX 11/780 (with the C64 using optimized assembly).
>>> Which computer gave you more flops for the buck I don't know
>>> because I don't know the price of a VAX at the time :-)
>
>> Just a note but any decent machine today has 64-bit floating point,
>> and any Intel later than the 386 has 80-bit available.
>
> Sure.
>
> A problem with scientific calculation today is that people often
> cannot use 32 bit single precision (and even more often, they do
> not bother to try) because of severe roundoff errors.
>
> There was a good reason why the old IBM scientific machines had
> 36 bit floating point, but that was sacrificed on the altar of
> the all-round system, the 360.
>
> Why are eight-bit bytes so common today?

The major reason is that that is what is expected. With regard to
floating point most processors today with hardware floating point
implement the IEEE 754 format, which defines 32, 64, and 128 bit
formats. Note that Intel's original floating point was 80 bit and
that is still available on their processors.
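
float.h spells those widths out on any C implementation - a quick
sketch (long double mapping to the 80-bit x87 format is typical on x86
builds, but platform-dependent):

#include <stdio.h>
#include <float.h>

int main(void) {
    /* Significand widths in bits, including the hidden bit. */
    printf("float       : %2d bits, ~%2d decimal digits\n", FLT_MANT_DIG,  FLT_DIG);
    printf("double      : %2d bits, ~%2d decimal digits\n", DBL_MANT_DIG,  DBL_DIG);
    printf("long double : %2d bits, ~%2d decimal digits\n", LDBL_MANT_DIG, LDBL_DIG);
    return 0;
}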
Re: Micros as number crunchers [message #393597 is a reply to message #393562] Sun, 19 April 2020 09:04
Originally posted by: Thomas Koenig

J Clarke <jclarke.873638@gmail.com> schrieb:
> On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig

>> Why are eight-bit bytes so common today?
>
> The major reason is that that is what is expected.

Ok, but why has it become ubiquitous?

In mainframe times, there were lots of different architectures.
36-bit IBM scientific computers, 36-bit PDP-10, 36-bit Univac (I
just missed that at University), 60-bit CDCs, 48 bit Burroughs, ...
and the 32-bit, byte-oriented IBM /360.

Minicomputers? 18-bit PDP-1, 12-bit PDP-8, 16-bit PDP-11, 32-bit
VAX and 32-bit Eclipse, and 16-bit System/32 (if you can call that
a mini).

One-chip microprocessors? Starting with the 4-bit 4004, 8 bit 8008,
8080, Z80, 6502, 6800, 68000 etc...

Micros? Based on one-chip microprocessors or RISC designs, all of
which are 32-bit based, as far as I know.

So, we see a convergence towards 8-bit (or even powers of two)
over the years. What drove this?

Was it the Arpanet? Octet-based from the start, as far as I know.
Re: Micros as number crunchers [message #393599 is a reply to message #393597] Sun, 19 April 2020 10:21
Originally posted by: J. Clarke

On Sun, 19 Apr 2020 13:04:33 -0000 (UTC), Thomas Koenig
<tkoenig@netcologne.de> wrote:

> J Clarke <jclarke.873638@gmail.com> schrieb:
>> On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
>
>>> Why are eight-bit bytes so common today?
>>
>> The major reason is that that is what is expected.
>
> Ok, but why has it become ubiquitous?
>
> In mainframe times, there were lots of different architectures.
> 36-bit IBM scientific computers, 36-bit PDP-10, 36-bit Univac (I
> just missed that at University), 60-bit CDCs, 48 bit Burroughs, ...
> and the 32-bit, byte-oriented IBM /360.
>
> Minicomputers? 18-bit PDP-1, 12-bit PDP-8, 16-bit PDP-11, 32-bit
> VAX and 32-bit Eclipse, and 16-bit System/32 (if you can call that
> a mini).
>
> One-chip microprocessors? Starting with the 4-bit 4004, 8 bit 8008,
> 8080, Z80, 6502, 6800, 68000 etc...
>
> Micros? Based on one-chip microprocessors or RISC designs, all of
> which are 32-bit based, as far as I know.
>
> So, we see a convergence towards 8-bit (or even powers of two)
> over the years. What drove this?
>
> Was it the Arpanet? Octet-based from the start, as far as I know.

You're looking for a technological reason when I think the real reason
is more related to marketing and sales and accidents of history.

IBM wanted something with an 8-bit bus for the PC because they had a
stock of 8-bit glue chips that they wanted to use up. And whatever
got on the IBM PC was going to be dominant.

ARM had a similar requirement--their first target was a second
processor for a 6502 machine for the BBC. How they became dominant
in cell phones is not clear to me but I suspect their marketing model
is a big part of it--they don't make chips, they sell designs that can
be adjusted to fit customer needs or embedded in system-on-a-chip
designs.

I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits. To be cost competitive, chip manufacturers needed a large
market to amortize the cost of the fab--if they weren't Intel they
were pretty well locked out of the desktop PC market so the next
target was engineering workstations, which typically ran Unix, so they
had to have an architecture for which Unix was easily ported.

Other micros just didn't succeed.
Re: Micros as number crunchers [message #393601 is a reply to message #393599] Sun, 19 April 2020 10:50
Originally posted by: Douglas Miller

On Sunday, April 19, 2020 at 9:21:29 AM UTC-5, J. Clarke wrote:
> ...
>
> I suspect Unix had something to do with it--while Unix was originally
> developed on an 18 bit architecture, by the time it had been revised
> into a portable form it was pretty much locked into multiples of 8
> bits. To be cost competitive, chip manufacturers needed a large
> market to amortize the cost of the fab--if they weren't Intel they
> were pretty well locked out of the desktop PC market so the next
> target was engineering workstations, which typically ran Unix, so they
> had to have an architecture for which Unix was easily ported.
>
> Other micros just didn't succeed.

I don't have the same perspective on history. 8-bit was well established long before the IBM-PC. The industry was already coalescing on 8-bit in the early 70's. The reasons certain microprocessors "succeeded" and others did less well are varied, and not always based on "better technology".

Even mainframes, with core memory (50's and 60's), were often (always?) using 8-bit wide memory buses (although +1 for parity was usually necessary). Some just assigned a couple of bits to "punctuation" and so typically had data/address values that were a multiple of 6 bits.

I could be wrong, as I never worked on the architecture, but the PDP-11 may have had an 18-bit address width (originally 16, then 18 or 22 with MMU hardware); it was still a byte (8-bit) oriented machine, with 8-, 16- and 32-bit data.
Re: what's a RISC, Micros as number crunchers [message #393604 is a reply to message #393551] Sun, 19 April 2020 10:56
scott
John Levine <johnl@taugh.com> writes:
> In article <747758917.608935650.195739.peter_flass-yahoo.com@news.eternal-september.org>,
> Peter Flass <peter_flass@yahoo.com> wrote:
>> So what’s left in the RISC space (commercially)? Is RISC another good idea
>> that flopped?
>
> RISC was and is a perfectly good idea, to take stuff out of the
> hardware that software can do better. Hardware is a lot more capable
> than it was in the 1980s so stuff like simple instruction decoding
> that mattered then doesn't now.
>
> Sun released the SPARC design as the open source OpenSPARC in the
> late 2000s. I gather it's still used in embedded designs.
>
> MIPS processors are common in routers and switches.

Fewer and fewer each year, as ARM has been winning new designs.

Cavium started moving from MIPS to ARM in 2012.
Re: Micros as number crunchers [message #393605 is a reply to message #393552] Sun, 19 April 2020 10:59
scott
J. Clarke <jclarke.873638@gmail.com> writes:
> On Sat, 18 Apr 2020 17:55:45 -0700, Peter Flass
> <peter_flass@yahoo.com> wrote:
>
>> Scott Lurndal <scott@slp53.sl.home> wrote:
>>> Ron Shepard <nospam@nowhere.org> writes:
>>>
>>>> The RISC processors are still all around us, in our phones, cars,
>>>> tablets, and so on, but they aren't doing floating point science, they
>>>> are doing mostly networking and signal processing.
>>>
>>> Two points:
>>>
>>> 1) I think the term RISC can no longer be applied to ARM processors;
>>> the latest ARMv8 processors have thousands of instructions, up to
>>> three distinct instruction sets,
>>
>> So what’s left in the RISC space (commercially)? Is RISC another good idea
>> that flopped?
>
> There's ARM (Acorn RISC Machine), which is doing very well in cell
> phones--they're also used in the Raspberry Pi.

However, they can't really be considered 'reduced instruction set' any more,
even the armv7 (which has two complete instruction sets, a32 and t32), much
less the armv8 which is in most cell phones built in the last four years,
and in newer pi's (3 and higher) and hardocp, etc.
Re: Micros as number crunchers [message #393606 is a reply to message #393605] Sun, 19 April 2020 11:09
Originally posted by: Douglas Miller

On Sunday, April 19, 2020 at 9:59:37 AM UTC-5, Scott Lurndal wrote:
> ...
> However, they can't really be considered 'reduced instruction set' any more,
> even the armv7 (which has two complete instruction sets, a32 and t32), much
> less the armv8 which is in most cell phones built in the last four years,
> and in newer pi's (3 and higher) and hardocp, etc.

(Un)Like the rest of the world today, there's little room for extremism. RISC processors have incorporated complexity, CISC have incorporated things learned from RISC. Both have moved towards the center.
Re: Micros as number crunchers [message #393614 is a reply to message #393599] Sun, 19 April 2020 13:45
ted@loft.tnolan.com
In article <ndlo9fh0i33do3fq1df5jdc5oc30gerpi7@4ax.com>,
J. Clarke <jclarke.873638@gmail.com> wrote:
> On Sun, 19 Apr 2020 13:04:33 -0000 (UTC), Thomas Koenig
> <tkoenig@netcologne.de> wrote:
>
>> J Clarke <jclarke.873638@gmail.com> schrieb:
>>> On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
>>
>>>> Why are eight-bit bytes so common today?
>>>
>>> The major reason is that that is what is expected.
>>
>> Ok, but why has it become ubiquitous?
>>
>> In mainframe times, there were lots of different architectures.
>> 36-bit IBM scientific computers, 36-bit PDP-10, 36-bit Univac (I
>> just missed that at University), 60-bit CDCs, 48 bit Burroughs, ...
>> and the 32-bit, byte-oriented IBM /360.
>>
>> Minicomputers? 18-bit PDP-1, 12-bit PDP-8, 16-bit PDP-11, 32-bit
>> VAX and 32-bit Eclipse, and 16-bit System/32 (if you can call that
>> a mini).
>>
>> One-chip microprocessors? Starting with the 4-bit 4004, 8 bit 8008,
>> 8080, Z80, 6502, 6800, 68000 etc...
>>
>> Micros? Based on one-chip microprocessors or RISC designs, all of
>> which are 32-bit based, as far as I know.
>>
>> So, we see a convergence towards 8-bit (or even powers of two)
>> over the years. What drove this?
>>
>> Was it the Arpanet? Octet-based from the start, as far as I know.
>
> You're looking for a technological reason when I think the real reason
> is more related to marketing and sales and accidents of history.
>
> IBM wanted something with an 8-bit bus for the PC because they had a
> stock of 8-bit glue chips that they wanted to use up. And whatever
> got on the IBM PC was going to be dominant.
>
> ARM had a similar requirement--their first target was a second
> processor for a 6502 machine for the BBC. How they became dominant
> in cell phones is not clear to me but I suspect their marketing model
> is a big part of it--they don't make chips, they sell designs that can
> be adjusted to fit customer needs or embedded in system-on-a-chip
> designs.
>
> I suspect Unix had something to do with it--while Unix was originally
> developed on an 18 bit architecture, by the time it had been revised
> into a portable form it was pretty much locked into multiples of 8
> bits. To be cost competitive, chip manufacturers needed a large
> market to amortize the cost of the fab--if they weren't Intel they
> were pretty well locked out of the desktop PC market so the next
> target was engineering workstations, which typically ran Unix, so they
> had to have an architecture for which Unix was easily ported.
>
> Other micros just didn't succeed.

I recall running Unix on the BBN C70, a machine with 10 bit bytes.
I even got BSD vi to compile and run on it, which surprised me a little.
--
columbiaclosings.com
What's not in Columbia anymore..
Re: Micros as number crunchers [message #393615 is a reply to message #393555] Sun, 19 April 2020 13:47
Peter Flass
Quadibloc <jsavard@ecn.ab.ca> wrote:
> So RISC was an idea... the good parts of which have succeeded. RISC is basically
> the new standard; x86 and zArchitecture are still around today representing
> CISC, but they're legacy architectures; new commercial designs are almost always
> RISC.
>

They’re both RISC, or some variant of RISC, under the covers, it’s just not
user-accessible. It’s just sad that CISC architectures like the VAX aren’t
currently commercial; a worse architecture than x86 is hard to imagine,
although that might be a good project ;-)


--
Pete
Re: Micros as number crunchers [message #393621 is a reply to message #393521] Sun, 19 April 2020 17:53
Apple2Steward
On 4/18/2020 7:59 AM, Thomas Koenig wrote:
> [F'up]
>
> Wow.
>
> Seems like somebody actually took the time and effort to do
> some assembly version of Linpack routines on a few micros (C64,
> BBC Micro, plus a few others) and see how fast they are, both in
> their native Basic dialects and hand-coded assembly which used
> the native floating point format.
>
> http://eprints.maths.manchester.ac.uk/2029/1/Binder1.pdf
>
> One thing that's also interesting from this is that the floating
> point format of these machines was actually not bad - a 40 bit
> word using a 32 bit mantissa actually gives much better roundoff
> results than today's 32 bit single precision real variables.
>
> Also interesting is the fact that a C64 was around a factor of 2000
> slower than a VAX 11/780 (with the C64 using optimized assembly).
> Which computer gave you more flops for the buck I don't know
> because I don't know the price of a VAX at the time :-)

Used a DX-64 (the C-64 "portable" w/ 5" color monitor)
<https://www.c64-wiki.com/wiki/Executive_64>. Apparently I was one of
only a few that actually managed to purchase a "D" instead of "C" with
the two drives. Not having had a regular Commodore before, the lack of
cassette interface didn't bother me.

I used both a Fortran compiler and the built-in BASIC, but mostly a Forth
interpreter (I forget its origin) that had access to the sprites
and all... it had a multitasking kernel and was really quite slick.

Did not incorporate floating point, though, as few Forths did at the time.

--
Re: Micros as number crunchers [message #393623 is a reply to message #393561] Sun, 19 April 2020 21:36
John Levine
In article <r7ha5v$5tu$1@newsreader4.netcologne.de>,
Thomas Koenig <tkoenig@netcologne.de> wrote:
> There was a good reason why the old IBM scientific machines had
> 36 bit floating point, but that was sacrificed on the altar of
> the all-round system, the 360.

The 360 design totally botched the floating point, and lost close to a
full digit on each operation compared to a well designed 32 bit format
like the later IEEE one. It's not a great comparison.

For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: Micros as number crunchers [message #393624 is a reply to message #393623] Mon, 20 April 2020 00:51
Robin Vowels
On Monday, April 20, 2020 at 11:36:54 AM UTC+10, John Levine wrote:
> In article <r......@newsreader4.netcologne.de>,
> Thomas Koenig <t......@netcologne.de> wrote:
>> There was a good reason why the old IBM scientific machines had
>> 36 bit floating point, but that was sacrificed on the altar of
>> the all-round system, the 360.
>
> The 360 design totally botched the floating point, and lost close to a
> full digit on each operation

The initial release could lose significance when the
mantissa of one operand was shifted down.
This was rectified by the hardware upgrade of the
guard digit, which retained the final digit that was
shifted out during pre-normalisation. If it was then
necessary to shift up the mantissa during post-normalisation,
the guard digit was shifted back into the register.

The implementation (hex floating-point) was a means to
obtain good execution times when pre- and post-normalising,
while maintaining at least 21 bits of mantissa.

The only blunder was the way HER (halve floating-point) was
implemented. It merely shifted the mantissa down by 1 place.
It failed to post-normalise, so that if it happened that the
most-significant digit of the mantissa was 1, you were left
with only 20 bits of precision instead of 21.

The advantage of HER was that it took a fraction of the time
that the Divide instruction took (to divide by 2).

This flaw was not corrected until the S/370.

> compared to a well designed 32 bit format
> like the later IEEE one. It's not a great comparison.
>
> For that matter, if 36 bits was enough why did the CDC supercomputers have
> 60 bit words?

Well, CDC was not going to complicate things by having a 30-bit float
as well as the 60-bit float. They were running out of spare op-codes.
Giving effectively double precision for all float operations was
a reasonable design choice.
Re: Micros as number crunchers [message #393627 is a reply to message #393601] Mon, 20 April 2020 07:09
Niklas Karlsson
On 2020-04-19, Douglas Miller <durgadas311@gmail.com> wrote:
> On Sunday, April 19, 2020 at 9:21:29 AM UTC-5, J. Clarke wrote:
>>
>> I suspect Unix had something to do with it--while Unix was originally
>> developed on an 18 bit architecture, by the time it had been revised
>> into a portable form it was pretty much locked into multiples of 8
>> bits.
....
> I could be wrong, as I never worked on the architecture, but the
> PDP-11 may have had an 18-bit address width (originally 16, then 18 or
> 22 with MMU hardware), but it was still a byte (8-bit) oriented
> machine. 8, 16 and 32-bit data.

The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.

Niklas
--
If books were designed by Microsoft, the Anarchist's Cookbook would
explode when you read it.
-- Mark 'Kamikaze' Hughes, asr
Re: Micros as number crunchers [message #393628 is a reply to message #393627] Mon, 20 April 2020 07:15
Originally posted by: Thomas Koenig

Niklas Karlsson <anksil@yahoo.se> schrieb:
> On 2020-04-19, Douglas Miller <durgadas311@gmail.com> wrote:
>> On Sunday, April 19, 2020 at 9:21:29 AM UTC-5, J. Clarke wrote:
>>>
>>> I suspect Unix had something to do with it--while Unix was originally
>>> developed on an 18 bit architecture, by the time it had been revised
>>> into a portable form it was pretty much locked into multiples of 8
>>> bits.
> ...
>> I could be wrong, as I never worked on the architecture, but the
>> PDP-11 may have had an 18-bit address width (originally 16, then 18 or
>> 22 with MMU hardware), but it was still a byte (8-bit) oriented
>> machine. 8, 16 and 32-bit data.
>
> The PDP-11 was indeed byte oriented, but Unix originally started on the
> 18-bit PDP-7.

... which is why octal plays such an important role in Unix and C,
instead of hexadecimal.
Re: Micros as number crunchers [message #393631 is a reply to message #393628] Mon, 20 April 2020 10:49
John Levine
In article <r7k08o$9gh$1@newsreader4.netcologne.de>,
Thomas Koenig <tkoenig@netcologne.de> wrote:
>> The PDP-11 was indeed byte oriented, but Unix originally started on the
>> 18-bit PDP-7.

It hopped to the PDP-11 very early; there weren't any 18-bitisms I could see in 1975.

> ... which is why octal plays such an important role in Unix and C,
> instead of hexadecimal.

Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.



--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: Micros as number crunchers [message #393632 is a reply to message #393624] Mon, 20 April 2020 10:54
John Levine
In article <ff337658-caf5-405d-bd07-c8327d5dd263@googlegroups.com>,
<robin.vowels@gmail.com> wrote:
>> The 360 design totally botched the floating point, and lost close to a
>> full digit on each operation
>
> The initial release could lose significance when the
> mantissa of one operand was shifted down.
> This was rectified by the hardware upgrade of the
> guard digit, which retained the final digit that was
> shifted out during pre-normalisation. If it was then
> necessary to shift up the mantissa during post-normalisation,
> the guard digit was shifted back into the register.
>
> The implementation (hex floating-point) was a means to
> obtain good execution times when pre- and post-normalising,
> while maintaining at least 21 bits of mantissa.
>
> The only blunder was the way HER (halve floating-point) was
> implemented. It merely shifted the mantissa down by 1 place.
> It failed to post-normalise, so that if it happened that the
> most-significant digit of the mantissa was 1, you were left
> with only 20 bits of precision instead of 21.

No, the blunder was using hex. They botched the analysis of fraction
distribution and assumed it was linear rather than logarithmic, so they
lost an average of two bits of precision per operation and only got
one back from the smaller exponent. Also, they truncated rather than
rounded, which lost another bit. And finally, with binary floating
point one can do the hidden bit trick and not store the high bit of
the fraction which is always 1, but not in hex.
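
The hidden bit is easy to see by picking an IEEE single apart - a
sketch in C:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 1.5f;              /* 1.1 in binary, exponent 0 */
    uint32_t u;
    memcpy(&u, &f, sizeof u);    /* well-defined type pun */
    printf("sign=%u biased-exp=%u fraction=0x%06x\n",
           u >> 31, (u >> 23) & 0xffu, u & 0x7fffffu);
    /* Prints fraction=0x400000: only the ".1" is stored, the leading
       "1." is implied.  A hex format can't pull this trick, because
       its leading digit can be anything from 1 to 15. */
    return 0;
}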

> The advantage of HER was that it took a fraction of the time
> that the Divide instruction took (to divide by 2).

Hex floating point certainly got fast wrong results.
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: Micros as number crunchers [message #393634 is a reply to message #393615] Mon, 20 April 2020 11:58
scott
Peter Flass <peter_flass@yahoo.com> writes:
> Quadibloc <jsavard@ecn.ab.ca> wrote:
>> So RISC was an idea... the good parts of which have succeeded. RISC is basically
>> the new standard; x86 and zArchitecture are still around today representing
>> CISC, but they're legacy architectures; new commercial designs are almost always
>> RISC.
>>
>
> They’re both RISC, or some variant of RISC, under the covers, it’s just not
> user-accessible. It’s just sad that CISC architectures like the VAX aren’t
> currently commercial; a worse architecture than x86 is hard to imagine,

Then you don't have a very good imagination, or you're assuming that the 8086 was
the be-all and end-all of Intel architecture.

Hint: The current i5/i7/i9 processors are much closer to the VAX architecturally
than you think and far more capable than the VAX ever was. 8086 segments haven't
been used for almost forty years now.
Re: Micros as number crunchers [message #393635 is a reply to message #393631] Mon, 20 April 2020 12:01
scott
John Levine <johnl@taugh.com> writes:
> In article <r7k08o$9gh$1@newsreader4.netcologne.de>,
> Thomas Koenig <tkoenig@netcologne.de> wrote:
>>> The PDP-11 was indeed byte oriented, but Unix originally started on the
>>> 18-bit PDP-7.
>
> It hopped to the PDP-11 very early, wasn't any 18-bitisms I could see in 1975.
>
>> ... which is why octal plays such an important role in Unix and C,
>> instead of hexadecimal.
>
> Nope. DEC always used octal in their PDP-11 software and
> documentation. Since it had 8 registers, three-bit octal digits in
> the opcodes were handy.

I suspect that the PDP-11 choice of Octal was a function of the fact
that both PDP-10 and PDP-8 (and -5, et alia) had word bit-lengths congruent
to zero modulo three.
Re: Micros as number crunchers [message #393638 is a reply to message #393561] Mon, 20 April 2020 15:28
Questor
On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig <tkoenig@netcologne.de>
wrote:
> Why are eight-bit bytes so common today?

I think that powers of two factors into it. Once you start building a machine
with binary elements, it seems expedient to continue with that model as you
aggregate them into larger and larger structures. I suspect it simplifies some
of the logic needed.
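
One concrete way the logic gets simpler: with 8-bit bytes and
power-of-two word sizes, all the byte-within-word arithmetic collapses
to shifts and masks. A sketch in C (helper names are just for
illustration):

#include <stdint.h>

/* Byte k of a 64-bit word, and a byte address split into word address
   plus offset: shifts and masks, no division anywhere. */
static inline uint8_t get_byte(uint64_t w, unsigned k) {   /* k in 0..7  */
    return (uint8_t)(w >> (k * 8));                        /* k*8 == k<<3 */
}
static inline uint64_t word_of(uint64_t byte_addr) { return byte_addr >> 3; }
static inline unsigned off_of (uint64_t byte_addr) { return byte_addr & 7;  }

With 6-bit characters packed six to a 36-bit word, the same job needs
a genuine division by 6.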
Re: Micros as number crunchers [message #393639 is a reply to message #393638] Mon, 20 April 2020 17:09
Originally posted by: Douglas Miller

On Monday, April 20, 2020 at 2:27:59 PM UTC-5, Questor wrote:
> On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
> wrote:
>> Why are eight-bit bytes so common today?
>
> I think that powers of two factors into it. Once you start building a machine
> with binary elements, it seems expedient to continue with that model as you
> aggregate them into larger and larger structures. I suspect it simplifies some
> of the logic needed.

More likely, it is related to the size of a character - typically the smallest unit of data a computer deals with. Once 7-bit ASCII became established - and possibly even before that, with 8-bit EBCDIC - the 8-bit byte became a convenient unit of data.

It's really just what makes sense, based on how things evolved. There have been a lot of different ways of representing characters, but these days everything revolves around bytes. Once you settle on 8-bit characters, everything else falls into place.

Back when a company designed and built every component in a computer system, you could make more arbitrary decisions about such things. But these days, everything needs to interconnect with pre-existing components.
Re: Micros as number crunchers [message #393641 is a reply to message #393627] Mon, 20 April 2020 19:21
Originally posted by: J. Clarke

On 20 Apr 2020 11:09:15 GMT, Niklas Karlsson <anksil@yahoo.se> wrote:

> On 2020-04-19, Douglas Miller <durgadas311@gmail.com> wrote:
>> On Sunday, April 19, 2020 at 9:21:29 AM UTC-5, J. Clarke wrote:
>>>
>>> I suspect Unix had something to do with it--while Unix was originally
>>> developed on an 18 bit architecture, by the time it had been revised
>>> into a portable form it was pretty much locked into multiples of 8
>>> bits.
> ...
>> I could be wrong, as I never worked on the architecture, but the
>> PDP-11 may have had an 18-bit address width (originally 16, then 18 or
>> 22 with MMU hardware), but it was still a byte (8-bit) oriented
>> machine. 8, 16 and 32-bit data.
>
> The PDP-11 was indeed byte oriented, but Unix originally started on the
> 18-bit PDP-7.

It started as hand-coded assembler and was hardly portable. Unix
didn't really become popular until after the C rewrite that made it
portable. And that wasn't on the PDP-7.
Re: Micros as number crunchers [message #393642 is a reply to message #393561] Mon, 20 April 2020 19:52
Quadibloc
On Sunday, April 19, 2020 at 4:46:24 AM UTC-6, Thomas Koenig wrote:

> A problem with scientific calculation today is that people often
> cannot use 32 bit single precision (and even more often, they do
> not bother to try) because of severe roundoff errors.

> There was a good reason why the old IBM scientific machines had
> 36 bit floating point, but that was sacrificed on the altar of
> the all-round system, the 360.

> Why are eight-bit bytes so common today?

For a few more hours, due to a domain name issue that should be cleared up soon,
my web site is unavailable. Otherwise, I would point you to my web site, which
has a section talking about ways to make it practical for a computer to have 36-
bit floating-point numbers, just for the reasons you mention.

But to answer your question:

Before April 1964, and the IBM System/360, computers generally stored text in
the form of 6-bit characters.

This was fine - as long as you were content to have text that was upper-case
only.

Now, there was a typesetting system built around the PDP-8, with 6-bit characters in a 12-bit word; you could always have a document like this:

\MY NAME IS \JOHN.

where the backslash is an escape character making the letter it precedes appear
in upper-case instead of lower-case.
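
The scheme is trivial to mechanize - a toy decoder in C for the
convention just described:

#include <ctype.h>
#include <stdio.h>

/* Input is upper-case only; '\' marks the one letter that should stay
   upper-case, everything else is folded to lower-case. */
static void decode(const char *s) {
    for (; *s; s++) {
        if (*s == '\\' && s[1]) putchar(*++s);
        else putchar(tolower((unsigned char)*s));
    }
    putchar('\n');
}

int main(void) { decode("\\MY NAME IS \\JOHN."); return 0; }  /* My name is John. */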

And the IBM 360, while it had an 8-bit byte, had keypunches that were upper-case
only, and lower-case was a relatively uncommon extra-cost option on their video
terminals; the 2741 printing terminal, based on the Selectric typewriter,
did offer lower-case routinely. Still, the 360 was mostly oriented around
punched-card batch.

For that matter, the Apple II with an 8-bit byte had an upper-case only
keyboard, and there was a word-processor for it that let you use an escape
character if you wanted mixed case.

So byte size isn't everything. But the 8-bit byte does make lower-case text a
lot easier to process. That is _one_ important reason why, when the hugely
popular (for other reasons) IBM System/360 came out, everybody started looking
upon computers with 6-bit characters as something old-fashioned.

IBM, when it designed the 360, wanted a very flexible machine, suitable to a
wide variety of applications. The intention was that microcode would allow big
and small computers to have the same instruction set, and this would serve both
business computers sending out bills with their computers, and universities
doing scientific research with them.

Because 32 bits was smaller than 36 bits - and IBM made things worse by using a
hexadecimal exponent, and by truncating instead of rounding floating-point
calculations - people doing scientific calculations simply switched to using
double precision for everything.

The IBM 360 was very popular. SDS, later purchased by Xerox, made a machine
called the Sigma which, although not compatible with the 360, used similar data
formats, and which was designed to perform the same types of calculations. (In
some ways, it was a simpler design aimed at allowing 360 workloads to be handled
with 7090-era technology.) And there was the Spectra series of computers from
RCA, which were partly 360 compatible.

But the effects of the 360 were felt everywhere. The PDP-4 had an 18 bit word, but minicomputers from other companies, like the HP 211x series or the Honeywell 316, went to a 16 bit word.

And DEC, which had minis with 12-bit and 18-bit words with the same basic design as the HP 211x and Honeywell 316 - single-word memory-reference instructions were achieved by allowing them to refer to locations on "page zero" and the *current* page, with indirect addressing used to get at anywhere else - decided it needed something modern too.

And so DEC came up with the *wildly successful* PDP-11. It was a minicomputer
with a 16-bit word. But unlike the minicomputers I've just mentioned, it had a
modern architecture. The only indirect addressing was register indirect
addressing. Memory wasn't divided into pages.

The PDP-11 transformed the world of computing. It solidified the dominance of
the 8-bit byte. It also made the "little-endian" byte order a common choice.

John Savard
Re: Micros as number crunchers [message #393643 is a reply to message #393597] Mon, 20 April 2020 20:01
Quadibloc
On Sunday, April 19, 2020 at 7:04:34 AM UTC-6, Thomas Koenig wrote:

> Was it the Arpanet? Octet-based from the start, as far as I know.

The octet became ubiquitous long before the Arpanet.

My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.

One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.

Therefore, since it was a binary machine, it had a word size of 64 bits, so that
bit addressing would work easily: the last six bits pointed to the bit within a
word. While the System/360 didn't have that feature, they may have thought they
might need to add it later.

When designing the System/360, another thought was its use for business
computations. The IBM 1401 stored numbers as printable character strings. This
wasted two bits out of every six, since you only need four bits to represent a
decimal digit.

The IBM 360 could handle both binary numbers and packed decimal numbers, where
two decimal digits, each four bits long, were in every 8-bit byte. When you
*unpack* a decimal number, you get its printable character form as a string of
EBCDIC digits.
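
A toy round trip in C showing the packing (a sketch; real S/360 packed
decimal also carries a sign nibble at the low-order end, omitted here):

#include <stdio.h>
#include <string.h>

/* Two 4-bit digits per byte; an even number of digits is assumed. */
static void pack(const char *digits, unsigned char *out) {
    size_t n = strlen(digits);
    for (size_t i = 0; i + 1 < n; i += 2)
        out[i / 2] = (unsigned char)(((digits[i] - '0') << 4) | (digits[i + 1] - '0'));
}
static void unpack(const unsigned char *in, size_t nbytes, char *digits) {
    for (size_t i = 0; i < nbytes; i++) {
        digits[2 * i]     = (char)('0' + (in[i] >> 4));   /* EBCDIC would zone with 0xF0 */
        digits[2 * i + 1] = (char)('0' + (in[i] & 0x0f));
    }
    digits[2 * nbytes] = '\0';
}

int main(void) {
    unsigned char buf[2]; char back[5];
    pack("1234", buf);        /* -> bytes 0x12 0x34 */
    unpack(buf, 2, back);
    printf("%s\n", back);     /* prints 1234 */
    return 0;
}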

So this meant it could do decimal arithmetic, and instead of the waste growing
from 33% to 50%, it eliminated the waste entirely. (Well, not _entirely_, as ten
possible digits don't use all sixteen possibilities of four bits. IBM got around
to dealing with that, through Chen-Ho encoding, and its later variant Densely
Packed Decimal, DPD, but that is another story.)

That was the rationale used for the 8-bit byte at the time the 360 was being
designed. Perhaps because of the STRETCH and bit addressing, a machine with a
48-bit word that could use _both_ 6-bit characters and 8-bit characters was not
considered.

John Savard
Re: Micros as number crunchers [message #393644 is a reply to message #393623] Mon, 20 April 2020 20:10
Quadibloc
On Sunday, April 19, 2020 at 7:36:54 PM UTC-6, John Levine wrote:

> For that matter, if 36 bits was enough why did the CDC supercomputers have
> 60 bit words?

36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.

If you look at pocket calculators, and old books of mathematical tables, a _lot_
of scientific work was done to 10 digit precision. And before the 60-bit CDC
6600, CDC made a number of popular advanced scientific computers with a 48-bit
word length, which is about enough for a 10 digit number.

This is what led me to think that it *would* be good to have computers designed
around the 6-bit character.

If a computer could have...

36-bit floating point
- a little longer than 32 bits, longer enough that, historically, it was
adequate for many scientific computations

48-bit floating point
- ten digit precision is what scientific calculators and mathematical tables
used a lot, and so this is probably the ideal precision for most scientific
computing

60-bit floating point
- but sometimes you need double precision. However, given that 64-bit double
precision is nearly always more than is needed, chopping off four bits (instead
of going to 72-bit double precision) would likely not hurt things at all

then it seemed to me that such a computer, unlike the ones we have now, would
offer floating-point formats that are ideally suited to the requirements of
scientific computation.

John Savard