Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site cornell.UUCP
Path: utzoo!watmath!clyde!floyd!vax135!cornell!hal
From: hal@cornell.UUCP (Hal Perkins)
Newsgroups: net.micro
Subject: Re: 4 -> 8 -> 8/16 -> 16 -> 16/32 -> 32  What next, 64 bit micros ?
Message-ID: <6989@cornell.UUCP>
Date: Mon, 19-Mar-84 19:29:03 EST
Article-I.D.: cornell.6989
Posted: Mon Mar 19 19:29:03 1984
Date-Received: Tue, 20-Mar-84 01:42:04 EST
References: <974@vax2.fluke.UUCP>
Organization: Cornell Univ. CS Dept.
Lines: 17

There are really two issues here... First, is there any good reason for a
64-bit architecture as seen by the programmer?  And second, is there any
good reason for 64-bit datapaths on a chip?  My own bias is that 32-bit
integers and registers are wide enough for most anything--of course, this
may just be because that's been good enough for most large machines for
a long time, and my cultural blinders may prevent me from seeing a
good reason for 64-bit general registers.  :-)
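
Just to put rough numbers on the width question, here's a little C sketch
(purely illustrative, and it assumes a C compiler that provides a 64-bit
"long long" type) that prints the biggest signed value each width can hold:

    /* Prints the largest signed value that fits in 32 and in 64 bits.
       Assumes a compiler with a 64-bit "long long" type. */
    #include <stdio.h>

    int main(void)
    {
        long long max32 = 2147483647LL;           /* 2^31 - 1 */
        long long max64 = 9223372036854775807LL;  /* 2^63 - 1 */

        printf("32-bit signed max: %lld\n", max32);
        printf("64-bit signed max: %lld\n", max64);
        return 0;
    }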

But 64-bit datapaths are a different story.  A wide datapath might be a
useful way to build a high-performance chip (64 bits on or off the chip each
cycle).  There are a number of mainframes (like the big IBM machines) with 64-bit
implementations of 32-bit architectures.  The issue of what the programmer
sees is separate from (but related to) how the hardware designer implements it.
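
As a loose software analogy (the function and names below are made up for
illustration, not how any real machine does it), think of a wide path as
moving two 32-bit words per transfer while the caller only ever sees
32-bit values:

    /* Copies an array of 32-bit words two at a time through a 64-bit
       temporary -- the caller only ever deals in 32-bit values, and the
       wide transfer is purely an implementation detail. */
    #include <stdint.h>
    #include <string.h>

    void copy32_via_wide_path(uint32_t *dst, const uint32_t *src, int n)
    {
        int i;
        for (i = 0; i + 1 < n; i += 2) {
            uint64_t wide;                        /* one 64-bit "transfer" */
            memcpy(&wide, &src[i], sizeof wide);  /* pick up two words */
            memcpy(&dst[i], &wide, sizeof wide);  /* put down two words */
        }
        if (i < n)                                /* odd word left over */
            dst[i] = src[i];
    }

The point is only that the width the hardware moves per cycle and the width
the programmer declares don't have to match.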


Hal Perkins                         UUCP: {decvax|vax135|...}!cornell!hal
Cornell Computer Science            ARPA: hal@cornell  BITNET: hal@crnlcs