Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!rutgers!labrea!glacier!jbn
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Newsgroups: comp.unix.wizards,comp.arch
Subject: Re: *Why* do modern machines mostly have 8-bit bytes?
Message-ID: <17137@glacier.STANFORD.EDU>
Date: Wed, 22-Jul-87 02:22:12 EDT
Article-I.D.: glacier.17137
Posted: Wed Jul 22 02:22:12 1987
Date-Received: Fri, 24-Jul-87 01:31:03 EDT
References: <142700010@tiger.UUCP> <2792@phri.UUCP> <8315@utzoo.UUCP> <2807@phri.UUCP>
Reply-To: jbn@glacier.UUCP (John B. Nagle)
Organization: Stanford University
Lines: 21
Xref: mnetor comp.unix.wizards:3347 comp.arch:1655


      The 8-bit byte was an IBM innovation; the term "byte" was coined
at IBM during the Stretch project, and the 8-bit byte became standard
with the System/360 product announcement.  Much of the 8-bit trend
stemmed from the desire to be IBM compatible.  But, more importantly,
the view of memory as raw bits, with form and meaning determined by the
program, started to replace the view that the CPU architecture dictated
how memory was to be used.

      Variable-width machines have been built; the IBM 1401 and IBM 1620
were 1960s-vintage machines whose only arithmetic was variable-length
decimal.  Burroughs built some machines with bit-addressable
memory and variable-length binary arithmetic in the late 1960s.  As memory
became cheaper, complicating the CPU to save some memory faded out as a goal.

      Power-of-two architecture is definitely an IBM idea, as is, for
example, K=1024.  (UNIVAC machines were quoted as 65K, 131K, 262K, etc. for
decades.)  If you want to sell iron, the notion that the next increment
of capacity after N is 2*N has considerable appeal.  Today everybody does
it, but it was definitely more of an IBM thing in the 1970s.

					John Nagle