Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!columbia!rutgers!mit-eddie!uw-beaver!apollo!perry
From: perry@apollo.uucp (Jim Perry)
Newsgroups: comp.arch
Subject: Re: *Why* do modern machines mostly have 8-bit bytes?
Message-ID: <36367aef.8e47@apollo.uucp>
Date: Wed, 22-Jul-87 10:42:00 EDT
Article-I.D.: apollo.36367aef.8e47
Posted: Wed Jul 22 10:42:00 1987
Date-Received: Fri, 24-Jul-87 04:28:11 EDT
References: <142700010@tiger.UUCP> <2792@phri.UUCP> <8315@utzoo.UUCP> <2807@phri.UUCP>
Reply-To: perry@apollo.UUCP (Jim Perry)
Organization: Apollo Computer, Chelmsford, MA
Lines: 16

In some old architectures the byte structure was more closely related to
the input format -- Hollerith code.  Thus each byte corresponded to a
column on a card, with at least 12 bits, one per punch row: X, Y, 0..9.
The Honeywell 200 (to the best of my memory; I played with this beast
briefly in 1974) had these plus a parity bit and two bits called "word
mark" and "item mark"; both could be set at once, resulting in a "record
mark" (my recollection: permute well before believing).  Most of the
machine operations were oriented to decimal arithmetic and COBOL-style
editing (zero-fill, etc.), and words truly were variable length.  Some
operations, as I recall, scanned right-to-left until a word-mark
(arithmetic, presumably), others until an item-mark.  I don't recall
more, so I won't pursue it.
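The flavor of that scanning, in another hedged C sketch (the mark bits
and layout are invented for illustration; the real machine's rules for
which field's mark terminated an operation were surely more subtle):

    #define WORD_MARK 0x40      /* punctuation bits; values invented */
    #define ITEM_MARK 0x80
    #define DIGIT     0x0F      /* low bits hold one decimal digit */

    /* Add the decimal field ending at *b into the field ending at *a,
       walking right-to-left; the word mark on the high-order character
       ends the field -- the length lives in the data itself. */
    void decimal_add(unsigned char *a, unsigned char *b)
    {
        int carry = 0;
        for (;;) {
            int sum = (*a & DIGIT) + (*b & DIGIT) + carry;
            *a = (*a & ~DIGIT) | (sum % 10);
            carry = sum / 10;
            if (*a & WORD_MARK)
                break;
            --a;
            --b;
        }
    }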
It's interesting in retrospect how different the use of the machine was:
how many recent architectures incorporate floating-dollar-sign
leading-zero suppression, with check-protect (asterisk fill)?
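For anyone who never met a PICTURE clause, here is roughly what that
editing does -- a hedged C sketch of the general idea, my own
illustration rather than any machine's actual edit instruction:

    #include <stdio.h>
    #include <string.h>

    /* Suppress leading zeros in a digit string.  With check protect the
       suppressed positions fill with '*' behind a fixed '$'; otherwise
       the '$' floats right to sit just before the first real digit. */
    void edit_amount(const char *digits, char *out, int check_protect)
    {
        const char *p = digits;
        while (*p == '0' && p[1] != '\0')   /* keep at least one digit */
            ++p;
        if (check_protect) {
            *out++ = '$';
            while (digits++ < p)
                *out++ = '*';               /* asterisk fill */
        } else {
            while (digits++ < p)
                *out++ = ' ';               /* blank fill, then... */
            *out++ = '$';                   /* ...the floated '$' */
        }
        strcpy(out, p);
    }

    int main(void)
    {
        char buf[32];
        edit_amount("0001234", buf, 1);
        printf("[%s]\n", buf);              /* [$***1234] */
        edit_amount("0001234", buf, 0);
        printf("[%s]\n", buf);              /* [   $1234] */
        return 0;
    }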

Jim Perry (perry@apollo)  Apollo Computer, Chelmsford MA