Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!husc6!rutgers!rochester!pt!k.cs.cmu.edu!lindsay
From: lindsay@k.cs.cmu.edu (Donald Lindsay)
Newsgroups: comp.arch
Subject: Re: machine word sizes
Message-ID: <1184@k.cs.cmu.edu>
Date: Fri, 24-Jul-87 11:16:58 EDT
Article-I.D.: k.1184
Posted: Fri Jul 24 11:16:58 1987
Date-Received: Sat, 25-Jul-87 15:52:14 EDT
References: <142700010@tiger.UUCP> <2792@phri.UUCP> <8315@utzoo.UUCP> <2807@phri.UUCP> <565@saturn.ucsc.edu>
Organization: Carnegie-Mellon University, CS/RI
Lines: 12


I believe that John von Neumann chose 36 bits as giving the precision he
wanted for arithmetic calculation. This was in the early 1950's, when
floating-point hardware was too expensive to be worthwhile.

Twelve-bit machines, such as the PDP-8, were used as lab machines. The best
analog-to-digital converters (ADCs) were 12 bits at that time. Perhaps some
24-bit machines relate to this.

The PDP-4, 7, 9, and 15 were 18-bit machines (36/2, I'm sure). An 18-bit
word holds three 6-bit characters, so two words for the name and one for the
extension gives the silly tendency to use six-and-three characters for
filenames ("myfile.bas").