Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!uflorida!uakari.primate.wisc.edu!ames!uhccux!munnari.oz.au!cs.mu.oz.au!ok
From: ok@cs.mu.oz.au (Richard O'Keefe)
Newsgroups: comp.lang.misc
Subject: Re: Fast conversions, another urban myth?
Message-ID: <2170@munnari.oz.au>
Date: 23 Sep 89 07:51:29 GMT
References: <832@dms.UUCP> <688@UALTAVM.BITNET> <1989Sep22.201906.10618@utzoo.uucp>
Sender: news@cs.mu.oz.au
Lines: 28

In article <1989Sep22.201906.10618@utzoo.uucp>, henry@utzoo.uucp (Henry Spencer) writes:
: In article <136@bbxsda.UUCP> scott@bbxsda.UUCP (Scott Amspoker) writes:
: >...do everything in C and require decimal arithmetic because of the
: >business nature of our applications.

: Can you explain why binary integer arithmetic is less accurate than
: decimal integer arithmetic for your applications?  Sounds like you're
: confusing the decimal/binary choice with the integer/floating-point
: choice; the two are orthogonal and independent.

Another possible confusion is between COBOL arithmetic (18 decimal
digits + sign) and ``typical'' C or Fortran arithmetic (31 binary
digits + sign).
Suppose you want to do calculations on sums of money, carrying two
extra places of decimals beyond cents.  32-bit arithmetic will only
let you represent up to +/- $214,748.3647 (call it $200,000), and if
you're doing the books of a company with as few as 30 or 40 people in
it, that's just not going to be anywhere near enough.  64-bit
arithmetic (GCC's "long long int") should suffice.
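
To put numbers on that, here's a throwaway sketch (assuming a compiler
that accepts "long long" and the %lld printf format; the $0.0001 unit
is the two-extra-places representation above):

    /* Sketch only: money kept as an integer count of $0.0001 units. */
    #include <stdio.h>

    int main(void)
    {
        long max32 = 2147483647L;                   /* 2^31 - 1 */
        long long max64 = 9223372036854775807LL;    /* 2^63 - 1 */

        printf("32-bit limit: $%ld.%04ld\n",  max32 / 10000, max32 % 10000);
        printf("64-bit limit: $%lld.%04lld\n", max64 / 10000, max64 % 10000);
        return 0;
    }

That prints $214748.3647 for 32 bits and roughly $922 trillion for 64,
which ought to hold even a fair-sized multinational.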

Another possible confusion is between decimal arithmetic and scaled
arithmetic.  An integer variable x may represent x*0.0001 or whatever.
The main disadvantage is that it is hard to keep track of the scaling
by hand; you really want the compiler to do it for you.
(It's rather silly to restrict scaling to be by powers of 10, but
that's what COBOL and PL/I do.)
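
Concretely, with everything held at a scale of 0.0001 (my choice of
scale, nothing canonical about it), addition comes free but every
multiplication yields scale-squared and has to be put right by hand:

    /* Sketch: an integer x represents the value x * 0.0001 throughout.
       The rescale truncates; a real package would round properly. */
    #include <stdio.h>

    int main(void)
    {
        long long price = 19950;                 /* $1.9950 */
        long long qty   = 30000;                 /*  3.0000 */
        long long sum   = price + qty;           /* same scale: nothing to do */
        long long prod  = price * qty / 10000;   /* scale^2: rescale by hand */

        printf("sum  = %lld.%04lld\n", sum  / 10000, sum  % 10000);
        printf("prod = %lld.%04lld\n", prod / 10000, prod % 10000);
        return 0;
    }

Forget one of those /10000s somewhere in a few thousand lines of
accounting code and the books are quietly wrong; that's exactly the
bookkeeping a compiler should be doing.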

Perhaps Amspoker could switch to C++, and write a package for doing
long and/or scaled arithmetic.  Or even (horror!) switch to Ada?
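
By way of illustration, the skeleton of such a C++ class might look
like this (written in today's C++ dialect; the name "Scaled" is mine,
the scale is frozen at 0.0001, and there's no rounding or overflow
checking, all of which a real package would have to supply):

    // Bare-bones sketch of a scaled-arithmetic class: illustration only.
    #include <iostream>
    #include <iomanip>

    class Scaled {
        long long units;                          // value is units * 0.0001
    public:
        explicit Scaled(long long u) : units(u) {}
        Scaled operator+(Scaled b) const { return Scaled(units + b.units); }
        Scaled operator-(Scaled b) const { return Scaled(units - b.units); }
        Scaled operator*(Scaled b) const {
            // the rescaling lives here, where the user can't forget it
            return Scaled(units * b.units / 10000);
        }
        friend std::ostream& operator<<(std::ostream& os, Scaled s) {
            long long frac = s.units % 10000;
            if (frac < 0) frac = -frac;           // (small negative values
            return os << s.units / 10000 << '.'   //  print unsigned; never
                      << std::setw(4)             //  mind, it's a sketch)
                      << std::setfill('0') << frac;
        }
    };

    int main()
    {
        Scaled price(19950), qty(30000);          // $1.9950 and 3.0000
        std::cout << price + qty << '\n';         // 4.9950
        std::cout << price * qty << '\n';         // 5.9850
        return 0;
    }

The point is just that operator overloading lets the /10000 live in
exactly one place.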