Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!ucsd!ames!uhccux!munnari.oz.au!cs.mu.oz.au!ok
From: ok@cs.mu.oz.au (Richard O'Keefe)
Newsgroups: comp.arch
Subject: Re: COBOL Decimal Arithmetic
Message-ID: <2168@munnari.oz.au>
Date: 23 Sep 89 04:47:31 GMT
References: <943@rd1632.Dayton.NCR.COM>
Sender: news@cs.mu.oz.au
Lines: 23

In article <943@rd1632.Dayton.NCR.COM>, otto@rd1632.Dayton.NCR.COM (Jerome A. Otto) writes:
> (2) Convert to binary (using hardware instructions if
>     available), add binary, convert from binary (using
>     hardware if available)
> Very seldom, if ever, is (2) fastest due to the problem of
> converting from binary to decimal.  This conversion requires
> N divides or N+1 multiplies (N = number of digits).

I'll say it again: conversion between N-digit decimal and binary, when
N is known at compile time, doesn't require **ANY** multiplications or
divisions.  I'll spell out how to do decimal->binary one digit at a
time.  Let Dn, ..., D1, D0 be decimal digits 0..9.  (They might have
been ASCII or EBCDIC originally, with the high bits masked off.  Or
they might have been nibbles.)  Calculate

	X = D0 + pot[1][D1] + ... + pot[n][Dn]

where pot[i][j] is *PRECOMPUTED* as j*10**i.  Overflow is only possible
in the last step.  To support COBOL's 18-digit requirement, 18 * 10 * 64
bits of tables will suffice.  At the price of larger tables, it is
possible to use fewer memory references.  There are other optimisations
possible.  Binary->decimal conversion is a bit harder, but it is
possible to do it with ~N memory references and ~N binary adds or
subtracts, and again, it can be optimised.
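
The table-lookup scheme above can be sketched in C roughly as follows.
This is a minimal illustration, not O'Keefe's actual code: the names
`pot`, `init_pot`, and `dec_to_bin` are made up for the example, the
entries are assumed to be 64-bit, and where the article assumes the
tables are precomputed at compile time, this sketch fills them once at
startup (the multiplications in `init_pot` are one-time setup; the
conversion itself uses only lookups and adds).

```c
#include <stdint.h>

#define NDIGITS 18  /* COBOL's 18-digit requirement */

/* pot[i][j] holds j * 10**i.  18 positions * 10 digits * 64 bits,
   matching the table size estimated in the article. */
static uint64_t pot[NDIGITS][10];

static void init_pot(void)
{
    uint64_t p = 1;                      /* current power of ten */
    for (int i = 0; i < NDIGITS; i++) {
        for (int j = 0; j < 10; j++)
            pot[i][j] = (uint64_t)j * p; /* one-time setup only */
        p *= 10;
    }
}

/* Convert digits d[0]..d[n-1] (d[0] least significant, each 0..9,
   i.e. already masked down from ASCII/EBCDIC or unpacked nibbles)
   to binary using only table lookups and additions. */
static uint64_t dec_to_bin(const unsigned char *d, int n)
{
    uint64_t x = 0;
    for (int i = 0; i < n; i++)
        x += pot[i][d[i]];               /* no multiply, no divide */
    return x;
}
```

For example, the digits of 1989 stored least-significant-first are
{9, 8, 9, 1}, and dec_to_bin sums 9 + 80 + 900 + 1000 = 1989.  With n
fixed at compile time the loop can be fully unrolled, leaving n loads
and n-1 adds.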