Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site harvard.ARPA
Path: utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!genrad!panda!talcott!harvard!sasaki
From: sasaki@harvard.ARPA (Marty Sasaki)
Newsgroups: net.ai,net.lang.lisp,net.lang.ada
Subject: Efficiency of LISP
Message-ID: <417@harvard.ARPA>
Date: Sat, 2-Mar-85 13:35:17 EST
Article-I.D.: harvard.417
Posted: Sat Mar  2 13:35:17 1985
Date-Received: Mon, 4-Mar-85 06:53:13 EST
References: <417@ssc-vax.UUCP> <676@topaz.ARPA> <6982@watdaisy.UUCP> <3223@utah-cs.UUCP> <7016@watdaisy.UUCP> <306@talcott.UUCP>
Organization: Aiken Computation Laboratory, Harvard
Lines: 19
Xref: watmath net.ai:2574 net.lang.lisp:352 net.lang.ada:204

There was an informal experiment done in the early-to-mid seventies at
MIT that compared MACLISP with other programming languages. Basically,
a translator was written to convert (say) FORTRAN source into MACLISP.
Both versions were compiled, the programs were run, and the results
were compared. The translators were simple and did no real optimizing
of the code; all of that was left to the Lisp compiler.
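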
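To give a concrete (and entirely hypothetical) flavor of the approach,
a translator might turn a FORTRAN dot-product loop into Lisp along
these lines. This is a sketch in modern Common Lisp syntax rather than
MACLISP, and the function name and declarations are my own; the point
is that numeric declarations are what let a Lisp compiler generate
FORTRAN-class code.

    ;; Hypothetical translation of:
    ;;       SUM = 0.0
    ;;       DO 10 I = 1, N
    ;;    10 SUM = SUM + A(I) * B(I)
    ;; (array indexing shifted to 0-based)
    (defun dot-product (a b n)
      (declare (type (simple-array double-float (*)) a b)
               (fixnum n))
      (let ((sum 0.0d0))
        (declare (double-float sum))
        ;; Tight loop over the arrays; with the declarations above,
        ;; the compiler can open-code the arithmetic and array refs.
        (dotimes (i n sum)
          (declare (fixnum i))
          (incf sum (* (aref a i) (aref b i))))))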

In every case the Lisp versions ran faster and took up less memory. The
experiment was done on either a KA-10 or a KL-10. I remember being
amazed at the FORTRAN results. The programs used were purely
computational: things like matrix handling and iterative modeling
simulations.

I don't remember much more. Could any of the principals shed further light?
-- 
			Marty Sasaki
			Harvard University Science Center
			sasaki@harvard.{arpa,uucp}
			617-495-1270