Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!tut.cis.ohio-state.edu!cwjcc!gatech!rutgers!mailrus!cornell!uw-beaver!uoregon!will
From: will@uoregon.uoregon.edu (William Clinger)
Newsgroups: comp.lang.scheme
Subject: space/time in byte code/native code (was dynamic compilation)
Summary: factors of five matter
Message-ID: <2850@uoregon.uoregon.edu>
Date: 23 Sep 88 23:15:53 GMT
References: <8809142159.AA28551@uicbert.eecs.uic.edu> <327@scaup.cl.cam.ac.uk>
Reply-To: will@fog.UUCP (William Clinger)
Organization: University of Oregon, Computer Science, Eugene OR
Lines: 28

In article <327@scaup.cl.cam.ac.uk> adg@cl.cam.ac.uk (Andy Gordon) writes:
> I found on small benchmarks that native code
>was between four and six times bigger than interpreted byte codes, and
>ran between one and seven times faster.

Another data point: In MacScheme, native code is also four to six times
as large as interpreted byte code, but is two to ten times as fast, with
a factor of four or five being typical.

>There appear to be two reasons for hybrid systems: (1) to give a variable
>time/space tradeoff, i.e., between fast/bulky native code and slow/lean
>interpreted code; (2) to allow fancy interpretive debuggers and tracers
>in the presence of native code.
>
>I don't think reason (1) is very compelling these days, because the size of
>compiled code is not an issue with today's computers...

Here I have to disagree. On a Macintosh, a factor of five in program size
can easily be the difference between fitting or not fitting on a floppy
disk. The space required by a program is also a big issue under MultiFinder,
since it determines how many simultaneous applications you can run. RAM
accounts for about a quarter of the typical Macintosh II system cost, so
five times as much RAM would double the cost. Similarly for disk space.

I understand why some people don't count Macintoshes and IBM PCs and PS/2s
and their ilk as "today's computers", but I don't think that's realistic.

Peace, Will
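
P.S. For anyone who wants to check the arithmetic behind the RAM claim,
here is a minimal sketch in Scheme. It assumes the rough one-quarter
figure above and that the non-RAM part of the system cost stays fixed;
both are estimates, not measurements.

    ;; Relative system cost after scaling RAM by a factor K, assuming
    ;; RAM is one quarter of the total cost and everything else is
    ;; unchanged.
    (define (relative-cost k)
      (+ 3/4 (* k 1/4)))

    (relative-cost 5)    ; => 2, i.e. the whole system costs twice as much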