Path: utzoo!utgpu!watmath!att!tut.cis.ohio-state.edu!brutus.cs.uiuc.edu!wuarchive!wugate!uunet!gistdev!flint
From: flint@gistdev.UUCP (Flint Pellett)
Newsgroups: comp.lang.c
Subject: Re: optimization (was Re: C vs. FORTRAN (efficiency))
Message-ID: <484@gistdev.UUCP>
Date: 17 Aug 89 15:42:09 GMT
References: <3288@ohstpy.mps.ohio-state.edu> <225800204@uxe.cso.uiuc.edu> <14523@bfmny0.UUCP> <1613@mcgill-vision.UUCP> <14556@bfmny0.UUCP>
Organization: Global Information Systems Technology Inc., Savoy, IL
Lines: 57

In article <1613@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:
>In article <14523@bfmny0.UUCP>, tneff@bfmny0.UUCP (Tom Neff) writes:
>>The fact is that every extra hour the applications wonk spends
>>trying to get the #*$&@# compiler or linker or OS loader to work, or
>>on the phone to some consultant, is worth billions of instructions on
>>any processor of his choice.
>
>Yeah, so what?  If I can shave 10% off of my compiles, that's a lot of
>time.  Suppose it takes me a week to save that 10%: then after I spend
>a total of nine weeks waiting for compiles, I've broken even: that
>would have been ten weeks.  After that it's clear gain.  ...
>
>Now, if the compiler is sold to eight thousand customers... I'll let
>you work out the arithmetic for yourself.  :-)

There are a couple of sides to this, and learning not just WHEN to
optimize but WHAT to optimize is a large part of growing up as a
programmer.

As an example, I once had a "consultant" forced on me by a client; he
was to look at a system of ours and tell us how to speed it up.  (We'd
just finished developing the thing under a "make it work first, then
make it work fast" philosophy, so I felt I already knew what to do to
speed it up, but sometimes it's hard to refuse "help".)  After several
days of poking around, the consultant came back and proudly displayed
a routine he had found and rewritten: he said he had tested it and
achieved about a 90% improvement, so it now ran in about 10% of the
time it used to take.  I asked what effect that had on overall system
performance, and he replied that he had not measured it.  So we did,
and there was no noticeable effect: the routine he'd spent a day
improving was being executed about once a day.  Meanwhile, I had spent
my two days on a routine I knew was being run constantly: I squeezed a
mere 30% improvement out of it, for a resulting improvement of about
15% in overall system performance.
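To make the arithmetic concrete: this is just Amdahl's law, which says
that speeding up one piece of a system helps overall only in proportion
to the fraction of total time that piece accounts for.  A minimal
sketch in C follows; the fractions in it are my own illustrative
guesses, not measurements (say the consultant's routine was 0.1% of
the run time, and mine was half of it, which is consistent with 30%
locally becoming 15% overall):

#include <stdio.h>

/* Overall speedup when a fraction f of total run time is made s times
   faster and the remaining (1 - f) is left alone (Amdahl's law). */
double amdahl(double f, double s)
{
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void)
{
    /* Consultant: 10x speedup (a 90% improvement) on an assumed 0.1%
       of the workload; the gain vanishes in the noise. */
    printf("consultant: %.4fx overall\n", amdahl(0.001, 10.0));

    /* Me: about 1.43x (a 30% improvement) on an assumed 50% of the
       workload; total time drops to 85%, i.e. 15% better overall. */
    printf("mine:       %.4fx overall\n", amdahl(0.500, 1.0 / 0.7));
    return 0;
}

The moral is the one the anecdote makes: measure where the time
actually goes before deciding what is worth improving.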
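Der Mouse's break-even figure in the quote above falls out of the same
kind of back-of-the-envelope arithmetic.  Another small sketch; the
one-week cost and the 10% saving are his hypothetical numbers, not
anything I have measured:

#include <stdio.h>

int main(void)
{
    double invested = 1.0;   /* weeks spent making the compiler faster */
    double saving   = 0.10;  /* fraction shaved off every compile      */

    /* Each week of old-speed compile waiting now pays back 'saving'
       weeks, so the investment is repaid after invested/saving weeks'
       worth of old-speed waiting. */
    double old_weeks = invested / saving;           /* 10 weeks */
    double new_weeks = old_weeks * (1.0 - saving);  /*  9 weeks */

    printf("break even after %.0f weeks at the new speed\n", new_weeks);
    printf("(which would have taken %.0f weeks at the old speed)\n",
           old_weeks);
    return 0;
}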
As for the compiler example you cite: it is often better to have the
faster product, but the simple arithmetic of saying that taking 10%
off the compile time yields 10% higher productivity doesn't follow.
The problem is that any system is no faster than its slowest part, and
eventually the slowest part turns out to be the human: a certain
amount of time is required for a person to think and plan, and often
the forced "break" in the action caused by waiting for a compile is
used for exactly that purpose.  (I haven't studied this myself, but my
theory is that if you plotted productivity against compile time, the
curve would be bell-shaped for many people.  Toward the end where
compiles are instantaneous, productivity would drop, because the
programmer would charge ahead without ever stopping to think about
what they are doing, and would end up wasting time on things that a
little pre-planning would have eliminated.  I don't think this would
be true of every programmer, though, only some of them: for the people
who don't even go to the terminal until they have figured out exactly
what they are going to do, it does not apply.  But my guess is that
those people are a small minority of all programmers.)
-- 
Flint Pellett, Global Information Systems Technology, Inc.
1800 Woodfield Drive, Savoy, IL  61874     (217) 352-1165
INTERNET: flint%gistdev@uxc.cso.uiuc.edu
UUCP:     uunet!gistdev!flint