Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/17/84; site opus.UUCP
Path: utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!genrad!panda!talcott!harvard!seismo!hao!nbires!opus!rcd
From: rcd@opus.UUCP (Dick Dunn)
Newsgroups: net.lang
Subject: Re: Efficiency of Languages (and complexity)
Message-ID: <189@opus.UUCP>
Date: Thu, 31-Oct-85 20:35:14 EST
Article-I.D.: opus.189
Posted: Thu Oct 31 20:35:14 1985
Date-Received: Sun, 3-Nov-85 09:41:30 EST
References: <15100004@ada-uts.UUCP> <15100007@ada-uts.UUCP>
Organization: NBI, Inc., Boulder CO
Lines: 53

> >> First of all, I don't know of any features of any languages which are
> >> "inherently" expensive to implement, regardless of the machine
> >> architecture.  (me)
> >
> > Right, because any expensive feature can be done directly in
> > hardware! :-)  But simply compiling a call to "(sort x)" into
> > "SORT (X)+,0" doesn't mean that the sorting won't still be O(n log n)...

> ...The proof
> that I know of that no sort can run faster than O(n log n)
> is based on an analysis of possible decision trees...The important point
> is that this proof relies on assumptions about ARCHITECTURE.
> It assumes that more than one comparison will NOT be done at
> the same time.

This is not true.  I think it indicates a basic misunderstanding of "O()
notation".  If you change the architecture of your machine so that it
can do, say, five comparisons at once, and you modify your sort program
to take advantage of this, the time for the sort is STILL O(n log n)--
the situation is little different from making the hardware five times as
fast, and a constant factor never changes the order: c*f(n) is O(f(n))
for any fixed c.  Remember (or understand) that when you talk about the
"order" of an algorithm in this sense, you don't get to set a fixed
upper limit on n.  (A worked version of the decision-tree count, with
several comparisons allowed at once, appears at the end of this
article.)

> I believe that one can find the N-th highest number in an
> unsorted list in less than O(n log n) time (I'd have to check
> my algorithms books to make sure, but...)  Throw n different
> processors at an unsorted array, one looking for the first
> highest, the second looking for the second highest, etc.
> Have these processors work concurrently.  Voila!

NO!  You cannot throw "n different processors" at the array!  n is
(potentially) larger than the number of processors you have.  Actually,
there is an assumption in analyzing algorithms that one does not have an
infinite number of computational elements (whatever they may be).  If
you throw out that assumption, you're nowhere--because (1) you can't
build, or even emulate, the hardware implied, and mostly (2) all the
algorithms you're going to find interesting will take constant time!
(If you have unlimited hardware, you just keep replicating it and
working in parallel.)  The selection claim itself is sound, by the way,
and needs no parallel hardware--see the sketch at the end of this
article.

> Look, I'm no algorithms expert.  But separation of language
> issues from architectural issues from implementation issues
> is RARELY done and I'd like to see people be more careful
> about doing so.

On the contrary, analysis of the complexity of algorithms is generally
done with considerable care to separate these issues and to identify the
assumptions about architecture implied by an algorithm.  If you're not
accustomed to reading in the area, you may find yourself a little
befuddled because you don't understand some of the assumptions commonly
left implicit.
-- 
Dick Dunn    {hao,ucbvax,allegra}!nbires!rcd    (303)444-5710 x3086
   ...At last it's the real thing...or close enough to pretend.
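
To make the decision-tree point concrete: letting the hardware resolve
k comparisons at once (k is my notation for a fixed parallel width; it
is not in the quoted proof) only widens the tree, it does not flatten it
asymptotically.  A sorting decision tree must distinguish all n! input
orders, a node that resolves k comparisons at once has at most 2^k
outcomes, and log_2(n!) = Theta(n log n) by Stirling's approximation, so
for the depth d of the tree:

\[
(2^k)^d \ge n!
\quad\Longrightarrow\quad
d \ge \frac{\log_2 n!}{k}
    = \frac{\Theta(n \log n)}{k}
    = \Theta(n \log n) \quad \text{for fixed } k.
\]

The width k just divides the constant out front, exactly like a machine
that runs k times faster, and constants vanish inside the O().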
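
As for the N-th highest number: the quoted hunch is correct, and no
concurrency is required.  Hoare's selection algorithm ("quickselect",
1961) finds the k-th largest of n elements in O(n) expected time on a
single processor.  Here is a minimal sketch in C; the function names,
the middle-element pivot choice, and the test values are mine, for
illustration only, not from any particular textbook.

#include <stdio.h>

/* Swap two array elements. */
static void swap(int *a, int i, int j)
{
    int t = a[i];
    a[i] = a[j];
    a[j] = t;
}

/* Partition a[lo..hi] around the value a[p], in DESCENDING order:
 * elements greater than the pivot end up to its left.  Returns the
 * pivot's final index, which is its 0-based rank from the top. */
static int partition(int *a, int lo, int hi, int p)
{
    int pivot = a[p];
    int i, store = lo;

    swap(a, p, hi);              /* park the pivot at the end */
    for (i = lo; i < hi; i++)
        if (a[i] > pivot)        /* descending: big elements go left */
            swap(a, i, store++);
    swap(a, store, hi);          /* put the pivot in its final slot */
    return store;
}

/* Return the k-th largest element of a[0..n-1] (k = 1 is the maximum).
 * Expected O(n) time; the array is reordered in place. */
int kth_largest(int *a, int n, int k)
{
    int lo = 0, hi = n - 1;

    for (;;) {
        int p = partition(a, lo, hi, lo + (hi - lo) / 2);

        if (p == k - 1)          /* pivot landed at rank k: done */
            return a[p];
        else if (p > k - 1)      /* k-th largest is left of the pivot */
            hi = p - 1;
        else                     /* ...or right of it */
            lo = p + 1;
    }
}

int main(void)
{
    int a[] = { 31, 4, 15, 92, 65, 35, 89, 79 };

    printf("3rd largest: %d\n", kth_largest(a, 8, 3));  /* prints 79 */
    return 0;
}

Each partitioning pass discards an expected constant fraction of the
remaining array, so the expected work is about n + n/2 + n/4 + ... < 2n,
which is O(n).  Unlucky pivots give an O(n^2) worst case; picking pivots
by median-of-medians (Blum, Floyd, Pratt, Rivest, and Tarjan, 1973)
makes even the worst case linear--and note that none of this needs more
than one comparison at a time.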