Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: $Revision: 1.6.2.16 $; site ada-uts.UUCP
Path: utzoo!linus!philabs!cmcl2!harvard!think!ada-uts!richw
From: richw@ada-uts.UUCP
Newsgroups: net.lang
Subject: Re: Efficiency of Languages ?
Message-ID: <15100007@ada-uts.UUCP>
Date: Mon, 28-Oct-85 13:33:00 EST
Article-I.D.: ada-uts.15100007
Posted: Mon Oct 28 13:33:00 1985
Date-Received: Fri, 1-Nov-85 02:08:10 EST
References: <15100004@ada-uts.UUCP>
Lines: 70
Nf-ID: #R:ada-uts:15100004:ada-uts:15100007:000:3433
Nf-From: ada-uts!richw    Oct 28 13:33:00 1985


>> First of all, I don't know of any features of any languages which are
>> "inherently" expensive to implement, regardless of the machine
>> architecture.  (me)
>
> Right, because any expensive feature can be done directly in
> hardware! :-)  But simply compiling a call to "(sort x)" into
> "SORT (X)+,0" doesn't mean that the sorting won't still be O(n log n)...
> (stan shebs)

Apparently my original point about Turing machine implementations
increasing the run-time order of an algorithm didn't sink in.
YES, expensive features can be done in hardware.  The proof
that I know of that no comparison-based sort can run faster
than O(n log n) is based on an analysis of decision trees:
any comparison-based sort has a decision tree with at least
n! leaves, so the tree must have Omega(n log n) depth (sketched
below -- check an algorithms text for the full story).  The
important point is that this proof relies on assumptions about
ARCHITECTURE.  It assumes that no two comparisons are ever done
at the same time.  Next time you come across an algorithms text,
think twice about why they talk so much about RAMs
(random-access machines).
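
For the record, the counting argument goes roughly like this
(from memory, so check a text before quoting me): a decision
tree that distinguishes all n! possible input orderings needs
at least n! leaves, and a binary tree of depth d has at most
2^d leaves, so

    2^d >= n!
    d   >= log2(n!) >= log2((n/2)^(n/2)) = (n/2)*log2(n/2)

which is Omega(n log n) comparisons on any machine that makes
one comparison at a time.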

In fact, one can find the N-th highest number in an unsorted
list in O(n) time, no log factor at all -- Blum, Floyd, Pratt,
Rivest, and Tarjan published a worst-case linear selection
algorithm back in 1973 (check an algorithms text).  Now throw
n different processors at an unsorted array, one looking for
the first highest, the second looking for the second highest,
etc., all working concurrently: that's n selections in O(n)
total time, i.e. a sorted array faster than the "lower bound"
allows.  Voila!
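
In case it helps, here's a quick sketch of the sequential piece --
a quickselect, which is O(n) on average (the Blum et al. algorithm
gets worst-case O(n), but it's longer).  The code and the names in
it (kth_largest and friends) are mine, not from any of the above,
so caveat lector:

#include <stdio.h>

static void swap(int a[], int i, int j)
{
    int t = a[i];
    a[i] = a[j];
    a[j] = t;
}

/* k-th largest element of a[0..n-1], k = 1 for the largest.
 * Average-case O(n) quickselect; rearranges the array in place. */
static int kth_largest(int a[], int n, int k)
{
    int lo = 0, hi = n - 1;
    int target = k - 1;     /* where the answer sits, sorted descending */

    while (lo < hi) {
        int pivot = a[hi];  /* partition: bigger-than-pivot to the left */
        int i = lo, j;

        for (j = lo; j < hi; j++)
            if (a[j] > pivot)
                swap(a, i++, j);
        swap(a, i, hi);     /* pivot lands at its final descending index i */

        if (target < i)
            hi = i - 1;
        else if (target > i)
            lo = i + 1;
        else
            return a[i];
    }
    return a[lo];
}

int main(void)
{
    int a[] = { 31, 4, 15, 92, 65, 35, 89 };

    printf("3rd highest = %d\n", kth_largest(a, 7, 3));   /* prints 65 */
    return 0;
}

Give each of n processors its own copy of the array and its own
k, and you have the concurrent "sort" described above.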

Look, I'm no algorithms expert.  But the separation of language
issues from architectural issues from implementation issues is
RARELY made, and I'd like to see people be more careful about
keeping them apart.

> ...  The advantage of higher-level constructs is that they can be
> understood more easily and modified more safely.  CLU wins in some
> of these respects, but it has little advantage over a modern Lisp
> style.    (stan shebs)

Comparing Lisp(s) and CLU is a little like comparing oranges and apples.
CLU is more type-safe (because all type errors are uncovered
before run-time).  Lisp is more flexible and more extensible.
CLU is more readable (in my opinion -- I shudder to think of the
inane flames I'd have to fend off if I dared claim LISP was
unreadable).  Lisp is easier to learn.  And so on.  Neither is
a "better" language.

BTW, it is also hard to even judge LISP as a language because
of all of its variants: Common Lisp, MacLisp, Scheme, McCarthy's
original Lisp, the Lisp Machine's Lisp with Flavors (Zetalisp,
or whatever its name is), etc.
I've found that any statement I make to criticize Lisp can
be countered with "but that's not a problem in <your-favorite>-Lisp!".
This is just a manifestation of Lisp's extensibility -- great!
But extending Lisp to tailor it to particular needs doesn't make
Lisp any better.  How is extending Lisp any better than implementing
generally useful procedures and/or data-abstractions in CLU?
To me, they're both programming.  Unfortunately, when people
write nifty stuff for their Lisp, they have this need to name
the result after their dog, grandmother, or favorite composer.

If Franz Liszt can get his own language, why can't Richard Wagner?
That's what I wanna know...   :-)

-- Rich Wagner

P.S. Apologies to people who have a reasonable respect for Lisp(s).
     Five years of listening to MIT Lisp disciples rant and rave
     without knowing better triggers certain reflexes...
     They're almost as bad as those cock-sure CLUsers :-)