Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site wdl1.UUCP
Path: utzoo!watmath!clyde!cbosgd!ihnp4!qantel!hplabs!fortune!wdl1!jbn
From: jbn@wdl1.UUCP
Newsgroups: net.lang
Subject: Re: Re: Efficiency of Languages (and com
Message-ID: <841@wdl1.UUCP>
Date: Thu, 7-Nov-85 22:05:48 EST
Article-I.D.: wdl1.841
Posted: Thu Nov  7 22:05:48 1985
Date-Received: Mon, 11-Nov-85 05:20:09 EST
Sender: notes@wdl1.UUCP
Organization: Ford Aerospace, Western Development Laboratories
Lines: 27
Nf-ID: #R:rocheste:-1289000:wdl1:8600010:000:1075
Nf-From: wdl1!jbn    Nov  7 18:50:00 1985


On parallelism:

	The extent to which a computation can be speeded up by doing
	multiple operations in parallel is a very difficult subject;
	finding formal upper bounds on potential parallelism is an
	area in which not much is known.

On sorting:

	The lower bound for sorting by binary comparisons is of the
	order of n log n comparisons, but one can do better than
	n log n operations by using
	distribution techniques.  There was a recent article in
	Computer Language giving a sorting algorithm that beats n log n;
	heuristic techniques are used to divide the key space so as to
	distribute the records evenly into multiple buckets.  If the
	division can be done perfectly, the sort runs in strictly linear
	time.  Consider sorting a set of records numbered 0 to 9999 by
	providing 10,000 buckets and just putting each record into the
	right slot as it comes along.  Linear time, right?  In practice,
	nearly linear performance is achieved by the high-performance sort
	packages on mainframes, such as SyncSort; the distribution algorithms
	used to do this are proprietary.
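
	As a rough sketch of the idea (not the algorithm from the
	Computer Language article, and not SyncSort's proprietary
	method), the C fragment below places records with unique
	integer keys in 0..9999 directly into their final slots in a
	single linear pass.  The names (struct record, distribute,
	NKEYS) are made up for illustration.

	/* Distribution ("bucket") placement for records whose keys are
	   assumed to be unique integers in 0..NKEYS-1.  One pass puts
	   each record in its final position; no record-to-record
	   comparisons are made, so the work is linear in n.           */

	#define NKEYS 10000

	struct record {
	    int  key;             /* assumed unique, in 0..NKEYS-1      */
	    char payload[16];     /* hypothetical record contents       */
	};

	void distribute(struct record in[], struct record out[], int n)
	{
	    int i;
	    for (i = 0; i < n; i++)
	        out[in[i].key] = in[i];    /* the key is the slot index */
	}

	int main(void)
	{
	    static struct record in[3]  = { {42, "a"}, {7, "b"}, {9999, "c"} };
	    static struct record out[NKEYS];

	    distribute(in, out, 3);
	    return out[42].payload[0] == 'a' ? 0 : 1;   /* 0 on success */
	}

	When keys repeat or are spread unevenly, each slot becomes a
	bucket holding a list of records, and the trick is dividing the
	key space so the buckets come out roughly equal; that is where
	the heuristics come in.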

					John Nagle