Path: utzoo!attcan!utgpu!watmath!iuvax!cica!gatech!hubcap!dinucci
From: dinucci@ogccse.ogc.edu (David C. DiNucci)
Newsgroups: comp.parallel
Subject: Re: New Bell Award
Message-ID: <6282@hubcap.clemson.edu>
Date: 18 Aug 89 13:19:50 GMT
Lines: 100
Approved: parallel@hubcap.clemson.edu

In article <6215@hubcap.clemson.edu> dinucci@cse.ogc.edu (David C. DiNucci) writes:
>In article <6202@hubcap.clemson.edu> notes@iuvax.cs.indiana.edu writes:
>>Here is an item that should be of interest to this group:
>>
>>
>>           1989 Bell Award for Perfect Benchmark Rules
>>
>>(2)  More than 16 Processors:  The measure is the same as in  1.,
>>     except  that the computer system has more than 16 processors
>>     and all processors must participate in the execution of each
>          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>     benchmark.
>      ^^^^^^^^^
>
>This rule is nonsensical, and seems to be due to the belief in parallelism
>for its own sake, instead of as a means to solve a problem.  A processor is
>a resource to a computing system, just as memory is.  It is safe to say that
>most readers would find the above rule atrocious if it referred to memory
>(e.g. "all 4-MB of memory must be used in the execution of each benchmark").
>
>In fact, if there is insufficient parallelism in some of the benchmarks, a
>smart programmer would let some of the processors do senseless work, simply
>to meet the letter of the law.  Is this somehow guarded against?
>
>
>Perhaps someone can explain a rationale behind the above rule?

Well, along with some agreement from others, I did receive one personal
response from someone who helped with the development of the rules.  I
post it here with his permission, followed by some comments of my own
which have already been seen by Dr. Cybenko.

=======================================================================
Date: Tue, 15 Aug 89 12:41:38 CDT
From: gc@s16.csrd.uiuc.edu (George Cybenko)
Subject: Re:  Bell Award

Dave:
	If you want to post an "official" response to your question,
that will be difficult because quite a few people were involved in
drafting the "rules".  I can only offer my reasons for thinking that
the part in question was appropriate.  You can post the following
as a personal response.

*********************************************

A number of people have raised questions about Category (2)'s
requirement that "all processors must participate" in the
execution of each benchmark.  As someone involved in putting
those rules together, my thinking was to prevent calling
a supercomputer networked with 16 idle PCs a 17-processor
distributed system.  One should interpret that as "at least
16 processors must meaningfully participate" in the execution
of each benchmark.

This raises the larger question of why there should be two categories in the first place.
Certainly, if the time with more than 16 processors beats the
time with fewer than 16 processors, it wouldn't be interesting
to have two categories.  However, since most people feel that it
will be a few years before that happens, separate categories
on the basis of processor count allow more people to compete and
help gauge the progress being made in the application of
parallel computing to scientific problems.

In the end, no set of rules can replace common sense, consensus
and fair play.

George Cybenko
gc@uicsrd.csrd.uiuc.edu

=======================================================================

My [Dave D's] followup comments:

While this seems to suggest that some crumbs are being tossed to the
parallel processors, the rule in fact makes them appear worse than they
would if they could be used in a more logical manner - i.e. using only
the processors that are needed to accomplish the task(s) at hand - and
as benchmarks, I would assume that they are indeed intended to reflect
the real world accurately.  (In other words, I suppose my own personal
view is that a Cray on a network with IBM PCs should, in fact, win if
it is faster than any parallel processor.)
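
To make the "senseless work" loophole from my original question concrete,
here is a minimal sketch of the sort of thing I had in mind.  It is
purely illustrative - the processor counts and the fork-per-processor
scheme are my own inventions, not anything taken from the actual rules
or from the Perfect Suite - but it shows how a submission could keep
every processor nominally busy while only a few do useful work:

    /*
     * Illustrative sketch only: satisfy "all processors must
     * participate" by giving the extra processors busy-work.
     * Assumes a Unix-like system that will schedule each forked
     * child onto an otherwise idle processor.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define TOTAL_PROCESSORS  17  /* hypothetical machine size           */
    #define USEFUL_PROCESSORS  4  /* processors the problem really needs */

    static void busy_work(void)
    {
        volatile long i;                /* senseless work: just spin */
        for (i = 0; i < 100000000L; i++)
            ;
    }

    static void real_work(int rank)
    {
        printf("processor %d runs the actual benchmark kernel\n", rank);
    }

    int main(void)
    {
        int p;

        for (p = 1; p < TOTAL_PROCESSORS; p++) {
            if (fork() == 0) {          /* child process */
                if (p < USEFUL_PROCESSORS)
                    real_work(p);
                else
                    busy_work();        /* meets the letter of the rule */
                _exit(0);
            }
        }
        real_work(0);                   /* parent does its share too */
        while (wait(NULL) > 0)          /* reap all children */
            ;
        return 0;
    }

Nothing in the letter of rule (2) seems to rule this out, which is why
I asked whether it was guarded against.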

I'm afraid that the overall results will promote unwarranted comparisons
between (1) and (2), leading to cries about the poor state of parallel
processing.  But, then again, if the field is gaining strength, perhaps
the results in the "cost effectiveness" categories will counteract this
sufficiently.

Dave

Disclaimers:  I have not seen the Perfect Suite, and therefore do not know
how much parallelism is present therein.  Also, my interest is purely
academic, since I do not have the time to participate in the contest.
Finally, as usual, I speak for myself, not OGC.

-- 
David C. DiNucci                UUCP:  ..ucbvax!tektronix!ogccse!dinucci
Oregon Graduate Center          CSNET: dinucci@cse.ogc.edu
Beaverton, Oregon