Path: utzoo!attcan!uunet!lll-winken!lll-tis!ames!amdahl!rtech!davek
From: davek@rtech.rtech.com (Dave Kellogg)
Newsgroups: comp.databases
Subject: Re: ORACLE on the cheap... questions
Message-ID: <2316@rtech.rtech.com>
Date: 13 Jul 88 03:33:41 GMT
References: <5165@dasys1.UUCP> <8208@ncoast.UUCP> <178@turbo.oracle.UUCP>
Reply-To: davek@rtech.UUCP (Dave Kellogg)
Organization: Relational Technology Inc, Alameda CA
Lines: 87

In article <178@turbo.oracle.UUCP> rbradbur@oracle.UUCP (Robert Bradbury) writes
>
>I *hate* statements like "wasn't too hot on the speed front".  Exactly
>*what* are you doing that gives you that impression?  We beat Informix
>and Ingres in a good percentage of the DeWitt benchmarks on a number of
>machines.  

I have several comments on Robert's recent posting, the least important 
of which is that what he says above is simply not objectively true.

I have neither seen a published report nor participated in any formal 
benchmark that would confirm his claim of DeWitt superiority. In fact, 
I've never participated in any informal benchmark that would confirm his 
claim, either. 

More important, however, are the two brief notes below.

A Note on Standards
-------------------

Just as the TP1 "standard" is elusive, so is the "DeWitt" (or Wisconsin)
benchmark.  In the case of TP1, vendors select whatever pieces of the well-
defined DebitCredit benchmark suit them, and call that TP1.  In the case
of DeWitt, vendors tend to select whatever subset of the DeWitt queries
suits them, and call that the "DeWitt" benchmark.

There are two points which should be noted here.  First, I am not suggesting
that Robert's firm has used a DeWitt subset, as I am not familiar with their
tests.  Second, and more important, I am neither suggesting nor do I believe
that there is anything inherently wrong with DeWitt subsets or TP1 tests
(which are essentially DebitCredit subsets).

DebitCredit is a tough benchmark which tests a system for a large number of 
factors that are important in Online Transaction Processing.  TP1 tests, on
the other hand, generally test only for basic transaction throughput against
fairly small databases.  However, this is not a question of good vs. evil 
as long as one *recognizes* the differences between the tests and is not
misled into mistaking a test for its distant cousin.  The same argument 
holds true for DeWitt subset "X" vs. DeWitt subset "Y".
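To make the DebitCredit/TP1 distinction concrete: at its core, the transaction is a short update against three balance relations plus an insert into a history relation.  The sketch below shows that shape in Python with an in-memory SQLite database; the table and column names are my own simplifications, not the official schema.

```python
import sqlite3

# Toy schema for a DebitCredit-style transaction.  Names here are
# illustrative assumptions; the full benchmark also specifies terminal
# I/O, logging, and database scaling rules that a TP1 test omits.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE teller  (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE branch  (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE history (account_id INTEGER, teller_id INTEGER,
                          branch_id INTEGER, delta INTEGER);
    INSERT INTO account VALUES (1, 1000);
    INSERT INTO teller  VALUES (1, 0);
    INSERT INTO branch  VALUES (1, 0);
""")

def debit_credit(account_id, teller_id, branch_id, delta):
    """One transaction: three balance updates plus one history insert."""
    with conn:  # commits on success, rolls back on error
        cur.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                    (delta, account_id))
        cur.execute("UPDATE teller SET balance = balance + ? WHERE id = ?",
                    (delta, teller_id))
        cur.execute("UPDATE branch SET balance = balance + ? WHERE id = ?",
                    (delta, branch_id))
        cur.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                    (account_id, teller_id, branch_id, delta))

debit_credit(1, 1, 1, 100)
print(cur.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0])
```

A test that only counts how many such transactions per second a system can push through a small database measures far less than the full DebitCredit specification, which is precisely the distinction at issue.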

Different benchmarks test different things, and in general, those different
things bear little resemblance to any user's actual workload.  Bearing this 
in mind could prove quite useful, as I expect the coming months to bring a 
barrage of numbers from all the various DBMS vendors.

For further information on the "DeWitt" benchmark, see the paper entitled
"Benchmarking Database Systems: A Systematic Approach" by Dr. David DeWitt 
and (I believe now, Dr.) Dina Bitton of the University of Wisconsin.
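For readers who have not seen the paper, the benchmark is built from selections, joins, aggregates, and updates against synthetic relations.  As a rough illustration (not the benchmark itself), here is a toy version of one of its basic range selections in Python with SQLite; the relation and attribute names follow the style of the published benchmark, but the data generation is my own simplification.

```python
import sqlite3

# A small stand-in for a Wisconsin-style 10,000-tuple relation.  The
# (i * 7) % 10000 expression just scrambles 0..9999 into a permutation
# for unique2; the real benchmark's generator is more elaborate.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tenktup1 (unique1 INTEGER, unique2 INTEGER)")
cur.executemany("INSERT INTO tenktup1 VALUES (?, ?)",
                [(i, (i * 7) % 10000) for i in range(10000)])

# A 1%-selectivity range selection, one of the benchmark's basic forms.
rows = cur.execute(
    "SELECT * FROM tenktup1 WHERE unique2 BETWEEN 0 AND 99").fetchall()
print(len(rows))
```

Reporting results for only the query forms on which one's own system happens to shine is exactly what makes a "DeWitt subset" so different from the full test.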

I will investigate the copyright on this document and see if I am able to 
distribute copies to interested parties.  Please send e-mail to me and I'll
let you know if/when I can send you one.


A Note On "The Net"
-------------------

I have always enjoyed comp.databases as an open, unpolluted forum for both
users of commercial database products and any persons interested in DBMS
systems.  From time to time users may express displeasure with a given system
(no major vendor has failed to be victimized in the last year or so), and 
the vendors have either remained silent, offered explanations, or posted
remarkably unbiased discussions of the technical issues regarding their
products.
I might add also, that I have always enjoyed Robert's postings for this
same candor.

This "WE beat vendor X" type quote, however, disturbs me.  If this continues,
and becomes standard operating procedure for this newsgroup, it could easily
mean that comp.databases will degenerate into nothing more than a forum for
vendor cross-fire.  This degeneration is not infeasible given that companies
like Relational Technology, Informix, and Oracle employ literally thousands of
people, and each could easily place a designated "net monitor" to fire back 
accusations at the other vendors.

A little inter-vendor "teasing" is probably inevitable, but that's where 
I think it should stop.  Cluttering the net with "WE beat so-and-so" will
neither reward the readers of this group, nor, in actuality, any given 
DBMS vendor.

David Kellogg

+--------------------------------------------------------------------------
| Relational Technology (INGRES) New York City
| (212) 952-1400 	...!ucbvax!mtxinu!rtech!davek
|
| The above opinions are merely my own and should not be 
| construed as any official statement of Relational Technology
+--------------------------------------------------------------------------