Path: utzoo!attcan!uunet!lll-winken!lll-lcc!ames!pasteur!ucbvax!hplabs!oracle!rbradbur
From: rbradbur@oracle.UUCP (Robert Bradbury)
Newsgroups: comp.databases
Subject: Re: ORACLE on the cheap... questions
Summary: Benchmarks and environmental information
Message-ID: <180@turbo.oracle.UUCP>
Date: 16 Jul 88 04:25:26 GMT
References: <5165@dasys1.UUCP> <8208@ncoast.UUCP> <178@turbo.oracle.UUCP> <590@hscfvax.harvard.edu>
Organization: ORACLE Corporation, Belmont CA, USA
Lines: 132

In article <590@hscfvax.harvard.edu>, pavlov@hscfvax.harvard.edu (G.Pavlov) writes:
> In article <178@turbo.oracle.UUCP>, rbradbur@oracle.UUCP (Robert Bradbury) writes (in response to another article): 
>   A while ago I attempted to arrange a set of cooperative benchmark runs
>   with other sites running other dbms's.  The two Oracle sites I talked to
>   claimed that their license agreements forbade publication of benchmark
>   results.  Don't know if this is completely true, but that is what I was
>   told.

This is true.  The license agreement prohibits publication of benchmark
results.  One reason for this is that some vendors will publish results
comparing apples with oranges and not make it completely clear to the user.
A benchmark was done last year at a Silicon Valley computer manufacturer
(who shall remain nameless); Unify later packaged and started publishing
these numbers showing it as having much faster insert rates than the other
RDBMS (without pointing out the much lower-level interface it was using
to perform these inserts -- yes, Virginia, we can append records to the
database file faster than other people can parse and execute SQL statements...
:-) ).  Oracle threatened Unify with legal action over the violation of
the license agreement, and Unify has since stopped publishing
the numbers.  At the same time I heard a rumor that Informix was
taking Oracle to court for "restraint of trade" over this
license clause.

Yes, YOUR RDBMS dollars are paying lawyers to play these games.  sigh.

> 
>   The only independent comparison benchmark (Ingres, Informix, and Oracle)
>   was a rather extensive one performed by a Palmer & Associates, Inc. 
>   (Duluth, Ga).  This was performed on an IBM AT and involved apx. 130 sep-
>   arate query or update processes.  Several lines from the "executive sum-
>   mary" of the report:
> 
>    "In general (using a geometric mean of normalized data approach) Ingres
>     was from 1.63 to 2.86 times faster than Informix, and Ingres was from
>     10.56 to 29.00 times faster than Oracle."
> 
>    "Oracle was clearly outdistanced by both Ingres and Informix with 5 first
>     place, 40 second place and 65 third place finishes." (in retrievals)
>    "Oracle won 4 of 20 update record queries."
> 
>     - etc.
> 
>   This is not to say that this is the definitive word on these dbms's. But I
>   have to give it more credence than the "results" that have come from the 
>   individual vendors themselves.


This is *EXACTLY* the type of thing I was referring to in my original
posting.  You *fail* to mention the versions which were being compared
in this benchmark.  Oracle, Informix and Ingres have had a history of
leapfrogging each other in performance benchmarks over the last 5 years.
If you take production releases at one point in time you get one set of
results; at another point in time you get another set.  You don't mention
what operating system the benchmark was performed on, what the system
configuration was, how much time was spent tuning the systems, etc.
And of course numbers from a PC AT are generalizable to VAXes, Suns,
Sequents, etc. :-)
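For what it's worth, the "geometric mean of normalized data" approach quoted
above is easy to state precisely.  Here is a minimal sketch (in Python, purely
as an illustration; the function name and the example numbers are mine, not
from the Palmer report):

```python
from math import prod

def geometric_speedup(times_fast, times_slow):
    """Geometric mean of per-query speedups times_slow/times_fast.

    The geometric mean is the usual choice for averaging normalized
    benchmark numbers because, unlike the arithmetic mean, the result
    does not depend on which system is chosen as the baseline.
    """
    ratios = [s / f for f, s in zip(times_fast, times_slow)]
    return prod(ratios) ** (1.0 / len(ratios))

# e.g. one query 2x faster and another 8x faster averages to a 4x
# speedup, not the arithmetic 5x
```

Note that a single averaged speedup still hides exactly the version,
configuration, and tuning questions raised above.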

I'd suggest you contact our UNIX marketing dept and request the results of
a recent benchmark done at PACBELL.  Since Oracle was picked over the other
vendors, one may presume the results were not as bad as those in the
study you are quoting.

I appreciate the comments that we don't want this to decay into a
we-beat-so-and-so-on-the-following-tests debate.  At the same time,
as an individual who has spent many long nights running these
benchmarks in competitive situations (and, in the early days, losing
many of them), I get a little testy when people present a picture of Oracle's
performance which reflects past history more than current reality.

I will agree that DeWitt/TP1 tests give a very narrow view of DBMS performance
and that, given the differences among the various systems involved, it is
difficult to make reasonable comparisons.  The bottom line is that
if you want to know how your application will perform on these systems
YOU MUST BENCHMARK YOUR APPLICATION.  And even then I'll bet that
in a good percentage of the cases I can take your best efforts
and beat the application, RDBMS and operating system with a stick and
make your initial results look fairly silly.  (This is not to say
that you don't know what you are doing, just that I haven't seen
a benchmark yet where a clever person couldn't bend the results
significantly.)

As an example of how poor benchmarking efforts are, I'll make the
statement that no one who has ever benchmarked RDBMS systems on
UNIX has ever bothered to verify which RDBMSs really guarantee that
when you commit your transaction the data is physically on the disk.
(I'm sure I'll get some comments on this - :-)).  I make that statement
because in 5 years of benchmarking comparisons no one has ever told me
they knew how to monitor system calls or examine the UNIX
file table to determine whether blocks were being written through the
cache to the disk.  The best they usually do is unplug the machine to see
if the database got corrupted -- sorry, folks, but that doesn't provide
*proof* unless you do it a few thousand times while you are executing
30 transactions a second.
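To make the distinction concrete, here is a rough sketch (in modern Python,
as an illustration only; `O_SYNC` and `fsync()` are the standard UNIX
mechanisms, though support varied across UNIX variants of the day) of a
buffered write versus a write forced through the cache:

```python
import os

def buffered_write(path, payload):
    """Plain write(): returns as soon as the data is in the kernel's
    buffer cache.  Fast, but the data is NOT yet safe on disk."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, payload)
    finally:
        os.close(fd)

def committed_write(path, payload):
    """Write forced through the cache: O_SYNC makes write() return only
    after the data reaches the disk, and fsync() flushes metadata too.
    This is roughly what a commit must cost if durability is real."""
    fd = os.open(path,
                 os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o600)
    try:
        os.write(fd, payload)
        os.fsync(fd)
    finally:
        os.close(fd)
```

Timing a loop of `committed_write` against `buffered_write` on the same
hardware gives a floor for honest transactions per second; an RDBMS claiming
durable commits at buffered-write speeds deserves a closer look.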

Oracle in fact lost a number of benchmarks over the years because
other vendors were ignoring the issue of data integrity.


>   The glossies and demo disk that Oracle sent me show that the new report
>   writer is, indeed, a vast improvement over the old (comparable to the 
>   difference between a Model T Ford and a Taurus).  It is also an extra-
>   cost option.
> 
Oracle has responded to customer requests by dividing products into
separately priced items so you don't have to pay for things you don't
need.  (The flip side of that coin is that you get to pay more for all
the things you do need.)

>   Some, like Ingres, though, use the termcap syntax for this purpose,
>   regardless of which operating system the product is running under.

Given how difficult TERMCAP is to maintain, I wouldn't consider this
a plus.  AT&T converted to TERMINFO to make it faster; they missed the
boat, though, because terminal descriptions belong in a database.
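To see why termcap entries are painful to maintain (a colon-separated flat
file of cryptic two-letter capability codes), here is a toy parser sketch in
Python.  It is illustrative only: real termcap also has backslash
continuation lines, `tc=` includes, and escape handling that this ignores.

```python
def parse_termcap(entry):
    """Toy parser for a single-line termcap entry.

    Splits the colon-separated fields into the terminal's name aliases
    plus boolean flags, numeric (#) and string (=) capabilities.
    """
    fields = entry.strip(":").split(":")
    names = fields[0].split("|")       # first field: name aliases
    caps = {}
    for field in fields[1:]:
        if not field:
            continue
        if "=" in field:               # string capability, e.g. cl=...
            key, _, value = field.partition("=")
            caps[key] = value
        elif "#" in field:             # numeric capability, e.g. co#80
            key, _, value = field.partition("#")
            caps[key] = int(value)
        else:                          # boolean flag, e.g. am
            caps[field] = True
    return names, caps

# e.g. parse_termcap("vt100|dec vt100:am:co#80:li#24:cl=\\E[H\\E[2J:")
```

Every capability is an opaque two-letter code whose meaning lives only in
the documentation, which is exactly the maintenance problem a real database
with named, typed fields would solve.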


Sorry this turned out so long, but I like to try to present a balanced
view of things.  A current Oracle employee (a former RTI employee) told
me recently that he used my rather candid comments about Oracle's
less-than-stellar performance a few years ago to RTI's great
advantage in some sales situations.  Now that I think the situation
is more even, you can be sure that I'm going to make a point of
pointing out any holes in any performance claims.

Of course I can't say whether or not this reflects the opinions of Oracle's
management (most of them don't know what to do with me).
-- 
Robert Bradbury
Oracle Corporation
(206) 784-9474                            hplabs!oracle!rbradbur