Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!mailrus!ncar!ames!cs!shimeall
From: shimeall@cs.nps.navy.mil (Tim Shimeall x2509)
Newsgroups: comp.software-eng
Subject: Re: Software Failure Analysis
Summary: value of Knight-Leveson paper
Keywords: Software failure analysis, quantization errors, resolution
Message-ID: <302@cs.nps.navy.mil>
Date: 29 Sep 89 22:14:00 GMT
References: <10743@dasys1.UUCP> <34348@regenmeister.uucp> <592@halley.UUCP> <290@cs.nps.navy.mil> <27545@shemp.CS.UCLA.EDU>
Reply-To: shimeall@cs.nps.navy.mil (Tim Shimeall x2509)
Followup-To: comp.software-eng
Organization: Naval Postgraduate School, Monterey CA
Lines: 23

I have no desire to start a fresh round of the N-Version programming
flame wars here.  (Suffice it to say that there are at least two views
on the quality of the Knight-Leveson work and on the quality of the
works by Bishop and Avizienis -- readers are encouraged to exercise
their own judgement.)

I would like to clarify why I raised the Knight-Leveson experiment
in the first place.  In terms of failure analysis, the value of
that work is twofold:
 a) the characterization of the faults detected in the programs
    involved -- in particular, the demonstration that groups
    working independently may introduce similar faults.
    (Their result is arguably stronger than this, but I agree
     that readers should consult the paper and decide for
     themselves.)
 b) the examination of the run-time effect of those faults in
    increasing program failure probability.  (A toy sketch of
    this effect appears just below.)
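
To make point (b) concrete, here is a toy Monte Carlo sketch in
C -- my own illustration, NOT a reconstruction of the
Knight-Leveson experiment; every rate in it is an invented
parameter.  It compares the failure rate of a 3-version majority
vote when version failures are independent against a model in
which some inputs are "hard" and fail every version at once:

    #include <stdio.h>
    #include <stdlib.h>

    #define TRIALS 1000000
    #define P_FAIL 0.01   /* assumed per-version failure rate   */
    #define P_HARD 0.005  /* assumed rate of "hard" inputs that */
                          /* fail all versions simultaneously   */

    /* Return 1 with probability p. */
    static int coin(double p) { return rand() < p * RAND_MAX; }

    int main(void)
    {
        long t, ind = 0, cor = 0;
        for (t = 0; t < TRIALS; t++) {
            /* Independent model: each version fails alone. */
            if (coin(P_FAIL) + coin(P_FAIL) + coin(P_FAIL) >= 2)
                ind++;
            /* Correlated model: a "hard" input fails all three;
               otherwise versions fail independently at a reduced
               rate, keeping the per-version rate near P_FAIL.  */
            if (coin(P_HARD))
                cor++;
            else if (coin(P_FAIL - P_HARD) + coin(P_FAIL - P_HARD)
                     + coin(P_FAIL - P_HARD) >= 2)
                cor++;
        }
        printf("independent model: %ld voted failures\n", ind);
        printf("correlated model:  %ld voted failures\n", cor);
        return 0;
    }

With these invented numbers the correlated model votes wrong more
than an order of magnitude more often than the independent model,
even though each individual version fails at roughly the same
rate.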

The fact that programs occasionally mask faults (i.e., a fault does
not ALWAYS produce a failure when it is executed) is no surprise to
the software testing community, which has been dealing with
"coincidentally correct" results for some time now.
					Tim