Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!henry
From: henry@utzoo.UUCP (Henry Spencer)
Newsgroups: can.politics
Subject: Re: problems with Star Wars #2 (part 2: the crux)
Message-ID: <5772@utzoo.UUCP>
Date: Tue, 9-Jul-85 17:44:39 EDT
Article-I.D.: utzoo.5772
Posted: Tue Jul  9 17:44:39 1985
Date-Received: Tue, 9-Jul-85 17:44:39 EDT
References: <1197@utcsri.UUCP>
Organization: U of Toronto Zoology
Lines: 150

[I had better get this done, or else everyone will have forgotten what
the original article was about!  I find myself with less and less time
to read and post on can.politics, and may drop off the list.]

Having dealt with some side issues, I now [do I hear a chorus of
"at last"? :-)] come to the heart of the matter:  the software situation
as it affects an SDI system.  I agree with many of Ric's comments, but
not with some of his conclusions.

The prospects for verifiably correct programs of the size involved are
dim, going on zero.  DoD's notion of automating the problem away is a
fantasy at this time; the technology is not up to it.  I find it amusing,
in a black sort of way, that DoD has been sold on the wonders of program
verification and AI program generators by the same community that is now
(in part) frantically trying to retract those claims.  [I should note that
I'm not thinking of Ric here.]

However, this is not necessarily a crippling problem, for several reasons.

The first is that absolute correctness is not required, if what we are
worried about is accidental initiation of war.  The "no accidental war"
requirement does not demand, for example, perfect discrimination of
decoys from real warheads.  It is sufficient that the system reliably
discriminate between "major attack" and "no major attack".  This is a
much looser criterion than complete correctness.

Furthermore, why should the decision to activate an SDI system against
a major attack have to be automated at all?  Yes, the decision times
are short.  So are the decision times involved when flying a plane or
driving a car!  There is a difference between the complexity of pointing
hundreds of defensive weapons at the right targets, and the complexity
of deciding that it is appropriate to do so.  The former may well need
to be mostly or totally automated; the latter does not.  It is important
to distinguish between DoD's well-known mania for automating everything,
and the degree of automation that is actually *needed*.  Looking at a
set of sensor displays and deciding, promptly, whether a major attack is
in progress or not does not sound as if it is beyond human capabilities.
I see no reason why the decision to hold fire or open fire needs to be
automated at all.  The inevitable possibility of human error can be
dealt with by the method already used for such decisions as ICBM
launches:  simultaneous approval by multiple observers is required.
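
(To make the idea concrete -- what follows is a toy sketch of my own,
not any real system's interlock, and the three-observer threshold is
purely illustrative -- the gate itself is trivial; all of the hard
automation lives below it:)

/* Toy sketch, my own illustration only:  the "open fire" decision is
 * gated on simultaneous, independent approval from several human
 * observers; only the machinery below this gate needs heavy automation. */
#include <stdio.h>

#define NUM_OBSERVERS 3          /* illustrative numbers only */
#define REQUIRED      3          /* all must concur */

static int approval[NUM_OBSERVERS];   /* 1 = "major attack in progress" */

static int attack_confirmed(void)
{
	int i, count = 0;

	for (i = 0; i < NUM_OBSERVERS; i++)
		if (approval[i])
			count++;
	return count >= REQUIRED;
}

int main(void)
{
	approval[0] = 1;        /* only two of three observers concur... */
	approval[1] = 1;
	approval[2] = 0;

	if (attack_confirmed())
		printf("release battle-management system\n");
	else
		printf("hold fire\n");  /* ...so the system holds fire */
	return 0;
}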

[begin brief digression]
(I would further speculate -- note that this is now speculation, which
I am not prepared to defend at length -- that if one were willing to accept
the costs of having large numbers of well-trained people on duty in several
shifts, it would not be impossible to use a largely manual control system
for a missile-defence system.  Terminal homing of interceptors would
probably have to be automated, as would some initial prefiltering of data,
but I speculate that the "battle management" functions could be done by
humans with adequate speed.  Complex, yes; impossible, maybe not.)
[end brief digression]

> ...  Artificial intelligence is
> the branch of computer science that  begins  with  the  full
> knowledge that the correct solutions to its problems are not
> feasible, and it seeks solutions that work pretty well  most
> of  the  time, and fail only occasionally.

This definition seems to me to be politically slanted, although it
is definitely based on fact.  Heuristic methods are, by definition, not
guaranteed to give *optimal* solutions, but that is a far cry from
saying that they sometimes fail to give *correct* solutions.  The
inability to quickly compute an optimal solution for, say, the "travelling
salesman" problem does not imply an inability to quickly compute a valid
(although perhaps far from optimal) solution for it.  Furthermore, it is
an observed fact in a number of applications -- notably linear programming --
that the (relatively bad) worst-case behavior of the algorithm never happens
unless the input data is carefully concocted with that in mind.
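
To illustrate the point -- the code below is a toy of my own devising,
with made-up city coordinates, not anything out of a real system -- the
classic nearest-neighbour heuristic always produces a *valid* tour in
O(n^2) time, even though the tour may be well short of optimal:

/* Toy sketch, my own example:  nearest-neighbour heuristic for the
 * travelling-salesman problem.  It visits every city exactly once and
 * returns promptly, with no guarantee of optimality. */
#include <stdio.h>
#include <math.h>

#define N 5

static double x[N] = {0.0, 2.0, 5.0, 1.0, 6.0};   /* made-up coordinates */
static double y[N] = {0.0, 3.0, 4.0, 7.0, 1.0};

static double dist(int a, int b)
{
	double dx = x[a] - x[b], dy = y[a] - y[b];

	return sqrt(dx * dx + dy * dy);
}

int main(void)
{
	int visited[N] = {0};
	int tour[N];
	int i, j, cur = 0;
	double total = 0.0;

	tour[0] = 0;
	visited[0] = 1;
	for (i = 1; i < N; i++) {
		int best = -1;
		double bestd = 0.0;

		for (j = 0; j < N; j++)         /* pick nearest unvisited city */
			if (!visited[j] && (best < 0 || dist(cur, j) < bestd)) {
				best = j;
				bestd = dist(cur, j);
			}
		tour[i] = best;
		visited[best] = 1;
		total += bestd;
		cur = best;
	}
	total += dist(cur, tour[0]);            /* close the loop */

	printf("tour:");
	for (i = 0; i < N; i++)
		printf(" %d", tour[i]);
	printf("  length %.2f\n", total);
	return 0;
}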

Note that I am not saying that the computing problems of SDI are easy to
solve.  What I am saying is that a claim of impossibility which does not
examine the details of the problem is invalid.  Many existing systems
rely on suboptimal heuristics; a number of them involve risk to human
life if the heuristics fail badly.  It is often possible to construct
heuristics that quite reliably produce viable -- not optimal -- answers
for realistic inputs, regardless of how bad their theoretical worst-case
behavior is.

> And that [heuristic approach] is very
> valuable whenever the  benefits  of  success  outweight  the
> consequences  of  failure...

If one assumes an attack in progress, the benefits of success clearly
outweigh the consequences of failure.  I have dealt above with the matter
of uncertainty as to whether an attack is in progress or not.

> With further research, computers
> may be able to play chess so well that  they  almost  always
> make  good  moves,  and rarely make dumb ones.

This is perhaps an ill-chosen example, in that the best current chess
programs can fairly consistently slaughter the majority of human chess
players.  Several of the major programs have official (or semi-official;
I'm not sure of the politics involved) chess skill ratings that put them
well above the vast majority of the rated human players.  A player who is
good enough for major international competition will demolish any current
program, but few human players ever reach that level.

> But for SDI,
> the consequences of misinterpreting the data,  or  making  a
> dumb  strategic  move,  could  very  well  be the start of a
> nuclear holocaust.

This contention simply does not seem to be justified by the situation.
As mentioned above, there is no real need for an automatic "open fire"
decision; as I have discussed elsewhere, there is neither a requirement
nor much likelihood of SDI being coupled to offensive systems to the
extent of automatically activating them.

> ...   When  hardware  fails,  if the
> failure is detected, a backup  system  can  be  substituted.
> But  when  software  fails,  if the failure is detected, any
> backup copy has the same errors in it.

A backup copy of the same software, yes.  But the whole reason for
backup hardware systems is to get *different* hardware into action
if the first set fails.  This is why one of the Shuttle's computers
runs totally different software from the other four:  it duplicates
their functions, but in a different way and written by different authors.
It is *different* software, a software backup that does *not* share the
flaws of the primary system.
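
(A toy illustration of the principle -- my own sketch, emphatically not
the Shuttle's actual code:  two routines written to the same
specification but by different methods will rarely share the same
coding error, so disagreement between them flags a software failure and
the backup can take over:)

/* Toy sketch, my own example of dissimilar software backup. */
#include <stdio.h>
#include <math.h>

/* primary:  straightforward summation, then divide */
static double avg_primary(const double *v, int n)
{
	double sum = 0.0;
	int i;

	for (i = 0; i < n; i++)
		sum += v[i];
	return sum / n;
}

/* backup:  same specification, computed as a running (incremental) mean */
static double avg_backup(const double *v, int n)
{
	double mean = 0.0;
	int i;

	for (i = 0; i < n; i++)
		mean += (v[i] - mean) / (i + 1);
	return mean;
}

int main(void)
{
	double data[4] = {1.0, 2.0, 3.0, 4.0};
	double a = avg_primary(data, 4);
	double b = avg_backup(data, 4);

	if (fabs(a - b) < 1e-9)
		printf("agree: %.6f\n", a);
	else
		printf("disagree: primary %.6f, backup %.6f -- switch to backup\n",
		    a, b);
	return 0;
}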

> ... scientific debate about the feasibility  of  Star  Wars
> misses  the main point.  The threat posed by nuclear weapons
> is a political problem, with an obvious, if not easy, polit-
> ical  solution.  When politicians propose a scientific solu-
> tion,  they  are  raising  a  distraction  from  their   own
> failures...

So the alternative to SDI is to magically turn those failures into
successes.  The contention that this is possible, where SDI is not,
seems to me to be unproven.  Not necessarily false, but unproven.
Certainly not obvious.

> ...   Even  if SDI were completely successful in its
> aims, countermeasures would soon  follow...

Examination of the history of disarmament efforts gives little cause
to think that the same will not be true of them as well.

> ...  And lasers  that  can  destroy
> missiles  undoubtedly  have their offensive uses.  SDI is no
> solution to the arms race, but a further escalation.

I have discussed elsewhere the interesting reasoning: "because some
types of SDI systems would be very dangerous, any SDI system would be".
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry