Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site umcp-cs.UUCP
Path: utzoo!linus!philabs!cmcl2!seismo!umcp-cs!mangoe
From: mangoe@umcp-cs.UUCP (Charley Wingate)
Newsgroups: net.philosophy,net.math
Subject: Re: Mind as Turing Machine
Message-ID: <2137@umcp-cs.UUCP>
Date: Wed, 6-Nov-85 23:33:35 EST
Article-I.D.: umcp-cs.2137
Posted: Wed Nov  6 23:33:35 1985
Date-Received: Sat, 9-Nov-85 04:56:01 EST
References: <2031@umcp-cs.UUCP> <677@hwcs.UUCP>
Distribution: net
Organization: U of Maryland, Computer Science Dept., College Park, MD
Lines: 51
Xref: linus net.philosophy:2792 net.math:2124

In article <677@hwcs.UUCP> greg@hwcs.UUCP (Greg Michaelson) writes:

>> Well, the correct analogy in the first case is
>>   X = Build a Flying machine with flapping wings

>Have you not seen the flying elastic powered plastic pigeons with flapping
>wings?

Certainly, and those existed back before the Wright Bros. did their thing.
No sign of man-sized versions, though.  Anyway...

>> and in the second case
>>   X = Transmute a substance using alchemy
>> which fit well with the third
>>   X = Model the brain with a VonNeuman machine

>So VonNeuman technology = alchemy? Using current chemical/physical theory
>it can be proved that alchemical techniques cannot transmute substances. Can
>you provide an equivalent proof that VonN machines cannot be used to model
>the (admittedly vast) finite state machine inside human skulls?

My point here was not that von Neumann machines CAN'T do it -- it's that
there's a strong possibility that the von Neumann architecture is simply the
wrong mindset from which to approach the problem, much as flapping wings and
alchemy were to their problems.  Too often the voice I hear from the AI-ists
is "V.N. (or Parallel, or whatever-your-favorite-variation) is the only way
we know to attack the problem, so we will assume that it is the correct
way."  The
notion that the mind is a great state machine is, I would contend,
dangerously close to that sort of thinking.  It's conveniently
unfalsifiable, it's patently unmodelable as it stands (2**(10**10)
states!?!), and thus allows you to work indefinitely on the problem without
the inconvenience of being put to the test.  What I don't hear these people
saying, though, is "What are we going to do if it turns out NOT to be like a
giant state machine?"
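For scale, here's a quick back-of-the-envelope on that 2**(10**10) figure
(a sketch in Python; the only number taken from the post is the exponent,
the rest is plain arithmetic):

```python
import math

# How big is 2**(10**10)?  Too big to compute directly, so estimate
# its size via logarithms instead.
num_states_log10 = (10**10) * math.log10(2)   # log10 of 2**(10**10)
digits = math.floor(num_states_log10) + 1     # decimal digits in the state count

print(digits)  # roughly 3.01 billion digits just to WRITE DOWN the state count
```

That is, merely naming the number of states takes on the order of three
billion digits, which gives some sense of why a model of that size is
untestable in practice.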

One of my professors the other night made the claim that everyone should be
a programmer, because that's the only way they are going to get what they
want done on a computer.  He persisted in an analogy between computer
programming and writing.  My personal opinion is that this is going to
achieve the same results as we commonly see with programmers writing
manuals; they supposedly know how to write, but they aren't really competent
to write effectively on any large scale.  But this is a side issue.  My
sociological comment on this is that it illustrates the sort of messianic
light which one commonly sees in the eyes of computer scientists these days.
Programming will change everyone's way of life.  AI will give us new
electronic brains.  It's in some respects similar to the situation at the
beginning of serious investigation into heavier-than-air (HTA) manned
flight; plenty of people
thought it was possible, but almost without exception they were wrong about
how it would be brought to pass.

Charley Wingate