Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10 5/3/83; site pyuxd.UUCP
Path: utzoo!linus!philabs!cmcl2!harvard!bbnccv!bbncca!wanginst!decvax!bellcore!petrus!sabre!zeta!epsilon!gamma!pyuxww!pyuxd!rlr
From: rlr@pyuxd.UUCP (Rich Rosen)
Newsgroups: net.philosophy
Subject: Moody on Rosen on Searle
Message-ID: <1987@pyuxd.UUCP>
Date: Tue, 29-Oct-85 23:27:32 EST
Article-I.D.: pyuxd.1987
Posted: Tue Oct 29 23:27:32 1985
Date-Received: Sun, 3-Nov-85 10:06:08 EST
References: <2447@sjuvax.UUCP>
Organization: Whatever we're calling ourselves this week
Lines: 127

>>I could care less about the exact type of machine that the human mind really
>>is, but I have no disagreement with the notion that the mind and brain are
>>represented as some sort of machine. [Rosen]

> Indeed, Rich Rosen should have no disagreement, since as long as "the
> exact type of machine" is not specified, agreement or disagreement
> would be without content.  As long as one is careless about the exact
> type of machine, *anything* can be represented as some sort of
> machine.  [MOODY]

My curiosity is piqued:  why is Moody going out of his way to make this sound
"bad"?  "Careless" about the exact type of machine?  One should note that
the central point of argument here has been the notion, held by some, that
the human brain CANNOT be represented in mechanized fashion, as in a machine!
Amazing how Moody manages to turn a concession on his part (that the brain
fits into the "*anything*" category he describes above) into "carelessness"
on my part.  (The mark of a great philosopher?)

>>To throw yet another bone into this mix, I will quote from the oft-misquoted
>>(at least here) John Searle, from his "Minds, Brains, and Programs":
>  [Rosen, quoted material from Searle omitted]

Just as it had been conveniently omitted the first time around.

> To my knowledge, the only people in this newsgroup who have been
> quoting Searle are Michael Ellis and me.  I have checked my archives against
> Searle's papers; neither of us has misquoted him.

Except in that his opinions on the issues presented have been directly at odds
with those presented BY you and Ellis (i.e., the deliberate omission of those
sections I included, as if they were irrelevant).

> |  "Could instantiating a program, the right program of course,
> |  by itself be a sufficient condition of understanding?"
> |
> |   This I think is the right question to ask, though it is usually
> | confused with one of the earlier questions, and the answer to it is no.
> |
> |  "Why not?"
> |
> |    Because the formal symbol manipulations themselves don't have
> |   any intentionality...
> |______________________________ [Searle, quoted by Rosen]
> 
>>I think at this point Searle destroys his own argument.  By saying that these
>>things have "no intentionality", he is denying the premise made by the person
>>asking the question, that we are talking about "the right program".  Moreover,
>>Hofstadter and Dennett both agreed (!!!!) that Searle's argument is flawed.
>>"He merely asserts that some systems have intentionality by virtue of their
>>'causal powers' and that some don't.  Sometimes it seems that the brain is
>>composed of 'the right stuff', but other times it seems to be something else.
>>It is whatever is convenient at the moment."  (Sound like any other conversers
>>in this newsgroup?)

> Now, let's look at Rich Rosen's argument.  The claim that formal
> symbol manipulations lack intentionality is the *conclusion* of
> Searle's arguments, which Searle recaps at the end of the paper.  Far
> from destroying his argument, Searle is merely summarizing its
> conclusions, in order to distinguish them from other positions.  The
> "right program" does *not* mean "the program that has intentionality";
> it means "the program that passes the Turing Test."

Now I see why you chose to omit the sections I quoted:  including them would
show the holes in your point of view and the fabrications surrounding it.
You deliberately left out the questions (from that question/answer section)
that led up to that "ultimate" question, which in fact did not ONCE mention
the Turing test!  What was meant by being "the right program" WAS in fact
(despite your assertion to the contrary) having all the characteristics
necessary for "thought".  If intentionality (not present in the "Chinese room"
example) is one of them, so be it.  A program lacking that is NOT "the right
program" by Searle's OWN definition!

> The whole point of Searle's argument, of course, is that passing the Turing
> Test is not a sufficient condition of intentionality.

And more!  He asserts the fallaciousness of the claims of what he calls
"strong AI", which, by his own standard of reasoning, is itself nothing but
an assertion.

> It's true that Hofstadter and Dennett do not accept Searle's
> arguments.  Rich Rosen proceeds to quote some of Hofstadter's
> responses, from _The_Mind's_I_.  Presumably, Rosen agrees with
> Hofstadter.  But Hofstadter's arguments are weak.  Rather than "merely
> asserting" that some systems possess intentionality in virtue of their
> causal powers, Searle has written several books on the subject (one
> was written after _The_Mind's_I_).

Odd that all Moody had to say about "Hofstadter's arguments" was an assertion
that they "are weak".  (Why?  Because he doesn't like them?)

>  Note that the purpose of Searle's
> "Minds, Brains, and Programs" was not to develop a general theory of
> intentionality, but to criticize the notion that intentionality is
> just a matter of instantiating a Turing Machine program.  Hofstadter's
> insinuation that Searle vacillates on whether minds need to be
> embodied in neural stuff is a straw man.  Searle makes no such claim.

A thorough reading would show a good deal of vacillation.

> The last two sentences of Hofstadter, quoted by Rosen,
> cannot be called counterarguments; they are mere counterassertions.
> Rich Rosen offers no arguments of his own.  Indeed, he never clearly
> states just what it is that he is claiming about this Turing Machine
> issue.

Odd that when *I* make statements, they are not (counter-)arguments but
(counter-)assertions.  Does the same rule apply to Moody's statements?

> I will grant that Hofstadter does offer *some* arguments in his
> remarks, but Rosen has not mentioned one of them.  Rosen also claims
> that those of us who have quoted him (Ellis and me) do so in defense
> of positions that Searle would reject.  Rosen does not name names, nor
> does he identify those positions, but it sure sounds good, doesn't it?

Perhaps it "sounds good" because it is true.  Note how Ellis was real big
on Searle, until it came to defining machine, at which point Ellis decided
to arbitrarily redefine things to suit his "needs" (i.e., desired conclusions).

> In short, the substantive content of Rosen's comments on Searle and
> the relation of Turing Machines to minds is vanishingly close to zero.

And the substantive content of YOUR comments (as evidenced here) is not
zero, not even negative, but rather, imaginary.
-- 
"to be nobody but yourself in a world which is doing its best night and day
 to make you like everybody else means to fight the hardest battle any human
 being can fight and never stop fighting."  - e. e. cummings
	Rich Rosen	ihnp4!pyuxd!rlr