Path: utzoo!attcan!uunet!mcvax!ukc!etive!aipna!rjc
From: rjc@aipna.ed.ac.uk (Richard Caley)
Newsgroups: comp.ai
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: <350@aipna.ed.ac.uk>
Date: 28 Nov 88 03:15:46 GMT
References: <484@soleil.UUCP> <1654@hp-sdd.HP.COM> <1908@crete.cs.glasgow.ac.uk> <1791@cadre.dsl.PITTSBURGH.EDU> <819@novavax.UUCP> <1976@crete.cs.glasgow.ac.uk>
Reply-To: rjc@uk.ac.ed.aipna (Richard Caley)
Organization: Dept. of AI, Edinburgh, UK
Lines: 124
Dragon: Yevaud

In article <1976@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

>   Intelligence is a social construct.  The meaning of the word is
>   defined through interaction.  Dictionary definitions are
>   irrelevant, and certainly never accurate or convincing.  

This is one argument...

>   Intelligence can only be acquired in social situations, since its
>   presence is only acknowledged in social situations.  


This is another. They are not in any way equivalent, nor does one
necessarily follow from the other. (I agree with the first but not
with the second - the property of being a chair is also socially
defined, yet a naturally formed object can be recognised as a chair
without having attained its form and function via social interaction.)


>   The meanings
>   are fluid, and will only be accepted (or contested) by humans in
>   social contexts.  AI folk can do what they want, but no one will
>   ever buy their distortions, nor can they ever have any grounds for
>   convincement in this case.

So what's new here? I should think you would have to look hard for an AI
researcher who didn't believe this. This is what the Turing test is all
about: putting a machine in a context where its being a machine will not
bias social interaction and seeing if it is accepted as intelligent. What
distortions are you talking about here? This sounds like a straw man to
me.

>   What I am saying is that you cannot prove anything in this case by
>   writing programs.  Unlike sociology, they are irrelevant.

Now you argue against yourself. If intelligence can only be recognised
via social interaction, then the systems which are purported to have this
property _must_ be built (or programmed) to be tested. Sociology cannot
say yes or no, though it can point out hopeless paths. You have
yourself, if I remember correctly, said that AI workers lack the training
in experimental design that would be given to psychology undergraduates -
are you now saying that experimentation is useless after all?

>   Also, even Prospector's
>   domain restricted, unlike smart humans.

Most "smart humans" also have restricted domains ( though admittedly
rather larger than that of Prospector ). I doubt many people have expert
level knowledge in, say, both 12th century history and particle physics.

Where people differ from so-called "expert systems" is in their ability
to cope with non-"expert" tasks, such as throwing bread to ducks.

>   Now brave freedom fighter against the tyranny of hobbyhorses, show
>   me my circular reasoning?

I am not the brave freedom fighter addressed, but...

The argument seems to go something like the following -

	1 ) "Intelligence" can only be judged by social interaction with
		the supposedly intelligent system.

	2 ) I cannot conceive of a computer system capable of
		successfully interacting in this way.

	3 ) Therefore no computer system can ever be intelligent.

	4 ) Therefore (2).

Just saying that intelligence requires socialisation does not prove the
impossibility of machine intelligence without the lemma that machines
cannot be social entities, which is at least as big an assumption as
the impossibility of machine intelligence itself.


>Until a machine can share in socialisation, as a normal human, it will
>not be able to match the best human capabilities for any task.

I agree, with reservations (there are tasks in which a machine can
exceed the capabilities of any human; take levelling a city as an
example).

>And I don't think for one minute your machine you reflect on the
>morality of its action, as a group of children would. (no :-))

This would seem to be based on another circular argument - machines
cannot be socialised, so machines cannot acquire a morality, so I would
never accept a machine as a social entity...

>> Human intelligence is the example, not the  definition.
>Example for what?  I need to see more of the argument, but this
>already looks a healthier position than some in AI.

If I may once again answer for the person addressed (I have managed to
delete their name, my apologies), I believe he meant an example of the
kind of abilities and behaviours which are the target for AI. That is,
human beings are intelligent entities, but the reverse is not
necessarily the case.

>HCI isn't about automating everything (the AI mania), 

Except in a derivative sense, AI is not about automation. Although it
often proceeds by trying to automate some task in order to gain insight
into it, that is a research strategy, not a definition of the field.

>	{ paragraph about system design vs. implementation }
>
>Both roles MUST be
>filled though.  AI rarely fills either.

So what? AI is not doing HCI; it is also not doing biochemistry. Why
should it be?

AI has created some tools which people are using to create computer
systems for various tasks, and you are quite at liberty to criticise the
design of such systems. However, that is no more a criticism of AI than
criticism of the design of a television is a criticism of the physics
which led to the design of its electronics.
-- 
	rjc@uk.ac.ed.aipna	AKA	rjc%uk.ac.ed.aipna@nss.cs.ucl.ac.uk

"We must retain the ability to strike deep into the heart of Edinburgh"
		- MoD