Xref: utzoo comp.ai:2775 talk.philosophy.misc:1667
Path: utzoo!utgpu!watmath!clyde!att!rutgers!apple!bionet!agate!labrea!decwrl!sun!pitstop!sundc!seismo!uunet!kddlab!icot32!hawley
From: hawley@icot32.icot.junet (David John Hawley)
Newsgroups: comp.ai,talk.philosophy.misc
Subject: Re: Artificial Intelligence and Intelligence
Message-ID: <2082@icot32.icot.JUNET>
Date: 2 Dec 88 03:26:46 GMT
References: <562@metapsy.UUCP> <2732@uhccux.uhcc.hawaii.edu>
Reply-To: hawley@icot31.icot.junet (David John Hawley)
Organization: Fifth Generation Computing Systems (ICOT), Tokyo, Japan
Lines: 26

In article <2732@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <562@metapsy.UUCP>, by sarge@metapsy.UUCP (Sarge Gerbode):
>" ...
>" Do machines have the same subjective experience that we do when we
...
>" and the input data, Occam's Razor demands that we not attribute
>" subjectivity to them.
>
>A more proper application of Occam's Razor would be that it prevents
>us from assuming a difference between humans and machines in this
>regard without necessity.  What does explaining behavior have to
...

What are the criteria by which I may judge the suitability of an application of
Occam's Razor? I know the folk version is basically the KISS principle,
and I have heard that the actual criterion of simplicity is the number of
'blats' that need to be postulated (where a blat can be a thing, an entity,
perhaps a property, ...). Is this correct?

This has something to do with theory formation, as in, for example,
David Poole's Theorist default-reasoning system.
Does anyone have pointers to the literature on theory preference and the
relative strength of arguments, preferably in a how-could-we-build-it vein?
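To make the question concrete, here is a toy propositional sketch (in Python, all atoms and rules invented for illustration) of one reading of the Razor in the spirit of Theorist: an "explanation" of an observation is a subset of the available defaults that, together with the facts, yields the observation, and the preferred explanation is the one that postulates the fewest defaults. This is only a sketch of the counting criterion, not Poole's actual system.

```python
from itertools import chain, combinations

FACTS = {"makes_sounds"}                      # what we observe directly
RULES = {                                     # default -> its consequences
    "is_parrot": {"makes_sounds", "has_feathers"},
    "is_tape_recorder": {"makes_sounds"},
    "is_haunted": {"makes_sounds", "spooky"},
}

def consequences(defaults):
    """Close the facts under the chosen defaults and their rules."""
    closed = set(FACTS) | set(defaults)
    for d in defaults:
        closed |= RULES[d]
    return closed

def explanations(observation):
    """All subsets of defaults whose consequences include the observation."""
    names = list(RULES)
    subsets = chain.from_iterable(
        combinations(names, k) for k in range(len(names) + 1))
    return [set(s) for s in subsets if observation in consequences(s)]

def preferred(observation):
    """Occam as a count: the explanation postulating the fewest defaults."""
    return min(explanations(observation), key=len)

print(preferred("makes_sounds"))   # -> set(): already a fact, postulate nothing
print(preferred("has_feathers"))   # -> {'is_parrot'}
```

Whether "fewest postulated defaults" is the right simplicity measure is of course exactly what I am asking about.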

Yoroshiku (AdvTHANKSance)
	David Hawley