Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!rutgers!ucla-cs!zen!ucbvax!BFLY-VAX.BBN.COM!sas
From: sas@BFLY-VAX.BBN.COM
Newsgroups: comp.ai.digest
Subject: Don Norman's comments on time perception and AI philosophizing
Message-ID: <8707060837.AA02602@ucbvax.Berkeley.EDU>
Date: Thu, 2-Jul-87 09:55:35 EDT
Article-I.D.: ucbvax.8707060837.AA02602
Posted: Thu Jul  2 09:55:35 1987
Date-Received: Tue, 7-Jul-87 01:27:49 EDT
Sender: usenet@ucbvax.BERKELEY.EDU
Distribution: world
Organization: The ARPA Internet
Lines: 20
Approved: ailist@stripe.sri.com


Actually, many studies have been done on time perception. One rather
interesting one, reported some years back in Science, showed that time
and size scale together: smaller models (manikins in a model office
setting) seem to move faster.  It was kind of a neat paper to read.

I agree that AI suffers from a decidedly non-scientific approach.
Even when theoretical physicists flame about liberated quarks and the
anthropic principle, they usually have some experiments in mind. In
the AI world we get thousands of bytes on the "symbol grounding
problem" and very little evidence that symbols have anything to do
with intelligence and thought. (How's that for Drano[tm] on troubled
waters?)

There have been a lot of neat papers on animal (and human) learning
coming out lately.  Maybe the biological brain hackers will get us
somewhere - at least they look for evidence.

					Probably overstating my case,
						Seth