Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!lll-crg!styx!ames!ucbcad!ucbvax!XX.LCS.MIT.EDU!MINSKY%OZ.AI.MIT.EDU
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Newsgroups: mod.ai
Subject: Searle, Turing, Symbols, Categories
Message-ID:
Date: Sun, 30-Nov-86 22:27:00 EST
Article-I.D.: MIT-OZ.MINSKY.12259201929.BABYL
Posted: Sun Nov 30 22:27:00 1986
Date-Received: Tue, 2-Dec-86 04:20:17 EST
Sender: daemon@ucbvax.BERKELEY.EDU
Organization: The ARPA Internet
Lines: 159
Approved: ailist@sri-stripe.arpa

Lambert Meertens asks:

    If some things we experience do not leave a recallable trace, then
    why should we say that they were experienced consciously?

I absolutely agree. In my book, "The Society of Mind", which will be published in January, I argue, with Meertens, that the phenomena we call consciousness are involved with our short-term memories. This explains why, as Meertens suggests, it makes little sense to attribute consciousness to rocks. It also means that there are limits to what consciousness can tell us about itself. In order to do perfect experiments upon our own minds, we would need perfect records of what happens inside our memory machinery. But any such machinery must get confused by self-experiments that try to find out how it works - since such experiments must change the very records that they're trying to inspect! This doesn't mean that consciousness cannot be understood in principle. It only means that, to study it, we'll have to use the methods of science, because we can't rely on introspection.

Below are a few more extracts from the book that bear on this issue. If you want the book itself, it is being published by Simon and Schuster; it will be printed around New Year but won't get to bookstores until mid-February. If you want it sooner, send me your address and I should be able to send copies early in January. (The price will be $18.95 or less.) Or send the name of your bookstore, so I can get S&S to lobby it; they don't seem very experienced with books in the AI-psychology-philosophy area.

In Section 15.2 I argue that although people usually assume that consciousness is knowing what is happening in our minds right at the present time, consciousness is never really concerned with the present, but with how we think about the records of our recent thoughts. This explains why our descriptions of consciousness are so queer: whatever people mean to say, they just can't seem to make it clear. We feel we know what's going on, but can't describe it properly. How could anything seem so close, yet always keep beyond our reach? I answer: simply because of how thinking about our short-term memories changes them!

Still, there is a sense in which thinking about a thought is like thinking about an ordinary thing. Our brains have various agencies that learn to recognize - and even name - various patterns of external sensations. Similarly, there must be other agencies that learn to recognize events *inside* the brain - for example, the activities of the agencies that manage memories. And those, I claim, are the bases of the awarenesses we recognize as consciousness. There is nothing peculiar about the idea of sensing events inside the brain; it is as easy for an agent (that is, a small portion of the brain) to be wired to detect a *brain-caused brain-event* as to detect a *world-caused brain-event*.
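To make that last point concrete, here is a toy sketch - not from the book; it is Python, and every name in it is invented for illustration. An "agent" is just a detector over a stream of events, and each detection is itself a new brain-event that other agents can detect in turn:

    # Toy model: a few agents watch world-caused events; most agents
    # watch brain-caused events produced by other agents.
    class Agent:
        def __init__(self, name, detects):
            self.name = name
            self.detects = detects              # predicate over events

        def observe(self, event, log):
            if self.detects(event):
                # Detection is itself a new brain-event, visible to
                # whatever agents are watching downstream.
                log.append(("brain", self.name, event))

    retina  = Agent("edge-finder",    lambda e: e[0] == "world")
    monitor = Agent("memory-watcher", lambda e: e[0] == "brain")

    log = [("world", "eye", "bright spot")]
    for event in list(log):
        retina.observe(event, log)      # a world-caused brain-event
    for event in list(log):
        monitor.observe(event, log)     # a brain-caused brain-event

    print(log)   # three events: one from the world, two from the brain

The only point of the sketch is that the wiring is symmetrical: detecting the edge-finder's activity takes no machinery beyond what detecting the eye's signals takes.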
Indeed, only a small minority of our agents are connected directly to sensors in the outer world, like those that sense the signals coming from the eye or skin; most of the agents in the brain detect events inside the brain! In particular, I claim that to understand what we call consciousness, we must understand the activities of the agents that are engaged in using and changing our most recent memories.

Why, for example, do we become less conscious of some things when we become more conscious of others? Surely this is because some resource is approaching some limitation - and I'll argue that it is our limited capacity to keep good records of our recent thoughts. Why, for example, do thoughts so often seem to flow in serial streams? It is because, whenever we lack room for both, the records of our recent thoughts must displace the older ones (a toy sketch of this buffer idea follows this passage). And why are we so unaware of how we get our new ideas? Because whenever we solve hard problems, our short-term memories become so involved with doing *that* that they have neither time nor space for keeping detailed records of what they, themselves, have done.

To think about our most recent thoughts, we must examine our recent memories. But these are exactly what we use for "thinking" in the first place - and any self-inspecting probe is prone to change just what it's looking at. Then the system is likely to break down. It is hard enough to describe something with a stable shape; it is even harder to describe something that changes its shape before your eyes; and it is virtually impossible to speak of the shapes of things that change into something else each time you try to think of them. And that's what happens when you try to think about your present thoughts - since each such thought must change your mental state! Wouldn't any process that alters what it's looking at become confused?

What do we mean by words like "sentience," "consciousness," or "self-awareness"? They all seem to refer to the sense of feeling one's mind at work. When you say something like "I am conscious of what I'm saying," your speaking agencies must use some records of the recent activity of other agencies. But what about all the other agents and activities involved in causing everything you say and do? If you were truly self-aware, why wouldn't you know those other things as well? There is a common myth that what we view as consciousness is measurelessly deep and powerful - yet, actually, we scarcely know a thing about what happens in the great computers of our brains.

Why is it so hard to describe your present state of mind? One reason is that the time-delays between the different parts of a mind mean that the very concept of a "present state" is psychologically unsound. Another reason is that each attempt to reflect upon your mental state will change that state, which means that trying to know your state is like photographing something that is moving too fast: such pictures will always be blurred. And in any case, our brains did not evolve primarily to help us describe our mental states; we're more engaged with practical things, like making plans and carrying them out.

When people ask, "Could a machine ever be conscious?" I'm often tempted to ask back, "Could a person ever be conscious?" I mean this as a serious reply, because we seem so ill equipped to understand ourselves. Long before we became concerned with understanding how we work, our evolution had already constrained the architecture of our brains.
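Here is the promised toy sketch of the limited-memory idea - again Python, again invented for this post rather than taken from the book. Short-term memory is modeled as a fixed-size buffer, so recording any new thought displaces the oldest one; and inspecting the memory is itself a thought that must be recorded:

    from collections import deque

    STM = deque(maxlen=4)              # sharply limited capacity

    def think(thought):
        STM.append(thought)            # may silently evict the oldest record

    def introspect():
        snapshot = list(STM)           # what we try to look at...
        think("I examined my memory")  # ...is altered by the looking
        return snapshot

    for t in ["A", "B", "C", "D"]:
        think(t)

    print(introspect())   # ['A', 'B', 'C', 'D']
    print(list(STM))      # ['B', 'C', 'D', 'I examined my memory']

The snapshot comes back intact, but the buffer it was taken from has already changed: the record of "A" was displaced by the record of the inspection itself, and any further attempt to check what happened will displace still more.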
However, we can design our new machines as we wish, and provide them with better ways to keep and examine records of their own activities - and this means that machines are potentially capable of far more consciousness than we are. To be sure, simply providing machines with such information would not automatically enable them to use it to promote their own development, and until we can design more sensible machines, such knowledge might only help them find more ways to fail: the easier to change themselves, the easier to wreck themselves - until they learn to train themselves. Fortunately, we can leave this problem to the designers of the future, who surely would not build such things unless they found good reasons to.

(Section 25.4) Why do we have the sense that things proceed in smooth, continuous ways? Is it because, as some mystics think, our minds are part of some flowing stream? I think it's just the opposite: our sense of constant, steady change emerges from the parts of the mind that manage to insulate themselves against the continuous flow of time! In other words, our sense of smooth progression from one mental state to another emerges not from the nature of that progression itself, but from the descriptions we use to represent it. Nothing can *seem* jerky, except what is *represented* as jerky. Paradoxically, our sense of continuity comes not from any genuine perceptiveness, but from our marvelous insensitivity to most kinds of changes. Existence seems continuous to us not because we continually experience what is happening in the present, but because we hold to our memories of how things were in the recent past. Without those short-term memories, all would seem entirely new at every instant, and we would have no sense at all of continuity - or of existence.

One might suppose that it would be wonderful to possess a faculty of "continual awareness." But such an affliction would be worse than useless, because the more frequently your higher-level agencies change their representations of reality, the harder it is for them to find significance in what they sense. The power of consciousness comes not from ceaseless change of state, but from having enough stability to discern significant changes in your surroundings. To "notice" change requires the ability to resist it, in order to sense what persists through time - but one can do this only by being able to examine and compare descriptions from the recent past. We notice change in spite of change, and not because of it.

Our sense of constant contact with the world is not a genuine experience; instead, it is a form of what I call the "Immanence illusion." We have the sense of actuality when every question asked of our visual systems is answered so swiftly that it seems as though those answers were already there. And that's what frame-arrays provide us with: once any frame fills its terminals, this also fills the terminals of the other frames in its array. When every change of view engages frames whose terminals are already filled, albeit only by default, then sight seems instantaneous.
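As a toy illustration of that last mechanism - again invented for this post, not code from the book - let the frames of one array share a single table of terminals, pre-filled with defaults. Then an observation made from one viewpoint answers the questions asked from every other viewpoint, as though the answers were already there:

    # All frames in one array share their terminals (here, one dict).
    shared_terminals = {"color": "red", "size": "small"}   # defaults

    class Frame:
        def __init__(self, viewpoint, terminals):
            self.viewpoint = viewpoint
            self.terminals = terminals     # shared across the whole array

        def ask(self, slot):
            # Answered at once, whether filled by observation or default.
            return self.terminals[slot]

    frame_array = [Frame(v, shared_terminals)
                   for v in ("front", "left", "right")]

    frame_array[0].terminals["color"] = "blue"  # observe from one view...
    print(frame_array[2].ask("color"))          # ...every view answers "blue"

No question ever has to wait for new sensing, so the succession of views seems instantaneous - which is all the Immanence illusion requires.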