guesser, and person 3 is a deceiver who tries anything to mislead the guesser. All three are locked in an intensive game of social intelligence. If you replace the deceiver with a computing machine, will the machine be able to win as often as a real person? If so, then the machine passes a specific, testable, and rather impressive criterion for human-like thinking. In retrospect, Turing’s test fits wonderfully with the approach that I take in this book. To Turing, the criterion for human-like thinking is the ability to hold your own in a game of intensive social interaction.
The “Turing test” as it has since developed in popular culture is generally a warped and simplified version of Turing’s original idea. In the standard Turing test, you have a conversation with a computer. If you can’t tell that it is a computer, and you think that it is an actual conscious human, then the computer passes the Turing test. Curiously, the Turing test is a test for consciousness type B. It is a test of whether you can attribute consciousness to the computer, not of whether the computer can attribute consciousness to itself. The Turing test, as it has been reinvented over the years, is less interesting than Turing’s original idea.
But can a computer have consciousness type A? Can it construct its own awareness?
In science fiction, when computers become complicated enough or amass enough information, they become conscious. That is the scenario imagined by celebrated science fiction writers such as Isaac Asimov and Arthur C. Clarke. Hal from the movie 2001 comes to mind. So does the machine world from the Terminator series, and the malevolent computers of The Matrix. Exactly why complexity itself, or increased information storage, or an increase in some other standard computer attribute, would eventually result in awareness is not clear. Computers are already extremely complex. The amount of memory in a supercomputer already rivals that of a human brain. The speed of computation is much greater for a computer than for a human. The Internet links so much information that it vastly exceeds the total amount of information in normal human consciousness at any one time. If consciousness were simply an inevitable result of “enough” complexity or “enough” information, then given how much of both computers already have, the prospects for reaching “enough” do not look bright: it hasn’t happened yet. It seems that awareness is not simply more of the same stuff that computers already have. Instead, it is a specific feature that has not yet been programmed into a computer.
According to the attention schema theory, to make a computer aware in the human-like sense, to give it consciousness type A, requires three things. First, the computer must sort its information and control its behavior using the method of attention. It needs to select and enhance signals with the same dynamics that a human does.
Second, the computer must have programmed into it an attention schema to track, simulate, and predict that process of attention. The computer’s attention schema must have the idiosyncratic properties of the human attention schema. It cannot be a computer scientist’s or an engineer’s version, an optimized, accurate log of the attentional state. Instead, it needs the metaphorical layer present in the human attention schema. It needs to depict attention as an ethereal substance with a general location in space, as an intelligence that experiences information, as an ectoplasmic force that can flow and cause actions.
Third, the computer needs to be able to link its attention schema (A) to other information, including information on itself (S) and information on the item (X) to which it is directing attention. For it to be aware of information X, it must be able to construct the larger chunk of information S + A + X.
The computer would then be in a position to access that larger representation, read the information therein, summarize it, and report: X comes with awareness, with conscious experience, attached to it, and I myself in particular am the one who is aware. When asked about what awareness “feels” like, the computer would again access that model and obtain an answer. It would provide a human-like answer because the information set on which it bases that answer would be similar to the information set on which we humans base our answer.
The attention schema theory therefore gives a specific prescription, a direction for engineers to follow in building a conscious machine.