# Strong AI and the "Chinese Room"

## Strong AI

Strong AI derives from functionalist theories of mind, which view mental states as functions of a physical system or, more specifically, as computational processes. The functionalist argues that any physical system can have a mind, so long as it produces the same outputs from the same inputs as a human being. The Turing test was designed to show that computers could think: if a computer can fool someone into believing it is a person, then we have just as much reason to say it can actually think as we do to say a person can think. This leads to the view that computers could have minds and genuine understanding. So, on strong AI, ‘computers given the right programs can be literally said to understand and have other cognitive states’, and the ‘programs thereby explain human cognition’ (Searle, 353-4). It is these two claims that Searle discredits with his Chinese room thought experiment, and in doing so he shows that the Turing test does not truly demonstrate understanding.

## The "Chinese Room": A Critique of Strong AI

Searle invites us to imagine him locked in a room, into which batches of Chinese writing are posted. He has a rulebook, written in English, which enables him to correlate these batches of Chinese symbols with other Chinese symbols and post them out the other side of the room. To the people outside, it seems that he is answering questions in Chinese. His answers are ‘absolutely indistinguishable from those of native Chinese speakers’ (Searle, 335), so, according to the Turing test, he passes and must therefore understand Chinese. However, he does not understand Chinese at all; ‘Chinese writing is just so many meaningless squiggles’ (335). He is merely producing answers by formal symbol manipulation: the squiggles have no meaning for him and are identified purely by shape.
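The rulebook procedure Searle describes can be sketched in a few lines of code. The entries below are hypothetical illustrations, not from Searle's text; the point is only that the program matches input symbols to output symbols by their form alone, with no meaning assigned anywhere in the system.

```python
# A minimal sketch of the Chinese Room's "rulebook": purely formal
# symbol manipulation. The rule entries are hypothetical examples.
# The lookup works on the shape of the symbols alone; at no point
# does the program consult or represent what the symbols mean.

RULEBOOK = {
    "你好吗": "我很好",          # mappings between uninterpreted strings
    "你叫什么名字": "我叫王明",   # the "operator" never learns their meaning
}

def chinese_room(batch: str) -> str:
    """Return an output batch of symbols by shape-matching the input."""
    # A default output symbol string is returned for unmatched inputs.
    return RULEBOOK.get(batch, "对不起")

print(chinese_room("你好吗"))
```

From the outside, the replies may look competent; internally there is only string lookup, which is exactly the gap between simulating understanding and having it.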
‘As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of Chinese, I am simply an instantiation of the computer program.’ (356)

We can see from his example, which represents a computer’s processing of information, that the system lacks understanding. This shows the first claim of strong AI to be false: the computer does not literally understand; it merely simulates understanding. Here we discover that the Turing test overlooks a crucial difference between computers and humans. The computer can operate without assigning any meaning to the digits it uses in the functional process; they are just syntactic forms. When a person thinks, by contrast, they assign meaning to the digits (words). For a person, Searle argues, understanding is more than a string of syntactic shapes. This is why we know something is missing when Searle answers the Chinese questions: the symbols have no meaning for him, the internal system. Similarly, in computer processes the symbols have no meaning for the computer, so there is no true understanding, only processes of symbol movement and manipulation. This counters the second claim of strong AI, that a computer program can explain human understanding, since ‘we can see that the computer and its program do not provide sufficient conditions of understanding’ (Searle, 356). It could be argued that the program’s computational operations might still explain part of the story of human understanding. Yet Searle rejects this suggestion too, as he sees no reason ‘to suppose that they are necessary conditions or even that they make a significant contribution to understanding’ (357).
When a human being understands something, we do not attribute it to a computational process; ‘no reason has been given to suppose that when I understand English I am operating with any formal program at all’ (357). Searle argues that the computer’s understanding is not merely understanding to a lesser degree: ‘it is zero’ (358). The computer does not even have the potential to understand.

---

## Notes

---

## References
First posted 2006/11/29
Last updated 2006/11/29