# The language of thought (Fodor) #2

Now we turn to the objections to the language of thought argument, most of which can be roughly divided into three categories: objections to representationalism, objections to computationalism, and objections to the language of thought itself. Because of this essay's word limit, I omit the objections to representationalism. We first examine two prominent arguments against computationalism: John Searle's (1980) "Chinese room" thought experiment and the frame problem.

[section omitted]

Next, the frame problem is another representative argument against computationalism. It first emerged in the field of artificial intelligence, and some philosophers, such as Dennett (1987), have extended it into the philosophy of mind. Dennett illustrates the problem with a now-classic example. A robot is programmed to retrieve its own spare battery from a room, but a time bomb sits on the battery. The robot cannot handle this exceptional circumstance: it carries the battery out with the bomb still on it, and its mission fails. A second robot is then built to deal with exceptional cases such as a bomb on the battery. But this robot tries to compute all the implications of its actions, the number of which is infinite, and it becomes stuck in this useless calculation. So the second robot fails too. A third robot is then produced and instructed to ignore implications irrelevant to its mission. The result is what Dennett (1987) describes as follows:

> [The designers were surprised to see it] sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it. "Do something!" they yelled at it. "I am," it retorted. "I'm busily ignoring some thousands of implications I have determined to be irrelevant."

The third robot tried to deduce everything that was irrelevant to its mission; in other words, it endlessly considered what it should ignore. This is the frame problem (a toy sketch of the difficulty follows below). In our everyday lives we respond naturally to ordinary events because we have acquired common sense. We know quite naturally that, when facing a time bomb, inferences about breakfast or wallpaper must be ignored. Robots (or AIs), however, cannot simply be given common sense about ordinary life, because the knowledge that tells us how to behave in each situation is infinite in amount. Common sense is gained only through experience in our everyday lives and, on a Wittgensteinian view (suggested by Dreyfus, 1972), gaining common sense means adjusting to a community. It therefore seems impossible for an AI to gain such basic knowledge (what Wittgenstein calls the "bedrock" of all knowledge), because common-sense knowledge is acquired only by learning the rules of one's own community and cannot be handed over in the form of a programming language. As Dreyfus (1972) puts it:

> Like embodied common sense understanding, cultural style is too embodied to be captured in a theory or passed on by talking heads. It is simply passed on silently from body to body, yet it is what makes us human beings and provides the background against which all other learning is possible.

Thus, because computationalists assume that innate representation is language-like and can be translated into natural language, they never reach common-sense knowledge.
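To make the difficulty concrete, here is a toy sketch of the deliberation the second and third robots attempt. Everything in it (the function names and the "implications" they enumerate) is hypothetical, invented for this illustration rather than taken from Dennett's paper; the point is only that the stream of implications to check, or to deliberately ignore, has no end:

```python
import itertools

def implications(action):
    """Enumerate the implications of an action: an unbounded stream.

    Pulling the wagon moves the battery, but it also moves the bomb
    sitting on the battery, leaves the wall colour unchanged, leaves
    the price of tea unchanged, and so on without end.
    """
    yield f"{action}: the battery moves"
    yield f"{action}: the bomb on the battery moves too"  # what robot #1 missed
    for i in itertools.count():
        yield f"{action}: irrelevant fact #{i} is unchanged"

def deliberate(action):
    """The later robots' strategy: examine every implication in turn.

    Because the stream never ends, this loop never reaches `return`;
    the robot sits "Hamlet-like" while the bomb ticks.
    """
    relevant = []
    for imp in implications(action):  # this loop does not terminate
        if "bomb" in imp:
            relevant.append(imp)      # robot #2: simulate the exceptional case
        # else: robot #3, "busily ignoring" one more irrelevant implication
    return relevant

# A bounded peek at the stream (the full deliberation would never halt):
print(list(itertools.islice(implications("pull the wagon"), 4)))
```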
Subsequently, some direct objections have been raised against the language of thought itself. For instance, Fodor (1975) suggests that natural language is learned, understood, and given meaning by way of the language of thought. For example (borrowing from Aydede, 2004), "snow is white" is true in English if and only if P, where "P" is a sentence in the language of thought. But Blackburn (1984) points out that this explanation is insufficient, or that there are better alternative explanations that do without the language of thought. He argues that explaining the learning of natural language through the language of thought generates a regress: if understanding a natural language must be explained by a prior inner language, then understanding that inner language would seem to require yet another prior language, and so on.

Another objection to the language of thought argument has been given by Dennett (1978), using the example of a chess-playing program. Its designers sometimes predict and explain its play by saying that "[the program] thinks it should get its queen out early," which makes the program seem to have a propositional attitude or belief. However, Dennett points out that "for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with 'I should get my queen out early' explicitly tokened." On the language of thought argument, all propositional attitudes can be translated into natural language, but the chess program shows that even a computer program can display propositional attitudes without any explicit representation of them.

Although Fodor claims that the language of thought is "the only game in town," another influential approach to the mind has appeared, called "connectionism." Whereas computationalism and the language of thought argument see the mind as a computer, connectionists approach the mind by looking at the structure of neural networks in the brain. The brain contains numerous units (neurons) with connections (synapses) between them. According to connectionism, the weights of the connections between the units determine all human intellectual abilities, and these weights are adjusted by experience. Through repeated training, commonly by the learning algorithm called backpropagation, a network acquires general cognitive abilities (a minimal sketch follows below). For example, an artificial neural network trained by backpropagation on pictures of human faces can acquire the ability to distinguish males from females in pictures it has never seen before. This shows that the neural network (even an artificial one) learns the general characteristics of male and female faces. This capability can be called common sense or, in Wittgenstein's terms, the bedrock of all knowledge or "rule-following." Connectionism may therefore be able to cope with the frame problem mentioned above, since such a network may learn to ignore implications irrelevant to a specific event. Nevertheless, computationalists such as Fodor and Pylyshyn (1988) point out that connectionism cannot explain the systematicity of our thought, because connectionist mental representations lack syntactic structure. In response, some connectionists, such as Smolensky (1987), argue that the activation patterns of neural networks could themselves have a syntactic structure, but this suggestion might be no different from the language of thought argument itself.
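To make the connectionist picture concrete, here is a minimal sketch of units, weighted connections, and training by backpropagation. The architecture, learning rate, and the toy XOR task (standing in for the face-classification example above) are my own illustrative choices, not anything from the cited literature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR as a stand-in for a binary classification task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Units" and the weighted "connections" between them.
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)  # input  -> hidden
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)  # hidden -> output

lr = 0.5
for step in range(10000):
    # Forward pass: activation flows through the connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): the output error is propagated
    # back through the network, and each weight is nudged to reduce it.
    d_out = (out - y) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```

The "experience" here is just repeated exposure to examples: nothing in the trained weights explicitly tokens a rule such as "output 1 when exactly one input is 1," which is the contrast connectionists draw with the language of thought picture.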
In conclusion, both the language of thought argument and its basis, computationalism, have been harshly criticized. Anti-computationalists sometimes argue that there must be mental states or knowledge that cannot be reduced to linguistic propositions, and some counter that our cognitive states are not quasi-linguistic, as Fodor argues. Eliminativists (for example, Churchland, 1981) insist more radically that we have no propositional attitudes at all. In any case, Fodor's claim that the language of thought is "the only game in town" seems hardly acceptable, although I suppose that the language of thought argument still has an important role as a starting point for investigations into the mind today.

---

## References

- Aydede, M. (2004) "The Language of Thought Hypothesis" [online] http://plato.stanford.edu/entries/language-thought/
- Blackburn, S. (1984) Spreading the Word. Oxford: Oxford University Press. Cited in Aydede (2004).
- Churchland, P. (1981) "Eliminative Materialism and the Propositional Attitudes," Journal of Philosophy, 78: 67-90.
- Dennett, D. (1978) "A Cure for the Common Code?" in Brainstorms. Cambridge, MA: MIT Press.
- Dennett, D. (1987) "Cognitive Wheels: The Frame Problem in Artificial Intelligence." Cited in Aydede (2004).
- Dreyfus, H. (1972) What Computers Can't Do: A Critique of Artificial Intelligence. New York: Harper & Row.
- Fodor, J. (1975) The Language of Thought. New York: Crowell.
- Fodor, J. (1981) "Propositional Attitudes," in Representations. Cambridge, MA: MIT Press.
- Fodor, J. (1987) "Why There Still Has to Be a Language of Thought," in Psychosemantics. Cambridge, MA: MIT Press.
- Fodor, J. & Pylyshyn, Z. (1988) "Connectionism and Cognitive Architecture: A Critical Analysis," Cognition, 28: 3-71. Cited in Aydede (2004).
- Garson, J. (2007) "Connectionism" [online] http://plato.stanford.edu/entries/connectionism/
- Searle, J. (1980) "Minds, Brains and Programs," Behavioral and Brain Sciences, 3: 417-57.
- Shanahan, M. (2004) "The Frame Problem" [online] http://plato.stanford.edu/entries/frame-problem/
- Silby, B. (2000) "Revealing the Language of Thought" [online] http://www.def-logic.com/articles/RevealLanguageOfThought.html
- Smolensky, P. (1987) "The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn," The Southern Journal of Philosophy, Supplement, 26: 137-161. Cited in Shanahan (2004).
- Wittgenstein, L. (1953) Philosophical Investigations, translated by G. E. M. Anscombe.
First posted 2008/01/23
Last updated 2008/01/23