Saturday, October 16, 2004
Machines and thought
In debating whether or not a machine can think, I think it is important to ask, as Turing asked, what the “…meaning of the terms ‘machine’ and ‘think’” is. The definitions of these terms are vague, and more often than not they are linked to the capacity to emote and to be original, rather than to a straightforward ability to process data. I believe it is important to separate the two: feeling and emotion are a human capacity, but thinking is not restricted to humans, however much humans would like to believe it is, thereby preserving their own sense of superiority. The skin-of-an-onion analogy that Turing describes illustrates this point. Most of a person’s capacities to think can be described in a rather mechanical way, and it is not a given that, upon stripping these layers of mechanical function away, we will discover where ‘true thought’ lies. Thought can thus be described as the composite of all the parts that make up the thought process, with significance attached to the functional organisation rather than to the originality of the outcome.
John Searle, however, objects to claims that a machine could think, basing his objection on a definition of ‘thinking’ in which ‘understanding’ is implicit. He imagines an English-speaking man who is asked questions in written Chinese and who, although he does not understand Chinese, is given a set of instructions (‘the program’) that enables him to match each question with a response from a fixed set. For the system to be a ‘thinking’ entity, some component of it should understand Chinese; yet following the instructions does not require him to understand Chinese at all. Understanding Chinese, then, cannot simply be a process of matching one set of symbols to another. He is nevertheless capable of passing the Turing test and convincing a native Chinese speaker that he is a thinking being. Searle’s argument can be set out as follows:
P1. Certain objects are incapable of understanding (e.g., of Chinese).
P2. The person, the symbols and the room are all incapable of understanding of this kind.
P3. If all these items are incapable of understanding Chinese, then so is the system comprised of them.
C. Therefore, there is no understanding occurring in the system.
I find that whilst P1 and P2 hold, the leap from P2 to P3 is unjustified: it assumes that a system cannot have a property that none of its parts has individually, and there is no evidence that some form of understanding is not occurring either within or independently of the system.
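The symbol-matching ‘program’ at the heart of the thought experiment can be sketched in a few lines of code. This is a minimal illustration, not Searle’s own formulation: the rule table and its phrases are hypothetical placeholders, and the point is only that pairing one symbol string with another involves no grasp of what the symbols mean.

```python
# A toy Chinese Room: a rule book implemented as a lookup table.
# The entries are hypothetical examples chosen for illustration.
ROOM_RULES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # weather question -> stock reply
}

def chinese_room(question: str) -> str:
    """Follow the rule book: find the incoming symbol string and emit
    its paired response. No step here interprets the symbols' meaning."""
    return ROOM_RULES.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

To an outside questioner the replies may look competent, yet the function manipulates the strings purely as uninterpreted tokens, which is exactly the gap Searle’s argument exploits.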
Descartes’ famous line “I think, therefore I am” can be read as an argument that a system of artificial intelligence is not capable of thought, given that such a system must have a continuous, uninterrupted flow of thought in order to qualify. Since a program can be paused and restarted at will, and its stream of consciousness interrupted, the system cannot, according to Descartes, be considered a thinking being. As a rational, thinking being with a continuous flow of thoughts, nothing can convince one that one does not exist. The very fact that a program depends on the computer for its existence means that it can be temporarily ‘paused’, rendering it nonexistent for a period of time, and for that period also incapable of thought. This dependence would render the program incapable of passing the Turing test, as all that would be required to expose it would be for somebody to pause it. Descartes’ argument, however, does not centre on this, but rather on the finiteness of such a program: he did not believe that a finite machine could mimic the infinite complexity of the human mind. It is one thing to write a sonnet; this machine, however, would not be able to pause and reflect on the fact that it had just done so.
It should be fully possible, and already is in the field of A.I., to construct a machine that is capable of learning and of being shaped by experience, and thereby of developing character of some form. Moreover, given specific tasks, machines are already capable of outthinking humans: computers are used in many fields of science to find solutions that the human brain has been unable to reach, refuting the claim that machines are incapable of any original conclusions. So, given the advances technology has already made, I have no qualms in agreeing with Turing that it is indeed possible to create a machine that is capable of thought. A machine would arguably not be capable of expressing emotion, but there is no reason to assume that thought is not thought simply because it lacks emotional responses. Perhaps such thought could not be classified as human thought, which normally includes emotional responses to stimuli, but it would be thought nonetheless.
fon @ 4:12 AM