philosophical writings: October 2004
It's Always Raining...(filosofia)
Saturday, October 16, 2004
Language and reality

Whorf supports his claim that we “cut up nature, organise it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organise it in this way” by examining the tendency of English speakers to sort words into verbs and nouns along temporal lines, and by pointing out the semantic differences in our terms for labelling events.

When Whorf describes the categorisation of certain words, such as ‘run’, ‘hide’, and ‘strike’, he points out that these are verbs because they denote short-term events. He then asks why certain other words, such as ‘fist’ or ‘stance’, are considered nouns when they also describe temporary states. Meanwhile, some concepts that English treats as long-term, such as ‘house’, are in the Nootka language verbs: ‘it houses’ or ‘a house occurs’. This argument, detailed in ‘Science and Linguistics’ (Whorf: Language, Thought and Reality, pp. 207-19), supports his claim that what governs our categorising of the world is in fact an arbitrary agreement we have come to as speakers of the English language.

Another of Whorf’s arguments, that different languages use different semantic descriptions for the same events, is hinted at in ‘Science and Linguistics’ and detailed in his essay ‘Languages and Logic’ (Whorf: Language, Thought and Reality, pp. 233-45). He argues that since we use different concepts to describe the same events, we must also understand the world in entirely different manners. When he contrasts the English sentences ‘The boat is grounded on the beach’ and ‘The boat is manned by picked men’ with their Nootka counterparts, he shows that whilst the two sentences are very similar in English, they bear little resemblance to each other in Nootka. He suggests that this is because English focuses on reporting the event as it is, whilst in Nootka there is an implicit ‘why’ in the sentences, causing them to focus instead on the reason the boat may be grounded on the beach and what the men in the boat are there for.

In quoting Whorf in her article ‘Man Made Language’, Dale Spender seeks to explore the manner in which the English language uses certain biases to construct a sexist reality. Whorf claims that it is impossible to invent new terms outside of the accepted system, likening it to making “fried eggs without eggs” (1976:256). This supports her main thesis: that men have the power of ‘naming’ and use the biases already existing in English, in a seemingly objective manner, to invent names that perpetuate male supremacy and female subordination. In stating, “names are essential for the construction of reality” (Dale Spender: Man Made Language, pp. 163-71), she argues that without names we are unable to perceive the world around us. She claims that without the aid of names we exist in a world where all thoughts, ideas, objects and feelings are a chaotic mass, and that it is only through labelling these items that we are able to think and act. She argues that since men have traditionally been the ‘namers’, it is now almost impossible to name things in a manner which is positively biased towards females, or neutral, without being ‘political’. She shows how, historically, the rewriting and editing of the Bible was a process whereby male intellectuals omitted positive female images over time. To her, this is a cycle which can only be deconstructed with time and careful ‘naming’.

Whilst Dale Spender uses Whorf’s claims to support her thesis, Steven Pinker is highly critical of them. Whorf argues largely from thought experiments, offering very little scientific evidence for any of his claims, while Pinker uses logic and evidence to argue his point. Whorf’s style may be said to be largely narrative, whereas Pinker’s arguments are generally logical. The central point of Pinker’s essay ‘Mentalese’ (Pinker: Ch. 3 of The Language Instinct, pp. 55-82) is an antithesis to Whorf’s biased and circular manner of argument. In stating that people who speak differently from English speakers must also think differently, Whorf is unable to back this up with anything other than examples from language itself. Hence, Pinker is right in pointing out that the argument, ‘speakers of language X use different grammars and different words to label the world than do speakers of language Y, and therefore speakers of language X and speakers of language Y think differently’, is a false one, as there is no premise in this argument that directly links language with thought. As Pinker points out, Whorf’s argument is one based entirely on grammar. He does not, at any point, outline how it is, exactly, that our thoughts differ from those of speakers of Apache, Shawnee or Nootka. Whorf has designed his examples to perpetuate our pre-existing images of the ‘different’ or ‘exotic’ people speaking different and exotic languages. He only shows that the languages differ and concludes, perhaps too rashly, that our thoughts must also be intrinsically different, relying entirely on the reader’s preconceived biases towards a group of people with whom they are unfamiliar.

Steven Pinker shows that there are alternatives to thinking that a language necessarily defines thought. His most compelling example is that of the Turing machine, which shows, through a system of symbols, that language is merely a medium through which thought can be expressed. A machine with no capacity for thought is able to formulate conclusions and construct sentences which would be logical to speakers of any language, given that the symbols are used consistently and there are prescribed syntactic rules, which mean nothing to the machine itself. This shows that language is not the basis of our thought, but rather a system through which we process our thoughts.
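The mechanism being appealed to here can be made concrete with a small sketch (the rule table and tape below are my own illustrative inventions, not anything from Pinker's text). A Turing machine rewrites symbols on a tape according to fixed rules; at no point does it need to know what the symbols mean:

```python
# A minimal Turing machine: a state, a tape, a head, and a fixed rule table.
# The machine manipulates symbols purely by rule, with no grasp of meaning.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run until the machine enters the 'halt' state; return the final tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):
            tape.append(blank)          # extend the tape with blanks as needed
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Illustrative rule table: invert every bit, then halt at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", invert))  # -> 0100
```

The rule table is the whole ‘language’ of the machine: swap in a different table and the very same mechanism carries out a different task, which is the sense in which the symbols mean nothing to the machine itself.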

He then gives examples of beings which show signs of thought despite lacking a language system, such as babies, deaf adults who have not been exposed to language, and also primates, all of which are able to make connections between sets of objects without the aid of language. He goes on to argue that if these beings were unable to naturally differentiate between objects or ideas in the world, they would also be unable to learn anything new. This is shown in the manner in which people with no language, such as Ildefonso and Helen Keller, learn by requesting names for concepts already existing in their minds.

While Steven Pinker does make a strong case against thought relying entirely on language, he fails to explore fully the question of whether language affects thought or culture to any extent. As a native speaker of three unrelated languages (Thai, Finnish and English), I would say that my own view of the world does not in any way change upon switching from one language to another. This supports Steven Pinker’s claim that there is an ingrained form of mentalese. However, Pinker’s arguments are based around the capability of the individual to think outside a language, and he fails to explore the impact that language might have on culture. While, as individuals, we can be made to understand concepts outside of our own language and culture, there is no denying that there are certain meaningful units used in some languages that have no direct translation in others. While this does not necessarily imply that those who do not think in those languages are unable to conjure up images of those things once they are explained to them, it does suggest that some groups of people are inclined to think about certain concepts more actively than those who have no names for them. The largest flaw in Steven Pinker’s argument is that of removing individuals from context. He may make a case for individuals being able to see the world in any light, given the proper tutoring, but he does not make a case against language influencing thought, insofar as this is viewed from an anthropological standpoint.

fon @ 4:13 AM


Machines and thought

In debating whether or not a machine can think, I think it is important to ask, as Turing asked, what the “…meaning of the terms ‘machine’ and ‘think’” are. The definitions of these terms are vague, and more often than not linked with the capacity to emote and to be capable of originality, rather than a straightforward ability to process data. I believe it is important to separate the two: feeling and emotion are a human capacity. Thinking is not restricted to humans, though humans would like to believe it is, thus preserving their own sense of superiority. The onion analogy that Turing describes illustrates this point. Most of a person’s capacities to think can be described in a rather mechanical way, and it is not a given that upon stripping these layers of mechanical function away we will discover where ‘true thought’ lies. Thus, thought can be described as the composite of all the parts that make up the thought process, with significance attached to the functional organisation, as opposed to the originality of the outcome.

John Searle, however, objects to claims that a machine could think, basing his objection on a definition of ‘thinking’ in which ‘understanding’ is implicit. He gives an analogy of an English-speaking man who is asked questions in Chinese writing, and who in turn has a certain set of responses to choose from. For the system to be a ‘thinking’ entity, some component of it should understand Chinese. The man does not understand Chinese, but he is given a set of instructions (‘the program’) that enables him to match the questions with a set of responses. This does not require that he is, in fact, capable of understanding Chinese. Thus, understanding Chinese cannot simply be a process of matching one set of data to another. He is, however, capable of passing the Turing test and convincing a native Chinese speaker that he is a thinking being. Thus, Searle’s argument can be set out as: P1. Certain objects are incapable of understanding (of Chinese, for example); P2. The person, the symbols and the room are all incapable of understanding of this kind; P3. If all these items are incapable of an understanding of Chinese, then so must be the system that is comprised of them; and C. Therefore, there is no understanding occurring in the system. I find that whilst P1 and P2 follow, the leap from P2 to P3 is unjustified, and there is no evidence to support the claim that there is not some form of understanding occurring either within or independently of the system.
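Searle’s room can be caricatured in a few lines of code (the rule book and the phrases in it are invented here purely for illustration). The ‘program’ is nothing but a lookup table, and the matching step involves no grasp of Chinese at all:

```python
# A toy Chinese Room: the "program" is a lookup table pairing question
# symbols with response symbols. The operator (this function) matches
# shapes; no part of the system needs to understand the language.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How's the weather?" -> "Very nice today."
}

def chinese_room(question):
    """Return the scripted response, or a stock evasion for unknown input."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```

This is exactly the intuition the argument trades on: every part of the system is as mindless as a dictionary lookup, and the question in dispute is whether mindlessness of the parts entails mindlessness of the whole.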

Descartes’ famous line “I think, therefore I am” can be read as an argument that a system of artificial intelligence is not capable of thought, given that a thinking system must have a continuous, uninterrupted flow of thought in order to qualify. Since a program can be paused and restarted at will, and its stream of consciousness interrupted, the system cannot, therefore, be considered a thinking being, according to Descartes. As a rational, thinking being with a continuous flow of thoughts, nothing can be done to convince one that one is not in existence. The very fact that a program is dependent on the computer for its existence means that it can be temporarily ‘paused’, rendering it nonexistent for any period of time, and for that period of time also incapable of thought. This dependence on a system for existence would render the program incapable of passing the Turing test, as all that would be required to expose it would be for somebody to pause and restart it. Descartes’ argument, however, does not centre on this, but rather on the finiteness of such a program. He did not believe that a finite machine could mimic the infinite complexities of a human mind. It is one thing to write a sonnet; this machine, however, would not be able to pause and reflect on the fact that it had just done so.

It should be fully possible, and in the field of A.I. already is, to construct a machine that is capable of learning and being subject to experience, and thereby of developing character of some form. Also, given specific tasks, machines are already capable of outthinking humans. Computers are already used in many fields of science to come up with solutions that the human brain has been incapable of finding, thus refuting the claim that machines are incapable of any original conclusions. So, given the advances technology has already made, I have no qualms in agreeing with Turing that it is indeed possible to create a machine that is capable of thought. A machine would arguably not be capable of expressing emotion, but there is no reason to assume that thought is not thought simply because it does not include emotional responses. Perhaps this thought could not be classified as human thought, which normally includes emotional responses to stimuli, but it is thought nonetheless.

fon @ 4:12 AM