Edward Sapir's Language: An Introduction to the Study of Speech
Annotated Outline

Below is my commentary on the chapters in our excerpt. Where there is nothing under a heading, it means I have not developed a commentary yet.
Chapter 1. Introductory. Language Defined
Sapir starts by making his first claim and explaining it: "Walking is essentially innate. Language isn't--it is cultural and learned". [my paraphrase rather than an actual quote]. Compare the view of Chomsky, Pinker and followers: "Human language is innate." How much actual direct disagreement is there in this difference? How does it depend on what you mean by "language"?
Famous quote from p. 4:
Speech is a human activity that varies without assignable limit as we pass from social group to social group, because it is purely a historical heritage of the group, the product of long-continued social usage.
(1) "without assignable limit" sounds very UN-universalist--as though there is no constraint at all on possible differences among languages. This is one of the quotes typically cited to illustrate Sapir's belief in the eponymous "Sapir-Whorf hypothesis"--the name sometimes given to an extremely un-universalist, particularist view of language, which holds that languages can vary without limit.
My own view is that this entire book shows his universalist bent. In it, Sapir tells us how languages are all similar in expressive capacity; tells us that they all have phonemes and morphemes, words, and sentences; tells us some of the kinds of grammatical categories languages often have; the kinds of morphological processes; the kinds of relations between morpheme and word structure (many morphemes per word; few; and all in between). To me it looks like he is laying out the universal properties of human language, which can be understood as the broad outlines of the limits of variation in human language; but at the same time he is showing us, based on his own experiences, just how much languages can differ in the particulars of their categories and forms.
I think Sapir hits the mark exactly in drawing a balance between attention to the universal, and attention to the language-specific, aspects of language.
I will make a conjecture about what Sapir would have said to anyone complaining that the focus of linguistics should really be entirely a search for constraints on variation, and asking him why he didn't attempt to find an "assignable limit" on the grammatical categories of language: I bet he would have said that the descriptive study of the languages of the world was only in its infancy, and it was way too premature to make claims about constraints on what grammatical categories there would or could be, or what syntactic structures. (Syntax was hardly studied in his day, that is, by the new crop of general linguists who were a subtype of anthropologists. These linguists were busy figuring out phonology and morphology of types they had never seen before. There were many people who studied the syntax of particular languages, specifically the European-language grammarians, who were working within the historical traditions of grammatical analysis in their languages. Unlike the linguist/anthropologists, these grammarians were usually prescriptive in orientation. Their language traditions were within a small western Indo-European band of languages, so they were not representative enough to be used for investigating the possible variation and limits on it in grammar.)
The focus in post-Sapirian linguistics, however, did come to be on universal constraints. And the study of the range of variation in the languages of the world took a back seat in the mainstream of linguistics. It came back to the fore with the rise of functional-cognitive typology as practiced by Greenberg and Comrie and their followers. (e.g. Givón, Matt, me...)
Generative linguistics, though, has kept its focus on universal constraints on the form of language. Universals are studied by investigating English or small samples of languages, not the broad-based samples of languages that Greenberg and Comrie used. And the universal constraints that are claimed to exist are extremely abstract in generative theory. Their relation to so-called "surface" form is so indirect and empirically unconstrained that, in functional linguists' view, it makes such claims unfalsifiable.
(2) It becomes clear especially in the second part of the quote that Sapir is talking about individual languages--language systems, or Langues in Saussure's terminology--as being non-innate and culturally transmitted. By "language" he does not mean some general universal blueprint for language, which is what Chomsky means. In fact Chomsky means something even more specific: a general universal blueprint for SYNTAX, which he sees as a set of constraints on formal structure that are essentially independent of the structures' meanings or function in communication.
I can't really fault Sapir for not making a scientific distinction between a language system vs. human language in general. Even Saussure didn't make that distinction. Saussure identified the concept of Langue, but he did not oppose this concept to human language as a general phenomenon. He only opposed it to the processes involved in using language, i.e. the processes of speaking and understanding, i.e. Parole.
It seems to me that Saussure and Sapir simply never found it necessary to talk about language as a general phenomenon over and above individual languages. The idea of and interest in language as a general human capacity and phenomenon came about later, I believe with Hockett, who introduced the idea of "design features of human language", which we will look at later.
Sapir does refer to "language", with no definite or indefinite article; but by this he seems to mean "any particular language". The phrase "elements of language" supports this reading: there are only elements (whether forms, meanings, or form-meaning pairs, i.e. signs) in individual languages. "Elements of language" must mean elements in whatever language you please; the particular one you might choose doesn't matter for the specific point he is making when he uses the unmodified term "language".
Sapir says that there are certain observations (e.g. expression of instinctive emotions via vocalization) that sometimes "prevent the recognition of language as a merely conventional system of sound symbols" (p. 4).
The nominal part of the above sentence can be rephrased as a definition: Language (any language that is) must be recognized as a merely conventional system of sound symbols. Take away the contrast with the possibility (false as Sapir argues) that language (any language) is innate, and we get "Language is a conventional system of sound symbols".
This sounds very Saussurean. Both Sapir and Saussure also come to the recognition that SOUND isn't even crucial. As long as we have SOME external observable form that is linked to a meaning, and a whole system of such form-meaning links, then we have a language or Langue in Saussure's terms.
Interjections like the ah and oh used by English speakers belong to the system of language, and not to the realm of automatic vocal cries. So do sound-imitative words like meow. Why? Although they are partly non-arbitrary, they still are arbitrary enough to vary from language to language--sometimes extremely. So they can't be pure vocal expressions, entirely and exclusively naturally connected with what they express. They are conventions like other elements of language.
Sapir then comments on the origin of language and points out that trying to trace language back to an imitative origin, or a spontaneous-exclamation origin, doesn't make much sense (despite the fact that such attempts are very popular!). These explanations of origin don't hold because pure imitations and exclamations lack what is crucial about all human languages: that they are composed of arbitrary relations of form and meaning. Languages, and hence human language in general, have a fundamentally symbolic nature. The units of language are conventional form-meaning correspondences, i.e. symbols.
A more complete and "serviceable definition of language" comes on p. 8:
Language is a purely human and non-instinctive method of communicating ideas, emotions and desires by means of a system of voluntarily produced symbols.
He admits that language is in most cases a vocal-auditory phenomenon. But the vocal and auditory parts of language, he says, are not in their nature an essential part of the linguistic system. Articulatory, perceptual, and motor routines may well be part of the cognitive processing in language use, but alone they don't make it as elements of language per se: (p. 10)
A speech sound localized in the brain, even when associated with the particular movements of the "speech organs" that are required to produce it, is very far from being an element of language.
(As mentioned above, for "an element of language" read "an element of any language".)
To be a part of a linguistic system, there has to be a meaning or function (a general term including linguistic meanings too abstract to state easily) associated conventionally with the vocal and auditory aspects of the sign. The meaning is some element of experience that has been associated repeatedly with the vocal/auditory aspects that signal it--so that the two together form a unit of the linguistic system. (p. 10)
[Language] consists of a peculiar symbolic relation--physiologically an arbitrary one--between all possible elements of consciousness on the one hand and certain selected elements localized in the auditory, motor, and other cerebral nervous tracts on the other.
Language is a "fully formed functional system" within the mind (which he refers to as "man's psychic or spiritual constitution"-- a way of referring to the mind that is now very outmoded, for various reasons).
The idea of [a] language as a functional system that exists not localized anywhere in particular in the brain, but as part of the human mind or mental capacity points to the ideas of a) Saussure, who carved out the symbolic system of a Langue as a thing in its own right worthy of study; and b) Chomsky, who believes that language is a "mental organ".
Metaphorically I might perhaps in some contexts feel justified in calling a language an organ, to highlight its functional unity, but Chomsky means that "language in general" idea I referred to above, rather than a particular system, and I don't buy that "language in general" resides in anybody's head. Even more importantly, I think it is too easy to take the organ idea way too literally, even if you say loudly that language is not a physical organ. Chomsky's idea that human language is not only a functionally complete system but also a formal system that is INNATE, and therefore genetically encoded, means that some "genes of language" are there to make proteins of language, much as genes make the proteins that develop a heart or any other physical organ, or blood, or any functional system in the body. But there are no "linguistic proteins". Language only exists in connections and activations. The connections of language are too plastic to be hardwired, and activations only happen in experience and so by definition cannot be hardwired.
It's clear at any rate that Sapir knew enough neurology to know that language is not localized in the brain in the sense of being "in" the parts of the cortex involved in auditory perception of speech and motor production of speech. Broca's and Wernicke's areas contain much of this cortex, and lesions there can wreck speech production and comprehension. But that is not all there is to language, and Sapir saw that. Speech sounds without meaning are not language.
So, Sapir has backgrounded the "concrete mechanism" of linguistic perception and production in favor of the meaningful system of symbols: (p. 11)
Our study of language is not to be one of the genesis and operation of a concrete mechanism; it is, rather, to be an inquiry into the function and form of the arbitrary systems of symbolism that we term languages.
All of this sounds very, very Saussurean.
Sapir goes on to describe the symbolic system as a system of categorization. Rather than being limited to symbolizing very specific contextualized experiences, each symbol must abstract away from the particular and designate a more general CLASS of experiences. Otherwise it would not be very useful. Experiences are unlimited, and language is our way of limiting them, effectively sorting them into types, so that we can communicate about experiences. The symbols of a particular linguistic system give us general categories to put our experiences in, but then in actual usage we can communicate more than the abstract decontextualized symbols because context enriches our symbols with more specific conceptual content.
Then comes the issue of whether thought is possible without speech (for speech, read language); the ways that linguistic systems can be mapped and remapped onto other systems of external forms (thus forming layers of symbolic mappings); and finally the universality yet diversity of language, and its implications for the antiquity of language.
Chapter 2. The Elements of Speech
Sapir introduces the important root-affix distinction found in all grammatical systems. He refers to roots as "radicals", which is not the usual typological terminology now, but is still found in the descriptive traditions of specific language families.
He represents a root-affix combination as A + (b). The capital letter for the root is meant to indicate the primacy of the root: there is an asymmetry between these two types of elements in which one is more important. For one thing, the root has more conceptual content, or as Sapir puts it "more easily apprehended significance", while affixes have less content and instead a more abstract function, often a relational one (i.e. functioning to relate different elements of a sentence to one another, like case endings).
An illustration of the importance of roots as the major carriers of conceptual content is the following.
If you take a sentence at random from an English text and blot out all the affixes, you will probably be able to say something about what the sentence is about. If you instead leave the affixes and blot out the roots, you have total gibberish.
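The blotting-out experiment can be simulated in a few lines of code. This is just a toy sketch, not anything Sapir did: the sentence and its morphological segmentation below are hand-made and deliberately simplified (English morphology is messier than this).

```python
def blot(sentence, kind_to_blot):
    """Replace every morpheme of the given kind with dashes, keeping word shape."""
    out = []
    for word in sentence:
        out.append("".join(m if kind != kind_to_blot else "-" * len(m)
                           for m, kind in word))
    return " ".join(out)

# Each word is a list of (morpheme, kind) pairs; segmentation is hand-coded.
sentence = [
    [("farm", "root"), ("er", "affix"), ("s", "affix")],
    [("re", "affix"), ("plant", "root"), ("ed", "affix")],
    [("damag", "root"), ("ed", "affix")],
    [("field", "root"), ("s", "affix")],
    [("quick", "root"), ("ly", "affix")],
]

print(blot(sentence, "affix"))  # farm--- --plant-- damag-- field- quick--
print(blot(sentence, "root"))   # ----ers re-----ed -----ed -----s -----ly
```

The first output still lets you guess what the sentence is about (farms, planting, fields); the second, with only affixes surviving, is gibberish, which is exactly the asymmetry being described.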
If we look at the two kinds of units in terms of how conceptually independent they are, we get a similar primacy of the root over the affix. Roots are conceptually more "stand-alone": We can think about them as meaningful elements, without having to think about other units to interpret them. They are conceptually complete by themselves and do not need to be modified by anything.
But affixes, in terms of function, are not conceptually complete. They need another element to conceptually "lean on". They merely SUPPORT the root: they either relate the root to something else in the sentence, as for example case endings or agreement morphemes; or else they delimit the root semantically, or as Sapir says, "qualify" it, another way of saying modify it.
We call this property of semantic isolability conceptual autonomy: it is the ability to be processed and thought about in isolation, without necessary qualification by other concepts. Roots have more conceptual autonomy than affixes.
An example would be a plural morpheme. It is hard for anybody but linguists and logicians to think of plurality independent of something that can be plural. A plural morpheme delimits the root noun that it attaches to: it tells something about the number of the things categorized by that noun--i.e. whether there is only one or whether there is a larger multiplicity of such things.
There seems to me to be an iconicity operating here at a very general level across languages (not however yielding predictions about what specific concept will be a root or an affix in any given language.)
To show this we first have to look at the typical and also the allowable FORMS of roots and affixes.
From a formal standpoint, it is easy to see that affixes are NOT privileged: they never stand alone; they need a root in order to occur at all.
Roots on the other hand again have the dominant status. They can, at least in many languages, stand alone formally. It is true that roots in some languages, like affixes, don't stand alone either: languages like Latin and Greek require morphological endings on most of their words, especially verbs. But in other languages, like English, roots can and do stand alone. If a language allows free-standing morphological units, then these are going to be the roots, i.e. the ones with greater conceptual content. You don't find languages where the more grammatical morphemes can stand alone but the morphemes with more conceptual content have to be bound to another element.
So roots have greater conceptual content and greater functional autonomy; and they also have greater formal autonomy. They also have, generally speaking, more formal substance: on average, the roots will be longer than the affixes. Here's where iconicity comes in.
Greater conceptual content and greater functional autonomy correlate with greater phonological content. We could also extend this formal content idea to a notion of "morphological content": a root is morphologically more like a full word than an affix is because it sometimes IS a full word whereas an affix never is. A root then has more morphological "substance", since it possibly has word boundaries.
And, as we saw, roots have greater formal autonomy.
So: Greater conceptual content and greater functional autonomy correlate with greater phonological and morphological content, or more generally speaking, formal content; and greater formal autonomy.
It looks like degree of conceptual substance is being matched by degree of formal substance; and degree of conceptual autonomy is being matched by degree of formal autonomy.
The similarities are almost pictorial: more stuff and more autonomy in meaning/function is "illustrated" by more formal stuff and autonomy. Less stuff and less autonomy in meaning and function is "illustrated" by less substance and less autonomy in form.
It seems like the form actually mirrors the meaning/function in terms of both substance and autonomy.
This "mirroring" idea is the essence of iconicity. Similarities between properties of meaning and analogous properties of form are found in other areas of language as well, as shown by John Haiman, an important functional linguist.
Back to what Sapir actually said, rather than what we can derive from looking at roots and affixes and their relations in terms of his basic observations on the form and the conceptual content typically associated with each of the two types of morpheme.
In the formula A + (b) above, the (b) is placed in parentheses to indicate that it is formally dependent on A--it doesn't occur without a certain class of roots, showing that affixes are even more restricted in occurrence than we at first realize. Roots, on the other hand, can occur with a range of different affixes. From a cross-linguistic perspective they are distributionally much more privileged than affixes.
There are a lot of difficult issues still about the basic root-affix distinction, for example that there is not always a neat line-up of high conceptual content, ability to stand alone, and general freedom of distribution, and their three opposite qualities. English prepositions are abstract and relational and in many cases perform the kinds of grammatical relations-indicating work that case endings do in case languages. Are they roots or affixes? Some of them seem to convey information while standing alone, like over and in and out. But what about of? We can't really get a sense of what it means because it is so general. If we take of as standing alone, it might just be because of its orthographic representation.
Sapir doesn't go into the above issue but he does discuss the even thornier question of whether some words really represent not a general root concept, but a concept that is limited by an "invisible" morpheme that specifies a particular category. Cats for example is clearly morphologically CAT + (pl). But the word cat seems to be understood as an opposition to cats--that is, semantically, cat in a sentence like I saw a cat stalking a bird really represents CAT + (sg) and is expressed formally as cat + (0).
The (0) above is called a zero morpheme. It represents a contrast with a marked morpheme, that is, a morpheme of opposite meaning that has an overt form. The structuralists who came after Sapir, both European and American, incorporated zero morphemes into their theories, where the concept formed a coherent part of the system. When zero morphemes began to appear in structuralist theories, more traditional linguists/grammarians did not really like the idea, because the notion of an "invisible morpheme" was seen as too abstract--the same reason people later resisted invisible "deep structures" in syntax. But zero morphemes were so useful in the structuralist systems of oppositions that the structuralists allowed them in, even though they rejected other invisible elements like underlying structures.
The typology of root-affix combinations in words in the languages of the world in Sapir's analysis is:
- A + (b) - ordinary root plus affix combination;
- A + (0) - root plus invisible affix, i.e. the word is an unmarked (unmodified) form of the root that carries a meaning in opposition to a marked form;
- (A) + (b) - the case of words with bound roots, as found in Latin;
and finally, he adds another possibility found in languages, namely
- A + B - the case of compounds in which two independent roots are combined in a word.
To be complete I will add one more:
- (A) + (B) which is the case of a unit composed of two bound roots, as in photograph and words of similar pattern in Latin and Greek. These are often called compounds too.
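The five formulas can be generated mechanically from a hand-coded segmentation of a word. The sketch below is my own toy illustration, not Sapir's notation machinery; the analyses fed to it (cat + -s, Latin am- + -ō, etc.) are hand-made and simplified.

```python
def formula(morphemes):
    """Map a hand-segmented word to one of Sapir's structural formulas.

    Each morpheme is (form, kind), where kind is "free_root", "bound_root",
    "affix", or "zero". Successive roots get the letters A, B; bound elements
    are wrapped in parentheses, following Sapir's convention.
    """
    parts = []
    root_letter = iter("AB")
    for form, kind in morphemes:
        if kind == "free_root":
            parts.append(next(root_letter))
        elif kind == "bound_root":
            parts.append("(" + next(root_letter) + ")")
        elif kind == "affix":
            parts.append("(b)")
        elif kind == "zero":
            parts.append("(0)")
    return " + ".join(parts)

print(formula([("cat", "free_root"), ("s", "affix")]))               # A + (b)
print(formula([("cat", "free_root"), ("", "zero")]))                 # A + (0)
print(formula([("am-", "bound_root"), ("-o", "affix")]))             # (A) + (b)
print(formula([("fire", "free_root"), ("place", "free_root")]))      # A + B
print(formula([("photo-", "bound_root"), ("-graph", "bound_root")])) # (A) + (B)
```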
Sapir is now ready to define word, in a series of related definitions. Word is a very difficult concept to define adequately because cross-linguistically, words are not characterized by a single set of criteria that are either all present or all absent.
There are many specific kinds of borderline cases. Clitics are the classic example of an element in between an independent word and a dependent affix, but compound words can also lead to difficulty in deciding on word boundaries. Is a compound word one word or two? Naive speakers of languages with alphabetic writing systems will typically make the decision entirely on the orthography--whether or not a space occurs between the roots. However, linguists know this property to be arbitrary and hence useless as a deciding factor. Compare for example fire hydrant with fireplace: the conventional orthography is different, but the two expressions are similar in every other criterion that might be applied to compounds vs. separate words. Both have stress on the first root; in both, the second root cannot take the kinds of modifiers an independent word can take; in both, any modifiers apply to the whole compound rather than just to the first root; etc.
The word is one of the smallest, completely satisfying bits of isolated "meaning" into which the sentence resolves itself. (S.K. will add more definitions here)
Sapir's definitions of word are schematic enough to be useful as a basis for understanding language in general. But they are not, and cannot be, cut-and-dried, with sharp enough boundaries to decide on borderline cases. The phenomenon is too gradient for that. Modern linguistics has proposed various criteria for "levels" of word-hood, like phonological word vs. syntactic word. (A clitic is defined as a syntactic word that is nevertheless only PART of another phonological word. How's that for a good boundary definition.) Even within these levels there are sometimes conflicting criteria, involving such things as word stress, tone, combinatory possibilities, etc., which when applied give different answers to the question: "Is this element a word in language X?" If you take a morphology class you will see what I am talking about much more clearly.
However, I think a lot more cross-linguistic psychological work needs to be done to really see what aspects of wordhood are universal and what are not. Most psychological work takes the idea of 'word' as a given. But this work is focused on English and a few other languages with alphabetic writing systems. The relation of reading to the cognitive processing of language is much investigated, but existing research takes a lot for granted about the psychological nature of LINGUISTIC rather than orthographic words--and the psychological literature also tends to confuse these, because the researchers are not linguists, and apparently only linguists can see clearly, like Sapir, that the linguistic sign that is a word is not the same as an orthographic symbol that represents it.
There is also a definition of sentence, which is likewise identified as an important psychological unit. On page 35:
Its definition is not difficult. It is the linguistic expression of a proposition. It combines a subject of discourse [for this read "topic" --s.k.] with a statement in regard to this subject [i.e. topic].
This again is rather schematic, and there are many boundary line phenomena. Some linguists have made the claim that our notion of sentence is really a function of written language once again, which sharply divides sentences as it divides words.
Are 'truncated' utterances sentences? Consider fragments like Coffee? Over there. Not yet. (These are just examples and not to be taken as turns in a coherent discourse!)
If these are going to be taken as propositions, we have to enrich them a lot with contextual information to show the whole proposition. Saying they are "truncated", in fact, already pre-assumes the existence of a full proposition "underlying" them semantically.
An important point that Sapir gets to about the sentence is that languages vary very widely in how they express propositions in sentences, in terms of the number of words in those sentences. Some will use one or two words to express a whole proposition, while others will use many. It is parallel to the relation of morphemes to words--some languages will have only one or a few morphemes in a word, while others will characteristically have many morphemes in most of their words.
Chapter 3. The Sounds of Language
Chapter 4. Form in Language: Grammatical Processes
Sapir identifies the major types of formal grammatical processes, that is, modes for the expression of grammatical categories. These are the relatively small number of derivational and inflectional processes that are used in word formation in the languages of the world.
The word formal in the phrase formal grammatical processes refers to the fact that we are observing changes/alternations in the form part of the sign, rather than in the meaning/function. So the process that relates goose and geese involves a change in the form, or more neutrally stated, a difference between one form and another. Notice that this change/difference is associated with morphemes, i.e. signs. It is not PURELY about form, but specifically about the form of related signs.
I think that Sapir identified all of the existing types of morphological processes; at least I cannot think of any that he missed. He also noted the large number of different ways the processes can play out in the specific linguistic systems of the world.
He runs through the various possibilities for kinds of morphological processes and their interaction with what he calls "sequence", which actually seems to mean order of constituents specifically in the sentence, not in the word. (Personally, I would use "sequence" to mean any kind of sequence, not just sequence among elements larger than a word.)
He points out on p. 67 that affixing is far and away the most frequent type of process for combining morphemes. Languages of every morphological type except very isolating languages, in which just about every meaningful element is a separate word, show affixing. Of the affixing processes, suffixing is by far the most common. This makes me immediately think of processing: are suffixes intrinsically easier to process than prefixes?
What Sapir terms "composition" on pages 62 and 64 ff. is what is now called compounding: the occurrence of two or more roots in one word. In a compound the roots play the role of subordinate parts to a larger word. ("Composition" is a technical term in Cognitive Grammar and it refers to the combining of morphemes of any type, whether roots or affixes.)
It came as a bit of a surprise to me when I first read this that there are many languages in which compounding of roots does not occur at all. Compounding is so important in Germanic languages like English and German, and it also occurs in the classical languages and in noun-incorporating languages. As Sapir says, compounding seems like it would be a universal process. But no!
(By the way, I was confused as to his claim that Eskimo does not have compounding, because I thought that Greenlandic Eskimo at least has a lot of noun incorporation, which folds a noun root (usually corresponding to a patient) into a verb. By that definition noun incorporation is a subtype of compounding. I'll try to clear this up. )
Chapter 5. Form in Language: Grammatical Concepts