language acquisition upside down

One place where the thorny problems of linguistic theory become most obvious and demand the most determined engagement is in the area of child language acquisition. (The other, I think, is language variation and change, unless I just say that because these are what I find the most interesting.)

Take the concept of language structure. The belief that language has structure is, naturally, fundamental to the discipline of linguistics. But it is possible to understand this in radically different ways.

According to Hirsh-Pasek and Golinkoff in their book, Origins of Grammar, theorising about child language is generally done according to one of two broad approaches, which they characterise as “outside-in” versus “inside-out”. “Outside-in” includes social-interactional theories and cognitive theories; “inside-out” includes the various permutations of nativism.

One of these approaches, they say, “contends that language structure exists outside the child, in the environment.” If I didn’t tell you any more, would you be able to say which of the two options – ‘interactionist’ or ‘nativist’ – was being described here?

In fact, HP&G are referring to social-interactional/cognitive theories as believing that language structure exists outside the child (nativist theories rely instead on innate language-specific knowledge).

Now it is quite possible that some theorists on the interactionist side do believe in language structure as having some sort of real-if-‘abstract’, independent existence. This would betray itself by, for example, the use of terms like “finding” or “discovering” things like “units” (or the boundaries between units) such as segments, morphemes, phrases, clauses in the ambient language. Such interactionists would then share with nativists the view that (spoken) language embodies or comprises real-if-‘abstract’ units organised in a real-if-‘abstract’ structure, and as the job of speakers is to produce speech with these properties, so the job of the listener is to recognise or calculate the identity of the units in what they hear and the relations between these units.

But a much more interesting prospect is the type of ‘interactionist’ approach that does not impute such reality to language structure at all. That is the view that the raw data of spoken language must be clearly distinguished from the analysis which an observer (lay or specialist) might undertake of it. In other words, there is no implicit structure lurking there in speech, whether phonological or syntactic: structures are inferred by analysts and act as handy descriptive/analytical tools, but they’re not really there. It is a serious criticism of some schools of thought that they treat the analysts’ analysis as being in fact what language is composed of – as though analytical constructs such as noun, verb, IP, DP, etc, actually are somehow or somewhere embodied in utterances. It’s one thing to say that when linguists want to get a handle on what people produce/hear they need to identify units and categorise things – these units and categories are convenient as technical descriptions in order that specialists can spot patterns and talk to each other about them. It’s another thing to say that spoken language consists of these units and categories such that the linguist’s task is to discover them (rather than impose them).*

As Joseph et al (2001: 60) put it, “whereas for the psychologistic structuralist speech comes about through implementation of the speaker’s knowledge of a systematic linguistic structure, for Firth the systematic structure is a linguist’s fiction, resulting from the attempt to understand speech.”** Thus (for example) the nativist scours the child’s productions in order to establish which aspects of linguistic structure must have unfolded in their mind by that point – the more interesting varieties of interactionism make use of structure, on paper, in the analysis, only as a tool to understanding what the child understands.

If both sides in the field of language acquisition, the interactionist and the nativist, share the conceptualisation of the linguist’s task as being one of discovering linguistic structure that actually exists out there/in language, then the differences between the two approaches shrink rather dramatically. But when this conceptualisation is not shared, it makes the ‘interactionist’ approach much harder to evaluate on ‘nativist’ terms, for one thing, and more importantly it keeps the idea of “language structure” where it belongs, in the realm of open questions needing discussion. Linguistic descriptions are convenient (-to-the-linguist) if not indispensable ways of categorising bits of utterances, but they have no life of their own.

*Some books/articles talk about things like Ross’s “discovery” of his island constraints: it would be better to think of things like this as inventions, not discoveries.

** Note the F-word. Amazing chap, obviously, this Firth. I was mightily relieved and heartened to come across that section of Joseph et al (2001) shortly after tortuously labouring to express this point in an essay many moons ago.


anybody’s guess

Everyone blames phonological representations for language-related impairments, or deficits in phonology-related tasks like nonsense-word repetition. But what is a phonological representation? What do impaired phonological representations look like? In what specific ways do they differ from unimpaired representations, and how can you tell? What does it all mean?!

Munson (2006) in a commentary on Gathercole’s keynote article in Applied Psycholinguistics expatiates thus, and I can only concur:

Although there are many different perspectives on the factors that drive nonword repetition performance, we can all agree that the relationship between nonword repetition and word learning is due to the association of these constructs with phonological representations. The relevant question to ask, then, concerns the nature of phonological representations themselves. What are they? Textbook descriptions of these generally posit that they look something like the strings of symbols that we are taught to transcribe in phonetics classes. However, phonetic transcriptions, even narrow ones, are abstractions of the signals that are being transcribed. The level of detail that they code is ultimately related more to the perceptual abilities of the listener, the degrees of freedom in the symbol system, and a priori assumptions about the quantity of detail that is relevant for transcription than to the signal being transcribed and its associated phonological representation.

What, then, do “real” phonological representations encompass? What is being represented? The answer to that is anyone’s best guess. Representations themselves are latent variables. We can never see them, we can only posit them as explanations for the sensitivity that people have to variation and consistency in the speech signal in different tasks. (p578)

A welcome reality check in perhaps a slightly unexpected place, even though, of course, it still doesn’t solve the fundamental problem. Everybody’s preferred solution for testing the true nature of implicit phonological representations is different, and inadequate to different degrees and in different ways; but given the nature of the concept of phonological representations itself, that is simply how it has to be.


Munson, B. (2006). ‘Nonword repetition and levels of abstraction in phonological knowledge.’ Applied Psycholinguistics 27(4)

plodding up and down the vocal track

This is the first time for several years that I don’t have any December marking to do.

In celebration, here are some snippets of brilliance from previous years’ efforts.

Give two examples of words which have regular plural forms in English.

  • floor and happy
  • cat’s and dog’s
  • walk > walk’s and sing > sing’s

Give two examples of words which have irregular plural forms in English.

  • what and when
  • beautifully and happily
  • sheep, sheep; ox, oxens
  • you and child

And from a short-essay answer:

  • The weakening of function words is affluent in this dataset.

These students are our future!

the humanness of language

Vern Poythress has a new book out – In the Beginning was the Word: Language – a God-Centred Approach (thanks to Jeremy Walker for flagging it up).

It’s available for sale here, accompanied by a publisher’s description which induced some raised eyebrows, I admit, from a linguistic point of view (what can be meant by the specification of the meaning of every word in every language? in what way does language reflect and reveal the glory of the Creator, other than in the trivially true way in which everything in the Creator’s creation does? doesn’t the publisher care about gender-specific pronouns, or is it only Christian men who are supposed to read this book? isn’t the publisher aware of the difference between language and speech? am I, possibly, being too harsh?).

Let’s just overlook all of this and put it down to a non-technical presentation of what must be, at least if you read the endorsements, an insightful, profound, compelling, significant piece of work.

Instead, I’m more interested in what you can see inside the sample pages.

Specifically, this paragraph from p18:

The New Testament indicates that the persons of the Trinity speak to one another. This speaking on the part of God is significant for our thinking about language. Not only is God a member of a language community that includes human beings, but the persons of the Trinity function as members of a language community among themselves. Language does not have as its sole purpose human-human communication, or even divine-human communication, but also divine-divine communication. Approaches that conceive of language only with reference to human beings are accordingly reductionistic.

Now, I find almost everything in here questionable (apart from the first two sentences, I suppose). One – terminology – I’m more familiar with the term ‘speech community’ than ‘language community’ (although I don’t suppose much hangs on the difference; correct me if I’m wrong). I find it odd to say that God is a member of a language community that includes human beings. Surely, it is odd to think of God as being a member of any kind of community that includes human beings: he is infinite, humans are finite; he is eternal, humans are created; he is infinite, eternal, and unchangeable in his being and his attributes; humans are not. If he so much as notices humans, it is infinite condescension on his part – and yet he does more – and even so, he is not part of our communities. Great fear, in meeting of the saints, is due unto the Lord, even and especially when he reveals himself most condescendingly. Certainly he speaks, and we must listen. And through the Mediator we have access to the Father to speak to him in prayer, which in his grace he hears. But this does not a speech community make.

Two – I fail to see how it is reductionistic to conceive of language as merely serving human-human communication. Partly, there’s no ‘merely’ or ‘only’ about it – language is a beautiful, rich, elegant, effective, complex, amazing tool, which only humans out of all creatures have, for communicating with each other. It doesn’t belittle language to say that only humans have it. But partly too – only humans have it! Humans use language for all sorts of meaningful reasons – to convey or take in indexical, social, affective, and propositional kinds of information, and so on. Animals have no way of using such a tool. But also, to speak reverently, the Trinity has no need of such a tool. The Scriptures present the persons of the Trinity as taking counsel together and speaking one to another (this is one of the ways, after all, in which we know there are distinct persons in the Godhead). But the three persons of the Trinity have always existed in a fellowship of love and harmony with each other. The Spirit searches the deep things of God. The Son knows the will of the Father. As the Father is omniscient so is the Son and so is the Holy Spirit. The purposes of the Father are the purposes of the Son and the purposes of the Spirit. Everything is always present before God. Thus, on the propositional front, he doesn’t need to be told anything for information. Indexical? Each person knows the other persons thoroughly; there is no question about the identity of any person or the relationships each person stands in to the other persons. Affective – he has no parts nor passions: it doesn’t even apply.

Or think of the stuff of language – syntax, morphology, phonology. With imagination straining at the limits of what is reverent: without the physical production of some word, spoken through a vocal tract (or gestured by hand in signed languages), there can be no phonology, and without a word, no morphology, and without concatenations of words, no syntax.
How sad, to have a concept of the communion between the persons of the Trinity that doesn’t even rise above the possibility that language such as humans have is the only conceivable manner or method of it.

Pages 18-19 do (I should point out) contain discussion of two passages of scripture which are used in support of the position that part of the purpose of language is for communication within the Trinity. One is John 16:13-15, where the Spirit is said to hear (from the Father) of the things of Christ. The other is John 17, the intercessory prayer: “John 17 presents not merely human communication but also divine communication between the divine persons of the Father and the Son. That communication takes place through language. And so language is something used among the persons of the Trinity.” But caution is needed. It cannot be a literal hearing, just as it cannot be a literal speaking – speaking and hearing involve physical, motor and sensory, processes. Further, things are true of the incarnate Son which are not true of the other persons of the Trinity. It is not in question that Christ speaks in John 17 as a divine person, but he speaks as a divine person with a human nature. There is no doubt that the Father heard him (as he “hears” prayer) as he spoke with human language, but the fact that the communication between Christ in his time on the earth and the Father naturally included human language does not automatically license the conclusion that the pre-incarnate Son and the Father and the Spirit communicated with each other using language that is somehow the same means of communication as human beings use among themselves.*

So: I think the case is overstated. There is no doubt that there is communication between the persons of the Trinity. There is no doubt that God speaks to humans using language. There is no doubt that language, which humans use to communicate with each other, is a gift from God (although of course affected by the Fall). But a more compelling case needs to be made – from scripture – that the communication between the persons of the Trinity is by way of language. Language is a special gift for humans – it is suited to human capacities and human needs. By conflating ‘language’ with ‘communication’, you fail to take the opportunity to explore exactly how unique and special language is, you bring divine communication within the trinity down to the level of the finite and frail efforts at interaction which creaturely and fallen humans make, and you make linguists grouchy.

All of which, it turns out, I said before, better, here.

Note too the argumentation in the following pages from the possibility of translating ruach as ‘breath’; and the notion of breath “carrying speech to its destination”; this concept does not strike me as particularly salient in how phoneticians would understand articulation, nor in how semanticists would conceive of the creation or accessing of meaning, although on both fronts I remain open to correction. Phoneticians and semanticists, needless to say, are prone to mistake – but if there is a mistake here, or elsewhere, in how linguists understand language, this needs to be demonstrated through serious engagement with the principles and concepts that are current. Even in something aimed at a lay audience, there could still be a nod to the concerns of anyone with more specialised knowledge.

on phonematic units

Firthian Prosodic Analysis provides a way of thinking about language and phonology which is fundamentally different from approaches in the ‘American’ and/or generative tradition.

As Anderson’s overview points out, “While one might be tempted to compare the phonematic units of the former with the phonemes of the latter [ie phonemicist analyses], for example, this would be a clear mistake. Both are essentially segment-sized units, it is true, and form systems of paradigmatic contrasts, but the similarities end there” (Anderson, 1985: 189).

The extremely helpful (clear and informative) JL article by Ogden and Local (1994) makes the same point very forcefully – it is thoroughly misguided to use the concepts and categories of generative approaches as a way of understanding Firthian ones, as though the differences between the analyses were simply terminological, or as if Firth was merely fumbling, in isolation from the American mainstream and in a quaintly eccentric English gentlemanly way, towards the same understanding as SPE-style analyses ended up with.

“Phonological units are, according to FPA, in syntagmatic and paradigmatic relations with each other. Syntagmatic relations are expressed as prosodies. Prosodies can also be in paradigmatic relations; this is what it means to be ‘in system’. Thus one can talk equally well of a ‘prosodic system’ and a ‘phonematic system’ (such as ‘C-system’ or a ‘V-system’). Both prosodies and phonematic units must also be stated in relation to ‘structure’ which in turn expresses syntagmatic relations” (Ogden & Local, 1994: 480).

“In making a Firthian Prosodic statement, the analyst typically begins by paying attention to the syntagmatic ‘piece’ and stating the prosodies relevant to the description of the piece under analysis; but the information is explicitly not thereby ‘removed’ or ‘abstracted away’, and the phonematic units are not ‘what is left’: in particular, phonematic units are not ‘sounds’ (Goldsmith 1992: 153), since phonological representations according to FPA  are not pronounceable; nor are they merely the ‘lowest’ points on which all else hangs, like the skeletal tier. Phonematic and prosodic units serve to express relationships: prosodies express syntagmatic relations, phonematic units paradigmatic relations. All else that can be said about them depends on this most basic understanding” (Ogden & Local, 1994: 481).

It may possibly be worth adding that when Anderson speaks of phonematic units being ‘segment-sized’, this likely needs to be qualified by saying that in a Firthian-inspired approach, establishing the size of a segment is actually part of the analysis – segments and phonemes are emphatically not equivalent – a syllable or a foot could equally well be a “segment” in a Firthian analysis, if descriptive or analytical adequacy called for these units to be the terms in the paradigm. Hear Lodge:

“there is nothing that tells us a priori that paradigmatic relations that establish the meaningful contrasts of a language have to be between segment-sized entities at the phonological level any more than at any other level. In syntax, for example, a ‘segment’ is usually word-length, and certainly morpheme-length; the ‘segment’ is the smallest bit of the speech chain suitable for describing the patterns of a particular level. We segment speech in different ways for different purposes. Such segments include syllable places: onset, rhyme, nucleus and coda, the foot, the intonation group, the morpheme, and so on” (Lodge, 2007: 80).


(Post inspired by the surprising discovery that “phonematic units” is a search term that leads to this blog.)

(Also in the back of my mind is the Friendly Humanist’s talk about silos – phonologically speaking, the Ogden & Local article is superb for such a purpose, not that I would particularly claim to be anything more than firth-sympathetic.)

Anderson, SR (1985). Phonology in the Twentieth Century: Theories of Rules and Theories of Representations. Chicago: University of Chicago Press.

Lodge, K (2007). ‘Timing, segmental status and aspiration in Icelandic.’ Transactions of the Philological Society 105: 66-104

Ogden, R & Local, JK (1994). ‘Disentangling autosegments from prosodies: a note on the misrepresentation of a research tradition in phonology.’ Journal of Linguistics 30: 477-498

when is a word not a word

Words, roughly speaking, in the psycholinguistic sense of ‘items in the mental lexicon’, consist of a phonological form coupled with semantic content. They mean something, and they have a sound structure, and these two properties can theoretically be analysed and discussed independently of each other. To give a phonological description of a particular word, for example, you would want to discuss what kind of consonants and vowels it was composed of, how many syllables, the structure of the syllables, the stress pattern, and so on; what the word actually means in the language can be treated as a separate question altogether.

You can also manipulate certain characteristics of the phonological properties of the words of a given language. You could, for example, observe that English allows the sequence “pr” at the start of words (prince, press) and “nd” at the end of words (wind, sand), and so construct the sequence “prend”. It sounds a bit like “friend”, and “pretend”, but it isn’t really related to either, and it doesn’t actually mean anything. It’s a pseudo-word, or a non-word – a phonological form which is legitimate according to the rules governing English sound sequences, but which has no meaning associated with it.
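The recipe described above – glue together attested word-initial and word-final sequences, then throw away anything that already exists – can be sketched in a few lines. This is only an illustration: the onset, vowel, and coda inventories and the mini-lexicon below are invented for the example, not drawn from any real stimulus set.

```python
import itertools

# Toy inventories of attested English word-initial and word-final
# sequences (orthographic stand-ins, for illustration only).
onsets = ["pr", "fr", "st", "b"]
vowels = ["e", "i", "a"]
codas = ["nd", "st", "mp"]

# A small stand-in lexicon; a real study would check candidates
# against a full dictionary so no generated form is an existing word.
real_words = {"brand", "print", "stand", "best"}

def generate_nonwords():
    """Combine attested onsets, vowels, and codas into candidate
    forms, keeping only those not found in the lexicon."""
    candidates = ("".join(parts) for parts in
                  itertools.product(onsets, vowels, codas))
    return [w for w in candidates if w not in real_words]

nonwords = generate_nonwords()
print("prend" in nonwords)   # prints True: legal phonotactics, no meaning
print("stand" in nonwords)   # prints False: it's a real word
```

The point of the filter is exactly the “prend” property: phonotactically legitimate, lexically empty.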

This would be just so much abstruse blether, except that non-words have been put to use in practical real-life contexts, with intriguing consequences. There exists a particular kind of language impairment in which, out of all a child’s cognitive abilities, only their language development seems to be impaired (in the absence of factors such as brain damage, hearing impairment, and so on). This is called Specific Language Impairment, or SLI. It runs in families. It has a genetic component. And geneticists have demonstrated that there is a linkage between particular regions of particular chromosomes, and particular language-related skills – most interestingly, the ability to accurately repeat lists of “nonsense words”, in tests known as nonword repetition tests.

What these tests consist of is, generally, a pre-recorded list of non-words, such as “doppelate” and “ballop”. The child hears these items played one at a time, with enough of a pause in between for them to attempt to repeat what they’ve just heard. Children with SLI not only show less accuracy in producing these items (dokkelate, toppelate, toppate might be the kind of errors you’d elicit), but performance on this kind of test is, as they say, a good marker of a heritable phenotype.

The idea behind using nonword tests was, at least originally, that it would allow us to see what the child had really mastered of the English sound system, or what his or her phonological skills were really like, once divorced from the messiness attached to their production of real words (all sorts of factors affect a child’s acquisition of real-language vocabulary, and it’s quite possible for a particular sound to be mis-pronounced in one word but produced accurately in another word). If we’re interested in “pure phonology”, then seeing how children handle phonological forms which have no semantic, pragmatic, or lexical baggage would seem to be the ideal method.

Unfortunately, large numbers of practical difficulties very quickly emerged as soon as researchers started using nonword repetition tests. One is that you need to control exactly how similar a non-word is to real words: it matters that the nonword “ballop” is really quite reminiscent of both “gallop” and “ballot”. You also need to control what combinations of sound-segments appear in your nonwords: the sequence /mf/ is legal in English (“triumph”), but much rarer than the sequence /st/, and so much harder to repeat accurately. Longer nonwords are of course more difficult to remember and repeat than shorter ones, so if your set of nonwords includes many three-syllable items with rare sound sequences and many four-syllable items which are highly reminiscent of real words, it becomes much more difficult to pin down whether a child’s poor performance is due to specifically phonological issues (such as the rarity of the sound-sequence), versus more general memory-related issues such as the number of syllables they have to remember.
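The phonotactic-frequency control mentioned above (/st/ common, /mf/ rare) is usually operationalised as some kind of average sequence probability over a reference lexicon. Here is a minimal sketch of that idea using letter bigrams over a made-up six-word lexicon; a real stimulus designer would compute this over phonemically transcribed corpus counts, not orthography.

```python
from collections import Counter

# Toy lexicon (orthographic stand-in for phonemic transcriptions).
lexicon = ["triumph", "trust", "stamp", "strand", "list", "most"]

def bigram_counts(words):
    """Count adjacent two-symbol sequences across the lexicon."""
    counts = Counter()
    for w in words:
        for i in range(len(w) - 1):
            counts[w[i:i+2]] += 1
    return counts

counts = bigram_counts(lexicon)
total = sum(counts.values())

def mean_bigram_prob(nonword):
    """Average bigram probability: the rough phonotactic-probability
    score one would balance across nonword items."""
    pairs = [nonword[i:i+2] for i in range(len(nonword) - 1)]
    return sum(counts[p] / total for p in pairs) / len(pairs)

# "st" occurs often in this toy lexicon, "mf" never, so a nonword
# built on frequent sequences scores higher than one built on rare ones.
print(mean_bigram_prob("stost") > mean_bigram_prob("bomf"))  # prints True
```

Balancing items on a score like this is what lets poor repetition of rare-sequence items be attributed to phonotactics rather than, say, item length.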

This, I think, feeds into a further problem which needs to be addressed, especially in the context of trying to design new sets of nonwords which would steer clear of these early problems and allow hypotheses to be tested to distinguish between what is “phonological” and what is general “memory” (or whatever). That is the question of what, precisely, are the aspects of phonology which are of most interest to researchers investigating language impairments with a genetic component. Taking an overview of the lexicon of, say, a typically developing 7-year-old, what are the specifically phonological properties of the lexical items which we can use to test the phonological competence of language-impaired children and their family members? Or, from the other direction, what are the properties, or hypothesised properties, of the putatively phonological impairments in SLI which would allow nonwords to be designed so as to elicit, or elucidate, error patterns of theoretical importance?

In other words, for example, should a good set of nonwords rely on CVCV structures only to the extent that these exist in the two-syllable words in the lexicon? Is it useful to include presumably articulatorily complex sequences such as triconsonantal clusters, or rare consonant sequences across syllable boundaries? What is the relationship between the relative frequency of particular consonants (e.g. /ð/) and their being late-acquired?

And what exactly would a specifically phonological impairment look like? Should errors be predicted mainly in one natural class, such as fricatives (but how would you differentiate a phonological difficulty with a natural class from an articulatory or perceptual difficulty with fricative production or perception?), or mainly in syllable structure, or stress assignment? Would you predict that a nonword where all the consonants were voiceless stops would be easier or harder than one where all the consonants were nasals, and if so, why? Would it be useful to have multisyllabic items with all front vowels, or all back vowels, rather than a mixture?

This matters because presumably, the usefulness of nonword repetition tests is the light which they are supposed to shed on phonology – but of course speech sounds can only be described as phonological to the extent that they mirror the properties of real words as really used in a real language. (You can’t use nonword minimal pairs to demonstrate a phonemic difference, for example: minimal pairs can only be drawn from the lexicon.) So nonwords have to reflect in some way the actual characteristics of the items in a person’s or a population’s actual lexicon. Phonology can’t exist without a lexicon, but while on the one hand nonwords that are too similar to real words undermine the rationale behind using non-words in the first place, on the other hand nonwords that are too dissimilar from the lexicon make the task into one of attempting to pronounce non-native sound sequences, rather than plausible-but-non-existent native words. Erring in either of these directions will no doubt leave us better off than with stimuli which are poorly controlled for phonological properties, but there are still plenty of questions which need an answer.
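One crude but common proxy for the too-similar/too-dissimilar tension is the distance from a nonword to its nearest real-word neighbour: “ballop” is one edit away from “gallop”, which is exactly what makes it suspiciously wordlike. The sketch below computes that neighbour distance with textbook Levenshtein distance; the four-word lexicon is invented for illustration.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i-1] == b[j-1] else 1
            dp[i][j] = min(dp[i-1][j] + 1,      # deletion
                           dp[i][j-1] + 1,      # insertion
                           dp[i-1][j-1] + cost) # substitution
    return dp[m][n]

lexicon = ["gallop", "ballot", "wallop", "triumph"]

def nearest_neighbour_distance(nonword):
    """Distance from a nonword to its closest real word: low values
    flag items like 'ballop' that may be too wordlike to count as
    semantics-free tests of phonology."""
    return min(edit_distance(nonword, w) for w in lexicon)

print(nearest_neighbour_distance("ballop"))  # prints 1
```

A designer might insist on a minimum neighbour distance to keep nonwords nonword-like, while capping it so the items stay within native phonotactics rather than drifting into foreign-sounding territory.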

coarticulation is not a design flaw

I’m just back from a talk* where it was argued that language is far from a perfect or optimal system, but something that happens to work, most of the time, in spite of being bodged together in a clumsy and inelegant way (it could never have been designed to be this way, but with a bit of tinkering it comes to have the properties which make it at least functional).

The argument itself is coming from a background in American generative syntax, and so most of the argumentation was directed towards showing that human minds don’t and can’t represent entire trees for complicated syntactic structures. Which is actually, and thankfully, not even slightly controversial in many linguistics departments today, although apparently not all.

The phon-link, however, came in the discussion session following the talk, when one example of a clumsy solution to the language problem was drawn from speech production. According to the speaker, it’s not ideal that speech is produced via a single-tube system (ie the vocal tract) – because it gives rise to problems such as coarticulation.

For a single-sentence tutorial on coarticulation, consider the way that you say the word ‘ten’ on its own, and the way that you say the word sequence ‘ten past’ – the end of the word ‘ten’ becomes more similar to the start of the word ‘past’ when you say them together, particularly in fast speech. It might sound a bit more like ‘tem past’, in other words.

But coarticulation isn’t a problem. It’s not a problem for speakers, it’s not a problem for hearers – if it’s a problem for anyone, it’s only for people who adopt the troublesome assumption that the components of words have their own form in some sense independently of the words they belong to, and that this form somehow changes to take on the shape of adjacent or nearby segments when the segments are all assembled in order to be articulated. The problem in speech analysis is not how coarticulation can happen, but how segmentation can be motivated, for what is an inherently continuous (non-segmented) stream produced by the overlapping movements of the tongue, lips, jaw, and so on – and it’s in precisely the “transitions” between what could be thought of as “segments” that so much of the information that is most valuable for hearers is located.

To paraphrase someone else’s slogan – coarticulation in speech is not noise, but information! and the perception of inelegance and clumsiness is very much just in the eye of the beholder.

* Actually, I’ve just discovered this wee rant languishing as a draft in a folder somewhere – the talk was so long ago I can barely remember what the speaker looked like. But I need to post it, if only for my own phon-related health. On account of unavoidable weekly commitments I haven’t been to the departmental phonetics/phonology seminar for weeks – months even – and the p-side of my brain (p-centre?) is getting worryingly undernourished.