back from baap

And what a fascinating time it was. I went expecting to hear about lots of new ideas, and there were certainly plenty of new findings, new measurement methods, and new and refined analyses.

But by far the most engaging sessions (I thought) were the ones that looked back to the early days of phonetics and linguistics. The phonetics crew at UCL recently discovered some forgotten film reels dating right back to the 1920s, and they took the opportunity to show the conference what this collection consisted of. The films showed everything from early x-ray images of the vocal tract, to the first machine that could recognise speech, to the exciting kymography techniques which feature so prominently in some of Firth’s papers. (See Wikipedia on the kymograph; in the 20s they also used the sensitive flame, described on Wikipedia in its application to the Rubens’ tube.)

There was also a fascinating account of the work that was done in Japan in the 1940s. Somehow the groundbreaking work from the Japanese labs had featured in some of the reading I did during my thesis years (completely unconnected to my thesis itself, like lots of the most interesting things I read in those years), but Michael Ashby and Kayoko Yanagisawa’s presentation of the London-Tokyo links also brought in some intriguing detective work as they tracked down the source of their collection of glass lantern slides, and threw light too on the development of the stylised “head diagram” used by everyone from Daniel Jones onwards for illustrating the articulators (see here, eg, p79 onwards).

Which got me thinking. On one hand, it was amazing how technologically advanced they all seemed to be in the early days – they had all sorts of innovative techniques for observing and imaging the production of speech, and they had no hesitation in making use of the newest technology available in order to apply it to questions of articulation and acoustics. That spirit, I think it’s fair to say, is still alive and well in phonetics, with people using all sorts of technologies to investigate different aspects of articulation (electropalatography, laryngoscopy, ultrasound, not to mention electromagnetism…), and so we continue to increment our knowledge of what goes on in the vocal tract when people speak.

On the other hand, a lot of the theoretical understandings were also in place about what speech means, or is, or does, in the context of human communication more broadly considered. Knowing what acoustic effects arise from air flowing across articulators arranging themselves in particular ways is one thing – knowing what contribution these sounds make in the enterprise of making each other understood, is a different matter. Yet for people like Firth and his direct intellectual descendants, their views on the phonological system (and other parts of the language system) grew out of the best understanding they had about phonetics, both in terms of their explicitly stated principles and to a large extent also in their descriptive and analytical practice.
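That first kind of knowledge – acoustic effects arising from particular articulatory configurations – can be caricatured with the textbook idealisation of the vocal tract as a uniform tube, closed at the glottis and open at the lips, whose resonances (the formants) fall out of a one-line formula. A minimal sketch in Python, with round-number assumptions (a 17.5 cm tract, sound travelling at 350 m/s) rather than measurements:

```python
# Quarter-wavelength resonances of an idealised uniform tube, closed at
# one end (the glottis) and open at the other (the lips) -- the textbook
# first approximation to the vocal tract for a neutral, schwa-like vowel.
# Tract length and speed of sound are round-number assumptions.

def tube_formants(length_m=0.175, speed_of_sound=350.0, n=3):
    """Return the first n resonant frequencies (Hz) of a uniform tube
    closed at one end: F_k = (2k - 1) * c / (4 * L)."""
    return [(2 * k - 1) * speed_of_sound / (4 * length_m)
            for k in range(1, n + 1)]

print(tube_formants())  # ≈ [500, 1500, 2500] Hz
```

For a neutral tract this gives resonances near 500, 1500, and 2500 Hz, close to the textbook formant values for schwa – though of course any real articulation immediately complicates the tube shape, which is exactly why the imaging techniques matter.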

Compare this to a talk I was at last week (not at BAAP) where a valiant attempt was made to integrate changing conjunctions of formant values into the generative understanding of what phonology is (ie, to allow phonological grammars to accommodate – even ‘predict’ – sound variation and change). I am tentatively, but increasingly, of the view that there is simply no way to validate the staples of the generative apparatus (is that a mixed metaphor?) on the basis of speech data. It may be possible to tweak a generative grammar so that it becomes something that can handle variation and change, but that’s what it becomes – it doesn’t start, from its first principles, with that capability. If you believe that “sounds”* can be decomposed into distinctive features, what aspects of the speech stream can you offer as evidence for such features? Increasingly, the defence that phonological features need not make reference to the speech stream by virtue of existing on an altogether different plane of being is unconvincing, particularly when it is coupled with an expressed wish to make allowances for phonetic variation within the phonological system.

In one of the presentations, Michael Ashby mentioned that the 1930s was the decade of international congresses (the first three ICPhS’s!) and commented, quite rightly, on what an exciting time it must have been, in terms of who was meeting whom, and when, and whose ideas influenced whom, and the impacts of all of these developments right down to the present day. You can’t help feeling that even though the scientific study of speech sounds is so relatively young, we could be in danger of falling prey already to a sorry historical amnesia. Keep alive the sensitive flame of phonetics, the man said, but keep alive too the story of where we’ve come from, not just to make sense of the present, but to equip us for the future too! (Best read to a particularly jubilant trumpet fanfare, I would suggest.)

________________

* Always bearing in mind Roy Harris’s immortal analogy, “To ask ‘How many sounds are there in this word?’ is to ask a nonsense question (for the same kind of reason as it is nonsense to ask how many movements it takes to stand up).” Precisely.

7 thoughts on “back from baap”

  1. May I ask the “so what?” question? To the ordinary person on the street, what is the practical relevance of what you study and do? (If it turns out that your particular field of endeavor has no practical use – sort of like the higher theoretical mathematics – that DOESN’T make it invalid, of course.) In other words, what is the benefit to us regular folks? I’m not trying to be snarky here, trying to put down what you do – I’m interested in your answer. Sometimes what seem to be arcane scientific studies turn out not to be so arcane after all.

  2. You mean it isn’t self-evidently gripping, exciting, exhilarating? :-)

    Phonetics has the luxury of being the area of linguistics which has to make the least effort to show its relevance to the wider world – partly because it’s the study of human speech, something that everyone does all the time, apparently effortlessly and almost always successfully.

    Actually producing speech is a highly skilled process, when you think that in normal conversations, words take only fractions of seconds to produce, and yet producing comprehensible words requires you to manoeuvre not just your tongue, lips, and jaw, but also less obvious articulators like the velum at the back of your mouth, and the vocal folds (somewhere down in your throat). When you watch the intricate dance of all these articulators using some imaging method like ultrasound, it’s amazing how fast they all move, and how fluidly they all cooperate, and yet when we speak, we have no idea really of how complex and skilful all this activity is. So studying it partly lets you see how it works as a mechanical process (if I expel this much air from my lungs and arrange my articulators in these particular configurations, these are the sounds that result), but it also shows how fascinatingly, admirably intricate this most ordinary of human behaviours actually is.

    That’s quite apart from implications in, eg, the clinical field. One of the most useful things that emerged when speech scientists started using electropalatographs to study people with speech impairments was that there can be a big difference between what speech-impaired speakers do with their articulators, and how that is perceived by their clinicians/therapists/interlocutors. Electropalatography is where you fit sensors to an artificial palate that sits inside the roof of your mouth – it allows you to observe when and where the tongue hits the roof of the mouth when you speak. There is, eg, a speech impairment where the speaker makes no audible distinction between “s” and “sh”, but when they studied these speakers using electropalatography, it turned out that some of these speakers were making different tongue configurations for the “s” sound compared to the “sh” sound, even when this couldn’t be heard as different. So then you can use this information to show impaired speakers the outputs from their electropalatographs, to help them to see how their tongues are moving, to help them make movements with their tongue which will make the auditory effect of their tongue movements more clear to their hearers.
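To make the electropalatography idea a bit more concrete, here is a toy sketch in Python. Real EPG palates carry several dozen electrodes arranged in rows from the alveolar ridge back towards the velum; here each frame is just an 8x8 grid of 0/1 contacts, and the patterns below are invented for illustration, not clinical data.

```python
# Toy sketch of comparing electropalatography (EPG) contact patterns.
# Each frame is an 8x8 grid of 0/1 tongue-palate contacts, with row 0
# at the front (alveolar ridge). The frames below are invented.

def contact_front_index(frame):
    """Fraction of all contacted electrodes lying in the front half of
    the palate -- a crude way to quantify the difference between an
    's'-like (front) and 'sh'-like (more retracted) contact pattern."""
    front = sum(sum(row) for row in frame[:4])   # rows 0-3: front half
    total = sum(sum(row) for row in frame)
    return front / total if total else 0.0

# Invented frames: 's'-like contact concentrated at the alveolar ridge,
# 'sh'-like contact further back along the palate.
s_frame  = [[1,1,1,1,1,1,1,1],
            [1,1,0,0,0,0,1,1],
            [1,0,0,0,0,0,0,1],
            [1,0,0,0,0,0,0,1],
            [0,0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0,0]]

sh_frame = [[0,0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0,0],
            [1,1,0,0,0,0,1,1],
            [1,1,0,0,0,0,1,1],
            [1,0,0,0,0,0,0,1],
            [1,0,0,0,0,0,0,1],
            [0,0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0,0]]

print(contact_front_index(s_frame) > contact_front_index(sh_frame))  # True
```

An index like this is exactly the kind of thing that can register an articulatory “s”/“sh” distinction even when no auditory difference can be heard – which is the point of the clinical example above.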

    I could go on and on … :-)

  3. Have linguists studied people who have suffered strokes which left them with impaired speech to see just HOW their speech has been impaired, from this technical point of view, and how these folks find work-arounds (if need be) to get their speech back?

  4. Quite a bit, I think, although I don’t actually know a huge amount about it. There is a whole little field called clinical linguistics & phonetics, but strokes aren’t something I’ve dabbled in much. Speech/language therapists who work with adults must spend a lot of their time working on strokes (although I’m sure I’ve heard papers on Parkinson’s as well, and there are sure to be other things I can’t quite remember right at the minute, such as damage to the voice through misuse/over-use, as seen in, eg, teachers, lecturers, and people who work in call centres). There is also a great deal of neurological interest in strokes, because the idea is that if you can see what part of the brain has been damaged in a stroke (leading to disruption of the speech &/or language systems), that would indicate that that part of the brain must play a role in that particular sub-system in healthy language use. This is mainly hand-waving on my part, though – if perchance any SLTs happen to drop by, your thoughts would be most welcome!

    Still on the topic of applications (or of applications that I don’t know much about!), there’s also the subfield of forensic phonetics. Eg, knowing “speaker-specific” characteristics of speech could help with identifying an anonymous recording as having been spoken by Suspect A rather than Suspect B, etc.

  5. Speaking of misuse (from the technical angle) of the voice: one wonders how, after nearly 50 years as a rock and roll singer, someone like Paul McCartney has any voice at all left. Yet, he sounds perfectly normal when he speaks.

  6. I suppose you could say the same about Olympic sprinters, or anyone else who spends a lifetime maximising the capabilities of different aspects of their physiology, if that’s how to put it :-)

  7. I used to chat sometimes with a physicist turned some kind of DNA torturer in my undergrad days. He said you first see what you can do, and then people find uses for it. Of course, sometimes it’s the other way round – you have a use, and then you need to find what you can do for that use!
