And what a fascinating time it was. I went with the expectation of finding out about lots of new ideas, and there were certainly plenty of new findings, new measurement methods, and new and refined analyses.
But by far the most engaging sessions (I thought) were the ones that looked back to the early days of phonetics and linguistics. The phonetics crew at UCL have recently discovered some forgotten film reels dating right back to the 1920s, and took the opportunity to show the conference what this collection consisted of. The films showed everything from early x-ray images of the vocal tract, to the first machine which could recognise speech, to the exciting kymography techniques which feature so prominently in some of Firth’s papers. (Wikipedia on the kymograph; in the 20s they also used the sensitive flame, described in Wikipedia in its application in the Rubens tube.)
There was also a fascinating account of the work that was done in Japan in the 1940s. Somehow the groundbreaking work from the Japanese labs had featured in some of the reading I did during my thesis years (completely unconnected to my thesis, like so much of the most interesting material I read back then), but Michael Ashby and Kayoko Yanagisawa’s presentation of the London–Tokyo links also brought in some intriguing detective work as they tracked down the source of their collection of glass lantern slides, and threw light too on the development of the stylised “head diagram” used by everyone from Daniel Jones onwards for illustrating the articulators (see here, eg, p79 onwards).
Which got me thinking. On the one hand, it was striking how technologically advanced they were in those early days – they had all sorts of innovative techniques for observing and imaging the production of speech, and they had no hesitation in applying the newest available technology to questions of articulation and acoustics. That spirit, I think it’s fair to say, is still alive and well in phonetics, with people using all sorts of technologies to investigate different aspects of articulation (electropalatography, laryngoscopy, ultrasound, not to mention electromagnetism…), and so we continue to add to our knowledge of what goes on in the vocal tract when people speak.
On the other hand, much of the theoretical understanding was also already in place about what speech means, or is, or does, in the broader context of human communication. Knowing what acoustic effects arise when air flows across articulators arranged in particular ways is one thing; knowing what contribution those sounds make to the enterprise of making each other understood is quite another. Yet for people like Firth and his direct intellectual descendants, their views on the phonological system (and other parts of the language system) grew out of the best understanding they had of phonetics – both in their explicitly stated principles and, to a large extent, in their descriptive and analytical practice.
Compare this to a talk I was at last week (not at BAAP) where a valiant attempt was made to integrate changing conjunctions of formant values into the generative understanding of what phonology is (ie, to allow phonological grammars to accommodate – even ‘predict’ – sound variation and change). I am tentatively, but increasingly, of the view that there is simply no way to validate the staples of the generative apparatus (is that a mixed metaphor?) on the basis of speech data. It may be possible to tweak a generative grammar so that it becomes something that can handle variation and change, but that’s what it becomes – it doesn’t start, from first principles, with that capability. If you believe that “sounds”* can be decomposed into distinctive features, what aspects of the speech stream can you offer as evidence for such features? The defence that phonological features need not make reference to the speech stream – by virtue of existing on an altogether different plane of being – is increasingly unconvincing, particularly when it is coupled with an expressed wish to make allowances for phonetic variation within the phonological system.
In one of the presentations, Michael Ashby mentioned that the 1930s was the decade of international congresses (the first three ICPhSs!) and commented, quite rightly, on what an exciting time it must have been, in terms of who was meeting whom, and when, and whose ideas influenced whose, and the impact of all of these developments right down to the present day. You can’t help feeling that even though the scientific study of speech sounds is so relatively young, we could already be in danger of falling prey to a sorry historical amnesia. Keep alive the sensitive flame of phonetics, the man said, but keep alive too the story of where we’ve come from – not just to make sense of the present, but to equip us for the future too! (Best read to a particularly jubilant trumpet fanfare, I would suggest.)
* Always bearing in mind Roy Harris’s immortal analogy, “To ask ‘How many sounds are there in this word?’ is to ask a nonsense question (for the same kind of reason as it is nonsense to ask how many movements it takes to stand up).” Precisely.