Here’s a talk by some people from my department! There are too many technical terms for me to make it sound terribly accessible on the fly, so I’ll just tell you what they’re saying.
They basically want to know if segmental and suprasegmental information is processed together, or independently of each other.
The study used nonsense sequences differing in both segmental and prosodic information,
taken from carrier sentences and spliced into each other after the preposition ‘in’; it was a two-choice classification task – participants had to decide whether the stimuli contained ‘d’ vs ‘g’, or consisted of one word vs two words.
The predictions were subtle and clever enough to have intelligent members of the audience nodding quietly and with satisfaction.
The results of the study confirmed neither of the predictions about whether processing was integral versus
independent. When there was an F0 (pitch) cue to a word boundary, the stimuli seem to have been processed in parallel – listeners perhaps using the cue to anticipate when the target consonants would appear, although this was only a tentative conclusion.
In the question/discussion session, one questioner worries that the acoustic cues are inadequate because they don’t perform the same function out of context as they do in context. The presenter agrees that the stimuli don’t sound very natural, but this
does not change the conclusions of the experiment.
And now I need to press publish before my very feeble battery runs out.
[Edited by the speaker herself!!!]