New article published in JASA-EL @AcousticalSocietyofAmerica! We used a new technique to explore how listeners segment continuous speech sounds into words. A thread and a demo ⬇️

pubs-aip-org.insb.bib.cnrs.fr/
@psycholinguistics @psychology @linguistics

We used ambiguous speech stimuli in French, like the sentence "c'est l'ami" (this is the friend), which can also be understood as "c'est la mie" (this is the crumb). The two sentences have exactly the same phonetic content. Still, listeners are generally able to distinguish between them, which indicates that there must be some subtle acoustic differences they can rely on. (2/X)

To reveal this acoustic difference (or "segmentation cue"), we simply generated many new utterances with random prosody and had participants categorize them as "c'est l'ami" or "c'est la mie". Then, by relating the exact prosody in each trial to the corresponding response of the observer... (3/X)

...we were able to measure the typical prosody interpreted as "c'est l'ami" vs. the one interpreted as "c'est la mie". This is our main result. In a nutshell, the fundamental frequency and duration of the initial vowel ("a") determine whether you hear the sound as one word ("l'ami") or two ("la mie"). And this works with other pairs of words too! (4/X)
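The logic of relating random prosody to responses can be sketched as a toy reverse-correlation simulation. This is not the authors' actual fastACI pipeline: the listener model, the perturbation units, and the weights below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: each trial perturbs the f0 and the duration of the
# initial vowel "a" by a random amount (hypothetical z-scored units).
n_trials = 2000
f0 = rng.normal(size=n_trials)    # random f0 perturbation per trial
dur = rng.normal(size=n_trials)   # random duration perturbation per trial

# Hypothetical listener: a higher, longer initial vowel biases the percept
# toward "la mie" (two words); internal noise models lapses.
decision = 0.8 * f0 + 0.8 * dur + rng.normal(scale=0.5, size=n_trials)
response = np.where(decision > 0, "la mie", "l'ami")

# Reverse correlation: the mean stimulus difference between the two response
# categories reveals which acoustic dimensions drove the segmentation.
is_mie = response == "la mie"
kernel_f0 = f0[is_mie].mean() - f0[~is_mie].mean()
kernel_dur = dur[is_mie].mean() - dur[~is_mie].mean()
print(f"f0 kernel: {kernel_f0:.2f}, duration kernel: {kernel_dur:.2f}")
```

Both kernels come out positive here because the simulated listener actually uses both cues; in the real experiment, the same averaging logic recovers the prosodic profile that listeners associate with each parse.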

For the French speakers here, a little demo of the effect. By modifying the prosody of a sentence we can radically change its meaning. dbao.leo-varnet.fr/demo-cest-l (5/X)

Icing on the cake: all experiments are fully reproducible and all analyses entirely replicable using our homemade toolbox fastACI github.com/aosses-tue/fastACI, and the data is openly available on zenodo.org/record/7865424 (6/6)
