We used ambiguous speech stimuli in French, like the sentence "c'est l'ami" ("this is the friend"), which can also be heard as "c'est la mie" ("this is the crumb"). The two sentences have exactly the same phonetic content. Still, listeners are generally able to distinguish between them, which means there must be subtle acoustic differences they can rely on. (2/X)
To reveal this acoustic difference (or "segmentation cue"), we simply generated many new utterances with random prosody and had participants categorize them as "c'est l'ami" or "c'est la mie". Then, by relating the exact prosody in each trial to the corresponding response of the observer... (3/X)
...we were able to measure the typical prosody interpreted as "c'est l'ami" vs. the one interpreted as "c'est la mie". This is our main result. In a nutshell, the fundamental frequency and duration of the initial vowel ("a") determine whether you hear the sound as one word ("l'ami") or two ("la mie"). And this works with other word pairs too! (4/X)
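For the curious, here is a minimal sketch of that reverse-correlation logic in Python. This is a toy simulation, not our actual analysis pipeline: the simulated "listener", the two prosodic features, and all numbers are made up for illustration.

```python
# Toy reverse-correlation sketch (NOT the actual fastACI analysis):
# relate per-trial random prosody to categorization responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 2000

# Random prosody applied to each utterance: F0 shift (semitones) and
# duration scaling (log ratio) of the initial vowel /a/ (hypothetical).
f0_shift = rng.normal(0.0, 2.0, n_trials)
dur_scale = rng.normal(0.0, 0.3, n_trials)
X = np.column_stack([f0_shift, dur_scale])

# Simulated listener: both cues bias the percept (direction arbitrary).
logit = 0.8 * f0_shift + 1.5 * dur_scale + rng.normal(0, 1, n_trials)
response = (logit > 0).astype(int)  # 1 = "la mie", 0 = "l'ami"

# The fitted weights recover which prosodic cues drove the decisions.
model = LogisticRegression().fit(X, response)
print(dict(zip(["f0_shift", "dur_scale"], model.coef_[0])))
```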
For the French speakers here, a little demo of the effect: by modifying the prosody of a sentence, we can radically change its meaning. http://dbao.leo-varnet.fr/demo-cest-lamie-cest-la-mie/ (5/X)
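If you want to try this kind of manipulation at home, here is a hedged sketch using the parselmouth Python bindings to Praat. This is not the code behind the demo; the input file name, the scaling factors, and the vowel timing are all hypothetical.

```python
# Hedged sketch of F0/duration manipulation via parselmouth (Praat).
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("cest_lami.wav")            # hypothetical input
manip = call(snd, "To Manipulation", 0.01, 75, 600)

# Shift F0 up by 20% across the whole utterance.
pitch_tier = call(manip, "Extract pitch tier")
call(pitch_tier, "Multiply frequencies", snd.xmin, snd.xmax, 1.2)
call([pitch_tier, manip], "Replace pitch tier")

# Lengthen a (hypothetical) initial-vowel region by 50%.
dur_tier = call("Create DurationTier", "dur", snd.xmin, snd.xmax)
call(dur_tier, "Add point", 0.10, 1.0)
call(dur_tier, "Add point", 0.15, 1.5)   # vowel onset (hypothetical)
call(dur_tier, "Add point", 0.30, 1.5)   # vowel offset (hypothetical)
call(dur_tier, "Add point", 0.35, 1.0)
call([dur_tier, manip], "Replace duration tier")

resynth = call(manip, "Get resynthesis (overlap-add)")
resynth.save("modified_prosody.wav", "WAV")
```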
Cherry on the cake: all experiments are fully reproducible and all analyses entirely replicable using our home-made toolbox fastACI https://github.com/aosses-tue/fastACI, and the data is openly available on #Zenodo https://zenodo.org/record/7865424 #OpenScience #OpenData #OpenAccess (6/6)
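As an aside, a small sketch of fetching the file list for that record through the public Zenodo REST API, assuming the standard record JSON layout (field names are my assumption, not from the paper):

```python
# Hedged sketch: list files attached to the Zenodo record.
import requests

record = requests.get("https://zenodo.org/api/records/7865424").json()
for f in record.get("files", []):
    print(f["key"], f["links"]["self"])  # file name and download URL
```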