Summary: A song’s lyrics have an impact on our ability to process pitch, but not necessarily because of the meaning of the words.
Source: University of Montreal
Have you ever noticed when someone sings out of tune? Like when you’re at a karaoke bar and your best friend is belting out her favorite Adele song, but misses the mark miserably? Have you ever wondered how you know instantly that she sings flat?
Well, Michael Weiss might have an answer for you.
As a postdoctoral fellow of Professor Isabelle Peretz at the International Laboratory for Brain, Music, and Sound Research at the Université de Montréal, Weiss initiated a series of listening experiments to see how humans process pitch.
He chose well-known songs like “Happy Birthday” and “Over the Rainbow” and wanted to know if the presence of lyrics plays a role in detecting wrong notes: If there are lyrics, do we have more trouble detecting pitch changes? If so, is it because we are processing the meaning of the words, or simply because texts have many changing syllables?
The results of Weiss’ study were published in May in the journal Psychology of Music. We asked him to tell us more about it, with some musical examples.
First, can you define what pitch is?
Pitch is a psychological phenomenon related to the frequency of a sound wave and is the basis of how we hear melodies, which are just a series of pitches. We hear lower frequencies as lower sounds and higher frequencies as higher sounds, but there’s a lot more to it. In music, pitch is also what allows us to tell if a note is “in tune” relative to the other notes.
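The relationship between frequency and pitch is logarithmic: doubling a frequency raises the pitch by one octave, and musicians divide each equal-tempered semitone into 100 "cents" to measure fine mistuning (the unit used in the study below). As a minimal sketch, assuming 12-tone equal temperament and the standard A4 = 440 Hz reference (the function name is ours, for illustration):

```python
import math

def cents(f_ref, f_sung):
    """Interval between two frequencies in cents
    (100 cents = 1 equal-tempered semitone, 1200 cents = 1 octave)."""
    return 1200 * math.log2(f_sung / f_ref)

# The study's mistunings were 50 or 100 cents. Relative to A4 = 440 Hz,
# a 50-cent (quarter-tone) sharp note lands at about 452.9 Hz:
half_semitone_sharp = 440 * 2 ** (50 / 1200)
```

A 100-cent error means the singer landed a full semitone away from the expected note; 50 cents puts the note exactly between two semitones, which most listeners hear as clearly "out of tune."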
How do people know if something is right or wrong?
Most of the time we just know it; it’s not something we consciously think about. That’s why we sometimes refer to an out-of-tune note as “sour”: it’s a visceral experience. We all grow up in a musical culture, or more than one, and learn the “rules” of that music just by listening: things like scales or keys (e.g. “C major”) that are really just descriptions of our expectations.
That said, sensitivity to pitch varies from person to person, and some individuals, those with what is known as congenital amusia, have great difficulty noticing an out-of-tune note. (You can test yourself for amusia on our lab’s website.)
What did you want to find out in your study?
We had a very narrow question: when you hear a melody sung by a voice, is it harder to track the pitch when there are lyrics? So we made short snippets of well-known tunes and asked a singer to sing them in three ways: (1) with lyrics, (2) without lyrics but with alternating syllables, that is, “scat” singing, as in “doo bah dee bah”, and (3) with an invariable syllable (“la la la”). In all cases, the pitch information was the same.
However, lyrics add “extra” information for the listener to process – the meaning of the words. Scat singing also has some additional information to process because of the changing sounds, but it doesn’t mean anything.
So, under these conditions, we can see if adding text makes the task harder, and if so, if it’s because of processing meaning or simply because of changing sounds.
For each experiment, you had two to three dozen student participants listen to pop songs specially recorded by an amateur singer. Which ones, for example?
We chose extremely well-known songs because we wanted our listeners to have expectations of how the tune should sound when perfectly in tune. That way, if we mistune a note, they’ll notice. So we chose songs like “Happy Birthday”, “Over the Rainbow” and “Brother John”.
And you came to the conclusion that some songs are easier to process than others, namely the ones with fewer lyrics or just repeated syllables?
The results showed that it was more difficult to detect an out-of-tune note in the songs with lyrics or scat singing than in the songs simply sung with “la la la”. Importantly, we found that there isn’t much of a difference between lyrical singing and scat singing. Taken together, this means that lyrics affect our ability to process pitch, but not necessarily because of the meaning of the words.
Who do you think will benefit from your research?
This is basic research, meaning that it helps us understand more about the psychological phenomenon of musical pitch processing without having an immediate use or application. That said, it could be of interest to those who sing in choirs or work in music therapy, for example. But in general, I hope it raises more research questions about how we process pitch. The singing voice is understudied in music perception, even though it is the “primal instrument” of our species.
And if we look beyond music, could all of this help us better understand how the brain works?
That is the goal of cognitive psychology – to understand how our brain processes information and enables us to act on it. I think the perception of music is such an important area because we are moved by music from the earliest moments of life – in lullabies, play songs and the like – and because music is present in all societies in different forms.
Humans have had music longer than we can collectively remember — the earliest physical musical instruments are tens of thousands of years old, and we may have been singing long before that. So how we learn the rules of music and how we respond to that information is as fundamental to me as how we acquire and use language.
About this music and neuroscience research news
Author: Jeff Heinrich
Source: University of Montreal
Contact: Jeff Heinrich – University of Montreal
Image: The image is in the public domain
Original research: Closed access.
“Detecting Pitch Errors in Known Songs” by Michael W. Weiss et al. Psychology of Music
Detection of pitch errors in known songs
We examined pitch error detection in well-known songs sung with or without meaningful lyrics.
In Experiment 1, adults heard the opening phrase of familiar songs sung with lyrics or with a repeated syllable (“la”) and judged whether they had heard an out-of-tune note. Half of the renditions had a single pitch error (50 or 100 cents); half were sung correctly. Listeners were worse at detecting pitch errors in songs with lyrics.
In Experiment 2, intra-note pitch fluctuations were eliminated by applying automatic pitch correction to the same performances. Again, pitch error detection (50 cents) was worse for renditions with lyrics, confirming the detrimental effect of the words.
In Experiment 3, songs were sung with repeated syllables or with scat syllables to determine the role of phonetic variability. Performance was worse for scat than for repeated syllables, indicating a detrimental effect of phonetic variability, but overall performance was better than in Experiment 1.
In Experiment 4, listeners heard songs of all three types (repeated syllables, scat, lyrics) within the same session. Performance was best with repeated syllables (50 cents) and did not differ between the scat and lyric versions.
In short, pitch tracking of well-known songs was impaired by the presence of words, an impairment primarily due to phonetic variability rather than interference from semantic processing.