Brow Beat

The Problem with Scientific Studies of Music

1789 drawing of Mozart by Doris Stock

Last week, Time treated readers to an explanation of what makes scary music scary. The short answer: our monkey brains. Taking the famous two-note theme from Jaws as Exhibit A, the article unleashed a whole lot of Science: “Those irregular minor chords trigger the same instinctual response a mama marmot feels when her babies are threatened.”

Mama marmots? Really?

The story went on to cite “biologically-ingrained reasons why sudden, dissonant sounds and minor chords make us apprehensive,” and to propose a link between John Williams’s “chilling, crescendoing minor chords” and “the screeches of young frightened animals.”

The piece’s claims aren’t helped by the carelessness with which it tosses around terms from music theory. Jaws’s spine-tingling refrain does not have traditional “minor chords,” exactly, and traditional minor chords are consonant, not dissonant. More importantly, though, the idea that our emotional response to music can be chalked up to simian reflexes ignores the extent to which our experience of music is shaped by the culture we grow up in. It’s a blind spot that many scientific studies of music share.

Consider two studies cited right here on Brow Beat. Back in February, the Wall Street Journal spoke to neuroscientists and psychologists who claimed that Adele’s “Someone Like You” made people cry because it started soft and then got louder, featured the “abrupt entrance of a new ‘voice,’” and “contained unexpected deviations in the melody or harmony.” And just a couple of weeks ago, the same paper reported on a study arguing that because pop music is slower and more often in a minor key than it used to be, it is clearly becoming “sadder” and more “emotionally ambiguous.”

Despite the 21st-century methods employed by some of these scientists, their ideas sometimes have a rather dated feel—especially when journalists get hold of them and start “translating” those ideas for a general audience. Since at least the Renaissance, Western musical convention has freighted certain combinations of whole steps and half steps with emotional valence. In 1682, the French baroque composer Marc-Antoine Charpentier wrote a treatise, Règles de Composition, drawing out the affective resonance of 17 keys. G major, he said, was “serious and magnificent”; B-flat minor, “obscure and terrible.” The German poet Christian Schubart carried the practice into the Romantic era in a treatise published posthumously in 1806. His descriptions read like elaborate psychoanalyses of the 24 tonalities, each of which takes on a mind and agency of its own. D-flat major, for instance, is “a leering key, degenerating into grief and rapture. It cannot laugh, but it can smile; it cannot howl, but it can at least grimace its crying.” (Nice to meet you, D-flat major.) The 19th-century German physicist Hermann von Helmholtz was also influential in ascribing human characteristics and emotions to key signatures.

But as Philip Ball, author of The Music Instinct: How Music Works and Why We Can’t Do Without It, explains, Western perceptions of major modes as “happy” and minor modes as “sad” have no basis in genetics. The best demonstration of this is the way other cultures use modes and intervals close to our major and minor; in many cases, these pitch patterns create atmospheres opposite to what we’d expect. In one experiment, Western listeners asked to characterize Balinese music tended to describe one fragment as melancholy because it contained a leap resembling a minor third—and yet the Balinese considered that snippet of melody perfectly jolly. The same effect holds when Mozart buffs listen to minor-tinged gypsy folk tunes from Spain or Arabic maqams from the Muslim world. “I don’t see any evidence for anything intrinsic in our responses,” Ball concludes, “though I’ve heard some great arguments.” Like what? “Some think the narrowing of an interval from major to minor corresponds to a kind of emotional squeezing.”

Meanwhile, whether perceptions of consonance and dissonance are learned or innate is more of an open question. To a Western ear, the more consonant intervals—the octave, perfect fourth, or perfect fifth, for instance—are characterized by simpler frequency ratios between tones. (The A above middle C on a piano, for example, vibrates at 440 Hz; the A an octave up vibrates at 880 Hz, for a ratio of 2:1.) Dissonant intervals like the diminished fifth or “tritone” (once known as the “devil’s interval” for its grating effect) have more complex frequency ratios (e.g., 64:45). Ball notes that just about every musical culture he’s studied awards priority to the octave. And evidence that infants prefer the simpler ratios suggests that at least some part of our preference for consonance over dissonance is biologically ingrained. (One caveat, though: Babies can hear from within the womb, so it’s hard to completely rule out learning in these studies.)
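The arithmetic behind those ratios is easy to check. Here is a minimal Python sketch (my own illustration, not anything from the article or the studies it cites) that takes the 440 Hz A from the example above, builds a few equal-tempered intervals, and asks for the nearest simple fraction to each one. One wrinkle worth flagging: the 64:45 figure is a just-intonation tuning of the tritone, whereas on an equal-tempered piano the tritone’s ratio is the irrational √2.

```python
from fractions import Fraction

A4 = 440.0  # Hz: the A above middle C, as in the example above

# Interval sizes in equal-temperament semitones.
intervals = {
    "octave": 12,          # ideal simple ratio 2:1
    "perfect fifth": 7,    # ideal simple ratio 3:2
    "perfect fourth": 5,   # ideal simple ratio 4:3
    "tritone": 6,          # no tidy ratio; ~64:45 in just intonation
}

for name, semitones in intervals.items():
    ratio = 2 ** (semitones / 12)  # equal-temperament frequency ratio
    # Closest fraction with denominator <= 64: consonant intervals land
    # on simple ratios, the tritone on an unwieldy one.
    simple = Fraction(ratio).limit_denominator(64)
    print(f"{name:14s} {A4:.0f} Hz -> {A4 * ratio:6.1f} Hz, "
          f"ratio {ratio:.4f} ~ {simple}")
```

Run it and the fifth and fourth come back as 3/2 and 4/3 almost exactly, while the tritone’s best approximation under the same cap is an unwieldy 41/29: a rough picture of why one interval sounds stable and the other grates.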

Back to the Time article. By way of evidence for her baby-animal theory, the author points to a recent study by Colorado-based scientist Daniel Blumstein, who joined film-score composer Peter Kaye and communications professor Greg Bryant to test the notion that scary music chills us by echoing mammalian distress calls. In fact, the study itself is a lot more intelligent—and a lot less controversial—than its write-up implies. The researchers added “harsh, unpredictable, nonlinear sounds” to music samples and then measured the arousal levels of a group of listeners. When placed in scary situations, humans and other animals often produce erratic noises, especially at high pitches, so Blumstein and his team thought that replicating these sounds in music might induce a stress response. Sure enough, subjects reported more emotional arousal after hearing the tracks with the nonlinear elements, which included static and chaotic up-and-down shifts in pitch. (On the other hand, the effect softened when the frightening music was paired with videos of people doing boring things, like drinking coffee or reading.)

As one Slate colleague observed, it’s not too surprising that both humans and non-human animals balk at harsh and unexpected noises. To the extent that music incorporates these noises, we’re likely to find it alarming. The study did yield a suggestive result, though—that rapid upshifts in pitch created more arousal than downshifts, perhaps because they “would naturally be associated with a sudden increase in vocal cord tension; something that might happen when a mammal is suddenly scared.”

So what’s the takeaway here? It’s hard to argue with the scientists’ claim that “Most of the perceptual and cognitive machinery underlying music processing likely evolved for a variety of reasons not related to music—many related to emotional vocal communication.” Yet it seems foolish, too, to downplay the cacophony of other factors that inform our experience as music listeners. Music is drenched in culture, intentionality, and—corny as it sounds—humanity. We may be animals underneath it all, but there are still some things that separate man and marmot.