Does Music Compute?: Computational Approaches to Emotional Expression


In looking for critical approaches to emotional expression in music, I discovered the work of Patrik N. Juslin, Anders Friberg, and Roberto Bresin, who propose a computational model for analyzing emotion in music. This model de-emphasizes the cultural meanings of music while providing a vocabulary for describing emotional expression in it.

In “Toward a Computational Model of Expression in Music Performance: The GERM Model,” Juslin, Friberg, and Bresin note that several approaches have been used to study the expression of emotion in music, including “generative rules,” “essentic forms,” “cues from vocal expression of emotion,” “composers’ pulses,” and “physical motion” (64). In response, the authors propose a computational model, the GERM model, that integrates these approaches: “The general aim of the GERM model is to describe the nature and origin of patterns of variability in acoustic measures shown over the time-course of a human music performance” (65-6). The authors go on to describe how these approaches come together in a way that allows them to draw meaning from a computational analysis.

On one hand, the computational approach does not seem compatible with my inquiry, which focuses on how English-speaking audiences of K-pop derive emotional meaning from music whose lyrics are in a foreign language, namely Korean. Applying a computational model to creative expression seems odd, in that such an approach uses something akin to the scientific method to reduce artistic nuance to numbers, formulas, and algorithms. Indeed, Juslin, Friberg, and Bresin use just that language to describe computational methods, noting, “It is commonly suggested that the central act of the scientific method is to create a model,” which “is a simplified representation of a phenomenon in terms of its essential points and their relationships” (65). From a cultural studies point of view, this still leaves certain questions unanswered. Moreover, the classical music and performances under investigation in their study may be more amenable to a computational approach than popular music, which, by its very nature, is structured differently.

On the other hand, the authors do provide a vocabulary that I can use to describe the emotional expression of music. In describing the GERM model, they cite scholarship showing that “performers are able to communicate specific emotions to listeners” by using “a code which involves a whole set of acoustic cues (i.e. bits of information)” (71). They proceed to summarize such cues and link them to particular emotions. For example, the authors associate “fast mean tempo” and “bright timbre” with happiness, a positive valence, but “slow mean tempo” and “dull timbre” with sadness, a negative valence. Because many of us listen to music so often, we may be so familiar with such cues that we pay them little attention. Yet the summary is useful: it can help me identify these common cues in K-pop and understand how listeners make meaning out of them.
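To make the idea of a cue-to-valence "code" concrete, here is a minimal sketch in Python of how such a mapping might look in computational form. The thresholds, cue names, and scales below are my own illustrative assumptions, not values from Juslin, Friberg, and Bresin's paper; the sketch only mirrors their qualitative pairing of fast tempo and bright timbre with positive valence, and slow tempo and dull timbre with negative valence.

```python
# Toy illustration of an acoustic-cue "code" for emotional valence.
# All cutoffs and scales are hypothetical assumptions, not from the GERM paper.

def classify_valence(mean_tempo_bpm: float, timbre_brightness: float) -> str:
    """Return a coarse valence label from two assumed cue values.

    mean_tempo_bpm: tempo in beats per minute (assumed: >120 fast, <90 slow)
    timbre_brightness: 0.0 (dull) to 1.0 (bright) -- an assumed scale
    """
    if mean_tempo_bpm > 120 and timbre_brightness > 0.6:
        return "positive"   # fast mean tempo + bright timbre -> happiness
    if mean_tempo_bpm < 90 and timbre_brightness < 0.4:
        return "negative"   # slow mean tempo + dull timbre -> sadness
    return "ambiguous"      # mixed cues: listeners weigh them differently

print(classify_valence(135, 0.8))  # prints "positive"
print(classify_valence(70, 0.2))   # prints "negative"
```

Even this toy version shows why the model appeals as a vocabulary: it names the cues (tempo, timbre) that listeners process without articulating them, while also exposing what a cultural reading would add, since nothing in the mapping accounts for lyrics, language, or context.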

Image

“Math in Music Project.” The Mathinator. 12 Oct 2012. Web. 2 Jan 2014.

Source

Juslin, Patrik N., Anders Friberg, and Roberto Bresin. “Toward a Computational Model of Expression in Music Performance: The GERM Model.” Musicae Scientiae 5 (2001-2002): 63-122.