Photo: Courtesy of USC Viterbi School of Engineering
We all know that
music is a powerful influencer. A movie without a soundtrack doesn't
take us on the same emotional journey. A workout without a pump-up anthem
can feel like a drag. But is there a way to quantify these reactions?
And if so, could they be reverse-engineered and put to use?
In a new paper,
researchers at the University of Southern California mapped out how
things like pitch, rhythm, and harmony induce different types of brain
activity, physiological reactions (body heat, sweat, and changes in the
skin's electrical response), and emotions (happiness or sadness), and how
machine learning could use those relationships to predict how people
might respond to a new piece of music.
The results, presented at a conference
last week on the intersection of computer science and art, show how we
may one day be able to engineer targeted musical experiences for
purposes ranging from therapy to movies...
The researchers then fed the data, along
with 74 features for each song (such as its pitch, rhythm, harmony,
dynamics, and timbre), into several machine-learning algorithms and
examined which features were the strongest predictors of responses. They
found, for example, that the brightness of a song (the amount of its
mid- and high-frequency content) and the strength of its beat were both
among the best predictors of how a song would affect a listener’s heart
rate and brain activity.
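To make that pipeline concrete, here is a minimal sketch in Python, assuming librosa for feature extraction and scikit-learn for modeling. The four features (brightness via spectral centroid, beat strength via onset strength, tempo, and loudness), the random-forest model, and the synthetic response data are all illustrative stand-ins; the excerpt doesn't list the paper's 74 features or the specific algorithms the researchers used.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def extract_features(path):
    """Compute a small, hypothetical subset of song features like those
    in the study: brightness (spectral centroid, i.e. how much energy
    sits in the mid and high frequencies), beat strength (mean onset
    strength), tempo, and loudness (RMS energy, a dynamics proxy)."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    beat_strength = librosa.onset.onset_strength(y=y, sr=sr).mean()
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    loudness = librosa.feature.rms(y=y).mean()
    return np.array([brightness, beat_strength, float(tempo), loudness])

FEATURES = ["brightness", "beat_strength", "tempo", "loudness"]

# Stand-in data: 60 "songs" with one feature row each, plus a response
# (say, mean heart-rate change) built to depend mostly on brightness
# and beat strength, echoing the finding described above. With real
# audio, each row would come from extract_features(path).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, len(FEATURES)))
response = 0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.3, size=60)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, response)

# Rank features by how strongly they predict the response, mirroring
# the paper's search for the strongest predictors.
for name, importance in sorted(zip(FEATURES, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

On this synthetic data the ranking puts brightness and beat strength at the top by construction; with real recordings and measured responses, the same importance readout is one simple way to ask which audio features best predict a listener's reaction.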
Source: MIT Technology Review