The Facts About Music Testing

One of our competitors announced that they’re finally following our lead and moving their library music tests from hotel meeting rooms to the internet. We’re glad they’ve caught on. At NuVoodoo we’ve been doing all of our research online since we set up shop at the end of 2010. We’ve spent those years refining online recruiting and sampling to learn what works best and what produces samples that replicate. We now draw respondents from myriad sources: nearly 100 online panels, social media and, when needed, telephone. When it comes to experience conducting research for radio stations online, we’ve simply been doing it longer and better than any other company.

Of course, our experience goes far beyond the five years we’ve been working at NuVoodoo. Carolyn started Critical Mass Media in the ’80s to serve two radio stations, which became a dozen, then dozens, and then over a thousand when CMM was rolled into what was then Clear Channel in 1999. Across more than thirty years, she’s had the opportunity to try all sorts of things when it comes to testing music.

The fact is that after decades of collecting and analyzing music research data on zillions of songs, we believe asking about fit is useless, because respondents don’t know whether or not a song fits on your station. They’ll give you an answer, because they’re respondents. But they really don’t know whether the song would fit, because that’s not their job. It’s the job of the programmer to decide which songs compose the station.

Respondents know what songs and types of songs they’ve heard on your station. So, they’ll respond with what you’ve already taught them fits. Acting on that information creates a tautology: the station plays what tests as fitting, and what tests as fitting is whatever the station already plays. That reinforces the too-narrow, same-sounding, repetitive playlists that have been giving radio a perceptual black eye for decades.

Worst of all, asking additional questions about what fits gets respondents back into their heads, instead of giving immediate responses to questions they can answer instinctively. In the car, listeners aren’t making reasoned choices about whether to turn a song up or switch to another station. They’re reacting. Immediately. No analysis.

We try to elicit that same kind of muscle-memory, automatic response in our music testing. To do that, we make it simple. Don’t recognize the hook? Fine, it’s unfamiliar. Love it? Great. Like it? Fine. Tired of it? Happens. Don’t like it? We understand. Keep the interview simple. Give them easy-to-answer questions. Make it pleasant for the respondent. Get actionable information for programmers.

Our data will tell you which songs are familiar and which songs people really like. From there, it’s up to expert programmers to make decisions using their experience and intuition. Online services like Pandora do the best they can with algorithms and rubrics, but human-curated playlists and schedules, informed by actionable consumer data on how people feel about songs, have created magic for many stations over the years.

Now you have the facts.