Music Test Screening: The Benefits (and Liabilities) of Montages

Anyone who’s pored over music test results has wondered at some point, “Do we have the right people in the sample?” Lower-than-expected scores, results that diverge from expectations, and concerns about a competitor’s airplay all feed the worries of any program director making decisions based on music research.

In pursuit of increased confidence that the “right people” are in the sample, some programmers use a montage of music hooks to ensure that respondents could be P1’s to the type of station the programming team envisions. Culling respondents this way leads to higher test scores and makes the results easier to implement. Montage screening makes perfect sense for a new station with no significant audience base, or one undergoing significant repositioning, where the opinions of the existing audience may not lead the station in the desired direction.

If montage screening is employed with ongoing music testing, like NuVoodoo OMR (Online Music Research) or its telephone-based counterpart, “callout,” montages need to be updated regularly to ensure they remain in step with the desired programming archetype. At least two formatically identical montages (with no titles in common) should be randomized or rotated among respondents, to make sure you’re not inadvertently screening in people attracted by just one or two of the most-appealing songs in a single montage.
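To make that rotation concrete, here’s a minimal Python sketch of assigning respondents to one of two montages. The placeholder song titles, the function names, and the respondent-number parameter are all hypothetical illustrations, not part of any NuVoodoo product.

```python
import random

# Hypothetical montages: formatically identical, with no titles in common.
MONTAGE_A = ["Song A1", "Song A2", "Song A3", "Song A4", "Song A5"]
MONTAGE_B = ["Song B1", "Song B2", "Song B3", "Song B4", "Song B5"]

# If the montages share a title, the safeguard against screening on
# one or two highly appealing songs is weakened.
assert not set(MONTAGE_A) & set(MONTAGE_B), "montages must share no titles"

def rotate_montage(respondent_number: int) -> list[str]:
    """Rotate: alternate montages by respondent order for an even split."""
    return MONTAGE_A if respondent_number % 2 == 0 else MONTAGE_B

def randomize_montage() -> list[str]:
    """Randomize: assign each respondent one of the montages at random."""
    return random.choice([MONTAGE_A, MONTAGE_B])
```

Either approach works; rotation guarantees an even split across the sample, while randomization avoids any ordering effect in how respondents arrive.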

It’s wise to keep monster-testing titles that could play in several different formats out of the montages. We remember a montage-screened auditorium test many years ago where the montage ended with “Hotel California.” That title – and every other Eagles and Don Henley title – showed up in the top 30 titles of the results from that auditorium test.

Since music testing is, ultimately, a tool used to wring the most-possible TSL from listeners, it’s wise to stay within the station’s cume when screening. The exceptions, of course, are new stations and those undergoing significant repositioning. In either situation, there should be marketing plans at the ready to attract the desired listeners once the station’s programming is set.

Using montages to screen out existing station core listeners is perilous. Sure, many P1’s will make it through well-designed montages that represent the station’s format. But screening out any existing station P1 – perhaps because one title in a montage didn’t excite her – risks eventually losing TSL from the portion of the station’s P1 base that those screened-out respondents represent. Similarly, limiting competitor P1’s in the sample to just those who make it past the montages risks never learning which titles could lure these (presumably cross-cuming) competing-station P1’s to spend more time with yours.

Montage screening sometimes makes sense for the outer cumers in a sample (those who cume the station, but aren’t P1 to it or a direct competitor). However, requiring them to declare that a station like the one in the montage could be their favorite or the one they listen to most may be placing the bar too high. While we may never convert a P1 from, say, a competing Alternative-format station to be P1 to our Hot AC, we may extract more TSL from her if we include her opinions in our music test. Asking outer cume respondents to affirm that they would listen regularly, or that the hypothetical station could be among the stations they listen to most, seems a more reasonable screen.
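One way to picture these different bars is as simple qualification logic. The sketch below is a hypothetical Python illustration; the listener categories, the reaction wording, and the qualifies() function are our own labels for the ideas above, not drawn from any actual screener.

```python
from enum import Enum

class ListenerType(Enum):
    STATION_P1 = "station P1"        # P1 to the station being tested
    COMPETITOR_P1 = "competitor P1"  # P1 to a direct competitor
    OUTER_CUME = "outer cume"        # cumes the station, but P1 elsewhere

# Hypothetical reaction scale for the montage question.
ACCEPTABLE_OUTER_CUME_REACTIONS = {
    "would listen regularly",
    "could be among the stations I listen to most",
}

def qualifies(listener: ListenerType, montage_reaction: str) -> bool:
    """Sketch of the screening bar suggested above: station and competitor
    P1's pass without a montage hurdle, while outer cumers clear a softer
    standard than "favorite" or "listen to most"."""
    if listener in (ListenerType.STATION_P1, ListenerType.COMPETITOR_P1):
        return True
    return montage_reaction in ACCEPTABLE_OUTER_CUME_REACTIONS
```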

Music testing isn’t about getting high test scores. It’s about using the available science to artfully construct a playlist and music schedules – one programming stream that keeps the widest-possible portion of your cumers satisfied.