Ratings: Estimates or Opinions?

The funny thing about radio ratings interpretation: good ratings mean the station is doing a great job connecting with listeners, while bad ratings mean Nielsen’s sample sucks. Clearly there are situations where bad ratings are caused by mistakes and missteps in programming. And, just as clearly, there are situations where good ratings are caused by problems with Nielsen’s sample. Bakersfield is just the latest example of the trouble with radio ratings methodology.


It’s great that Nielsen found the duplicated diaries in this case, and it’s appropriate that they issued revised ratings. The scary part is how much shift occurred when the two diaries were deleted. Of course, market samples have been too small for decades. As more stations have become competitive and more listening options siphon off TSL (time spent listening) from radio, the margins between the top stations in most markets have tightened considerably.
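For perspective, here’s a back-of-envelope sketch in Python. The diary count and share are assumptions for illustration, not actual market figures, and real diary samples are weighted, which only widens the error. Even under the friendliest statistical assumptions, a small sample can’t separate stations whose shares sit within a few points of each other:

```python
import math

# Back-of-envelope sampling error under a simple-random-sample assumption.
# (Hypothetical numbers: real diary samples are weighted, which widens the
# error, and actual in-tab counts vary by market.)
n = 300        # assumed in-tab diaries in a small market
p = 0.10       # a station holding a 10 share, expressed as a proportion

se = math.sqrt(p * (1 - p) / n)        # standard error of the share
moe = 1.96 * se * 100                  # 95% margin, in share points
print(f"A 10 share on {n} diaries is really 10 +/- {moe:.1f} share points")
# -> roughly +/- 3.4 points: stations one point apart are statistically a tie
```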

These changes should have samples growing sharply, not incrementally. Radio needs stouter samples for greater reliability. As advertisers gain more options for where to place their dollars, radio sellers need to present stronger evidence of how their station or cluster can deliver specific consumers, in tight demos, within narrowed geographies, at specific times of day. All of which is impossible to do with any reliability when removing two diaries causes turmoil at the total-week level.
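A toy example makes the point. The call letters, diary counts, and quarter-hour totals below are invented, and no weighting is applied, but the mechanism is the same one Bakersfield exposed: when two heavy diaries carry a meaningful slice of all recorded listening, deleting them moves the shares for everyone.

```python
# Hypothetical diary sample: call letters, quarter-hour (QH) totals, and
# sample size are invented for illustration; no weighting is applied.

def aqh_share(diaries, station):
    """Percent of all quarter-hours in the sample credited to `station`."""
    station_qh = sum(d.get(station, 0) for d in diaries)
    total_qh = sum(sum(d.values()) for d in diaries)
    return 100.0 * station_qh / total_qh

# 298 typical diaries plus two heavy ones crediting everything to "KAAA".
diaries = [{"KAAA": 10, "KBBB": 12} for _ in range(298)]
diaries += [{"KAAA": 200}, {"KAAA": 180}]

print(f"KAAA share, full sample:  {aqh_share(diaries, 'KAAA'):.1f}")       # 48.4
print(f"KAAA share, two deleted:  {aqh_share(diaries[:298], 'KAAA'):.1f}") # 45.5
```

A three-point swing in share from two pieces of paper is exactly the kind of total-week turmoil described above.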

The same potential exists in PPM markets. A sudden decline in PPM ratings might mean nothing more than a panelist went out of town or was knocked off his or her normal routine. If the decline persists for more than one week, it’s possible the panelist left the sample. Some stations alter their programming in reaction to such declines, destabilizing their relationships with the other meter-wearers who listen to the station and, in the worst cases, driving the ratings down further.
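The arithmetic looks similar in metered measurement. In the sketch below, the panel size and listening minutes are invented, and the per-minute crediting and multi-stage weighting of real PPM are ignored; it simply shows how a station whose audience rests on a handful of meters can lose a quarter of its average listening when one heavy panelist breaks routine.

```python
# Hypothetical PPM panel: 400 meters, weekly minutes credited to one
# station. Real PPM uses per-minute crediting and multi-stage weighting;
# this only illustrates how few meters can carry a station's number.
panel = [0] * 392 + [45, 60, 75, 90, 120, 150, 180, 240]

def avg_weekly_minutes(meters):
    return sum(meters) / len(meters)

print(f"All 400 meters in tab: {avg_weekly_minutes(panel):.2f} min/person")
print(f"Heaviest meter out:    {avg_weekly_minutes(panel[:-1]):.2f} min/person")
# One disrupted routine cuts the station's average listening by ~25%.
```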

Again, it’s our belief that removing a few diaries or swapping out a few panelists shouldn’t cause mammoth shifts in the ratings. As research providers, we often joke that if we delivered studies built on samples like those behind Nielsen’s book, we’d be laughed out of business by the same folks who wait each month to see what the Nielsen gods will bring.

But, as it says in the book, “PPM ratings are based on audience estimates and are the opinion of Nielsen and should not be relied on for precise accuracy or precise representativeness of a demographic or radio market.”

It’s difficult to wrap one’s head around the millions of dollars of advertising revenue, the station profits, and the careers that hang on their “opinion.” An opinion that “should not be relied on for precise accuracy or precise representativeness of a demographic or radio market.”

Incrementally larger samples are a nice start, but there’s a long, long way to go to bring these opinions up to the precision that media sellers and programming decision-makers need in 2016.