Arbitron Rolls Snake Eyes

Arbitron came under increased scrutiny from Harker Research in this post on Radio InSights entitled “Arbitron PPM Trends vs a Coin Toss. Who Wins?” The research done by Harker seems to indicate the Arbitron numbers — which public radio suits are using to dictate station policy — are nothing more than a crapshoot:

Those who rely on Arbitron’s ratings as their report card want to believe ratings mean something, that good programming is rewarded with improving numbers, while bad programming is punished with declining numbers. Unfortunately that doesn’t seem to happen very often. You’ll have a good trend, then a bad one, maybe another good one, and after that who knows….

Our analysis suggests that monthly PPM trends are directionally meaningless. Trends change direction frequently and randomly. Furthermore, PPM trends are no more consistent than diary-based trends. Both are inconsistent indicators of the direction of a station. As it turns out, the best way to predict your next trend is to bet against the last one. If last month the station went up, bet that this month the station will go down, and you’ll be right four out of five times.

Even more troubling is the fact that even a persistent trend can be a poor predictor of the health of a station. Four, five, and even six month trends can reverse course and leave a station right where it started.
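A back-of-the-envelope toy model (mine, not Harker's methodology) shows why flat ratings plus sampling noise alone produce this flip-flopping. If a station's true share never moves and each monthly estimate is just that share plus independent noise, month-over-month trends reverse two times out of three, already well above a coin toss; the ~80% reversal rate Harker reports is higher still. All numbers here (a 5.0 share, the noise level) are illustrative assumptions:

```python
import random

random.seed(42)
MONTHS = 100_000
# A station whose true share never changes: every monthly estimate is
# the true share of 5.0 plus independent sampling noise.
ratings = [random.gauss(5.0, 0.5) for _ in range(MONTHS)]

# Month-over-month changes, and how often their direction flips.
changes = [b - a for a, b in zip(ratings, ratings[1:])]
reversals = sum(
    1 for d1, d2 in zip(changes, changes[1:]) if (d1 > 0) != (d2 > 0)
)
print(reversals / (len(changes) - 1))  # about 0.667 — two flips in three
```

For independent draws this 2/3 is exact: of the six equally likely orderings of three consecutive estimates, four put the middle month at the top or bottom, which is precisely a trend reversal.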

Much space is then devoted to the methodology used in the study, with this glaring conclusion: “Were you to flip a coin to predict the direction of ratings next month, you would have a 50:50 chance of being correct. Ironically, these odds are better than the probability that your ratings will continue to move in the same direction they did last month.”

One interesting point comparing the diary method to the Purple People Meter: “Quarterly diary trends reverse 77.8% of the time compared to PPM’s monthly 79.8% reversal. In other words, trend flip-flops are about the same in PPM and diary.” A second interesting conclusion mirrors what critics have long held to be the biggest problem — small sample size, which lends itself to what’s called “statistical noise”:

Both monthly PPM share estimates and quarterly diary share estimates have so much statistical noise in them that it drowns out genuine changes in month-to-month ratings. The only way to overcome the statistical noise that makes monthly trends unreliable is to increase the number of panelists. The larger the sample, the lower the noise, the greater the predictive value of trends.
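The sample-size point can be made concrete with a standard result: if you treat each panelist as an independent yes/no listening draw, the sampling noise on a share estimate shrinks with the square root of the panel size, so quadrupling the panel only halves the noise. This is a textbook binomial sketch, not Arbitron's actual weighting; the panel sizes and the 5-share station are made-up examples:

```python
import math

def share_standard_error(share_pct, panelists):
    """One standard error of sampling noise on an audience-share
    estimate, modeling each panelist as an independent yes/no draw."""
    p = share_pct / 100.0
    return 100.0 * math.sqrt(p * (1 - p) / panelists)

# A 5-share station measured with panels of different sizes:
for n in (400, 1600, 6400):
    print(n, round(share_standard_error(5.0, n), 2))
# 400  -> 1.09 share points of noise
# 1600 -> 0.54
# 6400 -> 0.27
```

With a 400-person panel, the noise alone is about a full share point — as big as the real month-to-month movements a programmer would want to detect, which is exactly why single-month trends drown.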

For both PPM and the diary we have to go out three or more reports before either service becomes even slightly more predictive than flipping a coin. Even then, long-lasting trends can be a trap. There is a statistical phenomenon called reversion to the mean, where a spurious trend simply nullifies a previous spurious trend.
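Extending the flat-share toy model above (again my own illustration, not Harker's data): even when pure noise happens to string together three straight monthly gains — a run that looks like a genuine, persistent trend — the next month reverses about four times out of five, because the run's last month is almost certainly a noise peak that the following estimate falls back from:

```python
import random

random.seed(7)
N = 300_000
# Flat-mean station again: any "trend" in these numbers is spurious.
ratings = [random.gauss(5.0, 0.5) for _ in range(N)]

runs = drops = 0
for i in range(3, N - 1):
    # Three straight monthly gains: looks like a real, persistent trend.
    if ratings[i - 3] < ratings[i - 2] < ratings[i - 1] < ratings[i]:
        runs += 1
        if ratings[i + 1] < ratings[i]:  # ...and then it reverses
            drops += 1
print(drops / runs)  # about 0.8
```

The 4-in-5 figure is exact for independent draws: after four rising values, the last is the maximum of four, and a fresh draw lands below the maximum of four with probability 4/5. That is reversion to the mean in miniature — the "trend" was a sampling fluke, and the next report undoes it.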

