The KPFK Audience Size didn't change

by David Adelson Friday, Mar. 29, 2002 at 2:42 AM
dadelson@ucla.edu

A frequently heard claim is that KPFK’s audience size doubled under Schubb. It is not true. In fact, there was no statistically significant increase in total audience size under Schubb.

KPFK’s total audience didn’t increase significantly during the Schubb years

A frequently heard claim is that KPFK’s audience size doubled under Schubb. It is not true.

Since the change in administration at Pacifica and KPFK, it has been possible for the first time in years to get access to the actual listenership data. There is no statistically significant difference in total audience size between the four years before Schubb’s arrival at KPFK, the first three years after his arrival, and the last three years of his tenure. The values for the Fall Quarter are 162,550 +/- 5,962 before Schubb, 155,200 +/- 16,921 for the first three years after Schubb, and 176,433 +/- 2,477 for the last three years (all values are mean +/- S.E.M.). As is obvious, these numbers are essentially identical, and there is no statistically significant difference between them (one-way ANOVA, p>0.37). The values for the Spring Quarter are 140,725 +/- 13,881 for the four years before Schubb, 163,000 +/- 7,590 for the first three years after Schubb, and 171,700 +/- 22,247 for the last three years of his tenure. Although the Spring numbers show an increase of ~31,000 (or 22%) between the years before Schubb and the last three years of his tenure, that difference is also not statistically significant (one-way ANOVA, p>0.37). This means that the changes seen are not large enough to exclude the possibility that they are due to random sampling variation.
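
For anyone who wants to check this, here is a minimal sketch of how the comparison can be reproduced. The CUME values are the Spring figures from the raw data reproduced at the end of this post, grouped into the same three blocks; the use of Python and scipy.stats.f_oneway here is my own choice of tool for illustration, not part of the original analysis.

    # One-way ANOVA on Spring CUME values, grouped as in the text.
    # Values are transcribed from the raw data table at the end of this post.
    from scipy.stats import f_oneway

    pre_schubb   = [160_000, 117_300, 169_100, 116_500]  # Spring 1991-1994 (Arbitron)
    early_schubb = [147_900, 169_200, 171_900]            # Spring 1996-1998 (Arbitron)
    late_schubb  = [157_100, 142_600, 215_400]            # Spring 1999-2001 (Audigraphics)

    f_stat, p_value = f_oneway(pre_schubb, early_schubb, late_schubb)
    print(f"F = {f_stat:.2f}, p = {p_value:.2f}")
    # The post reports p > 0.37 for this comparison; a p-value well above 0.05
    # means the differences between the three periods are not statistically significant.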

So where does the claim that audience size doubled under Schubb come from? It comes from selecting the highest value that occurred during Schubb’s tenure in a single quarter and comparing it to one of the lowest years prior to his arrival. It makes for good PR, but it’s statistically unjustified…a falsehood. One can always demonstrate a jump by comparing the highest value of a distribution to the lowest value of that distribution, or by comparing the mean to the lowest value, for that matter. The size of the difference will depend only on the variance of the population of values. But again, this is not a meaningful way to detect a change.

So, bottom line: the claim that audience size doubled under Schubb is not supported by the data.

It should be noted that this fact was explicitly pointed out to Ella Taylor in an email I sent her while she was composing her LA Weekly piece, but she chose to ignore the data and print the unsubstantiated assertion that audience size doubled under Schubb.

Details of the analysis

The concept of audience size is actually a bit ambiguous, because there are different ways of defining the total size. The Arbitron ratings system uses several metrics to define audience size. The major measures reported include:

CUME – the total number of distinct listeners in a week
- analogy: how many people visited the “store” in a week

AQH (average quarter-hour) – the number of persons listening in an average 15-minute segment
- analogy: how many people were in the “store” on average

TSL (time spent listening) – the average time a listener spends listening
- analogy: how long people stay in the “store”
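
To make the store analogy concrete, the three measures above are linked by a standard relationship among Arbitron estimates (my addition, not something stated in this post): the average audience multiplied by the number of quarter hours in the period gives total listening, and dividing that by the number of distinct listeners gives roughly how long the average listener stays in the “store.” A rough sketch, with made-up numbers:

    # Rough illustration of how the three Arbitron measures relate.
    # The figures here are invented for illustration; they are not KPFK data.
    cume = 160_000               # distinct listeners in a week
    aqh = 3_000                  # persons listening in an average quarter hour
    quarter_hours = 7 * 24 * 4   # quarter hours in a full broadcast week

    # Total listening (in quarter hours) spread over the distinct listeners,
    # converted to hours per listener per week.
    tsl_hours = aqh * quarter_hours / cume / 4
    print(f"TSL ~ {tsl_hours:.1f} hours per listener per week")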

Listenership data are reported by quarter (Winter, Spring, Summer, Fall) since listening patterns vary seasonally.

Generally speaking, audience size refers to CUME. As an interesting side note, the Corporation for Public Broadcasting’s minimum ratings criteria for determining matching grant eligibility (enacted in 1996) are based not on CUME, the number of people in a week who find something worth listening to on the air, but on AQH, how many people are listening on average. This tends to bias stations toward serving core, or target, markets.

However, as indicated above, when one talks about the number of listeners or the size of the audience without further specification, one is usually talking about the CUME.

Immediately below are the summary data; the raw data are reproduced at the end of this post. There is a lot of year-to-year variability in the data, so single values are not meaningful for the purpose of detecting statistically significant changes. To make a statistically meaningful comparison between periods, one must group the numbers several years at a time, by quarter. The analysis below compares the four years before Schubb (1991-1994) to the period after his arrival at KPFK in three-year blocks. 1995 was excluded because Schubb arrived in the middle of the year and it was the year of the most massive program changes, so it is both unclear which epoch it should be assigned to and it was an anomalous year. It should be noted that claims about increased listenership at KPFK, for example those made by John Dinges in his May 2000 article in The Nation (http://past.thenation.com/cgi-bin/framizer.cgi?url=http://past.thenation.com/issue/000501/0501dinges.shtml), typically reference 1995 as the base year, when in fact it was an unusual year. Although 1995 was excluded from the analysis, its inclusion in either the pre-Schubb or post-Schubb period does not change the result: the differences are still very far from statistical significance.

Fall (CUME, mean +/- S.E.M.)
1991-1994   162,550 +/- 5,962
1996-1998   155,200 +/- 16,921
1999-2001   176,433 +/- 2,477


Spring (CUME, mean +/- S.E.M.)
1991-1994   140,725 +/- 13,881
1996-1998   163,000 +/- 7,590
1999-2001   171,700 +/- 22,247
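
As a check on these summary figures, here is a minimal sketch (my own illustration, again in Python) showing how the mean, standard deviation, and standard error of the mean for one block can be reproduced from the raw CUME values listed at the end of this post, using Fall 1999-2001 as the example.

    # Reproduce the summary statistics for one block
    # (Fall 1999-2001, Audigraphics CUME values from the raw data table below).
    from statistics import mean, stdev
    from math import sqrt

    fall_1999_2001 = [179_300, 178_500, 171_500]

    m = mean(fall_1999_2001)
    sd = stdev(fall_1999_2001)               # sample standard deviation
    sem = sd / sqrt(len(fall_1999_2001))     # standard error of the mean

    print(f"mean = {m:,.0f}, std dev = {sd:,.0f}, s.e.m. = {sem:,.0f}")
    # Should come out near 176,433 +/- 2,477, matching the Fall table above.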

The complete raw data set used to generate these values is printed below. The source for the data between 1991 and 1998 was Arbitron topline data for total survey area CUMEs. I do not have access to the data from this source for 2000 and 2001. Data for the years 1999-2001 were drawn from Audigraphics summaries of Arbitron data. These values can differ somewhat from the published Arbitron data; however, the Audigraphics data are not available prior to 1995. Both Arbitron and Audigraphics data are available for the period 1995-1999, and I reproduce both data sets below to be thorough. As can be seen, where both are available the values do not diverge markedly from each other, and thus comparing data from the two data sets appears fully justifiable.

Fall CUME
Year        Arbitron    Audigraphics
2001                    171,500
2000                    178,500
1999        177,100     179,300
1998        175,500     179,400
1997        168,500     170,000
1996        121,600     125,500
1995        139,800     144,700
1994        139,200
1993        158,700
1992        151,400
1991        160,700
1990        179,400

1999-2001   mean 176,433   std dev 4,291    s.e.m. 2,477
1996-1998   mean 155,200   std dev 29,308   s.e.m. 16,921
1991-1994   mean 162,550   std dev 11,923   s.e.m. 5,962

Spring CUME
Year        Arbitron    Audigraphics
2001                    215,400
2000                    142,600
1999        157,100     157,100
1998        171,900     181,900
1997        169,200     170,400
1996        147,900     156,600
1995        107,500     109,400
1994        116,500
1993        169,100
1992        117,300
1991        160,000

1999-2001   mean 171,700   std dev 38,533   s.e.m. 22,247
1996-1998   mean 163,000   std dev 13,146   s.e.m. 7,590
1991-1994   mean 140,725   std dev 27,762   s.e.m. 13,881