Saturday, 31 May 2014

Research - 'a good servant but a bad master'

Don't you miss those huge colourful books of Rajar audience figures?  With two-inch-thick spines, the huge volumes were tremendously useful as door stops; or could be piled up randomly like some sort of Tracey Emin installation.  I'm not sure how the colours were chosen, but anyone who considered using the same gaudy colour scheme for their home would likely attract few visitors and even fewer friends. I liked the orange ones best.

Before the internet age, these were the hefty tomes through which we thumbed to establish whether our new 'lost and founds' feature at 2.15 weekdays had been a wise move for audiences. Shelves groaned with their weight. Programmers developed pecs to die for.

Back then, the Rajar ritual was a little like Black Rod banging on the door of the Commons. Figures, if required promptly, had to be collected in person by a breathless ambassador down South, who would adjourn to a nearby red phone kiosk, press Button A, and telephone the results back to an anxious base like Katie Boyle phoning in the London Eurovision votes.


The car sticker business used to be the one to be in, back in the days when the UK research survey period was confined to a short fixed span, just twice a year.  Much fun was to be had, witnessing each station focusing the entirety of its on and off-air promotional activity during that frantic window.  As a jock, if you wanted a day off in the period, the answer was no. Wise programmers understood that listening habits needed to be nurtured before, not during, the target weeks. The system did, however, offer a host of new excuses for poor performances if, for example, Wimbledon fortnight fell in the relevant spell.

As more radio operations became listed companies, publication time was shifted variously to the end or the beginning of the trading day, lest a huge audience decline might provoke investor jitters.  The crack of dawn was surely the worst timing, given one was already in a pretty miserable mood.  I recall one Antipodean programme director suggesting he was about to throw himself out of the third floor window as one set of disappointing dawn digits was downloaded.  As his character was a tad unpredictable at times, I stood in front of said window, just in case.

The BBC similarly harboured some Reithian reservations about the value of hard data when it began its own Listener Research Section in 1936, launched under the Corporation's PR wing.  Robert Silvey was cannily hired from what was then Britain's largest ad agency, the London Press Exchange, to spearhead the initiative.  He'd already been involved in fact-finding across Europe, not least as to the growing revenue potential from overseas commercial radio.  He was, however, made aware of the Corporation's worries that "a deeper analysis of audience reactions would amount to an intolerable strain" amongst programme-makers.  Poor loves.  One memo cautioned: "any research that might be undertaken should be so controlled as to secure that it never developed from a servant into a master, to the detriment of the essential qualities of good broadcasting" (memo from Sir Stephen Tallents, BBC PR Controller, to the BBC's General Advisory Council in 1936).  

As commercial radio was established in the UK in the '70s as 'independent local radio', figures
were quickly demanded, not least by the medium's cautious new advertising clients. By late '73, in the absence of published data, the ad agency Benton & Bowles conducted a sample of 222 homes.  The findings suggested that at 8 a.m., five London homes had been tuned to the happy sound of Capital and none to the chimes of LBC.  The stations, said the agency, had "failed to achieve listening levels which even remotely justified the rates they were charging".  The residents of all such homes, nevertheless, likely expressed devotion to Birds Eye peas, the product which had been promoted in the first ad breaks on both services.
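As an aside, the sampling arithmetic shows why such tiny surveys deserved caution.  A back-of-envelope sketch of the point estimate and a rough 95% margin of error for that 222-home sample, using the standard normal approximation to the binomial (my illustration; there is no suggestion Benton & Bowles reported uncertainty this way):

```python
import math

def reach_estimate(tuned, sample_size):
    """Point estimate of reach, plus a rough 95% margin of error
    using the normal approximation to the binomial."""
    p = tuned / sample_size
    moe = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, moe

# Five of the 222 sampled homes tuned to Capital at 8 a.m.
p, moe = reach_estimate(5, 222)
print(f"Estimated reach: {p:.1%} +/- {moe:.1%}")
```

The estimate comes out at roughly 2% with a margin of error of about the same size: the figure could plausibly be anywhere from nearly nothing to double the estimate, which is rather the point the squabbling parties kept missing.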

In January '74, Capital trumpeted results which suggested that the new station actually enjoyed a reach of 1m (sample size 465), a little awry from the estimate by the BBC's research Wombles the previous November of 400,000.  This BBC/commercial duelling became tremendous sport.  By September '74, a survey published by Radio Audience Measurement Ltd (a division of NOP) declared that independent local radio attracted an audience of 5m for its growing network; whereas the BBC suggested those same commercial stations generated barely a million fans.  Clyde's Jimmy Gordon declared proudly that his station alone attracted 875,000.  Jimmy went on to invite the Corporation to join a 'joint research currency' to stop the squabbling once and for all.

The creation of such a currency was still some time off, but as the Queen donned her Silver Jubilee hat in '77, JICRAR, the Joint Industry Committee for Radio Audience Research, co-ordinated data which enabled all 19 commercial stations to plot graphs in coloured pens, thanks to the instigation of a comprehensive diary research programme across the UK.  13m adults were tuning in for 12.5 hours per week, on average, with more than half the audience 'over 35'.  Piccadilly was able to claim the 'largest station outside London' crown by June '78.  As the network grew, ILR's reach overall quickly blossomed to 20.3m by mid '82.

In '92, JICRAR rose to acronym heaven; its last words suggesting a reach for Capital London of 34% and Capital Gold at 23%; 9% for Melody in London (now Magic); 24% for Aire in Leeds; and 39% for all Midlands Radio's West Midlands stations.  Key 103 walked away with a 25% reach, with its AM service Piccadilly enjoying 26%. For a generation not unduly distracted by the Paul Daniels Magic Show and Noel's House Party on wobbly TV sets, average weekly time spent listening to any radio station was rarely in single figures.

Rajar (Radio Joint Audience Research) entered the fray from Q4 '92: a body jointly owned by commercial radio and the BBC, with the ad industry contributing around the Board table.  Replacing JICRAR and the BBC's Daily Surveys, it published its first results in January the following year, indicating an 89% reach for all radio (virtually unchanged to the present day).  The BBC reached 69%, with 53% for commercial radio.  Radio 1's reach was 16.5m (34%), with Radio 2 at 10.2m.  The overseas long-wave service, Atlantic 252 ('you're never more than a minute away from music'), enjoyed 4.3m, with Classic FM at 4.2m.
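For anyone new to the jargon: the reach percentages quoted here are simply listeners expressed as a share of the survey universe.  A minimal sketch (the ~48.5m adult universe is my own back-calculation from the 16.5m and 34% figures above, not a number published by Rajar):

```python
def reach_pct(listeners_m, universe_m):
    """Reach as a percentage of the survey universe (both in millions)."""
    return 100 * listeners_m / universe_m

# Radio 1's 16.5m listeners against an implied adult universe of roughly 48.5m:
print(f"{reach_pct(16.5, 48.5):.0f}%")  # prints "34%"
```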

Its debut is well-remembered by some very sore local commercial stations, given the early data suggested a significantly lower reach for local commercial services. An embattled AIRC (now RadioCentre) correctly pointed out: "There is a settling in period during which the methodology adopted, and the gathering and processing of data, continues to undergo the closest scrutiny".  So, whilst chalk and cheese must never be compared, the new figures suggested a 42% reach for chalk, compared with the earlier 52% for cheese.  Understandably, the stations most adversely affected made their views clear; and the now familiar issue of registering audience habits amongst sullen 15-24s was hotly debated.

Methodology and currency have always been under review.  As the list of stations grew so long it could no longer be printed in the diaries in anything above font size 6, new sticky labels were introduced.  Respondents were invited to don a Valerie Singleton smile and stick their own in.  Not without incident, however: in January '96, Rajar conceded that respondents were 'failing to stick in sufficient labels'.

In September 2004, Rajar declared an 'ambitious but achievable' plan to measure audience figures by electronic methods by '07.  Thus continued serious consideration of audiometers, housed in personal tailored devices or watches, in those pre-smartphone days.  Such pronouncements were set against a fiery backdrop of persistent challenge by the shy, retiring Kelvin MacKenzie, then Chief Executive of The Wireless Group, owner of TalkSport, who even resorted to a court challenge to the accuracy of Rajar's diary system.  His company's own research, conducted by GfK, unsurprisingly, disagreed with the published Rajar figures.  The words "preposterous, scandalous and shocking" were heard in this expensive 'spat', as MacKenzie's former titles might have called it.

By the end of 2011, we did not quite see electronic 'measurement' per se in the UK, but we did witness the arrival of a proportion of online listening diaries. Around the world, others have dipped their toes in the water. The US began to employ the PPM (Portable People Meter) in 2007 in some markets, and a whole new science has evolved of trying to maximise audiences in the light of intelligence suggesting more listening occasions in shorter bursts. Arbitron (now Nielsen) respondents are recruited by phone.  To debate the pros and cons of metering would require another blog as lengthy as this.

Like our fashions, audience research has changed hugely in the last forty years.  Some of us observe that we now appear to be facing vacillation for established stations in large markets which had not hitherto been witnessed.  The research work, however, is likely performed as diligently as ever, if not more so.  But neither Rajar nor its contractors Ipsos MORI and RSMB can be held responsible for the growing number of radio stations, our increasing lack of enthusiasm for admitting cheery strangers bearing clipboards through our doors, nor the increasingly low attention spans of diary-fillers.  Maybe we can take some responsibility as an industry for the number of listening diaries for which we choose to pay. Whatever, it remains one of the world's most comprehensive pieces of ongoing research into anything.

Sensible, considered debate is always underway as to whether the current system fits the bill. That's entirely correct: we and our clients deserve the very best answers.  Are we measuring the right things in the right way sufficiently often?  Are the three-monthly injections a sensible way of reflecting genuine incremental changes in audience tastes?  Is the data sufficiently prompt?  Is there merit in some form of electronic measurement?  Is there merit in including data compiled in more than one way?  Are we right to continue to consider 15+ as our key metric (or 12+ as in the US)?  
Should radio continue to be largely measured in isolation from the growing number of competitive advertising platforms and entertainment services?  Is there a need for better incentives for respondents? Should we include catch-up?  Can we integrate the hard data from online streaming?  Should we seek to demonstrate attention, recall and engagement if radio is really to continue to prove its worth?  And - the question which gets my brow most furrowed - is the doorstep approach really the right one now?

Do commercial radio and the BBC still need to share a currency, bearing in mind the needs are so very different; and public comparisons between commercial and BBC are now so rarely meaningful? Apart from top line reach figures, might the BBC wisely deploy its research funding more to attitudinal analysis of how closely it is meeting the requirements of its Charter?  And, in general terms, should we be requiring the data to be an effective tool for programmers; or just a trading currency?

The answers to any questions on radio research are rarely simple, not least in the concentrated UK market.  The only certainty is, as in the past, that a significant change in methodology would likely produce a similar change in the figures generated; and a rational amnesty by all interested parties would be required when adjusting to any new norms.

Meanwhile, where were you last Thursday at 4.00? What were you listening to? And was it on a DAB set or FM or online?


Grab my book 'How to Make Great Radio'. Published by Biteback. 



1 comment:

  1. On the computer, listening to Heart West Midlands, on FM. I did have to think about that, mind!

