Polling fiasco: What will the AMSRO inquiry recommend?

Australia’s ridiculous love affair with opinion polling generally, and Newspoll in particular, suffered a serious moment of doubt on the evening of 18 May 2019.

Thank goodness! Hopefully this will lead media outlets, political commentators, politicians themselves and even the average punter on the street to reassess how much importance is given to the polls. (It still beggars belief that not one but two PMs have been turfed out at least partially due to “losing 30 Newspolls in a row”.)

For its part, the market research industry body AMSRO has called an inquiry into the performance of opinion polls at the 2019 Federal Election.

An interim report will be provided “as soon as practicable” – which probably means in about 6-9 months, judging by how long it took AMSRO’s British equivalent to conduct the autopsy on the corpse of the British election polling in 2015.

In the meantime, here is my educated guess as to the likely recommendations that will come out of the inquiry – and one critical element that is unlikely to see the light of day.

  1. That pollsters improve their sample frames (where they find the people to interview, in layman’s terms) and sampling processes (how they choose which individuals from the frame to include), so as to deliver more representative samples.
  2. That pollsters provide much more transparency regarding their methodologies, including the number of interviews completed by each method (phone interview with human, robo-calls, online, face to face etc).
  3. That more emphasis is placed on “margins of error” in the polls, both by pollsters and by media outlets reporting the results (see the sketch after this list).
  4. That media outlets re-evaluate the role that polling plays in their political reporting, with a view to rather more circumspect interpretations and greater use of statistical caveats.

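On point 3, here is a minimal sketch of the standard margin-of-error calculation, assuming the textbook case of a simple random sample (real polls, with quotas and weighting, carry larger effective margins than this formula suggests):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical published poll: ~1,000 respondents, parties near 50/50.
moe = margin_of_error(p=0.5, n=1000)
print(f"Margin of error: +/- {moe:.1%}")  # roughly +/- 3.1%

# So a reported 51-49 two-party-preferred split is statistically
# indistinguishable from 49-51 – yet it gets the headlines.
```
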
All four recommendations are extremely sensible – but also statements of the bleeding obvious. And “surely they already do that”, I hear you ask?

Yes and no. The fact is that it is near impossible in Australia to construct a perfect (or even very good) sample frame, and therefore to deliver a representative sample. A mail survey (and I mean snail mail) based on the electoral roll would come very close, but would never be used due to the time it would take to conduct properly.

All other methodologies potentially introduce at least some form of bias (calls to landlines exclude the increasing number of households without a fixed line; calls to mobiles exclude those without mobiles – yes, they exist; online sampling excludes people who aren’t on online panels; face-to-face excludes residents in secure buildings and those who spend long periods away from home).

The problem is that word “potentially”. Sample bias is very difficult to correct for, because it is impossible to know the extent to which any given sample is biased.
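
To make that concrete, here is a toy simulation (my own hypothetical numbers, not any pollster’s data): a sample that matches the population perfectly on an observable trait like age can still be badly skewed if, within each age group, one party’s supporters are less willing to answer – and weighting by age can neither detect nor remove that.

```python
import random

random.seed(42)

# Hypothetical population: two age groups with different true support
# for "Party A". None of these numbers are real polling figures.
TRUE_SUPPORT = {"young": 0.55, "old": 0.45}
POP_SHARE = {"young": 0.4, "old": 0.6}

def quota_sample(n: int, response_penalty: float = 0.15) -> float:
    """Fill exact age quotas (so the sample mirrors the population on
    age), but within each group Party A supporters are 15% less likely
    to agree to participate – a selection effect the pollster never sees."""
    supporters = 0
    for group, share in POP_SHARE.items():
        filled, quota = 0, round(n * share)
        while filled < quota:
            supports_a = random.random() < TRUE_SUPPORT[group]
            responds = random.random() < ((1 - response_penalty) if supports_a else 1.0)
            if responds:
                supporters += supports_a
                filled += 1
    return supporters / n

true_overall = sum(TRUE_SUPPORT[g] * POP_SHARE[g] for g in POP_SHARE)
print(f"True support:    {true_overall:.1%}")           # 49.0%
print(f"Survey estimate: {quota_sample(100_000):.1%}")  # about 45%
# Age weighting can't fix this: the sample's age mix is already perfect.
```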

The other major problem with polling is what I call the Fight Club problem. The first rule of Fight Club is that you don’t talk about Fight Club. But here it is: response rates – that is, the number of people who actually agree to participate in a survey, divided by the number of people you invite to participate. À la Fight Club, most pollsters and market researchers have a pathological hatred of talking about response rates. Why? Because a typical response rate for a market research survey conducted online is about 5%. That is, for every 100 people invited to participate, only 5 agree to do so. (A well-designed study with a short questionnaire may achieve a response rate of 20% – still very low.)

You don’t need to have a PhD in stats to realise that this might be a problem. What if some of the 95 people who didn’t participate have a slightly different voting intention to those 5 people who did participate? 

Even if the response rate is 50%, it is impossible to know if the non-responders differ materially from the responders. The lower the response rate, the more likely it is that sample bias is present. But there is no way of knowing – and therefore it is very difficult to correct for.
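
Here is a back-of-the-envelope illustration of that (again, my own toy numbers): assume a dead-even race in which one party’s supporters are just two percentage points less willing to take the survey, then watch what happens to the estimate as the response rate falls towards that typical 5%:

```python
def observed_support(true_support: float, rate_a: float, gap: float) -> float:
    """Share of survey respondents backing Party A when its supporters
    agree to participate at rate_a and everyone else at rate_a + gap."""
    responders_a = true_support * rate_a
    responders_b = (1 - true_support) * (rate_a + gap)
    return responders_a / (responders_a + responders_b)

TRUE = 0.50  # a dead-even race
GAP = 0.02   # other voters are 2 points more willing to take the survey

for rate_a in (0.49, 0.19, 0.09, 0.04):
    overall = TRUE * rate_a + (1 - TRUE) * (rate_a + GAP)
    est = observed_support(TRUE, rate_a, GAP)
    print(f"response rate ~{overall:.0%}: estimate {est:.1%} (truth {TRUE:.0%})")

# response rate ~50%: estimate 49.0% (truth 50%)
# response rate ~20%: estimate 47.5% (truth 50%)
# response rate ~10%: estimate 45.0% (truth 50%)
# response rate ~5%: estimate 40.0% (truth 50%)
```

The willingness gap doesn’t have to be big; at low response rates, a tiny difference between responders and non-responders is massively amplified.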

When you combine a biased sample with a low response rate, the surprising thing is not that the pollsters all made the wrong call on the election – it’s that they got as close as they did.

Indeed, the issue of “unrepresentative samples” was found to be the primary cause of the polling miss in the British general election in 2015. In short, the sampling methods “systematically over-represented Labour supporters and under-represented Conservative supporters”. Sound familiar?

Will there be a lengthy discussion of sample bias and low response rates in the AMSRO inquiry report? I sincerely hope so.