Survey fraud comes in all the colours of the rainbow: from mild to severe

We at Faster Horses recently conducted self-funded ‘research on research’ to understand, among other things, the extent and nature of survey fraud. Our first article on the topic focused on the accuracy, or lack thereof, of Google’s inferred demographics.

This article provides an overview of the extent of the problem in survey research and how we mitigate it to ensure the integrity of our clients’ data.

How bad is the problem?

For some reason, many market researchers don’t like talking about how many respondents provide what might euphemistically be called ‘sub-optimal’ responses. That is, how many ‘make shit up’.

The answer is: a lot. At the risk of stealing my own thunder, only one third of respondents completed a five-minute survey without raising at least one of the 14 red flags that we routinely use to monitor and manage data quality and fraud.

One-third. Yes, that’s right, just one in three respondents in our survey could be considered beyond question – and it was a short survey by industry standards.

That doesn’t mean that two-thirds of survey data is garbage. People do make honest mistakes even when trying their hardest to stay focused. But it does mean that close scrutiny is required to weed out the speeders, lazy respondents, cheats, and fraudsters to ensure high-quality data.

Crimes and misdemeanours

Some respondents commit a survey crime so heinous that they are instantly tossed out. Spending less than 10 seconds reading a 200-word product concept, which takes a normal human a good 25-30 seconds to read, is one such example.
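As a rough illustration of how a check like that can be automated, here is a minimal Python sketch. The timing field, words-per-second rate, and thresholds are assumptions for illustration, not our production rules.

```python
# Minimal sketch of a reading-speed check, assuming the survey platform
# records time spent on each page. Parameters are illustrative only.

def is_speeder(seconds_on_page: float, word_count: int,
               min_secs_per_word: float = 0.05) -> bool:
    """Flag respondents who could not plausibly have read the text.

    At 0.05 s/word (1,200 wpm) a 200-word concept needs at least
    10 seconds; anything faster is physically implausible reading.
    """
    return seconds_on_page < word_count * min_secs_per_word

print(is_speeder(8.0, 200))   # True  – 8 s on 200 words: instant fail
print(is_speeder(27.0, 200))  # False – a normal reading pace
```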

Others may commit a series of minor misdemeanours, none of which is by itself bad enough to cause instant disqualification, but which together point to, at best, sloppiness – and more likely a plain dud respondent. Inconsistent attitudinal responses fall into this category.
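One way to operationalise a consistency check of that kind is to pair an attitudinal statement with its reverse and flag respondents who endorse both. A hedged sketch, assuming a 1-5 agreement scale (the paired items and the tolerance are invented):

```python
# Sketch of a reversed-item consistency check on a 1-5 agreement scale.
# The example items and the tolerance are hypothetical.

def inconsistent_pair(score_positive: int, score_reversed: int,
                      tolerance: int = 1) -> bool:
    """Flag respondents who endorse a statement and its reverse.

    On a 1-5 scale, answers to 'I trust this brand' and 'I do not
    trust this brand' should roughly sum to 6 for a consistent
    respondent; a large deviation suggests inattention.
    """
    return abs((score_positive + score_reversed) - 6) > tolerance

print(inconsistent_pair(5, 5))  # True  – agreed with both statements
print(inconsistent_pair(4, 2))  # False – a consistent answer pair
```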

And then there are those who make little effort to cover their tracks, failing 7 or more of the 14 checks. Laziness, incompetence, whatever – they are OUT and refused future participation in any of our surveys. In our current study, this was 5% of respondents.
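For a sense of how such a flag count might hang together in code, here is an illustrative sketch. Only the numbers – 14 checks, a 7-flag exclusion cut-off – come from this article; the individual checks shown are hypothetical stand-ins.

```python
# Illustrative flag-scoring sketch. Each check takes a respondent
# record and returns True if it raises a red flag. Three stand-ins
# shown; the remaining checks would follow the same pattern.

from typing import Callable

CHECKS: list[Callable[[dict], bool]] = [
    lambda r: r.get("concept_read_secs", 99) < 10,         # speeding
    lambda r: r.get("straightlined_grids", 0) >= 2,        # straight-lining
    lambda r: len(r.get("open_end_text", "x" * 20)) < 5,   # empty open-ends
]

HARD_EXCLUDE = 7  # fail this many of the 14 checks and you are out

def classify(respondent: dict) -> str:
    flags = sum(check(respondent) for check in CHECKS)
    if flags >= HARD_EXCLUDE:
        return "exclude and block"   # refused future participation
    if flags == 0:
        return "clean"               # the one-in-three group
    return "review"                  # misdemeanours: judge in context
```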

Shades of grey and rainbows

Of course, there is no hard-and-fast rule for identifying poor-quality respondents. It is by definition a grey area, but as a guide we routinely discard between 20% and 50% of respondents from our clients’ surveys to ensure they have the best quality data.

So, from relatively mild issues such as duplicate respondents from multiple panels (more on which in a coming article – how many panels is the typical respondent on?), through the various forms of speeding and inattention, to industrial-strength bot farms and link manipulation, survey fraud truly comes in all the colours of the rainbow.
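Taking the mildest of those as a concrete example, duplicate detection is often implemented by fingerprinting each complete and rejecting repeats. A deliberately simplified sketch (real systems typically combine far more signals than the three fields hashed here):

```python
# Simplified duplicate-respondent sketch. Production deduplication uses
# richer digital fingerprints; the fields hashed here are illustrative.

import hashlib

def fingerprint(device_id: str, ip: str, user_agent: str) -> str:
    """Hash a few stable signals into a single comparable key."""
    raw = f"{device_id}|{ip}|{user_agent}"
    return hashlib.sha256(raw.encode()).hexdigest()

seen: set[str] = set()

def is_duplicate(respondent: dict) -> bool:
    """True if this person has already completed, e.g. via another panel."""
    key = fingerprint(respondent["device_id"], respondent["ip"],
                      respondent["user_agent"])
    if key in seen:
        return True
    seen.add(key)
    return False
```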

And if you think you’re on top of the problem because you ask detailed screeners, lie-detector questions, and red-herring questions – NO, these pick up just a tiny percentage of the cheats.

The solution is two-fold.

First, have a rigorous and consistently applied system of quality checks for every quant survey you put into field. Don’t assume the panel has it covered, or that the platform has it covered. Check the measures yourself.

Second, remain vigilant, because new methods of fraud are always emerging. Last year’s solution simply won’t catch all poor-quality respondents.

If you have ever had concerns about the quality of your data or want to know how we detect survey fraud, contact us via our website (https://fasterhorses.consulting/contact/).

Peter Fairbrother, Managing Director