American pollsters and their big ‘oops’ moment

I don’t know if you’ve heard – perhaps through the mass outpouring of hysteria on Facebook – but Donald J Trump is going to be the next President of the United States of America.

The BBC has so far called 46 of 50 states, giving Trump 278 Electoral College votes to Clinton’s 218, against the 270 needed to win. The popular vote, as is traditional, looks more even: Trump sits at 47.5% and Clinton at 47.7%, with 98.2% of precincts counted.

But this isn’t what the pollsters promised us, is it?

The final poll roll of shame:

Economist/YouGov: Clinton – 49; Trump – 45

ABC News/Wash Post: Clinton – 49; Trump – 46

IBD/TIPP: Clinton – 43; Trump – 42

Fox News: Clinton – 48; Trump – 44

CBS News: Clinton – 47; Trump – 43

Bloomberg: Clinton – 46; Trump – 43

Monmouth: Clinton – 50; Trump – 44

NBC News/SM: Clinton – 51; Trump – 44

Measured against the 47.5% Trump is currently polling nationally, each of these major polls taken in the final days of the campaign understated his support – by between 1.5 and 5.5 points.
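For anyone who wants to check that arithmetic, here is a minimal sketch in Python, with the figures from this article hard-coded (the 47.5% baseline is simply Trump’s popular-vote share as counted above):

```python
# How far did each final poll's Trump number fall short of the 47.5%
# he is currently polling nationally? Figures hard-coded from this article.

FINAL_POLLS = {
    "Economist/YouGov":   {"clinton": 49, "trump": 45},
    "ABC News/Wash Post": {"clinton": 49, "trump": 46},
    "IBD/TIPP":           {"clinton": 43, "trump": 42},
    "Fox News":           {"clinton": 48, "trump": 44},
    "CBS News":           {"clinton": 47, "trump": 43},
    "Bloomberg":          {"clinton": 46, "trump": 43},
    "Monmouth":           {"clinton": 50, "trump": 44},
    "NBC News/SM":        {"clinton": 51, "trump": 44},
}

ACTUAL_TRUMP = 47.5  # popular vote share with 98.2% of precincts counted

for name, poll in FINAL_POLLS.items():
    error = ACTUAL_TRUMP - poll["trump"]
    print(f"{name:20s} understated Trump by {error:.1f} points")
```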

Most of our readers are British, and will remember as well as I do the polling failures we have seen recently: last year pollsters understated the Conservative vote, and this year they understated the vote to leave the EU.

I can’t help but think this latest upset could have been predicted. So what was going on?

Could the whole methodology of self-reporting surveys be flawed? As Faris Yakob has explained, asking people what they are going to do in the future is not a particularly reliable way of predicting what they will actually do.

And what about ‘shy’ Trump voters? As a Conservative and Leave voter in the UK, I know all too well that certain sections of society – and indeed the media – are pretty dismissive of our views. To be a Tory is to be heartless, and to be a Brexiter is to be stupid and racist. The discourse in America was similar, making it hard for the more moderate, floating voters who eventually voted Trump to speak up.

Over the weekend I saw an American pollster challenged on Sky News about the possibility of a shy Trump vote. He rejected the possibility outright, saying that as most polls were conducted online there was no reason for voters to be shy. Hmm. At the time I wondered if he had looked at the UK polling failures, where online polls were wrong too.

Pollsters will be hoping that the problem is merely one of sampling and weighting, and that asking more people and tweaking their models will fix it. My advice: don’t hold your breath.

My scepticism about the reliability of polls made the overnight result much less of a shock to me than it evidently was to others, judging by the shrieking cacophony on Facebook and Twitter as people woke up this morning.

At 1am GMT, when a flurry of exit polls came in putting Clinton a little ahead in key battleground, must-win states like Florida, I made my prediction: the exit polls would understate Trump’s vote by 2-4 points and he would win them. Sometimes being right is quite satisfying:

  • In Florida, the exit poll predicted Clinton would win 48-46%. But in reality, Trump won it 49.1% to 47.7%.
  • In Ohio, the exit poll predicted a Trump win, but again understated his vote: it called a 48-47% Trump win, but he actually won by a far bigger margin – 52.1% to Clinton’s 43.8%.
  • Clinton was predicted to win Pennsylvania 50% to Trump’s 46%. Wrong again – he won the state by 48.8% to 47.7%.
  • The exit poll from North Carolina suggested a narrow win for Clinton, too – 48% to 47%. But again, Trump won – 50.5% to 46.7%. The Trump effect in NC is particularly interesting, because on the same night the Republican governor of North Carolina was unseated by his Democratic rival – probably over the controversial HB2 ‘bathroom bill’.
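Again, the arithmetic is easy to check. A minimal sketch, with the exit-poll and result figures from the bullets above hard-coded:

```python
# Compare each exit poll's Trump share with the actual result quoted
# above, to see how far his vote was understated in each state.
# All figures hard-coded from this article.

STATES = {
    #                 exit poll (C, T)   actual (C, T)
    "Florida":        ((48.0, 46.0), (47.7, 49.1)),
    "Ohio":           ((47.0, 48.0), (43.8, 52.1)),
    "Pennsylvania":   ((50.0, 46.0), (47.7, 48.8)),
    "North Carolina": ((48.0, 47.0), (46.7, 50.5)),
}

for state, ((poll_c, poll_t), (act_c, act_t)) in STATES.items():
    understated = act_t - poll_t
    print(f"{state:15s} exit poll Trump {poll_t:.0f}%, "
          f"actual {act_t:.1f}%, understated by {understated:.1f} points")
```

On these figures the understatement runs from 2.8 points in Pennsylvania to 4.1 in Ohio – right in line with the 2-4 point prediction.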

The exit polls have a different set of problems to the pre-election polls. Firstly, the issue of predicting behaviour is out of play: respondents have already voted, and are telling you what they did. The issue then is sampling: are you surveying the polling places you need to get the full picture? And are you weighting the demographics properly, based on who has actually voted?
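To make the weighting point concrete, here is an illustrative sketch of the standard re-weighting approach (post-stratification), with entirely made-up numbers – the cells, shares and votes below are hypothetical, not from any real exit poll:

```python
# An illustrative sketch of post-stratification weighting with made-up
# numbers. Each respondent is weighted by how over- or under-represented
# their demographic cell is in the raw sample, relative to the known
# make-up of the actual electorate.

from collections import Counter

# Hypothetical raw exit-poll sample: (demographic cell, reported vote)
sample = [
    ("college", "clinton"), ("college", "clinton"), ("college", "trump"),
    ("college", "clinton"), ("college", "clinton"), ("college", "trump"),
    ("no_college", "trump"), ("no_college", "trump"),
    ("no_college", "clinton"), ("no_college", "trump"),
]

# Hypothetical known shares of the actual electorate by cell
electorate_share = {"college": 0.4, "no_college": 0.6}

# Weight for each cell = population share / sample share
sample_counts = Counter(cell for cell, _ in sample)
weights = {cell: electorate_share[cell] / (n / len(sample))
           for cell, n in sample_counts.items()}

# Weighted vote-share estimate
totals = Counter()
for cell, vote in sample:
    totals[vote] += weights[cell]
weight_sum = sum(totals.values())
for candidate, w in totals.items():
    print(f"{candidate}: {100 * w / weight_sum:.1f}%")

# The catch, as argued below: if Trump voters *within* a cell are less
# willing to respond, or to answer honestly, no amount of demographic
# re-weighting can correct the estimate.
```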

And in my view it is also an issue of survey length – market researchers know that the longer a survey, the less accurate it gets because people get bored and stop paying attention. The exit polls this year were incredibly detailed, asking all kinds of questions about backgrounds, attitudes and preferences – I think this may have degraded the overall quality of the data.

This also feeds into the issue of privacy and openness. Some of these questions were quite intrusive, and people may not want to share their real views. When it comes to headline voting intention, and when the media environment is hostile towards Trump voters, some people will undoubtedly claim they voted Clinton – or perhaps Johnson or Stein – when they actually voted for Trump.

But the most important thing pre-election polling and exit polling have in common is the assumption that if pollsters sample enough people with the right demographic characteristics – enough men and women, and enough people of each age group, ethnic background and income level – they can extrapolate the views of that sample onto the rest of the demographic it belongs to. If this was ever true, I don’t think it is true now.

My day job is in marketing, and market researchers are increasingly finding that people are more individualistic and think more for themselves. Nowadays you are less able to make assumptions about what people think and feel based on their age, gender, education and income. And though commentators may persist in saying that more uneducated people voted for Brexit, more high-income people voted Conservative, or more uneducated, blue-collar men voted for Trump, the truth is much more complex.

Pollsters need to find new ways to take the temperature of public opinion, and in the meantime we all need to take the polls with a pinch of salt.


Emily is the chairman of Conservatives for Liberty. Follow Emily on Twitter: @ThinkEmily
