This article originally appeared on VICE UK.
The 8th of May 2015 was a bad day for polls. Despite most predictions indicating a hung parliament that could tip in Labour’s favour – from YouGov to a Guardian/ICM campaign poll – the 2015 general election was a disaster for the party.
After the 2015 election, the polls continued to get it wrong. 2016, also known as 'The Year Polls Globally Fucked Up', left us with two of the most unreasonable plot twists in history: Brexit and the election of Donald Trump. Over the last few years, opinion polls have misled us not once but numerous times. For the 2017 election, most predicted a Conservative majority, but we got a hung parliament. Some public figures, who swore to eat a hat, a book or a bug if the polls were wrong, had to do so on air. Did we enter the darkest timeline because the polls had been so wrong, or had the polls turned all wrong because we had entered the darkest timeline? (Crucially, no polling information is available on this question.)
“People tend to think polls are right or wrong, with ‘right’ meaning calling the right winner. But it’s a matter of degree,” Peter Kellner, who was the president of YouGov from 2007 to 2016, tells me. In 2017, “the polls did show narrowing of the gap throughout the campaign, the collapse of Ukip and the rise of Corbyn’s popularity,” Kellner says. In 2015 and 2016, opinion polls were “three or four points out”, which is “more out than they should have been,” he admits, but still within the usual margin of error – typically plus or minus three points for a sample of 1,000. As Kellner puts it: “A good poll is like a big pot of soup. If you stir it well, you can taste whether it’s good or not without having to drink the whole pan.”
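That “plus or minus three points for a sample of 1,000” figure isn’t arbitrary: it falls out of basic sampling maths. A rough sketch in Python (the function name and the worst-case assumption of a 50/50 split are mine, for illustration – pollsters’ real calculations account for weighting and design effects):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the worst case, giving the widest margin)
    z: z-score for the confidence level (1.96 corresponds to 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# For a poll of 1,000 people, the worst-case margin is about 3 points:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1
```

Which is why a result that is “three or four points out” sits right at the edge of what sampling error alone can excuse.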
But in high-stakes elections, and especially in close races such as the EU referendum, a few points out is the difference between a future in the EU and the most divisive political decision of our lifetimes.
The problem comes when polls are taken as predictions of the election, rather than a snapshot of opinions on that day, and risk misleading the electorate. Take the recent poll (or model) from YouGov, built using a Multilevel Regression and Post-stratification (MRP) model and published last Thursday. It reads like a transcript of your worst nightmare: “Were the election held tomorrow, the Tories would win 359 seats (42 more than they took in 2017) and 43 percent of the vote (around the same as last time). In terms of seats won, this would be the Conservatives’ best performance since 1987.” It continues: “Meanwhile, Labour are set to lose 51 seats – falling from 262 seats in 2017 to 211 now – and taking 32 percent of the vote, a 9 percentage point decrease. In terms of seats won this would be the party’s worst performance since 1983.” Even with the disclaimer from YouGov that this isn’t a prediction, a poll like this could encourage a huge sense of disengagement when it comes to fighting for the 'losing' parties.
So, why even bother having polls, if they continually prove to be inaccurate and misleading? Couldn’t we get rid of them?
Unfortunately, it’s not quite that simple, especially when methods are improving. Unlike polls from the infamous 2015-2016 period, YouGov’s MRP polling method – which takes into account the characteristics of voters and applies that to different constituencies – is pretty solid, according to Kellner. The model, which Kellner says is based on “an enormous sample, removing most of the risk of sample errors,” was first used in 2017 when Theresa May called a snap election: “The 2017 MRP was spectacularly good in calling a number of constituencies right,” says Kellner. “It gets the big picture.” Welcome to 2019: the polls, which didn’t foresee a climate-denying sexual assaulter entering the White House or British politics disintegrating over EU withdrawal, but are now fixed just in time to predict a hellish Conservative win!
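MRP itself is less mysterious than it sounds. The regression half models support within demographic “cells” (age, education, past vote and so on) from a huge national sample; the post-stratification half re-weights those cell estimates by each constituency’s actual make-up. A toy sketch of that second step – the constituency names, groups and numbers below are entirely made up, not YouGov’s:

```python
# Toy sketch of the post-stratification half of MRP (made-up numbers).
# Step 1 (not shown): a multilevel regression estimates support for a
# party within each demographic cell from a large national sample.
# Step 2 (below): those cell estimates are re-weighted by each
# constituency's demographic make-up to give a local estimate.

# Hypothetical modelled support for a party, by age group:
support_by_cell = {"18-29": 0.55, "30-49": 0.40, "50+": 0.25}

# Hypothetical population shares of each group in two invented seats:
constituencies = {
    "Youngtown": {"18-29": 0.50, "30-49": 0.30, "50+": 0.20},
    "Greyville": {"18-29": 0.10, "30-49": 0.30, "50+": 0.60},
}

def poststratify(cell_support, cell_shares):
    """Weight cell-level support by the local population shares."""
    return sum(cell_support[c] * share for c, share in cell_shares.items())

for name, shares in constituencies.items():
    print(name, round(poststratify(support_by_cell, shares), 3))
# Youngtown comes out around 0.445, Greyville around 0.325
```

The same national model thus yields different numbers seat by seat, which is why MRP can call individual constituencies that a single headline figure never could.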
It’s also not the fault of the polls if they’re misinterpreted, argues Matt Singh, a polling analyst at Number Cruncher Politics, a non-partisan polling and elections consultancy. “The media attitude to polling swings between total distrust and blind faith”, Singh says. “They’re the best available evidence, but they’re not perfect.” The MRP “is a measure, not a prediction,” he adds. The polls currently seem to indicate a Tory majority, but “it could be a smaller one” or it could switch dramatically: “I don’t think it is appropriate to treat this as set in stone or close to it.”
Singh doesn’t deny there can be problems, but these often lie with a poll’s sample. Even detailed national surveys like MRPs can’t be bias-proof. “The biases in the sample will tend to be with particular groups,” Singh explains. “For example, a particular seat that has a lot of young people could cause problems.” Singh cites the example of the Finchley and Golders Green constituency, which due to its large Jewish community is both “competitive and very atypical.” “There, pollsters have found wildly divergent results,” he says. “It is the kind of seat they are nervous about.”
Polls can also influence how we vote: if the polls say the race is close, you may think twice before casting your ballot for a smaller party. However, it’s very hard to measure how much influence opinion polls have on actual voting, Kellner says, because “you can’t have a control group: there’s no parallel Britain to compare to.”
But polls can still be used for tactical voting: that’s what the anti-Brexit group Best for Britain is currently doing. It has identified 57 “target seats” where “it would take less than 4,000 tactical votes to prevent the Conservative Party winning”, and is advising Remainers on how to oust Tory candidates on its Get Voting website through an MRP polling model. (Get Voting met with controversy when VICE revealed Best For Britain’s links to the Liberal Democrats.)
It’s true that polls can influence people’s voting behaviour, Matt Singh says, “but it’s not the polls themselves that are doing the influencing: it’s expectations of the outcome based on the polls.” Without polls, people could still be influenced by their expectations of the outcome, he adds, but those expectations “would instead be based on rumour, spin, betting markets – which are infinitely easier to manipulate than polls – punditry and goodness knows what else.” Eek.
Nonetheless, in the current febrile political climate, polls are coming under increased scrutiny, with “trust no poll” becoming a watchword for many on the left who distrust the role they play. Would it be reasonable to run an election without polling allowed? Kellner tells me that banning polls would be “ridiculous, offensive and absurd.”
Most importantly, it would harm democracy: what would be banned would not be polls themselves – which could still be conducted for private use – but their publication. Kellner takes the example of France, where until 2002 no opinion poll could be published in the week between the two rounds of the presidential election. “French pollsters made more money during these seven-day periods from political polls than any other time”, he says. Organisations with money, like big parties or companies, paid big bucks for polls. “What you end up with are two groups: one who knows what is going on, and the great majority of people who don’t have access to the information. Is that democratic?” Kellner asks. “I don’t think so.”
Picture it: Boris Johnson, having bought all the private polls money can buy, making claims on live TV about a Tory landslide – without any available information to prove him wrong. That would be hell for everyone: journalists would be unable to call bullshit, less wealthy parties couldn’t check their progress via paid-for polls, and the public would end up even more lost than we are right now. “If you want to ban polls because they can mislead people and have a poll-free election,” Kellner says, “then let’s also have a politician-free election, because they can be misleading too!”
Not a bad shout, to be honest.