by Stephen Mills, Executive Director, UMR Research
The twin failures to predict David Cameron’s win in the 2015 UK election and now the Trump triumph have certainly damaged the reputation of political polling.
The world was expecting a Clinton presidency and so, it seems, were the two candidates. Hillary Clinton barely campaigned in Michigan and Wisconsin, two key states she lost, presumably on the back of public and private campaign polling showing those states were solidly in her camp.
The much-vaunted polling aggregators, which are meant to take out the risks of individual polls, like financial derivatives prior to the GFC, were also badly wrong. The final call of the celebrated 538 model was a 71% probability of a Clinton win. Golden boy Nate Silver may be taking some solace that the Upshot model, which replaced 538 at the New York Times when he decamped to ESPN, had Clinton’s chances at 85%, and the Huffington Post model had her at a now hilarious 98%.
Exit polls (or at least the rumours of what they showed) appear to have been a disaster and new entrants to the prediction market using early turnout data were also hopelessly wrong.
A few days’ reflection later, the picture is perhaps not quite so bad. The average of the US-wide polls had Clinton about three points ahead; she looks likely to win the popular vote by about 1%. Polls were close to final results in many of the battleground states, and most showed the contest tightening towards Trump in the last 10 days.
Election polling was on the mark just months ago in the Australian federal election, with all public polling close to the final result. Polling was right for Trudeau’s win in Canada. In New Zealand you have to go back to 1993, when a disappointed Jim Bolger famously called for the buggering of pollsters. His own party’s polling may have been wrong that year, but there was no systematic failure. There has never been a Trump-like shock in Australian federal or New Zealand general elections since intensive polling has been undertaken.
What really went wrong in the USA was the polling in the rust-belt states of Pennsylvania, Michigan and Wisconsin, which consistently had Clinton ahead. The scale of Trump’s win in Ohio was also underestimated. There was, in hindsight, an early warning here: a catastrophic failure of polling during the Democratic primaries to predict the Sanders win over Clinton in Michigan.
Polling in US elections, with their low voter turnout, is trickier than in countries with higher turnouts or compulsory voting, such as Australia. There are reports that the campaign polling for Romney in the 2012 presidential contest consistently showed him doing much better than the (accurate) public and Democratic polls because it underestimated the high level of black voter turnout.
The most convincing early theory for the inaccurate 2016 polling is that pollsters this time underestimated the turnout of poorly educated white voters and of rural voters.
But patience is needed before firm conclusions are drawn.
In the immediate aftermath of the UK election polling failures, theories abounded about “shy Tories” (Conservative voters who were embarrassed by their choice and not prepared to admit it to a pollster) and “lazy Labour” (intending Labour voters who ultimately did not bother to vote). Others speculated that late polls showing the possibility of a hung parliament, with the Scottish National Party holding the balance of power, led some voters to make a late switch to the Conservatives and the promise of greater stability.
An official enquiry by the UK Market Research Society and British Polling Council undertaken by a mix of high-powered academics and industry personnel dismissed all the immediate theories and concluded that “the primary cause of the polling miss in 2015 was unrepresentative samples. The methods pollsters used to collect samples of voters systematically overrepresented Labour supporters and underrepresented Conservative supporters”.
Specifically, the polls underrepresented the oldest voters (over-70s), who in polls, especially online panels, ended up being represented by voters in their 60s, who did not vote as strongly for the Conservatives. Polls also overrepresented younger voters who are more engaged with politics and likely to vote, and underrepresented younger voters who did not vote. The third failure was to underrepresent “busy voters”, who required more contact attempts from polling companies to reach and who were more likely to vote Conservative.
The US polling industry will presumably undertake one or more enquiries into the polling failure during this presidential election. Lessons will be learnt and applied, and the industry will be around for a while yet.