The 2019 general election is now complete, but there is still plenty to say about the campaign, the rules that governed it, and the new parliament it has produced. Luke Moore summarises the contributions at our final seminar of 2019, where Unit staff were joined by other experts to discuss the lessons of the election.
On Monday 16 December the Constitution Unit hosted an event entitled Election Replay with the Experts, at which four leading political scientists, including the Director and Deputy Director of the Constitution Unit, looked back on the 2019 general election. The issues discussed included polling, women’s representation, the rules of the electoral game, and the effect of the election on the new parliament. The event was chaired by Unit Research Associate Lisa James.
Ben Lauderdale – polling
Ben Lauderdale, Professor of Political Science at UCL, started the evening by discussing the performance of polling at the election. During the election campaign Lauderdale had been involved in producing the much-discussed ‘MRP’ (multilevel regression and post-stratification) polling used to predict constituency results. His central message was that after two general elections (in 2015 and 2017) in which some of the polls proved to be significantly out of step with the results, polling for the 2019 election is largely a non-story, as most pollsters were on target in their predictions. Further, the accuracy of the polls meant that the media was (in retrospect and in Lauderdale’s view) discussing the right topics during the election campaign. The most important of these was the prospect of a Conservative majority, but also the specific demographic and geographic weaknesses of the 2017 Labour coalition. While the terminology was a bit reductive and silly, it was not wrong to have focused on the vulnerability of Labour’s ‘red wall’ and Conservative appeals to ‘Workington man’.
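The ‘post-stratification’ half of MRP can be illustrated with a minimal sketch. The multilevel regression supplies a predicted vote probability for each demographic cell; the constituency estimate is then the average of those predictions, weighted by how many people of each type live in the constituency. The cell names and numbers below are hypothetical, for illustration only, not figures from Lauderdale’s model.

```python
def poststratify(cell_predictions: dict[str, float],
                 cell_counts: dict[str, int]) -> float:
    """Constituency-level estimate: population-weighted average of the
    model's predicted support within each demographic cell."""
    total = sum(cell_counts.values())
    return sum(cell_predictions[cell] * cell_counts[cell]
               for cell in cell_counts) / total

# Hypothetical cells and numbers for illustration only.
estimate = poststratify(
    cell_predictions={"young_graduate": 0.25, "older_graduate": 0.40,
                      "young_non_graduate": 0.45, "older_non_graduate": 0.60},
    cell_counts={"young_graduate": 12_000, "older_graduate": 15_000,
                 "young_non_graduate": 20_000, "older_non_graduate": 28_000},
)
```

The appeal of the approach is that the regression can borrow strength across constituencies when estimating each cell, so even seats with few respondents get a usable estimate.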
Stephen Fisher and Alan Renwick have developed a method for forecasting the outcome of the EU referendum based on current vote intention polling and analysis of opinion polling from previous referendums in the UK and around the world. Last month, in their initial forecast, they suggested that Remain had an 87 per cent chance of winning. In this second forecast that probability has dropped to 73 per cent.
A month ago we issued our first forecast for the EU membership referendum on 23 June. Based on an analysis of referendums in the UK and on the EU outside the UK, and on vote intention opinion polls, we forecast that Remain had an 87 per cent chance of winning, and that Remain would get 58 per cent of the vote, plus or minus 14. This was in part based on our polling average (excluding don’t knows) of 55 per cent for Remain on 11 March.
Our current forecast suggests that the contest is a fair bit closer. Our polling average now puts Remain on 52 per cent. We now give Remain a 73 per cent chance of winning and estimate that the Remain share of the vote will be 54 per cent, plus or minus 13 points.
The key change here is the drop from 55 per cent to 52 per cent for Remain in the polling average. The main reasons for this are as much methodological as substantive, if not more so.
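To see how a share forecast and an uncertainty band translate into a win probability, here is a minimal sketch. It assumes a simple normal forecast distribution with the quoted plus-or-minus figure treated as a 95 per cent interval; this is an illustration of the arithmetic, not the authors’ actual model.

```python
from statistics import NormalDist

def win_probability(forecast_share: float, margin_95: float,
                    threshold: float = 50.0) -> float:
    """P(share > threshold) under a normal forecast whose 95% interval
    is forecast_share +/- margin_95."""
    sd = margin_95 / 1.96  # convert the 95% half-width to a standard deviation
    return 1.0 - NormalDist(mu=forecast_share, sigma=sd).cdf(threshold)

# The two forecasts in the text, under this simplified model:
print(round(win_probability(58, 14) * 100))  # first forecast: 87
print(round(win_probability(54, 13) * 100))  # current forecast: 73
```

Under this simplification the two published probabilities follow directly from the two share forecasts: a 54 per cent central estimate sits a smaller fraction of the uncertainty band above the 50 per cent threshold than a 58 per cent estimate does, so the win probability falls.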
This year’s general election result took almost everyone by surprise, including the pollsters, forecasters and other experts. On 3 June, Joe Twyman, Dr Ben Lauderdale, Dr Rosie Campbell, Professor Justin Fisher and Professor Matt Goodwin took part in a roundtable to discuss where the predictions went wrong and lessons for 2020. David Ireland offers an overview of the event.
The exit poll that came out at 10pm on 7 May took almost everyone by surprise. Over the course of Friday morning, the scale of the Conservative majority revealed itself, showing that even the exit poll had underestimated the Conservative support. What happened? How did the polls get it so wrong and what are the lessons for 2020? This blog highlights the key issues from a recent roundtable on GE2015 hosted by UCL’s Department of Political Science and the Constitution Unit and chaired by Dr Jennifer Hudson.
Joe Twyman, Head of Political and Social Research, YouGov
As one of many pollsters who had long predicted a hung parliament, Joe acknowledged YouGov didn’t get it right this time. He also, rather humorously, showed the range of Twitter abuse directed at him as a result.
Voting intention remained tightly balanced in the months leading up to the election, but YouGov’s polling revealed that the ‘fundamentals’ may not have been given enough weight in predicting vote share. Importantly, no party had ever come from behind on the economy and leadership to win an election before, and this election was not to be the first. The economy remained the single most important issue, and here the Conservatives were significantly ahead. Similarly, Miliband never got close to Cameron on party leader ratings.
Hungry for a quick and simple explanation of the phenomenon, the mainstream media and commenting classes seized on the ‘Shy Tory’ hypothesis. The adage dates back to 1992 and goes something like this: right-wing voters felt cornered by the adversarial and negative propaganda directed at them by the left, leading them to withhold their voting intention in surveys, either by answering that they were undecided or that they would not vote.
The theory has received little real scrutiny or critical evaluation despite the self-reinforcing coverage it has been given since the election, to the extent that it has now morphed into ‘Lying Tories’. However, some pollsters and experts have already expressed doubts about its value. It presents a number of flaws that discourage its adoption as a principal explanation for the polls.
As Scotland goes to the polls, Anthony Wells considers to what extent we can expect the outcome to match the predictions.
The Scottish polls at the end of last week and the weekend were broadly clustered around a small No lead. Perhaps a more likely route to a Yes victory is if the polls are underestimating the level of Yes support for some reason. Over the last couple of days I’ve seen several blogs or articles pondering whether the polls could be wrong: could they be underestimating Yes or No?
It would be hubris to suggest the polls couldn’t be wrong. Obviously they can. At most elections there are polls that perform better or worse than their peers; some of that is down to better methodology, but when the polls are close most of it is probably just normal sample variation. That’s a matter for another time though. Here I’m pondering the possibility that all the polls are wrong, the potential for a systemic bias with every poll a bit too Yes or a bit too No. This is possible too – think of the way all polls overestimated Lib Dem support in 2010, or most famously how all the polls overestimated Labour support in 1992. How likely is that?
The Scottish referendum is a bigger challenge for pollsters than an election would be because it’s a one-off. In designing methodology for voting intention, the experience of what worked or didn’t work at previous elections weighs heavy, and most companies’ weighting schemes rely heavily upon the previous election – if not directly through weighting by recalled vote, then by using data from the previous election to design and test other weighting targets. For a referendum that direct approach isn’t available: pollsters need to rely more on modelling what they think is an accurate picture of the Scottish electorate and hoping it reflects the Scottish people well enough that it will also reflect their referendum voting intentions.

It’s complicated because Scotland has a complicated electorate. Scottish voters have two Holyrood votes and a Westminster vote, and they use them all in different ways with different political loyalties. Within the space of a year Scotland managed to be a Labour stronghold at Westminster and to produce an SNP landslide at Holyrood – using either election alone for weighting gives a rather different picture of what the Scottish electorate is like, even though you are trying to model the same population.

Different companies have arrived at different methods of political weighting to deal with the issue: Survation, ICM and TNS weight by Holyrood recalled vote alone; YouGov weight by Holyrood recalled vote with a nod towards 2011 Holyrood voters who backed Labour in 2010; Opinium weight by Holyrood and Westminster recalled vote; Panelbase weight by Holyrood and European recalled vote; Ipsos MORI don’t use political weighting at all. Despite the variance they have all converged to produce the same sort of result, and that gives me some confidence – if there were a particular skew from being online or from using Holyrood recalled vote, we would expect to see different results.
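The recalled-vote weighting described above can be sketched very simply: each recalled-vote group in the sample gets a weight equal to its share of the actual past vote divided by its share of the sample. The figures below are invented for illustration, not real Holyrood results or any pollster’s actual numbers.

```python
def recalled_vote_weights(sample_shares: dict[str, float],
                          actual_shares: dict[str, float]) -> dict[str, float]:
    """Cell weight for each recalled-vote group: actual share / sample share.
    Groups over-represented in the raw sample get weights below 1, and
    under-represented groups get weights above 1."""
    return {group: actual_shares[group] / sample_shares[group]
            for group in sample_shares}

# Hypothetical figures for illustration only: this raw sample
# under-represents recalled SNP voters and over-represents Labour.
weights = recalled_vote_weights(
    sample_shares={"SNP": 0.35, "Labour": 0.40, "Other": 0.25},
    actual_shares={"SNP": 0.45, "Labour": 0.32, "Other": 0.23},
)
```

A pollster weighting by more than one past vote (say Holyrood and Westminster recall, as Opinium do) cannot use a single division like this; in practice that usually means iterative weighting (raking) across the two targets.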