Constitutional reformers need to tackle six key questions about the regulation of digital campaigning

Today marks the second day of the Unit’s conference on the Johnson government’s constitutional reform agenda, for which free tickets remain available. One of today’s speakers, Kate Dommett, argues that the government’s proposals to tackle the challenges posed by digital campaigning are far from comprehensive, leaving many unanswered questions for future governments to address.

Five years on from the Brexit referendum and the Cambridge Analytica scandal that emerged in its wake, the government is poised to publish its Electoral Integrity Bill. With the bill proposing ‘significant changes to the electoral and democratic system’, one might presume that Boris Johnson’s government is about to enact an ambitious programme of constitutional change that will bring the electoral system into the digital age. Yet, from the details available so far – including a new announcement this week – it seems Johnson’s government is failing to address six critical questions about digital campaigning, leaving considerable room for further reform.

The rise of digital technology in campaigning

The rise of digital campaigning has been a slow and steady phenomenon in UK elections, but in recent years significant attention has been paid to the need for electoral reform. The current regulation governing electoral campaigning is found in the Political Parties, Elections and Referendums Act 2000 (PPERA). Since then, the adoption of websites, social media profiles and, more recently, online advertising by electoral campaigners has raised questions about the suitability of existing legislation. Indeed, a range of parliamentary committees, civil society bodies, academics and even digital companies such as Facebook have asserted a need for urgent regulation of digital campaigning.

The Electoral Commission has been at the forefront of these debates, publishing a report devoted to digital campaigning in 2018. Its analysis revealed the rapid rise of digital tools in elections, showing that increasing amounts are being spent on digital advertising. Figure 1 (below), which updates those statistics to include the last election, shows that spending on digital advertising has risen to around £7.5 million and now represents a significant proportion of election campaign spend.

Figure 1: Electoral Commission spending return declarations related to advertising and digital advertising 2014-2019

Online harms to democracy: the government’s change of approach

Two years after the publication of the government’s Online Harms white paper, the government has published its final consultation response. Its commitment in the white paper to legislate to prevent online harms to democracy has disappeared, to the frustration of many inside and outside parliament. Alex Walker reflects on the government’s decision to ‘abandon the field’ and argues that a laissez-faire approach could lead to negative consequences.

It is expected that the Queen’s Speech on 11 May will include the government’s long-awaited Online Safety Bill. This will be a major piece of legislation with significant implications for the regulation of digital technology companies in the UK. However, it now seems highly unlikely that, when introduced, it will encompass measures to prevent harms to democracy, as was initially indicated.

The Online Harms white paper published in April 2019 set out a position that recognised the dangers that digital technology could pose to democracy and proposed measures to tackle them. This was followed by an initial consultation response in February 2020 and a full response in December. In the course of the policy’s development, the democracy aspect of the proposals has disappeared. The government now points instead to other areas of activity. This represents a shift away from the ambition of the white paper, which promised to address online harms ‘in a single and coherent way.’

Online Harms white paper: April 2019

The white paper first put forward the government’s intention for a statutory duty of care that would make companies responsible for harms caused on their platforms. This would include illegal harmful content, such as child abuse and terrorist material, but also some forms of harmful but legal content, including disinformation and misinformation. The white paper explicitly framed some of its proposals for tackling online harms in relation to the consequences for democracy. It detailed some of the harms that can be caused, including the manipulation of individual voters through micro-targeting, deepfakes, and concerted disinformation campaigns. It concluded that online platforms are ‘inherently vulnerable to the efforts of a few to manipulate and confuse the information environment for nefarious purposes, including undermining trust’. It recognised that there is a distinction to be drawn between legitimate influence and illegitimate manipulation.

The white paper also set out what the government expected to be in the regulators’ Code of Practice, and what would be required to fulfil the duty of care. This included: using fact-checking services, particularly during election periods; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity to spread disinformation. It stated that action is needed to combat the spread of false and misleading information in part because it can ‘damage our trust in our democratic institutions, including Parliament.’


Responding to the coronavirus ‘infodemic’: some lessons in tackling misinformation

The proliferation of false, misleading and harmful information about the coronavirus has been described as an ‘infodemic’ by the World Health Organisation. Government, social media companies, and others have taken concerted action against it. Michela Palese and Alan Renwick here examine these responses and consider potential lessons for tackling online misinformation more broadly.

COVID-19 is rightly dominating the international agenda. Besides the crucial health, economic, and social dimensions, considerable attention is being paid to the information on COVID-19 that is circulating online. 

Ever since the virus emerged, false, misleading and/or harmful information has spread, especially online. Newsguard, which ranks websites by trustworthiness, found that, in the 90 days to 3 March, 75 US websites publishing coronavirus misinformation received ‘more than 142 times the engagement of the two major public health institutions providing information about the outbreak’. Ofcom found that ‘[a]lmost half of UK online adults came across false or misleading information about the coronavirus’ in the last week of March. The World Health Organisation (WHO) described the misinformation as an ‘infodemic – an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.’

The capacity of social media and 24/7 news to proliferate misinformation was already manifest. But this is the first time the potentially nefarious effects of an unregulated online space have combined with a global pandemic. As Conservative MP Damian Collins put it, this is the ‘first major public health crisis of the social media age’.

Governments and tech companies across the globe are responding. In this post, we highlight key steps and consider lessons for dealing with misinformation in general.

Is there an app for that? Voter information in the event of a snap election

Digital technology has transformed the way we access information and interact with services. Democratic services have not kept up, risking a situation where democracy is seen as out of date. Joe Mitchell argues that it’s time to dream big: the UK has an opportunity to create a new digital-first office of civic education and democratic information, to restore trust and grow public understanding of our democracy.

What’s the biggest threat to democracy in the UK? Interference by foreign powers? Disinformation? Fake news? Micro-targeting of voters on social media? Or is it simpler than that: is it just that engaging in the democratic process no longer fits with people’s lives?

Digital technology has transformed the way we live. It has changed our expectations of how we access information, how we communicate, how we bank, shop or access government services. It should not surprise us, then, to learn that people expect to access information on the democratic process digitally. For example, Google News Trends published the top ten searches on Google UK on the day of the 2015 general election; these all related to the election. The most popular question was ‘who should I vote for’ – a genuinely complex question – but the following searches were straightforward: variations on the theme of ‘who are the candidates’ and ‘where do I vote’.

Worryingly, the democratic process has been left behind by digital transformation. A gulf has emerged between the way we live our lives now and the way we participate in democracy: it can feel like something from a bygone age. Notices of elections are posted to a noticeboard in front of a council building and, not even in all cases, uploaded as a PDF to a webpage buried somewhere on a council website. While the digital register-to-vote service is welcome, no state institution has taken responsibility for meeting the digital demand for even the most basic information: when are elections happening, who is standing, and what was the result? How to vote is covered by the Electoral Commission’s website, but with research on voter ID showing that only 8% of voters know the voting rules, clearly not enough is being done.