Constitutional reformers need to tackle six key questions about the regulation of digital campaigning

Today marks the second day of the Unit’s conference on the Johnson government’s constitutional reform agenda, for which free tickets remain available. One of today’s speakers, Kate Dommett, argues that the government’s proposals to tackle the challenges posed by digital campaigning are far from comprehensive, leaving many unanswered questions for future governments to address.

Five years on from the Brexit referendum and the Cambridge Analytica scandal that emerged in its wake, the government is poised to publish its Electoral Integrity Bill. Given its promise of ‘significant changes to the electoral and democratic system’, one might presume that Boris Johnson’s government is about to enact an ambitious programme of constitutional change that updates electoral law for the digital age. Yet, from the details available so far – including a new announcement this week – it seems Johnson’s government is failing to address six critical questions about digital campaigning, leaving considerable room for further reform.

The rise of digital technology in campaigning

The rise of digital campaigning has been a slow and steady phenomenon in UK elections, but in recent years significant attention has been paid to the need for electoral reform. The current regulation governing electoral campaigning can be found in the Political Parties, Elections and Referendums Act 2000 (PPERA). Since then, the adoption of websites, social media profiles and, more recently, online advertising by electoral campaigners has raised questions about the suitability of existing legislation. Indeed, a range of parliamentary committees, civil society bodies, academics and even digital companies such as Facebook have asserted the need for urgent regulation of digital campaigning.

The Electoral Commission has been at the forefront of these debates, publishing a report devoted to digital campaigning in 2018. Its analysis revealed the rapid rise of digital tools in elections, showing that increasing amounts are being spent on digital advertising. Updating its statistics to include the last election, Figure 1 (below) shows that spending on digital advertising has increased to around £7.5 million, and now represents a significant proportion of election campaign spend.

Figure 1: Electoral Commission spending return declarations related to advertising and digital advertising 2014-2019

Online harms to democracy: the government’s change of approach

Two years after the publication of the government’s Online Harms white paper, the government has published its final consultation response. Its commitment in the white paper to legislate to prevent online harms to democracy has disappeared, to the frustration of many inside and outside parliament. Alex Walker reflects on the government’s decision to ‘abandon the field’ and argues that a laissez-faire approach could lead to negative consequences.

It is expected that the Queen’s Speech on 11 May will include the government’s long-awaited Online Safety Bill. This will be a major piece of legislation with significant implications for the regulation of digital technology companies in the UK. However, it now seems highly unlikely that, when introduced, it will encompass measures to prevent harms to democracy, as was initially indicated.

The Online Harms white paper published in April 2019 set out a position that recognised the dangers that digital technology could pose to democracy and proposed measures to tackle them. This was followed by an initial consultation response in February 2020 and a full response in December. In the course of the policy’s development, the democracy aspect of the proposals has disappeared. The government now points instead to other areas of activity. This represents a shift away from the ambition of the white paper, which promised to address online harms ‘in a single and coherent way.’

Online Harms white paper: April 2019

The white paper first put forward the government’s intention for a statutory duty of care that would make companies responsible for harms caused on their platforms. This would include illegal harmful content, such as child abuse and terrorist material, but also some forms of harmful but legal content, including disinformation and misinformation. The white paper explicitly framed some of its proposals for tackling online harms in relation to the consequences for democracy. It detailed some of the harms that can be caused, including the manipulation of individual voters through micro-targeting, deepfakes, and concerted disinformation campaigns. It concluded that online platforms are ‘inherently vulnerable to the efforts of a few to manipulate and confuse the information environment for nefarious purposes, including undermining trust’. It recognised that there is a distinction to be drawn between legitimate influence and illegitimate manipulation.

The white paper also set out what the government expected to be included in the regulator’s Code of Practice, and what would be required to fulfil the duty of care. This included: using fact-checking services, particularly during election periods; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity in order to spread disinformation. It stated that action was needed to combat the spread of false and misleading information in part because it can ‘damage our trust in our democratic institutions, including Parliament.’
