Updating campaign regulation for the digital era

John Pullinger, chair of the Electoral Commission, argues digital campaign regulations need an ‘overhaul’ to make the electoral process more transparent and accessible to voters, thereby increasing confidence in the system in a manner that doesn’t discourage parties, candidates and campaigners from taking part in elections. He also calls on the UK’s parliaments to show that they do not tolerate the use of online activities that undermine democracy.

Digital channels are transforming our democracy. Action now can harness that transformation to make political campaigns better. Without the right action, our democracy may not be resilient in the face of the challenges posed by the digital era. But there is nothing unique to elections in this. It applies in the same way to how technological change is affecting so many aspects of our lives. And we can respond in the same way.

Voters can already be sceptical about what they see on social media and practise the art of asking. Who is telling me this? Can I be sure it is really from them? Why are they telling me this? Can I believe what they are saying? How can I check it out? Parties, candidates and campaigners can already use digital tools like imprints to show where information is coming from.

Other voices can already accentuate the positive and shame the bad. Social media platforms, news organisations, influencers and fact checkers increasingly see this as central to their own reputation. A platform is not neutral. It has values and shows its true colours by how it acts. Platforms that stand on the sidelines risk being seen as complicit in undermining democracy. By standing tall, they can provide a vital public service that will enhance their brand.


Constitutional reformers need to tackle six key questions about the regulation of digital campaigning

Today marks the second day of the Unit’s conference on the Johnson government’s constitutional reform agenda, for which free tickets remain available. One of today’s speakers, Kate Dommett, argues that the government’s proposals to tackle the challenges posed by digital campaigning are far from comprehensive, leaving many unanswered questions for future governments to address.

Five years on from the Brexit referendum and the Cambridge Analytica scandal that emerged in its wake, the government is poised to publish its Electoral Integrity Bill. Given that the bill proposes ‘significant changes to the electoral and democratic system’, it could be presumed that Boris Johnson’s government is about to enact an ambitious programme of constitutional change that will update electoral systems for the digital age. Yet, from the details available so far – including a new announcement this week – it seems Johnson’s government is failing to address six critical questions about digital campaigning, leaving considerable room for further reform.

The rise of digital technology in campaigning

The rise of digital campaigning has been a slow and steady phenomenon in UK elections, but in recent years there has been significant attention paid to the need for electoral reform. The current regulation governing electoral campaigning can be found in the Political Parties, Elections and Referendums Act (PPERA), which was passed in 2000. Since then, the adoption of websites, social media profiles and, more recently, online advertising by electoral campaigners has raised questions about the suitability of existing legislation. Indeed, a range of parliamentary committees, civil society bodies, academics and even digital companies such as Facebook have asserted a need for urgent digital campaigning regulation.

The Electoral Commission, which published a report devoted to digital campaigning in 2018, has been at the forefront of these debates. Its analysis revealed the rapid rise of digital tools in elections, showing increasing amounts are being spent on digital advertising. Figure 1 (below), which updates its statistics to include the last election, shows that spending on digital advertising has increased to around £7.5 million, and now represents a significant proportion of election campaign spend.

Figure 1: Electoral Commission spending return declarations related to advertising and digital advertising 2014-2019

Online harms to democracy: the government’s change of approach

Two years after the publication of the government’s Online Harms white paper, the government has published its final consultation response. Its commitment in the white paper to legislate to prevent online harms to democracy has disappeared, to the frustration of many inside and outside parliament. Alex Walker reflects on the government’s decision to ‘abandon the field’ and argues that a laissez-faire approach could lead to negative consequences.

It is expected that the Queen’s Speech on 11 May will include the government’s long-awaited Online Safety Bill. This will be a major piece of legislation with significant implications for the regulation of digital technology companies in the UK. However, when it is introduced it now seems highly unlikely that it will encompass measures to prevent harms to democracy, as was initially indicated.

The Online Harms white paper published in April 2019 set out a position that recognised the dangers that digital technology could pose to democracy and proposed measures to tackle them. This was followed by an initial consultation response in February 2020 and a full response in December. In the course of the policy’s development, the democracy aspect of the proposals has disappeared. The government now points instead to other areas of activity. This represents a shift away from the ambition of the white paper, which promised to address online harms ‘in a single and coherent way.’

Online Harms white paper: April 2019

The white paper first put forward the government’s intention for a statutory duty of care that would make companies responsible for harms caused on their platforms. This would include illegal harmful content, such as child abuse and terrorist material, but also some forms of harmful but legal content, including disinformation and misinformation. The white paper explicitly framed some of its proposals for tackling online harms in relation to the consequences for democracy. It detailed some of the harms that can be caused, including the manipulation of individual voters through micro-targeting, deepfakes, and concerted disinformation campaigns. It concluded that online platforms are ‘inherently vulnerable to the efforts of a few to manipulate and confuse the information environment for nefarious purposes, including undermining trust’. It recognised that there is a distinction to be drawn between legitimate influence and illegitimate manipulation.

The white paper also set out what the government expected to be in the regulators’ Code of Practice, and what would be required to fulfil the duty of care. This included: using fact-checking services, particularly during election periods; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity to spread disinformation. It stated that action is needed to combat the spread of false and misleading information in part because it can ‘damage our trust in our democratic institutions, including Parliament.’


Responding to the coronavirus ‘infodemic’: some lessons in tackling misinformation

The proliferation of false, misleading and harmful information about the coronavirus has been described as an ‘infodemic’ by the World Health Organisation. Government, social media companies, and others have taken concerted action against it. Michela Palese and Alan Renwick here examine these responses and consider potential lessons for tackling online misinformation more broadly.

COVID-19 is rightly dominating the international agenda. Besides the crucial health, economic, and social dimensions, considerable attention is being paid to the information on COVID-19 that is circulating online. 

Ever since the virus emerged, false, misleading and/or harmful information has spread, especially online. NewsGuard, which ranks websites by trustworthiness, found that, in the 90 days to 3 March, 75 US websites publishing coronavirus misinformation received ‘more than 142 times the engagement of the two major public health institutions providing information about the outbreak’. Ofcom found that ‘[a]lmost half of UK online adults came across false or misleading information about the coronavirus’ in the last week of March. The World Health Organisation (WHO) described the misinformation as an ‘infodemic – an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.’

The capacity of social media and 24/7 news to proliferate misinformation was already manifest. But this is the first time the potentially nefarious effects of an unregulated online space have combined with a global pandemic. As Conservative MP Damian Collins put it, this is the ‘first major public health crisis of the social media age’.

Governments and tech companies across the globe are responding. In this post, we highlight key steps and consider lessons for dealing with misinformation in general.