The Joint Committee on the Draft Online Safety Bill: steps in the right direction for democracy

The government’s draft Online Safety Bill does little to protect democracy from damage caused by online actors, despite a previous commitment to take action. Alex Walker argues that this omission was an error. Here, he analyses the December report of the parliamentary joint committee tasked with examining the bill. A post in early February will critique the conclusions and recommendations of the DCMS select committee, which published its report earlier this week.

In December, the joint committee tasked with scrutinising the government’s draft Online Safety Bill published its report, the conclusions of which were outlined by its Chair, Damian Collins, on this blog. The committee recommended significant overarching changes to the draft bill, which represents the first major attempt in the UK at online regulation.

Since its publication in May 2021, the draft bill has been subject to extensive criticism, including on this blog. In previous posts, I’ve highlighted that it fails to address online threats to democracy. The government’s 2019 Online Harms white paper acknowledged the seriousness of this issue and set out measures to tackle it. Those proposals were later abandoned.

Encouragingly, the committee noted the government’s change of direction and concluded, to the contrary, that online harms to democracy should be tackled by legislation. Whilst the committee’s recommendations have limitations of their own, if adopted they would protect democratic processes from online harm better than is currently the case.


The Elections Bill: examining the evidence

The Elections Bill is currently being scrutinised by the Commons Public Administration and Constitutional Affairs Committee, which has received a large volume of evidence from a wide range of academics and organisations. Ahead of the Unit’s September webinar on the bill, Emilia Cieslak offers a summary of the key themes, including which parts of the bill have been welcomed and which have caused concern.

The Elections Bill currently before parliament aims to tackle a wide range of issues, including fighting electoral fraud, increasing parliamentary supervision of the Electoral Commission, and extending the franchise to more overseas electors and EU citizens. The bill recently received its second reading in the Commons. It is currently going through committee stage and is also being reviewed by the Commons Public Administration and Constitutional Affairs Committee (PACAC). While some provisions have proved popular, many have attracted criticism.

This post reviews the written evidence submissions to PACAC’s inquiry, focusing largely on the most controversial provisions: the introduction of photographic voter ID, changes to parliamentary scrutiny of the Electoral Commission, and reform of campaign spending rules. Before addressing those controversial aspects, however, I highlight sections of the bill that are generally welcomed.

Popular provisions

The bill proposes to abolish the current 15-year limit after which overseas electors become ineligible to vote. This has so far met very little opposition, and has strong support from groups representing British citizens living abroad. Several submissions (for example, from the Electoral Commission and Association of Electoral Administrators) do, however, draw attention to practical difficulties. And one submission, from Professor Justin Fisher, argues that the principled case for the change is not straightforward.

Meanwhile, no submissions oppose extending voting and candidacy rights to EU citizens through bilateral arrangements with individual member states. Most welcome changes to provision for voters with disabilities, though some identify what they see as flaws in certain elements of those measures.

The introduction of digital imprints is hailed as an overdue and necessary step towards tackling the problem of misleading campaign material online. Most respondents writing on the topic argue that the provision is a good start, but that more is needed. Dr Sam Power comments that the provision should be accompanied by a renewed focus on citizen engagement and digital literacy campaigns. The Electoral Reform Society argues for a requirement that campaigners provide invoices on their digital spending, an open database of all political advertisements, and a code of practice on the use of sensitive data. Multiple respondents warn that the rapid development of technology means the legislation will require post-legislative scrutiny and frequent updates to avoid new loopholes developing.


Online harms to democracy: the government’s change of approach

Two years after the publication of its Online Harms white paper, the government has published its final consultation response. The white paper’s commitment to legislate to prevent online harms to democracy has disappeared, to the frustration of many inside and outside parliament. Alex Walker reflects on the government’s decision to ‘abandon the field’ and argues that a laissez-faire approach could lead to negative consequences.

It is expected that the Queen’s Speech on 11 May will include the government’s long-awaited Online Safety Bill. This will be a major piece of legislation with significant implications for the regulation of digital technology companies in the UK. However, when it is introduced it now seems highly unlikely that it will encompass measures to prevent harms to democracy, as was initially indicated.

The Online Harms white paper published in April 2019 set out a position that recognised the dangers that digital technology could pose to democracy and proposed measures to tackle them. This was followed by an initial consultation response in February 2020 and a full response in December 2020. In the course of the policy’s development, the democracy aspect of the proposals has disappeared. The government now points instead to other areas of activity. This represents a shift away from the ambition of the white paper, which promised to address online harms ‘in a single and coherent way.’

Online Harms white paper: April 2019

The white paper first set out the government’s intention to create a statutory duty of care that would make companies responsible for harms caused on their platforms. This would cover illegal content, such as child abuse and terrorist material, but also some forms of harmful but legal content, including disinformation and misinformation. The white paper explicitly framed some of its proposals for tackling online harms in relation to the consequences for democracy. It detailed some of the harms that can be caused, including the manipulation of individual voters through micro-targeting, deepfakes, and concerted disinformation campaigns. It concluded that online platforms are ‘inherently vulnerable to the efforts of a few to manipulate and confuse the information environment for nefarious purposes, including undermining trust’. It recognised that there is a distinction to be drawn between legitimate influence and illegitimate manipulation.

The white paper also set out what the government expected to be included in the regulator’s Code of Practice, and what would be required to fulfil the duty of care. This included: using fact-checking services, particularly during election periods; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity in order to spread disinformation. It stated that action was needed to combat the spread of false and misleading information in part because it can ‘damage our trust in our democratic institutions, including Parliament.’


How the new Sub-Committee on Disinformation can help strengthen democracy in the digital age

In April 2019 the Commons Digital, Culture, Media and Sport select committee established a sub-committee to continue its inquiry into disinformation and data privacy in the digital age. Michela Palese considers the motivations underlying the establishment of this sub-committee, its stated priorities, and how it can help confront the challenges and threats to our democratic processes arising from online campaigning.

Last month the Digital, Culture, Media and Sport (DCMS) select committee launched a new Sub-Committee on Disinformation. Its task is to become ‘Parliament’s institutional home’ for matters concerning disinformation and data privacy: a focal point that will bring together those seeking to scrutinise and examine threats to democracy.

The new sub-committee promises to offer an ongoing channel through which to gather evidence on disinformation and online political campaigning, and to highlight the urgent need for government, parliament, tech companies and others to take action so as to protect the integrity of our political system from online threats.

Damian Collins, chair of the DCMS committee, explained that the sub-committee was created because of:

‘concerns about the spread of disinformation and the pivotal role that social media plays. Disinformation is a growing issue for democracy and society, and robust public policy responses are needed to tackle it at source, as well as through the channels through which it is shared. We need to look principally at the responsibilities of big technology companies to act more effectively against the dissemination of disinformation, to provide more tools for their users to help them identify untrustworthy sources of information, and to provide greater transparency about who is promoting that content.’

The sub-committee follows up on the significant work conducted as part of the DCMS committee’s long-running inquiry into Disinformation and ‘Fake News’, whose final report was published in February 2019.

This inquiry ran for 18 months, held 23 oral evidence sessions, and took evidence from 73 witnesses: its final report contained a series of important conclusions and recommendations.

Among these, the report called on the government to look at how UK law should define ‘digital campaigning’ and ‘online political advertising’, and to acknowledge the role and influence of unpaid campaigns and Facebook groups both outside and during regulated campaign periods. It also advocated the creation of a code of practice around the political use of personal data, which would offer transparency about how people’s data are being collected and used, and about what messages users are being targeted with and by whom. It would also mean that political parties would have to take greater responsibility with regard to the use of personal data for political purposes, and ensure compliance with data protection and user consent legislation.