The draft Online Safety Bill: abandoning democracy to disinformation

The draft Online Safety Bill published in May is the first significant attempt to safeguard the public from online harms through legislation. However, as Alex Walker explains, the government’s current proposals are a missed opportunity to address online harms to democracy and could even make tackling disinformation more difficult.

In May, the government published its draft Online Safety Bill, which is currently undergoing pre-legislative scrutiny by a committee of both Houses. It is also the subject of an inquiry by the Digital, Culture, Media and Sport (DCMS) Sub-committee on Online Harms and Disinformation. Published two years after the Online Harms white paper, the draft bill represents the first major attempt in this country to regulate the online environment and the major companies that dominate it. Given the significance of the bill, the parliamentary attention it is currently receiving is welcome. Nevertheless, as much of the evidence given to parliament points out, the draft bill has significant weaknesses. In September, Constitution Unit Deputy Director Alan Renwick and I submitted evidence to the DCMS Sub-committee inquiry. We highlighted the draft bill’s failure to address online harms to democracy. There is a danger that in its present form the bill will make it more difficult to tackle disinformation that damages and undermines democracy.

Abandoning the field: from the Online Harms white paper to the draft Online Safety Bill

As previously documented, in the course of developing the online safety regime, measures to strengthen democracy in the face of new challenges posed by digital technology have been dropped from the proposals. The Online Harms white paper, published in April 2019, was explicit that various types of online activity could harm democracy. It referenced concerted disinformation campaigns, deepfakes, and micro-targeting. The white paper set out a number of actions that it was expected would be in the regulator’s Code of Practice. They included: using fact-checking services, especially during election campaigns; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity in order to spread disinformation.

In many areas, the white paper’s position chimed with the findings of a major inquiry into disinformation conducted by the DCMS select committee over the previous eighteen months.

But the publication of the draft Online Safety Bill in May confirmed that the government has opted for a much more limited approach. Only disinformation that could have a significant adverse physical or psychological impact on an individual is now in scope. In choosing this approach, the government ignored the recommendations of the House of Lords Democracy and Digital Technologies Committee, which proposed that certain service providers should have a duty of care towards democracy.

The emphasis has shifted decisively away from acknowledging that online platforms have a responsibility for the impact their technology has on democracy, towards a completely unregulated approach to political content, regardless of the broader democratic consequences.


Responding to the coronavirus ‘infodemic’: some lessons in tackling misinformation

The proliferation of false, misleading and harmful information about the coronavirus has been described as an ‘infodemic’ by the World Health Organisation. Government, social media companies, and others have taken concerted action against it. Michela Palese and Alan Renwick here examine these responses and consider potential lessons for tackling online misinformation more broadly.

COVID-19 is rightly dominating the international agenda. Besides the crucial health, economic, and social dimensions, considerable attention is being paid to the information on COVID-19 that is circulating online. 

Ever since the virus emerged, false, misleading and/or harmful information has spread, especially online. Newsguard, which ranks websites by trustworthiness, found that, in the 90 days to 3 March, 75 US websites publishing coronavirus misinformation received ‘more than 142 times the engagement of the two major public health institutions providing information about the outbreak’. Ofcom found that ‘[a]lmost half of UK online adults came across false or misleading information about the coronavirus’ in the last week of March. The World Health Organisation (WHO) described the misinformation as an ‘infodemic – an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.’

The capacity of social media and 24/7 news to proliferate misinformation was already manifest. But this is the first time the potentially nefarious effects of an unregulated online space have combined with a global pandemic. As Conservative MP Damian Collins put it, this is the ‘first major public health crisis of the social media age’.

Governments and tech companies across the globe are responding. In this post, we highlight key steps and consider lessons for dealing with misinformation in general.

How the new Sub-Committee on Disinformation can help strengthen democracy in the digital age

In April 2019 the Commons Digital, Culture, Media and Sport select committee established a sub-committee to continue its inquiry into disinformation and data privacy in the digital age. Michela Palese considers the motivations underlying the establishment of this sub-committee, its stated priorities, and how it can help confront the challenges and threats to our democratic processes arising from online campaigning.

Last month the Digital, Culture, Media and Sport (DCMS) select committee launched a new Sub-Committee on Disinformation. Its task is to become ‘Parliament’s institutional home’ for matters concerning disinformation and data privacy: a focal point that will bring together those seeking to scrutinise and examine threats to democracy.

The new sub-committee promises to offer an ongoing channel through which to gather evidence on disinformation and online political campaigning, and to highlight the urgent need for government, parliament, tech companies and others to take action so as to protect the integrity of our political system from online threats.

Damian Collins, chair of the DCMS committee, explained that the sub-committee was created because of:

‘concerns about the spread of disinformation and the pivotal role that social media plays. Disinformation is a growing issue for democracy and society, and robust public policy responses are needed to tackle it at source, as well as through the channels through which it is shared. We need to look principally at the responsibilities of big technology companies to act more effectively against the dissemination of disinformation, to provide more tools for their users to help them identify untrustworthy sources of information, and to provide greater transparency about who is promoting that content.’

The sub-committee follows up on the significant work conducted as part of the DCMS committee’s long-running inquiry into Disinformation and ‘Fake News’, whose final report was published in February 2019.

This inquiry ran for 18 months, held 23 oral evidence sessions, and took evidence from 73 witnesses: its final report contained a series of important conclusions and recommendations.

Among these, the report called on the government to look at how UK law should define ‘digital campaigning’ and ‘online political advertising’, and to acknowledge the role and influence of unpaid campaigns and Facebook groups both outside and during regulated campaign periods. It also advocated the creation of a code of practice around the political use of personal data, which would offer transparency about how people’s data are being collected and used, and about what messages users are being targeted with and by whom. It would also mean that political parties would have to take greater responsibility with regard to the use of personal data for political purposes, and ensure compliance with data protection and user consent legislation.