The draft Online Safety Bill: abandoning democracy to disinformation

The draft Online Safety Bill published in May is the first significant attempt to safeguard the public from online harms through legislation. However, as Alex Walker explains, the government’s current proposals are a missed opportunity to address online harms to democracy and could even make tackling disinformation more difficult.

In May, the government published its draft Online Safety Bill, which is currently undergoing pre-legislative scrutiny by a committee of both Houses. It is also the subject of an inquiry by the Digital, Culture, Media and Sport (DCMS) Sub-committee on Online Harms and Disinformation. Published two years after the Online Harms white paper, the draft bill represents the first major attempt in this country to regulate the online environment and the major companies that dominate it. Given the significance of the bill, the parliamentary attention it is currently receiving is welcome. Nevertheless, as much of the evidence given to parliament points out, the draft bill has significant weaknesses. In September, Constitution Unit Deputy Director Alan Renwick and I submitted evidence to the DCMS Sub-committee inquiry. We highlighted the draft bill’s failure to address online harms to democracy. There is a danger that in its present form the bill will make it more difficult to tackle disinformation that damages and undermines democracy.

Abandoning the field: from the Online Harms white paper to the draft Online Safety Bill

As previously documented, measures to strengthen democracy in the face of the new challenges posed by digital technology have been dropped from the proposals in the course of the online safety regime’s development. The Online Harms white paper, published in April 2019, was explicit that various types of online activity could harm democracy. It referenced concerted disinformation campaigns, deepfakes, and micro-targeting. The white paper set out a number of actions that were expected to feature in the regulator’s Code of Practice. They included: using fact-checking services, especially during election campaigns; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity in order to spread disinformation.

In many areas, the white paper’s position chimed with the findings of a major inquiry into disinformation conducted by the DCMS select committee over the previous eighteen months.

But the publication of the draft Online Safety Bill in May confirmed that the government has opted for a much more limited approach. Only disinformation that could have a significant adverse physical or psychological impact on an individual is now in scope. In choosing this approach, the government ignored the recommendations of the House of Lords Democracy and Digital Technologies Committee, which proposed that certain service providers should have a duty of care towards democracy.

The emphasis has shifted decisively away from acknowledging that online platforms bear responsibility for the impact their technology has on democracy, towards leaving political content entirely unregulated, regardless of the broader democratic consequences.

Omissions from the draft bill

The draft bill contains no provisions directed at harmful political disinformation. Addressing such content would not necessarily mean removing or banning it, which could impinge on freedom of expression. There is also little evidence that an outright ban on misinformation would work. Indeed, in some situations it could make matters worse, for example by drawing attention to the banned content. But limiting the visibility of disputed content could be effective. The Democracy and Digital Technologies Committee proposed that companies be required to adjust their algorithms so that disinformation is not promoted to a wider audience. Despite being mentioned in the Online Harms white paper, this possibility has subsequently been ignored.
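To make concrete the distinction between removal and reduced visibility, the sketch below shows one way a feed-ranking algorithm might damp, rather than delete, content flagged as disputed. It is a purely hypothetical illustration: the data structure, the penalty value and the flagging mechanism are all assumptions, not a description of any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement_score: float  # the platform's usual ranking signal
    disputed: bool           # hypothetically flagged by fact-checkers

# Illustrative down-weighting factor -- an assumption, not a real
# platform parameter. Disputed posts stay visible but are not amplified.
DISPUTED_PENALTY = 0.1

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by engagement, damping disputed content instead of
    removing it -- the 'limit visibility' approach discussed above."""
    def score(post: Post) -> float:
        return post.engagement_score * (DISPUTED_PENALTY if post.disputed else 1.0)
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Verified election results announced", 40.0, disputed=False),
        Post("Viral claim that the vote was rigged", 95.0, disputed=True),
        Post("Local turnout reaches record high", 30.0, disputed=False),
    ]
    for post in rank_feed(feed):
        print(post.text)
```

On this approach the disputed post still appears in the feed – nothing is censored – but it no longer outranks accurate content purely on the strength of its engagement.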

Limiting the reach of harmful political disinformation would help protect democracy. But to make democracy work effectively, more positive interventions are needed. A requirement for certain platforms to promote authoritative news sources would represent one such positive step. This is another measure that has been dropped since it was included in the 2019 white paper. The fact-checking organisation Full Fact observed in its submission to the DCMS Sub-committee that news is a required part of broadcast output; it is recognised as necessary for a healthy society. The largest online companies – which are now the primary source of information for many – should have equivalent obligations.

The establishment of a democratic information hub would be a further positive step in this direction. The idea of a democratic information hub was first detailed in the Constitution Unit’s 2019 report Doing Democracy Better and was subsequently endorsed by the Democracy and Digital Technologies Committee. This could provide a coordinated home for the accurate, balanced, relevant and accessible information that is essential for a well-functioning democracy. To work best, it would require establishing a new public body to set up and run the hub independently of government. The Online Safety Bill presents an opportunity to legislate for such a body. But, as it stands, there is nothing of this nature in the draft bill.

Additions to the draft bill: clause 13

The draft bill contains several new elements that were not included in the initial white paper. Whilst the intention behind these clauses – to protect freedom of expression online – is laudable in the abstract, there is a risk that they could have unintended negative consequences.

Clause 13 sets out a duty for certain service providers to protect content of ‘democratic importance’. This is currently defined in the draft bill as content which ‘is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom’. This definition, like many others in the draft legislation, is extremely broad. Almost any online content could conceivably be claimed to belong in this category. Furthermore, the draft bill requires companies to take into consideration ‘the importance of the free expression of content of democratic importance’ when deciding whether to take action against harmful content or users. Given the breadth of the present definition, this could offer protection to a wide range of harmful content that companies would otherwise be expected to address.

Additions to the draft bill: clause 14

Further protections are included for journalistic content (clause 14). As several observers have pointed out, it is not clear why journalists should enjoy special freedom of expression protections that ordinary users do not. Moreover, it is easy to see how this provision could be exploited. It would be relatively straightforward for those spreading misinformation and other types of harmful content to categorise themselves as news publishers.

Taken together, these provisions represent an own goal for the government. They will make it harder to tackle the disinformation that the government itself acknowledges can cause major social and democratic problems. The government’s RESIST: Counter-disinformation toolkit says that ‘when the information environment is deliberately confused this can: threaten public safety; fracture community cohesion; reduce trust in institutions and the media; undermine public acceptance of science’s role in informing policy development and implementation; damage our economic prosperity and our global influence; and undermine the integrity of government, the constitution and our democratic processes.’

The government has set up a Counter Disinformation Unit to tackle disinformation, which engages with some of the large internet companies on the issue. But the draft Online Safety Bill could be more of a hindrance than a help in these efforts. By limiting the disinformation in scope to that which causes individual psychological or physical harm, it leaves out many of the types of disinformation that lead to the broader social and democratic problems outlined in the toolkit. Ministers may, for example, find it harder in future to work with social media companies to deal with disinformation that undermines democracy. The government may want to counter the spread of dangerous conspiracy theories, but more often than not these have a political dimension. The legislation as drafted could encourage platforms to take a less proactive approach to this kind of content, in case it is classified as a contribution to political debate.

The recent testimony of whistleblower Frances Haugen indicates that Facebook is already reluctant to address the social and democratic harms caused by its platforms. In certain areas, the draft Online Safety Bill could make it even less likely to do so.

The consequences of failing to address online political disinformation

Haugen’s revelations make plain the tangible dangers of failing to include political disinformation in the legislation. Her evidence included the claim that after the 2020 US presidential election Facebook relaxed many of the controls it had put in place during the campaign. Research by the BBC has shown that conspiracy theories alleging that the election result was fraudulent proliferated on Facebook after the vote. The real-world dangers of this were vividly demonstrated by the attack on the US Capitol on 6 January 2021.

The government’s current approach runs the risk of something similar happening here. It is crucial that social media companies do not end up censoring legitimate online political debate. But the draft legislation fails to strike a suitable balance between the right to freedom of expression and responsibilities towards democracy. The government has adopted a ‘free market of ideas’ approach, which posits that all political content should be protected regardless of its veracity or potential to cause harm. The justification for the ‘free market of ideas’ is that unrestricted debate reveals the truth. However, there is no evidence that this is the case online. In fact, as we have seen, it often leads to the spread of corrosive political conspiracy theories which, when left unchecked, can have serious consequences.

Delegated powers: clause 33

The draft bill ignores the challenge disinformation poses to UK democracy and, in several ways, may actually make the situation worse. In addition, there are concerns about the broad delegated powers it contains. In particular, clause 33 would allow the Secretary of State to direct Ofcom to modify the code of practice to ensure that it ‘reflects government policy’. This would open the regulator up to political control in this area and undermine parliament’s proper role in establishing and scrutinising a detailed regulatory framework in primary legislation.

For a more detailed discussion of the issues raised in this post, read the evidence submitted by Alex and Unit Deputy Director Alan Renwick to the DCMS Sub-committee on Online Harms and Disinformation.

About the author

Alex Walker is Communications Manager and Researcher at the Constitution Society.