The publication of a draft Online Safety Bill has enabled two parliamentary committees to engage in detailed pre-legislative scrutiny. The conclusions of a special joint committee were discussed in earlier posts by its Chair, Damian Collins, and Alex Walker. Here, Alex analyses the findings of the second report on the draft bill, authored by the DCMS Committee, and examines the points of contention between the two reports.
Parliament has been giving close attention to the landmark Online Safety Bill since it was published in draft in May 2021. In December, the joint committee set up to consider the draft bill published its report. I considered its recommendations in the first part of this two-part series on the scrutiny of the draft bill. The Digital, Culture, Media and Sport (DCMS) Select Committee has since published its take on the draft legislation. As the DCMS committee commented, it is welcome that the bill was published in draft, and is receiving such comprehensive pre-legislative scrutiny. Whilst the government is of course not required to accept the recommendations of the committees, failing to address gaps they have both identified would not be a constructive response to the pre-legislative process.
One such gap (highlighted previously on this blog) is that of online harms to democracy. Whilst they diverge on a number of points, the DCMS committee and the joint committee share the analysis that this is a serious issue which the bill should address. In this piece, I consider the DCMS committee’s proposals to address online threats to democracy and look at how they differ from those of the joint committee. Both approaches to improving this aspect of the bill are worthy of careful consideration and the government should not use the points of difference as a way to avoid taking action.
Content that undermines democracy should be in scope
Constitution Unit Deputy Director Alan Renwick and I argued in written evidence to the DCMS committee that online harms to democracy should be addressed in the legislation. The committee agreed. The government’s own 2019 Online Harms white paper detailed the dangers that online activity such as the viral spread of disinformation could pose to democracy. But the measures the white paper set out to address this issue were later abandoned, leaving the draft bill with a considerable blind spot. Both the DCMS committee and the joint committee concluded that leaving this gap unfilled would be a mistake. However, the two committees recommended different changes to the legislation.
Reframing the definition of harm versus replacing it with more specific categories
The draft bill contains a provision requiring that high reach user-to-user services (such as the big social media companies) address content that is harmful to adults in their terms and conditions. This is defined as content that presents a risk of having a significant adverse physical or psychological impact on an individual.
On the one hand, this approach has been criticised for being vague and leaving too much to the discretion of technology companies, who might limit freedom of expression by overzealously taking down content. On the other hand, its focus on the individual excludes wider harms of a serious nature from its scope, such as the potential for disinformation to undermine national security and democracy.
To deal with this problem, the DCMS committee said that the definition of harmful content should be reframed. Infringements on freedom of expression must be necessary and proportionate. To ensure that this is the case, the committee recommended that ‘the need to consider context, the position and intentionality of the speaker, the susceptibility of the audience and the content’s accuracy’ be added to the definition of harm. A revised definition would create a more detailed reference point for companies when considering what kinds of content should be considered harmful. It could also bring within scope wider harms that the draft bill’s current definition excludes.
The committee added that the definition should ‘explicitly account for any intention of electoral interference and voter suppression’. It argued that refining the definition along these lines would ‘provide meaningful ways to proportionately mitigate the impacts of harms to democracy.’
The Joint Committee on the Draft Online Safety Bill took a different approach: it recommended replacing the general definition with a list of categories of harmful content and activity. These categories would reflect existing areas of law where it has been recognised that it is legitimate to limit freedom of speech. The joint committee proposed that a measure addressing disinformation that aims to disrupt elections be included on the face of either the Online Safety Bill or the Elections Bill.
Comparing the different approaches
The DCMS committee was critical of the joint committee’s approach, saying that the replacement provisions ‘simply [rebrand] the current duties’. This is somewhat unfair. If implemented, the joint committee’s recommendations would include new offences and provide more certainty in terms of what categories of harmful content companies should be focusing on.
In fact, the intention of both committees is broadly the same on this issue – to be more specific about what harmful content is in scope, while making sure that kinds of content that are clearly an issue, but might not at present be included or illegal, are encompassed. Both are clear that the legislation should cover disinformation aimed at manipulating and undermining elections.
This point of agreement should not be ignored. The committees differed, however, on how best to achieve it.
As I commented in my post on the joint committee’s report, its proposal is in some respects unclear. It left ambiguous whether a general provision addressing disinformation that aims to disrupt elections should be included in the Online Safety Bill or the Elections Bill. Unfortunately, this could mean that the measure falls between the cracks.
By contrast, the DCMS committee’s recommendation might prove more achievable in the immediate term. Modifying the definition in the draft bill to account for content intended to manipulate or distort an election works with the grain of the legislation as currently drafted.
This said, the government should consider both proposals carefully. It should make a decision informed by which approach would be most effective at minimising the harm to democratic processes arising from disinformation, whilst ensuring that online speech is not unduly interfered with.
Finally, both committees focused specifically on disinformation aimed at interfering with elections. Their desire to be precise is understandable. Nevertheless, it is worth highlighting that this focus on electoral events would leave much disinformation that is harmful to democracy outside of the scope of the provisions.
Safety by design and preventative measures
The draft bill’s apparent focus on taking down content has been widely criticised. Whilst takedowns may be appropriate in certain instances, the joint committee argued that the legislation should also include a requirement that companies modify the design of their services in order to minimise wider harms on their platforms, such as those caused by the amplification of disinformation.
The DCMS committee agreed. It proposed that the bill include an illustrative list of ‘preventative and remedial measures’ such as ‘tagging or labelling, covering, redacting, [and] deprioritising’.
The report also referenced Alan Renwick’s suggestion in oral evidence that companies should be required to promote ‘accurate, accessible information’ on their platforms. On this front, it would have been good to see the idea of a ‘democratic information hub’ taken up by the committee, as set out in the Unit’s Doing Democracy Better report and recommended in our written evidence.
Criticism of the proposal for a permanent joint committee
In many respects, the committees shared similar criticisms of the draft bill. But as we have seen, in some instances (including online harms to democracy) they put forward different proposals for addressing its defects. On other issues, however, they differed more substantially; for example, on how digital regulation should be scrutinised by parliament. The joint committee proposed a permanent committee of both Houses to scrutinise Ofcom’s work in this area and track developments across the landscape.
The DCMS committee stated plainly in its report (and in letters to the Leader of the House and the Secretary of State) that such a committee should not be established. It argued that this would ‘[duplicate] our existing constitutional role’ and that the doubling up of committee scrutiny work would exert unhelpful competing political pressures on Ofcom. The DCMS committee rejected the notion that digital regulation is not covered by the existing committee landscape – arguing instead that ‘it is inherent to the digital aspect of our remit’.
It is valuable that the draft bill has received pre-legislative scrutiny from multiple committees, but it is also important that there is clarity regarding which committee is primarily responsible for parliamentary scrutiny of digital regulation going forward. Having two committees performing similar functions is likely to confuse matters for the regulator. Nevertheless, this is a complex and emerging policy area which could no doubt benefit from focused scrutiny. If an ongoing joint committee is established, the committees should aim to work constructively together and not duplicate each other’s functions.
It is of course important that a suitable and adequate framework of parliamentary scrutiny is in place. However, the disagreement between the committees on this issue – however strongly felt – shouldn’t distract from their common calls for the government to improve the bill, such as by including measures to respond to disinformation that undermines democracy.
Government announcement of new measures
On 4 February, the government announced that new criminal offences, including a ban on ‘sending knowingly false communications’, would be added to the bill. This had been recommended by the Law Commission and the joint committee.
It is positive that the government is making changes to the draft bill before it is introduced. But as the joint committee pointed out in its report, the knowingly-false communications offence would only capture disinformation that causes psychological or physical harm to individuals. This is why it also recommended a provision to address election-related disinformation. This was not mentioned in the press release. The government has previously stated its aversion to taking any action on political disinformation, arguing that ‘[it] is a matter for voters to decide whether they consider material to be accurate or not.’ The omission of this kind of disinformation from the recent announcement of measures to strengthen the bill indicates the government remains unconvinced.
Yet both committees concluded on the basis of the evidence they received that something should be done about online harms to democracy. As we have seen, the detail of their proposals for how the legislation should address election-related disinformation differed. Nevertheless, the government should not use this – and other points of disagreement such as on the establishment of a permanent joint committee – as an excuse for not taking action. Rather, the proposals of both committees should be carefully assessed to determine which approach would be most effective.
This is the second in a two-part series of posts by Alex on the draft Online Safety Bill. The first post was published in January.
About the author
Alex Walker is Communications Manager and Researcher at The Constitution Society.