The UK Government today published its initial response to the Online Harms consultation, in the first substantive update since the publication of the White Paper in April 2019. The consultation response is described as an indication of the “direction of travel”, and it is clear that we remain at an early stage in the policy development process. Certain things have been clarified, whilst much remains up for grabs. The Government’s full response to the consultation is expected in the Spring.
Ofcom to be the regulator for online harms
DCMS and the Home Office announced in a press release that they are minded to appoint Ofcom to oversee the new regime, adding to the regulator’s current communications and broadcasting remit. The regulator would be responsible for drawing up the new codes of practice and enforcing compliance with the statutory duty to protect users from harm. Ofcom would focus on overseeing companies’ wider systems and processes, including provision of effective complaints processes – it would not adjudicate on individual complaints, or require the removal of specific content.
The powers available to Ofcom to enforce the new duty of care are still up for debate. There were strong responses to the more draconian measures being considered, namely directors’ liability and business disruption measures such as ISP blocking. Concerns raised included the need to comply with human rights legislation in relation to freedom of expression, and the practical and technical issues that ISP blocking in particular presents. A decision on whether these measures would be available to the regulator has been delayed until the Spring.
Regulation to differentiate between illegal and harmful content
In a substantial development from the proposals as outlined in the White Paper, the Government now intends to differentiate clearly between the expectations applicable to illegal content, as opposed to content that is not illegal but “has the potential to cause harm”. In-scope services would be required to remove illegal content expeditiously and to ensure that “the risk of it appearing is minimised by effective systems”, particularly where terrorist or child sexual exploitation (CSE) content is concerned. Stakeholders will likely be questioning whether such ‘systems’ may necessitate active monitoring of content or even pre-filtering.
It is not clear how this distinction will interrelate with the clear focus on child safety online, and the responsibility to protect children from accessing harmful or age-inappropriate content, regardless of whether that content is legal. Perhaps tellingly, the Government states that “We will not prevent adults from accessing or posting legal content” (our emphasis).
Plans have also changed in relation to the provision of codes of practice. Whereas the White Paper envisaged a separate code applicable to each distinct harm, both legal and illegal, the response states that it is not the intention to have a code of practice for each category of harmful content. Codes of practice would be produced in relation to terrorist and CSE content, but other codes may focus more on the expectations around systems and processes, rather than specific harms.
Further clarity on what services would be in scope?
The response provides some limited further information on which businesses are likely to be subject to the new regulation. The response states that, to be caught, a business must operate its own website with the functionality to enable sharing of user-generated content or user interactions. It would not, for example, catch businesses using social media for promotion purposes. In response to concerns from respondents, it’s also been clarified that business to business services, such as companies providing infrastructure for content storage, would not be in scope. Changes to the wording used to describe the scope further indicate that it has narrowed. The White Paper referred to “companies that provide services or tools that allow, enable or facilitate users to share or discover user-generated content, or interact with each other online”. The reference to ‘tools’ has been dropped in the response, suggesting that those at a further remove from the actions and experience of users are less likely to be in scope. ‘Discover’ has also disappeared, indicating that the new regime would not apply to search engine providers.
The scope remains very broad, however, potentially applying to everyone from global social media giants to small traders who allow customer reviews to be posted on their website. The response says that the regulator would provide guidance to assist businesses in assessing whether their services fall into scope, but it’s unclear to what extent that scope would be clearly defined in legislation, or whether it would be largely down to the regulator to interpret. The response expressly states that work is continuing on finalising the details of which organisations will be in scope, and that an impact assessment on the burdens on businesses would accompany any legislation.
Certain issues remain contentious and undecided
The consultation responses show that certain aspects of the proposal remain particularly contentious, with widely differing views received. In relation to whether private communications should be in scope, some argued that all messaging channels, including encrypted services, should be in scope as this is necessary in order to protect children. At the other end of the scale, others argued that any monitoring or oversight of private channels would be an unacceptable invasion of privacy, and that harm could be effectively dealt with e.g. by the provision of a mechanism to block other users. Overall, the majority of respondents were opposed to private messaging being in scope. However, it is unclear which side the Government will come down on, or indeed how ‘private’ communications should be defined. The response seems to leave the question open.
Other areas that proved contentious include how the regulator is to be funded – with many responses expressing concern over ensuring that a tax or charge was applied proportionately and fairly. There is also disagreement about the appropriate level of parliamentary oversight – with some calling for codes of practice to be agreed by parliament, whilst others argued that any parliamentary involvement would compromise the regulator’s independence. It’s not yet clear what the direction of travel is with respect to either of these issues.
Scope of harms still unclear
This new response does not provide any further clarity on three issues that particularly raised questions when the White Paper was published. Firstly, there is no further detail on how ‘harm’ would be defined, if at all. It is not clear whether any attempt will be made to define this in primary legislation, or if it would be left to the codes of practice to place some firmer limits on an undeniably subjective term. Secondly, it is not clear if the White Paper’s list of harms to be excluded from scope continues to apply, or whether any others would be added (i.e. harm to businesses or organisations, harm resulting from data protection or cyber security breaches, and harms suffered on the dark web). Thirdly, whilst the former Culture Secretary confirmed that the new regime is not intended to apply to news organisations, there is no more information on how this would be achieved. At present, a news website that allows comments on its articles would seem to be in scope. Defining what constitutes a news website could also prove contentious. The response refers to safeguards in the legislation, but provides no further detail as to what these might be.
Interim codes to be published
Interim codes of practice in relation to Child Sexual Abuse and Exploitation and terrorist activity are expected to be published in the next few months. These are intended to “bridge the gap” until the new regulator is up and running.
You can see all MediaWrites coverage of the Online Harms White Paper here.