‘Legal but harmful’ in the Online Safety Bill – gone but not forgotten?

In the second article in our new Online Safety Bill series, we consider changes to the duties regarding legal but harmful content and how these might affect service providers.


As the United Kingdom’s Online Safety Bill staggers into the House of Lords (the Upper Legislative Chamber) for the last stage of its legislative journey, it is worth taking stock of what has changed so far. Further changes are of course likely, but despite the Bill’s heavily delayed progress, and despite both its novelty and its complexity, its fundamentals remain intact and are unlikely to change. The Bill still imposes duties of care on providers of ‘user-to-user’ services and search services to assess the risks of content on those services being illegal or harmful to children, and then to use proportionate measures, systems and processes to manage and mitigate those risks, taking into account counterbalancing duties to preserve freedom of speech and privacy. Larger providers will have to comply with additional duties.

Perhaps the most extensively advertised change to the scope of the Bill, and the one that has resulted from the loudest political controversy, has been the removal of duties on larger platforms to risk assess, and to determine and explain their treatment of, content deemed to be harmful to adults but not unlawful. Instead, those providers must enforce their terms of service regarding the ‘legal but harmful’ content they decide to prohibit or restrict access to, and must make tools available to their users to reduce the likelihood that those users see material they do not want to see, or wish to be alerted to before they see it (Clause 12 of the latest version of the Bill). The material in question is that which encourages or promotes suicide, self-harm or eating disorders, which is abusive and targets protected characteristics (namely race, religion, sex, sexual orientation, disability or gender reassignment), or which incites hatred based on those protected characteristics.

So should the providers affected by this change be relieved? Perhaps not entirely.

The changed Bill certainly asks less of them. First, the previous version of the Bill effectively required larger providers to set out the treatment they intended to give (including but not limited to take-down) to each kind of ‘priority content’ that may be harmful to adults. The new version is not as specific, and the regulator has no power to find terms and conditions inadequate or to require them to be changed. The expectation is simply that whatever your terms and conditions say you will do, you will do.

Second, larger providers will not now have to publish a risk assessment relating to content that may be harmful to adults. However, they will almost certainly still have to assess the risk of such content being present on their platform, and be able to identify and isolate it. How else will they be able to offer users the required functionality to insulate themselves from, or at least be warned about, that content? With amendments on this point likely to be proposed in the House of Lords, it is not yet clear whether the default position will be that users must opt in to protection from some or all of the specified types of harmful content (otherwise they will be shown it), or instead that users will be protected from such content unless they actively opt to see it. If the latter, providers will need to assume that they will be expected to filter out a large amount of ‘legal but harmful’ content for a likely majority of their users.

So in headline terms, ‘legal but harmful’ material is no longer part of the Online Safety Bill, but in reality larger providers in scope of the Bill will still have to worry about it.
