Online Harms Series: Government response to the Online Harms Consultation – What harms are in scope?

In the first article in our series looking at the government response to the Online Harms Consultation, Theo Rees-Bidder looks at the definitions of illegal and harmful content, and at what companies that host such content will be expected to do in response.

In April 2019, the Online Harms White Paper proposed the introduction of a duty of care on companies whose services host user-generated content or facilitate online interaction between users. The duty of care would require those companies to take reasonable steps to prevent, reduce or mitigate harm occurring on their services. There was concern that the notion of harm was left broad and undefined, and it was unclear how a regulator might interpret the duty: because harm can be experienced in different degrees by different users, the question was when the duty of care would bite and require a company to act.

The government’s response does provide some clarity to those set to be affected by the online harms regime. It offers guidance on what “harmful” will mean, sets certain limits on what will be considered illegal content, and confines certain obligations to only the most influential service providers. However, concerns remain, including ambiguity over how the definition of harm might be interpreted.

What content will be illegal?

All companies caught by the regime will be required to do two things: (1) take action against illegal content and activity; and (2) assess the likelihood of children accessing their services, and if that is deemed likely, provide additional protections to children using the service.

Illegal content and activity will be that which could constitute a criminal offence in the UK, or an element of a criminal offence which meets the proposed definition of “harm”. It will not cover content which only gives rise to a risk of a civil claim (such as in defamation or negligence).

All service providers will be required to take action to remove illegal content expeditiously. The government has indicated that it will set out priority categories of offences in secondary legislation, against which companies will be required to take “particularly robust action”.  The indication is that these will be the offences posing the greatest risk of harm (based on the number of people who could be affected and the severity of the harm that could occur). Examples include terrorism and child sexual exploitation and abuse.

For priority categories, companies will need to conduct a risk assessment and consider what systems and processes will be necessary to identify, assess and address such content (including devoting more resources to content moderation and/or limiting algorithmic promotion of content). If companies fail to put in place systems and processes that adequately identify that content, they may then be required to develop a method of proactively identifying and removing such content.

At first glance, this does not appear to sit entirely comfortably with Article 15 of the e-Commerce Directive, which provides that information society services are not under a general obligation to monitor the information they transmit or store. It remains to be seen how this inter-relationship will be addressed (if at all) in the draft legislation. Interestingly, the EU has introduced a “good Samaritan” provision in the proposed Digital Services Act to protect platforms that take voluntary steps to detect and remove illegal content.

Companies will also be required to act against illegal material on their services that falls outside the priority categories, where such material is identified incidentally through their systems or is reported to them.

What content will be legal but still harmful?

Only companies with Category 1 services (which are essentially expected to be the largest tech platforms) will be required to act to remove content which is lawful but harmful to adults. However, all companies (not just those designated as Category 1) that determine there is a likelihood that children will access their services will be required to have procedures for dealing with content which is legal but harmful to children. The government has said it anticipates that age assurance and verification technologies will play an important role in fulfilling the duty of care in this regard.

The government response offers a definition of “harmful” content: content which “gives rise to a reasonably foreseeable risk of significant adverse physical or psychological impact to individuals”. Companies caught by the regime will not have to address content that does not pose a reasonably foreseeable risk of harm, or that has only a minor impact on users or other individuals. Other content specifically identified as falling outside the scope of this definition includes content causing financial harm, harm to businesses or organisations, and harm resulting from breaches of intellectual property rights, breaches of data protection or consumer protection law, cyber security breaches or hacking, and fraud.

The government has indicated that it will also set out priority categories of legal but harmful material (separately in respect of adults and children) in secondary legislation. The examples given of the types of content that might fall within this area for adults include content promoting self-harm; hate content; content encouraging or promoting eating disorders; and online abuse that does not meet the threshold of a criminal offence. The last of these will likely prove particularly controversial with those who advocate for freedom of speech online, who are sure to be critical of the government for allowing content to be tagged as harmful (and potentially removed) in the absence of any criminal offence.

One area specifically highlighted in the government’s response is disinformation and misinformation online, with reference to recent events including the Covid-19 pandemic and conspiracy theories relating to the roll-out of the UK’s 5G network. Disinformation and misinformation will not be banned completely under the new regime. However, disinformation or misinformation satisfying the definition of harm will be within the scope of the duty of care and therefore actionable.

The government believes most disinformation and misinformation will fall in the category of being legal but potentially harmful. Where this is the case, a company offering Category 1 services will need to assess that content in line with their terms and conditions, as per the requirement to enforce them consistently and transparently. Where urgent action is required to address disinformation or misinformation in emergency situations, Ofcom will have the power to intervene.  The government has also indicated a belief that the regulatory framework will further the public’s understanding of what companies are doing to tackle disinformation and misinformation, through the obligation to publish annual transparency reports.

Steps required to tackle “harmful” content

Where content is identified as being harmful but legal, there is not a requirement for it to be expeditiously removed, as there is for illegal content.  Instead, Category 1 companies will be required:

  1. To undertake regular risk assessments to identify harmful but legal content on their service (using the definition of harm), and to identify and notify the regulator of emerging categories of legal but harmful content.
  2. To set clear and accessible terms and conditions which state how they will handle the priority categories of legal but harmful material and any others they have identified in their risk assessment. These terms and conditions will need to make clear to users what is acceptable on their service and how content will be treated.
  3. To enforce their terms and conditions consistently and transparently, including having effective and accessible reporting and redress mechanisms. The Codes of Practice to be introduced by Ofcom will set out the steps companies should take to meet expectations in this regard. The government’s response also indicates that Category 1 companies will need to publish annual transparency reports explaining how they have been enforcing their terms and conditions.

This approach means that companies classed as providing Category 1 services will not be required to remove specific pieces of legal content unless that content is not permitted under their own terms and conditions. This poses a significant challenge for companies in ensuring that their terms and conditions are clear as to which content will be treated as “harmful” and what steps will be taken to deal with it; where secondary legislation does not specify the harm in question, it will be left to companies to decide where to draw the lines.

So where are we?

The response has provided some clarity as to the scope of harms that will be included in the new regime. However, key areas of concern remain. The definition of harm remains ambiguous, with uncertainty as to how exactly it will be enforced. For example, the definition of “harmful” does not indicate what level of tolerance an individual is expected to have, a relevant factor when dealing with a concept, such as psychological harm, that cannot easily be measured and will be experienced differently by each of us. Is the definition going to be based on the specific user who alleges a harm, or will there be a more objective threshold based on a reasonable user of the service? These questions are unanswered, leaving platforms unclear just how burdensome the duty of care might be in this respect.

Category 1 companies will also have justifiable concerns about the onus being placed on them to determine what is legal but harmful content and how they will deal with such content. For example, assuming a platform adopts the government’s definition of harm in its terms and conditions, does that suffice? Is it permissible for a platform’s terms and conditions to say that, where it identifies harmful but legal content, it will publicly label it as such but the content will remain on the platform (in a similar manner to how Twitter dealt with many of President Trump’s tweets regarding the recent US election)? Would this meet the requirement to abide by its own terms and conditions, so long as it enforces them transparently and consistently? Or is there an implicit obligation under the new regime that, where harmful content is identified, it should be removed, even if it is legal?

Many will feel that the government should more clearly set out the parameters for such content and how it should be dealt with, rather than leaving challenging judgment calls to private companies or a regulator. With proposed fines of up to 10% of a company’s annual global turnover, this is not a judgment that platforms want to risk getting wrong.
