The White Paper
The Online Harms White Paper, published in April 2019, had a broad scope, catching any company whose services allowed users to “share or discover user-generated content or interact with each other online”.
Acknowledging that companies of all sizes may be caught by the regime, the paper expressly noted that file hosting sites, public discussion forums, search engines and online games with social features were caught by the proposal. Other companies not expressly identified in the White Paper faced uncertainty as to whether they would fall within the regime, for example, those involved in providing content sharing services indirectly, and digital journalism providers whose platforms had online comments sections. As a result, further clarifications on the regime have been eagerly awaited.
Which, to some extent at least, we now have. The UK government published its final response to the proposals last month. The report fleshed out details of the regime (which will be given effect through the upcoming Online Safety Bill) and provided important guidance on the companies that will now be in scope and those which will be exempt.
Scope of the regime – which companies must comply?
The final response confirmed that companies will fall under the regime if they:
- host user-generated content which can be accessed by users in the UK; and/or
- facilitate public or private online interaction between service users, one or more of whom is in the UK.
Looking more closely at the above, one point to note is what appears to be broad territorial scope. Companies providing services that can be accessed by UK users are caught; it doesn’t matter whether they have a physical presence in the UK, or whether they are even targeting UK users specifically – if a user in the UK can access their services, they are caught. The government’s intention therefore appears to be to regulate the behaviour of companies across the world. It is anticipated that companies concerned about being caught by the legislation may need to implement geographical restrictions to ensure their services are not accessible within the UK.
The ambitious scope may also lead to enforcement issues. The proposed regulator, Ofcom, is to be given significant powers of enforcement (including the ability to issue fines of up to 10% of a company’s annual global turnover). The response also leaves open the possibility for criminal sanctions to be introduced against senior management who fail to respond to requests from the regulator. Whilst all penalties for non-compliance should be implemented equally against companies regardless of their location, where such services have no physical presence in the UK, it could be challenging for Ofcom to enforce such measures. Ultimately, successful implementation of enforcement measures will require the engagement and support of other countries. It is anticipated that Ofcom will work with equivalent organisations internationally to help foster collaboration. The response also notes that international cooperation may improve as other countries follow suit and introduce equivalent legislation, but presumably this will require the regimes to be of similar scope.
Companies confirmed to be within the scope of the regime include social media sites, consumer cloud storage sites, video sharing platforms, online forums, dating services, online instant messaging services, peer-to-peer services, video games allowing interaction with other users, and online marketplaces. Search engines will also be included within the framework despite not directly hosting user-generated content or facilitating interaction between users. The government explained this inclusion by pointing to evidence of substantial harm occurring via search engines, such as enabling access to abusive content online.
The White Paper did not clearly define the extent to which platforms hosting private communications may be caught by the regime, but we now know they are in scope. The government has confirmed that the framework will apply to “public communication channels, and services where users expect a greater degree of privacy, such as instant messaging services and closed social media groups”. Noting the importance of user privacy, Ofcom will provide guidance on what measures are likely to be appropriate in the context of private communications. However, where no alternative measures are suitable, Ofcom will have the power to require companies to identify child sexual exploitation content on private channels using automated technology.
This inclusion of private communications is controversial to some, who consider that online user privacy protections play an important part in enforcing fundamental human rights online, and believe that monitoring private channels is a significant encroachment on such rights. On the other hand, many online safety groups believe that private communications pose a significant risk of harm which should be mitigated. The government’s response addresses this by noting that in reducing the prevalence of abuse online it seeks to support another fundamental right and “enable more people to exercise their right to freedom of expression online, without fear of abuse or discrimination”. We will have to wait to see further guidance from Ofcom to understand how it proposes to balance users’ rights with the potential risks of harm in practice.
Finally, the response has confirmed that businesses which support online platforms but do not have direct control over the content or activity on a service (such as ISPs, VPNs, browsers, web-hosting companies, content delivery service providers, app stores and security software) are not caught by the regime. This approach differs slightly from the recently announced EU Digital Services Act, which has adopted the Information Society Service definition from the E-Commerce Directive to ensure a wider range of companies, including those who merely provide access to networks, or only provide caching services, are caught by its most basic level of obligations in relation to content. These supporting services will, however, need to cooperate with Ofcom in implementing “business disruption enforcement” measures, such as blocking access to a non-compliant service.
Despite the wide-reaching impact of the legislation, the government anticipates that less than 3% of UK businesses will fall within the regime, in part due to new exemptions which have carved certain services out of the framework.
The government has confirmed that journalistic content published by news services and broadcasters on their own sites is not user-generated content and will be out of scope. The response has also introduced a “low risk functionality” exemption which excludes user comments posted on online platforms, where they relate to content directly published by a service. As a result, comments on articles within news platforms will be exempt from the regime too. It is still unclear whether journalistic content posted outside of the service platform, for example on company social media pages, will be covered by the legislation. The response indicates that protections will be implemented to ensure media freedom but does not go so far as to describe what that might entail.
Platforms also likely to be out of scope as a result of the new low risk exemption include online retail sites providing products or services, where the only relevant content on the site is customer reviews.
Other companies now confirmed to be exempt from the regime include business-to-business services; online services managed by education institutions subject to sufficient safeguarding duties; and email and telephony services.
Advertising largely falls outside the scope of the regime. However, organic adverts and influencer marketing may fall within the regime as they are often indistinguishable from other forms of user-generated content hosted by a service provider.
Importantly, the government has taken steps to tackle the differing degrees of harm posed by service providers. The framework will take a tiered approach, with companies in scope being divided into categories:
- The majority of services in scope will fall within “Category 2”. Such companies will be required to take action to monitor and remove illegal content and activity; and provide additional protections for children accessing their services.
- A smaller group of services posing the highest risk of harm will form “Category 1” and will be subject to the most stringent requirements, being required to also tackle content which is legal but harmful to adults. The list of Category 1 companies is yet to be confirmed and will be decided by Ofcom with reference to certain risk thresholds, specifically the size of a service’s audience and the extent to which harmful content could be disseminated on the platform. Whilst we await the definitive list, the government has indicated that most of the large social media platforms will fall within this category.
This response has provided much-needed clarity for companies affected by the regime, but far more detail is required to fully understand the impact of the proposed legislation. We must now wait to see the draft Online Safety Bill, which is expected to be published later this year, to understand the proposals in greater detail. As the Bill is intended to go through significant pre-legislative scrutiny, there should be opportunity for those most affected to engage with the legislative process early on and perhaps even shape the final content of the Bill.
For more information on the government’s response and to read other articles in this series, please see our Online Harms page here.