The Online Harms White Paper – which companies could be affected?

The recently published Online Harms White Paper has proposed a new regulatory framework to tackle illegal and harmful online content, establishing a statutory duty of care for companies that fall within scope. Whilst many large social media platforms fit squarely within the criteria of the proposal, start-ups, SMEs and charities may also be caught, and there is currently ambiguity for many companies in this sector.


Many social media platforms have recently come under public scrutiny for harmful content, such as terrorist material, bullying and disinformation, disseminated by individuals using their services. Whilst these companies currently rely broadly on self-governance to regulate inappropriate and dangerous content on their platforms, the introduction of a new regulatory framework could create more stringent content monitoring obligations in the future.

What is the Online Harms White Paper?

On 8 April this year, the UK Department for Digital, Culture, Media & Sport and the Home Office published the Online Harms White Paper. The paper proposes a new regulatory regime for internet safety which would require companies to take reasonable steps to tackle harmful and illegal content on their platforms and services, in order to protect users. This framework would be enforced by an independent regulator, which would have the power to issue sanctions for non-compliance, including civil fines, enforcement notices and public notices of proven failure to comply with standards. The government is consulting on possible further powers, including disruption of business activity, ISP blocking and senior management liability.

Who could be affected?

The proposed framework applies to companies that allow users to share or discover user-generated content or interact with each other online. The paper provides some guidance as to the type of services within scope, including hosting, sharing and discovery of user-generated content (e.g. enabling posting on a public forum or video sharing); and facilitation of public and private online interaction between service users, such as instant messaging or commenting on posts.

As the name suggests, most social media platforms provide a function which allows individuals to interact with, or view the content of, other users. As a result, these companies would fall clearly into the scope of the proposal. The paper also notes that companies such as file-sharing sites, websites with discussion or review facilities, and search engines are likely to be caught by the framework. Although not specifically mentioned in the White Paper, the scope appears wide enough to cover online games and apps with social features.

In addition to large social media companies such as Facebook, Twitter and Instagram, this regime would also apply to smaller entities, including charities and start-ups. In recognition of the framework's very wide scope, the proposed regulator would take a risk-based approach to regulation, focusing initially on the companies whose services pose the greatest risk of harm, whether because of the size of the platform or because of known issues causing serious harm.

Current areas of uncertainty

The broad wording of the proposal means that there are several areas of uncertainty. The proposal refers to the facilitation of public and private online interaction, but “facilitation” is not defined. The term could have a very wide scope and may include companies providing any form of support to online platforms which allow users to view each other’s content or interact.

The paper specifically references caching as an example of a facilitation activity which may fall within the scope of the proposal. A practical issue with regulating facilitating companies is their degree of disconnection from user activities. For example, a company providing ad-hoc caching services to a platform is unlikely to be able to control users, the content they share, or the speed at which illegal or harmful content is removed.

Although the paper notes that harmful activity can often involve both public and private channels, under the framework, requirements to scan or monitor communications for certain categories of illegal content would not apply to “private channels”. At present, there are two areas of uncertainty for companies caught by the proposed regulation. Firstly, a definition of private communications is yet to be provided. The paper offers the example of the messaging platform WhatsApp, highlighting that there is a clear distinction between one-to-one messaging and a group with several hundred users, but this raises the question of how large a group must be to meet the threshold of a “public” channel. Secondly, whilst the paper notes that private channels will be regulated under a differentiated framework, the monitoring and intervention requirements for such service providers are currently unclear. Both points are being considered as part of the ongoing consultation process.

Finally, sites and platforms which include a public comments section would appear to fall within the scope of the framework. However, in a recent letter to the Society of Editors, the Secretary of State said that where functions such as moderated comment sections on news platforms are already well regulated, for example by IPSO and IMPRESS, the proposals would not duplicate those efforts. More broadly, he said that journalistic or editorial content would not be affected by the proposal. This leaves fundamental questions unanswered. What would be the position of mainstream newspapers that are not regulated by IPSO or IMPRESS? Would the Twitter feeds and Facebook pages of the regulated press be exempt from the proposed regime? If the Secretary of State intends a broader exemption for journalistic or editorial content, what would be the criteria for determining whether content is categorised as journalistic or editorial? At the moment, this is unclear.

What is next?

The government is currently running a 12-week consultation, which will end on 1 July 2019, and is seeking responses to the proposal from a wide range of stakeholders across the industry. Following the end of the consultation period, the government will publish its response, likely before the end of this year. Thereafter the government may proceed to legislation. All of this is, of course, taking place against the backdrop of a generally unpredictable political situation.

For more information on the Online Harms White Paper, please see further articles in this series.
