UK Government releases new proposals to regulate internet safety in Online Harms White Paper

The Home Office and the Department for Digital, Culture, Media and Sport (DCMS) today released the Online Harms White Paper, proposing a new social media duty of care which would be interpreted and enforced by a new regulatory body. Under the new proposals, social media, search and other companies allowing users to share or discover user-generated content, or to interact with each other online, will be legally required to take steps to protect their users and will face tough penalties for non-compliance.


After much speculation and anticipation, DCMS’s Online Harms White Paper has been published today. The Paper proposes fundamental changes to the UK online regulatory environment. Headline items include a new statutory duty of care for online platforms and services, a new regulatory framework to protect internet users, and an independent regulator with wide-ranging enforcement powers. There is a 12-week consultation period ending on 1 July 2019.

A new statutory duty of care and regulatory framework

The White Paper introduces a new statutory duty of care to make companies take reasonable steps to keep users safe and tackle illegal and harmful content or activity on their services. Online harms range from illegal activity and content, such as terrorism, child sexual exploitation and abuse, and inciting or assisting suicide, to behaviours that may not be illegal but nonetheless may cause damage to individuals or, to use the government’s phrase, “threaten our way of life in the UK”, such as the spread of disinformation and fake news.

The regulatory framework will apply to companies that allow users to share or discover user-generated content or interact with each other online. The regulation will therefore apply to a wide range of companies of all sizes, including the giants we are all familiar with, such as Facebook and Twitter, but also file hosting sites, public discussion forums such as The Student Room or Mumsnet, messaging services such as Snapchat, and search engines. The scope may also be wide enough to cover online games. Companies will be forced to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it. The Paper emphasises that the regulator should take a risk-based and proportionate approach to dealing with this wide range of companies.

A new independent regulator

Compliance with this mandatory duty of care will be overseen and enforced by an independent regulator. It is unclear at this stage whether the Government envisages creating a new regulator or handing new powers to an existing one. Rumours suggest that Ofcom may be involved; what is clear is that the regulator will be funded by industry in the medium term. The regulator is to produce a “code of best practice” to which companies falling within scope must adhere. The Paper includes some suggestions: for example, the spread of fake news could be tackled by forcing social networks to employ fact checkers and promote legitimate news sources. When it comes to particularly sensitive online harms, such as national security and the safety of children, the codes will be developed in conjunction with the Home Office, which will have the power to issue directions to the regulator.

The regulator is to be armed with a suite of powers to take effective enforcement action, including imposing fines on companies, and even on individual directors, in breach of the statutory duty, and publishing notices naming and shaming those that break the rules. Culture Secretary Jeremy Wright has indicated that the fines available to the Information Commissioner under the GDPR, which can reach 4% of a company’s global annual turnover, may be a comparable benchmark here. The Government is also consulting on additional enforcement powers to be used as a last resort, such as disrupting business activities (e.g. by preventing search results or links to companies that are in breach) and requiring ISPs to block persistent offenders.

Immediate reaction to the White Paper and consultation now open

The White Paper has received a mixed reaction. On the one hand, it has been criticised for enabling internet censorship and hindering freedom of speech. On the other, it is viewed as a necessary and welcome instrument given the proliferation of illegal and unacceptable content online that threatens democracy, national security and the safety of internet users.

The new regime has raised many questions that remain unanswered. The Government is now consulting on some aspects of its proposals, although it appears committed to its basic proposed regime. It has set itself the somewhat paradoxical goal of making the UK both the safest place in the world to be online and the best place to start a digital business. It hopes to promote a UK industry of tech-safety companies.

A full set of 18 consultation questions can be found in Annex A to the White Paper, with topics ranging from how to appoint a new regulator and what powers it should have, to how best to ensure that new regulation is targeted and proportionate, to what role the Government should have in education and awareness initiatives. The consultation will be open until 1 July 2019 and you can add your response here.

Keep an eye on MediaWrites for deeper analysis on the Online Harms White Paper over the coming weeks.

1 COMMENT

  1. Would this also apply to website hosting companies? The Paper says “The regulatory framework will apply to companies that allow users to share or discover user-generated content or interact with each other online”. So if a website hosting company hosts a website that includes the ability to share user-generated content, or includes a messaging facility, would the hosting company be responsible?

    While I think of it, would “companies that allow users to […] interact with each other online” cover email?

    I’m all for improved regulation, and in my opinion the social media giants have failed miserably to self-regulate, but the announcement contains very broad language that could, at this stage at least, make website hosting companies, and even web development companies who host client websites, responsible for the content of the websites they host (which I’m for) without addressing the practicalities of how they become aware of said content. For example, the company I work for removes any illegal, malicious, or intentionally misleading content within hours of being made aware of it.

    Having said that, as email allows users to interact with each other online, how would a website design company that hosts email services for its clients be aware of what a client was using the email service for, without (a) being made aware of abuse – which is something that, as an example, we do tackle promptly – or (b) somehow scanning the content of each email sent through a server, beyond just scanning for malware, spam, and virus infections? That would essentially deputise small companies who provide email services into policing the email content of their users.

    I may be totally off the mark here, but there seem to be a lot of ways this could be interpreted.
