A New Year, a new approach to digital regulation: the DSA and Online Harms

Whilst many lawyers are busy doing their best to guide clients through the unfathomable challenges posed by Brexit and the ongoing pandemic, Bryony Hurst starts the year by offering some thoughts on one area where what the future may hold can be predicted with a little more certainty – digital regulation in Europe.


New proposals were published on 15 December by both the UK government and the European Commission and, consequently, we now have plenty of reading to get stuck into to get us through the dark winter days.  For those planning lighter relief for the January gloom, I’ve digested the fine print for you and set out below some high-level food for thought for companies likely to be caught by both regimes.

Broad territorial scope

If you’re about to stop reading, on the basis you won’t be caught by either regime, stick with me.  Both the DSA and the UK’s Online Harms proposals have broader territorial scope than you might expect.

DSA
The preamble to the DSA makes clear that its provisions apply to all intermediary service providers, irrespective of their place of establishment or residence, if they provide services in the Union.  Whether they provide services in the Union is determined by a “substantial connection” test. This is deemed to exist where the provider either a) has an establishment in the EU; b) has a significant number of users in one or more Member States; or c) targets activities towards one or more Member States (with many examples of “targeting” given).  A provider who pursues commercial or professional activities in a Member State or, by any means, “directs” such activities to that Member State (or to several) is also expressly called out as falling within scope.  Precisely what “directing” activities means (beyond “targeting”, if that is what is intended) is not made clear – but the wording suggests a new, broadly-drawn test may soon be coming.

Online Harms

The UK government’s response to the Online Harms White Paper appears to suggest that the Online Safety Bill (the next step in the process) will go even further.  It is drafted on the premise that a business will be in scope if it either a) hosts user-generated content which can be accessed by users in the UK or b) facilitates public or private online interaction between service users, one or more of whom is in the UK. This is not the “targeting” test that non-EU platforms have had to get familiar with to date in the context of regulation; this test appears to be based on mere accessibility.  No further guidance has yet been provided to enable us to understand whether this test will be nuanced at all; it appears we will need to wait for the first draft of the Bill.

Not only does the wording proposed by both the UK and the EU risk criticism of territorial overreach, but it also gives rise to tricky questions around the enforceability of any actions taken by the regulators in question. To take one example, the UK proposals include an option for the government to introduce criminal liability for senior management of businesses who fail to comply; but if all of a company’s senior officers are based in, for example, the US, query how such a power could be effectively exercised.

A multi-layered approach to regulation

Both the preamble to the DSA and the foreword to the Online Harms response recognise the serious concerns raised by stakeholders that over-regulation could lead to disproportionate burdens on smaller businesses and new market entrants. Given both proposals are underpinned by an aim to make the EU/UK digital markets more dynamic (in addition to much safer for users), these concerns of course needed to be addressed, and have been as follows.

DSA
Under the DSA, “very large platforms” are carved out as having to comply with significant additional burdens over and above those imposed on all companies falling within scope.  These are platforms that have over 45 million users in the EU (a threshold which is expressly stated as subject to regular review). In addition, provision is made for fast-growth platforms to report to their regulator more frequently so that the regulator can “timely identify the moment at which that platform should be designated a very large online platform”.  Additional obligations upon very large platforms include appointing a compliance officer, implementing risk management systems and independent audits of the same, transparency around algorithms used for recommendations, data sharing with researchers and authorities, and enhanced oversight and enforcement by the European Commission (not just individual countries’ regulators).

Online Harms

In the Online Harms response, a tier system has been proposed.  “Category 1” services will have the most burdensome obligations imposed upon them.  Unlike the DSA’s “very large platforms” wording, the UK’s categorisation is not focussed solely on size; it also takes account of the activities taking place on the platform (although the press release accompanying the government response indicates that, in reality, many “very large platforms” are going to fall within Category 1).  It appears from the proposal that only Category 1 companies will have to tackle content that may be harmful (but not illegal) to adults – lower categories can restrict their efforts to illegal content and an audit to assess risks specifically posed to children by their services.  The way in which it is suggested Category 1 companies tackle harmful content is to ensure that their terms and conditions are explicit as to unacceptable content, and to enforce those terms consistently.  They will be obliged to publish transparency reports to demonstrate their compliance with this requirement.

Shifting the focus from liability to responsibility

Historically, platform obligations have been approached by law-makers in Europe through the lens of notice and takedown requirements, and liability for getting this wrong.  Whilst this structure allows platforms, as intermediaries, a degree of buffer between themselves and individual complainants, conflicts with freedom of speech principles have long been a topic for debate in this area.  A further problem encountered by individual Member States when seeking to increase duties upon platforms to identify and remove harmful content is how such duties could be said to align with Article 15 of the E-Commerce Directive, which prohibits any general obligation on ISPs to monitor content on their sites.

In parallel, policy-makers have grown increasingly concerned to address the growth in the reach and impact of online platforms; the preamble to the DSA identifies their importance in “facilitating public debate, economic transactions and the dissemination of information, opinions and ideas and in influencing how recipients obtain and communicate information online”.  It was perhaps inevitable, then, that a shift away from the notice and takedown regime would occur and that what would instead emerge is a call for greater “responsibility” by platforms. The detail surrounding which areas of content each regime has chosen to focus this responsibility on, and how they intend to enforce it, is where it becomes more interesting.

DSA
It is notable, when comparing with the Online Harms proposals, that the DSA deliberately stopped short of regulating harmful (but not illegal) content. That said, it does not define illegal content, but confirms that it intends to complement sector-specific legislation such as the proposed Regulation on Terrorist Content Online.  It is understood that going beyond illegal content into the realms of harmful content too was one of the most fiercely-debated elements of the DSA.  It appears that the various Member States had differing sensitivities to regulating such content which could not be resolved. The DSA does clarify the existing exemptions from liability for intermediary service providers (by deleting and then republishing Articles 12-15 of the E-Commerce Directive) but also adds (in Article 6) a new principle that platforms will not lose such exemptions if they “carry out voluntary own-initiative investigations or other activities aimed at detecting, identifying and removing, or disabling of access to, illegal content”.  This “good Samaritan” principle provides much-needed clarification for platforms which, as technological measures become available to them to assist in the fight against illegal content, will welcome the reassurance that deploying such technology will not leave them exposed to claims.

Online Harms

The Online Harms proposal, as noted above, obliges all categories of company to tackle illegal content (and content harmful to children), and obliges Category 1 companies to tackle harmful but not illegal content in addition.  It describes the nature of the obligation owed by the companies as a “duty of care”, language more at home in the law of negligence and which will require much elaboration to be understood properly in this new context.  It proposes that Ofcom, which it has named as the new digital regulator in the UK, will publish Codes of Conduct setting out the measures companies should take to ensure that their systems and processes are of a standard that meets this “duty of care”.  Controversy abounds as to whether a regulator should be tasked with defining legal duties in this way, but the government does not appear to agree with the alternative suggestion that this is a job for Parliament via legislation.  The UK has implemented the E-Commerce Directive into its domestic law so, initially at least, post-Brexit the intermediary liability exemptions will co-exist alongside the new Online Harms regime.  It will be interesting to see if those drafting the Online Safety Bill deem it necessary to include a good Samaritan-type provision to clarify this inter-relationship, à la DSA.

Both proposals are dense, and wide-ranging in the changes they introduce.  We intend to explore them in a series of articles during January, so I’ll stop here for the time being. If you take away only one thought from the above, though, it’s that these two proposals (and others on the stocks in both Europe and the UK) mark a watershed moment for digital regulation. Whilst we have all been expecting for some time that both the DSA and the UK government’s online harms proposals would emerge in some form before the end of the year, the precise substance of each has nevertheless managed to raise some eyebrows. If you’d told me a year ago that platforms whose services were merely accessed by UK users would have to overhaul their systemic risk management processes according to standards set by Ofcom, or face fines of up to 10% of their global annual turnover for non-compliance, I’d have suggested drinking more water with your Christmas brandy.  But, hey, it was 2020, so instead I’ll just add it to my “well that was a surprise” list…

If you would like to find out more about the Digital Services Package and what it will mean for business, join our Regulatory & Public Affairs team on 12 January from 1-2pm GMT (London) for the Lexology webinar “Digital Services Package: should the European Union police the Internet?”. Click here to sign up.

Bryony is a partner in our Dispute Resolution Group, based in London, specialising in commercial litigation and technology and media law. She advises our clients on complex, fact-heavy commercial disputes, with a primary focus on disputes relating to the technology, communications and media sectors. She acts for both corporate clients (including IT and telecommunications providers, major film and broadcasting organisations, and internet corporations) and individuals on disputes spanning the commercial spectrum.
