Is the concept of ‘duty of care’ a novel one?
The concept of a duty of care is well known in the offline world. For example, an occupier of property owes a duty of care to its visitors in relation to their safety. However, the duty is limited to physical injury, and the law rarely imposes a duty to prevent one visitor injuring another. In the UK, online platforms have occasionally had to defend allegations of breach of a safety-related duty of care framed in negligence. For instance, cases have been brought in Northern Ireland against Facebook by minors alleging, among other things, a failure to carry out more checks on accounts created by children and failures to adequately monitor and remove sexualised/indecent content relating to or created by the child. None of these has produced a court judgment.
What makes the suggestion controversial?
Apart from the uncertainties around the scope of the proposed duty of care, outlined below, we could also ask: “but what about the E-Commerce Directive?” The significance here lies in Article 15 of the E-Commerce Directive (EU-wide legislation that has been in force for nearly 20 years and includes provisions shielding online intermediaries from liability for user-generated content), which prohibits Member States from imposing “general monitoring obligations” on intermediaries in respect of content they host, cache or transmit.
Key to falling within this protective regime is that intermediaries must play a purely passive role in relation to content (i.e. they do not in any way edit/organise the content so as to render it no longer merely “user-generated”). There is therefore, on the face of it, a tension between (1) the suggestion that platforms should take proactive steps to reduce online harms under some form of “duty of care” and (2) a well-established pan-European immunity that incentivises companies to play a minimal role in any content.
However, drilling down into Article 15 reveals that EU lawmakers did not make things so black-and-white. Recital 48 of the E-Commerce Directive expressly reserves a right for Member States to make laws requiring intermediaries to meet “duties of care, which can reasonably be expected of them … in order to detect and prevent certain types of illegal activities”. Certain Member States have sought to take advantage of this – for example, the NetzDG law introduced in Germany last year, which requires social media companies to take down “manifestly illegal” content within 24 hours of notification. However, whether NetzDG is compatible with Article 15 remains a matter of debate.
Three points of contention appear to remain, then, even after Recital 48 is taken into account. First, how does a Member State ensure that it limits any duty of care it introduces only to that which “can reasonably be expected” of an intermediary? Secondly, if it imposes significant penalties for non-compliance with such duties, is it not, via the backdoor, introducing liability upon platforms for content they merely host/cache/transmit? Finally, what kinds of duty of care can be imposed that do not amount to a prohibited general monitoring obligation? Member States may try to claim that liability is in fact being imposed for failures to take required measures to protect users, but this strikes many observers as a somewhat artificial distinction.
What does the OHWP propose?
At a high level, what is proposed appears to be quite simple. For each type of unlawful/offensive harm identified, the yet-to-be identified regulator would publish a Code of Conduct. That Code would set out measures that it would be reasonable for an in-scope intermediary to adopt in order to reduce the risk of the harm in question. If complied with, the duty of care would probably be found to have been met. If alternative measures were adopted, these might also be found to satisfy the duty of care, although it would be for the company in question to prove they are sufficient.
However, what at first appears simple proves, on closer scrutiny, to reflect a government approach that has so far only scratched the surface. The duty of care described in the OHWP does not look like those we are more used to encountering in the context of negligence or other legal claims, in several respects, including:
- The duty is couched in terms that include not only harms to individuals but also to “life in the UK”, “national security” and “undermining our shared rights”. Traditional duty of care claims are two-party scenarios (the harmed and the harmer) – drawing the duties owed this broadly risks requiring a complex set of competing rights to be evaluated in any given allegation of breach.
- There is no clear definition of the content that falls within the remit, nor of the harm that may be caused: such questions appear to be left to the regulator. Examples in the OHWP relating to harm include psychological harm and distress – but in the context of online speech, for example, what one individual finds distressing may to another be merely intellectually challenging food-for-thought. Guidance may well be handed down by the regulator; however, it will inevitably be broad-brush, so difficult decisions on competing rights will be left to the ISPs – an arguably less-than-desirable public policy outcome.
- What circumstances might trigger or heighten a particular duty of care: given the wide remit of the proposed Online Harms regulatory framework, what is a priority in terms of harm prevention one month may be overtaken by a new and more threatening issue the next. Whilst the OHWP envisages certain measures being implemented on an ongoing basis for the detection and prevention of harms, there will presumably also be times when particular threats are heightened (such as following a spate of teen suicides allegedly encouraged by a new online game, or during an identified period of higher terrorist threat) and additional obligations are imposed. How these spikes in platform responsibility are to be judged, and who should determine when they occur, are questions also requiring further elaboration following the OHWP.
Where does this leave us?
Online intermediaries may have breathed a sigh of relief when they read in the OHWP that individuals would have no right to compensation or to have their complaints adjudicated under the proposed regulatory regime. However, reading on, they would discover that it nevertheless provides that individuals would have “scope to use the regulator’s findings in any claim against a company in the courts on grounds of negligence or breach of contract”.
This is worrying on two levels. First, in a world where judgments as to what content falls within the duty of care, which individuals may be impacted, what level of impact will amount to harm, and what steps should be taken to counteract it all appear to be the subject of “look-and-feel” type assessment, the resulting uncertainty may lead to a deluge of claims, perhaps encouraged by the increasingly active claimant law firms specialising in small claims that have grown up in the wake of the GDPR. Secondly, if such claims are successful, where does that leave the immunity from liability for online content that ISPs have so long enjoyed under the E-Commerce Directive?
Arguably, a better approach would have been to make clear that liability to individuals would not be affected by the new framework. Not only would this clarify the blurred legal lines that are otherwise likely to result, but it would also foster an environment in which online companies could freely contribute to the ultimate end-goal (protecting users from online harm) by doing what they do best – deploying technological solutions to tackle the threats – without fear that such steps would come back to bite them.
Get in touch if you’d like to attend a thought-provoking seminar responding to the Online Harms White Paper at Bird & Bird this afternoon (17 June), titled Hate Speech, Fake News & Social Media: Time for New Responsibilities or Moral Panic?
You can find all of our Online Harms White Paper content here.