DCMS releases new Code of Practice for Social Media Platforms

The Department for Digital, Culture, Media and Sport (DCMS) released a code of practice on 8 April 2019 with the aim of reducing bullying, insulting, intimidating and humiliating behaviour on social media platforms. Myles Kotak looks at the new Code and asks what impact, if any, it will have on social media platforms.

The Code comes in advance of the potential new regulatory framework envisaged in the Online Harms White Paper (see here for our initial thoughts on that), which aims to combat a range of issues, including fake news, social media echo chambers and online harassment. The Government's stated aim of making the UK "the safest place in the world to be online" makes clear that it is keen to regulate and tackle online harms, and the Code acts as a starting point by cementing a few industry-standard principles for social media platforms to follow, which the Government hopes will put them on the path to achieving that ambitious vision.

Four key principles of the Code: Social media platforms and beyond…

The guidance in the Code is primarily aimed at social media platforms, although it is relevant to any site that offers social interaction between individuals via user-generated content or comments, such as online marketplaces and gaming platforms. Indeed, any website, app or program which hosts content allowing individuals to interact and communicate with each other online should follow these guidelines if it wishes to adopt good practice and protect its end-users from potential online harassment.

The Government’s aim with the Code is to bring online safety to all. To achieve this, it expects social media providers to adhere to its four key principles, which it hopes will prevent bullying, insulting, intimidating and humiliating behaviour. The four key principles state that social media providers should:

  • maintain a clear and accessible reporting process to enable users to notify the social media provider of harmful conduct;
  • maintain efficient processes for dealing with notifications from users about harmful conduct;
  • have clear and accessible information about reporting processes in their terms and conditions; and
  • give clear information to the public about action they take against harmful conduct.

The Code expands on each of these principles and provides a non-exhaustive list of good practice examples with which social media providers should align themselves. In addition, the Code highlights the Government’s commitment to promoting diversity and equality online and expects social media providers to have regard to the Equality Act 2010 when implementing these principles.

In reality, none of these principles is exactly ground-breaking: most social media providers already practise them, or should be practising them, as protecting end-users from online harassment is now standard industry practice. Moreover, useful as the Code is in setting out some key principles, it is purely voluntary, so there is no requirement for social media providers to actually follow them. That said, given the direction the Government is heading in with the potential new legal framework detailed in the Online Harms White Paper, following the Code will no doubt put companies on the right track for when new laws on online harms are inevitably introduced.

Key considerations

The main takeaway, if you’re a social media provider or otherwise impacted by the Code, is that it’s a good time to start considering the following (if you haven’t already), while having regard to the Equality Act 2010:

  • how easy and clear is it for end-users and non-users to report harmful conduct? Do any new social media features also need such a reporting function, and is it compatible with your existing reporting processes?
  • are users acknowledged within an appropriate time frame once a report has been filed? Are you communicating with the user in the most effective way? Are reports checked against all potential breaches, rather than just the categories the user reported?
  • is there a clear and accessible reporting policy? Does it detail the consequences for users in breach? Are you transparent about how you review and enforce breaches, and do you provide take-down metrics?
  • do you remove content or explain why the content has not been removed? Do you provide guidance on appropriate online conduct?

For further developments on the Code and the Online Harms White Paper, keep an eye on future updates from the team at MediaWrites.
