Ad & Tech Giants step up to tackle the problem of Fake News

The European Commission announced a new self-regulatory Code of Practice designed to fight online disinformation (Fake News). Online platforms and advertising industry bodies will be encouraged to intensely scrutinise their ad placements and publicly disclose any political advertising. Companies such as Facebook, Google, Mozilla and Twitter have committed to abiding by this Code, as have advertising bodies such as the World Federation of Advertisers and the European Association of Communications Agencies (EACA).


The Code is a response to concerns about disinformation in the run-up to the European Parliament elections due to take place in May 2019. In a statement, Digital Economy & Society Commissioner Mariya Gabriel said: “Online platforms need to act as responsible social players especially in this crucial period ahead of elections. They must do their utmost to stop the spread of disinformation.” The Code emphasises that our democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes.

A desire for accountability

This initiative comes as no surprise following the Cambridge Analytica scandal, in which the personal data of 87 million Facebook users was collected by a Facebook app developer and passed to the British election consultancy without the users’ consent. The introduction of this new Code also shows the increased momentum at EU level towards holding online companies more accountable for content disseminated on their platforms, as witnessed recently with the proposed Directive on Copyright in the Digital Single Market and the recent proposal requiring removal of online terrorist content within one hour of it being reported.

The development of this voluntary Code of Practice follows on from the Commission’s Communication ‘Tackling Online Disinformation: a European approach’ (26 April 2018), the Report of a High Level Group and Council Conclusions of 28 June 2018, previously reported in detail here at MediaWrites.

Disinformation is defined in the new Code as “verifiably false or misleading information” that is created with the aim of achieving economic gain or deceiving the public, and which in turn may cause harm to the public, to democracy, or to health, the environment or security. The definition does not include misleading advertising, reporting errors, satire and parody, or “clearly identified partisan news and commentary”.

What does the Code do?

The Code acts as a framework and outlines a wide range of commitments that each Signatory may sign up to depending on what service or product they offer. The Code allows for different approaches to accomplishing the spirit of its provisions, given that each organisation operates differently, with different purposes, technologies and audiences.

The purpose of the Code is to identify actions that Signatories could put in place in order to address the challenges related to disinformation. These include:

  • Scrutiny of ad placements: to deploy policies and processes to disrupt advertising and monetisation incentives for purveyors of disinformation;
  • Political and issue-based advertising: to enable public disclosure of political ads, and to work towards a common understanding of “issue-based advertising” – ads that advocate legislative issues, such as national security, climate change or civil rights – and how to address it;
  • Integrity of services: to put in place – and enforce – clear policies related to the misuse of automated bots;
  • Empowering consumers: to invest in products, technologies, and programs to help people identify information that may be false, to develop and implement trust indicators; and to support efforts to improve critical thinking and digital media literacy; and
  • Empowering the research community: to strengthen collaboration with the research and fact-checking communities and encourage good-faith independent efforts to understand and track disinformation.

The Annex to the Code sets out best practice principles for specific subject areas and provides links to the current Signatories’ policies, for example, Facebook’s “Why am I seeing this Ad?” and Google’s Fact check tools for developers. In addition, the Signatories undertake to report annually on their work to counter disinformation in the form of a publicly available report.

Acceptance by the big players

The big media players have already put the spirit of the Code into practice. Facebook and Twitter are reported to have deleted hundreds of fake accounts linked to Iran and Russia after uncovering a series of campaigns aimed at meddling in UK and US politics. Facebook began to assign British news organisations a “trust score” after it changed its algorithm last month, and Twitter has implemented over 30 policy, product and operational changes to improve conversational health on the platform – many of which it claims are “helping to combat spam and automation”. Facebook has also simplified its processes for reporting fake news articles and has sought partnerships with third-party fact-checking organisations which will verify any complaints. To read more on MediaWrites about the implications of posting false content online, click here.

A first for the industry

This is the first time that the industry has agreed, worldwide and on a voluntary basis, on a set of self-regulatory standards to tackle disinformation. We expect national regulators to follow this trend. The UK communications regulator, Ofcom, has recently proposed regulating social media companies in the same way as the telecoms industry: targets would be set for how quickly offensive content must be removed from their sites, with substantial fines imposed if companies fail to meet those standards. The Code therefore demonstrates another positive step forward, but it also illustrates the challenges that lie ahead in addressing a problem that will only magnify in light of new technologies and the prolific use of social media.

Keep an eye on MediaWrites next week for a more detailed analysis of Ofcom’s proposed regulation of social networks.
