Disinformation and COVID-19: EU Commission publishes second set of reports on actions taken by online platforms and advertisers to tackle coronavirus disinformation

Maisie Briggs explores the second set of reports published by the European Commission, which set out the steps taken by online platforms and advertisers to tackle COVID-19 disinformation amidst rising concerns about the spread of fake news during the pandemic.


On 7 October 2020, the European Commission published its second set of reports on how the signatories to the Code of Practice on Disinformation have tackled false and misleading coronavirus information. The first assessment of how the Code has been implemented was published on 10 September 2020, which you can read about here.


In response to the increasing spread of health-related ‘fake news’, the EU Commission published a Communication in June asking Code signatories to provide monthly reports on their actions and policies to address COVID-19 disinformation. The Communication identified the vast range of disinformation that was spreading during the global pandemic, including conspiracy theories, consumer fraud, cybercrime and misleading health information. In light of this, the Commission introduced an obligation on signatories to report regularly on COVID-19 measures, which adds to the annual self-assessment published in September. The aim of these reports is to hold online platforms and the advertising industry to account for their role in preventing health-related disinformation. The latest reports build on those published in September, reinforcing that whilst platforms have continued to crack down on the distribution of false or misleading information, more work remains to be done. With two sets of reports now published on disinformation, the question becomes: what can the EU do to enhance the fight against disinformation, especially in relation to health, now that it has regular data coming from the advertising industry and online platforms?

The reports

The signatories, such as Facebook, TikTok and Google, used specific indicators to show the effectiveness of the policies they put in place at the start of the COVID-19 pandemic. For example, the platforms reported on:

  • Their efforts to increase the visibility of authoritative information sources during the pandemic;
  • The actions taken to limit the appearance of false or misleading content;
  • Their broader collaboration with fact checkers, and promoting content that has been fact checked;
  • Any initiatives undertaken to provide free ad space to organisations promoting campaigns on the pandemic, and to journalist organisations to sustain good independent journalism; and
  • Any actions taken to update advertising policies to block or remove adverts exploiting the crisis or spreading fake news.

As noted by Thierry Breton, Commissioner for the Internal Market, it is clear from the reports that many platforms have ‘acknowledge[d] their critical responsibility in the fight against disinformation’. In fact, Google’s report states that from January to August 2020 it removed over 82.5 million COVID-19-related ads for capitalising on global medical supply shortages by artificially inflating prices and making misleading claims about cures. Google also took action on over 1,700 URLs with COVID-related content because they made harmful health claims. Twitter reported that in August it removed 4,000 tweets and challenged 2.5 million accounts under its COVID-19 guidance. This is a definite step towards companies taking responsibility for the content posted on their platforms, but it also shows the alarming scale of the problem. In light of this, it is interesting to see that Věra Jourová, Vice-President for Values and Transparency, has called on ‘all relevant stakeholders such as other online platforms and advertising companies to join the Code’. These reports have only underlined that a unanimous approach must be taken to address this issue effectively.

In addition, the reports show that progress has been made within the advertising industry, but they also reveal the challenges that come from blocking certain ads but not others. For example, the use of exact-match keywords (e.g. “crisis” and “COVID-19”) in ad-avoidance technologies has unintentionally blocked any adverts from appearing next to COVID-related news. This has impacted technology providers, as it reduces the space available on a website for placing ads.

Looking beyond the pandemic

For all of the upsides of these monthly reports being published – mainly that they keep pressure on advertisers and platforms to continue to enhance efforts to tackle disinformation – there are still substantial gaps. In some of the reports, signatories have provided information about their policies’ efficacy at a global level, rather than in individual Member States, or even within the EU. In others, it is unclear whether the data provided specifically relates to actions taken to address COVID-19 disinformation, or ‘fake news’ in general. This hampers efforts to accurately measure whether platforms are managing to curb the spread of coronavirus disinformation. Additionally, until all major platforms and advertisers sign up to the Code, a truly harmonised campaign against disinformation (and analysis of the data coming out of that) cannot be achieved.

The Commission aims to use the outcome of these reports in its longer-term approach: to deliver a comprehensive European Democracy Action Plan and a Digital Services Act package. A draft Digital Services Act is expected by the end of 2020, with the hope that it will both increase and harmonise the obligations of online platforms across a range of legal challenges.
