Can AI content moderation keep us safe online?

In an article that recently appeared in The Telegraph, Esme Strathcole reports on the increasing pressure on online platforms to moderate content, and on how tech companies are turning to AI to help manage the process.


Moderation of Viral Videos

Companies agonise over the formula for creating viral content, whether it’s a music video such as Luis Fonsi’s Despacito ft. Daddy Yankee or an unlikely YouTube sensation such as Baby Shark Dance. Technology giants, on the flip side, need to be equipped to control the unintended consequences of the speed at which content can be disseminated on their platforms.

There is increasing pressure on technology companies to develop and invest in ways to prevent, detect and respond to illegal and objectionable content on their platforms. While teams of human moderators review online content, the focus has recently shifted to AI solutions that automate the process. But is AI the answer to online content moderation?

Teamwork backed by AI Tools

Technology companies have teams of people to tackle the task of content moderation on their platforms. But people alone can’t keep up with the scale of content shared online. For example, each day around 576,000 hours of content are uploaded to YouTube, with its users watching more than a billion hours of videos.

To manage the volume of content and the speed at which it is disseminated, technology companies have turned to AI solutions to automate the analysis of uploaded content and to create blacklists so that future attempts to upload the same content are blocked. The situation is often likened to “whack-a-mole”; technology companies make considerable efforts to take down content, but it’s often reposted by others online and edited to avoid being blocked. Live streaming presents its own challenges. AI solutions may not detect the characteristics of the content instantly and therefore it often streams successfully for a period of time before being taken down.
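To illustrate the blacklist idea described above, here is a minimal, hypothetical sketch assuming the simplest possible approach: content removed by moderators is fingerprinted with an exact cryptographic hash, and byte-identical re-uploads are refused. The function and variable names are illustrative only. The sketch also shows why the "whack-a-mole" problem arises: an exact hash misses even trivially edited copies, which is why production systems instead rely on perceptual fingerprints that tolerate re-encoding and small edits.

```python
import hashlib

# Hypothetical blocklist of fingerprints of content already removed by moderators.
blocklist: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a cryptographic hash of the raw upload bytes.

    An exact hash only catches byte-identical re-uploads; real platforms use
    perceptual hashing so that re-encoded or lightly edited copies still match.
    """
    return hashlib.sha256(data).hexdigest()

def register_takedown(data: bytes) -> None:
    """Record removed content so identical re-uploads can be refused."""
    blocklist.add(fingerprint(data))

def is_blocked(data: bytes) -> bool:
    """Check an incoming upload against the blocklist before publishing it."""
    return fingerprint(data) in blocklist

# Content is taken down once, then the same file is rejected on re-upload.
original = b"...bytes of a removed video..."
register_takedown(original)
print(is_blocked(original))         # True  - the identical file is blocked
print(is_blocked(original + b"x"))  # False - a trivially edited copy slips through
```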

Challenges

While technology companies grapple with monitoring and policing content on their platforms, there are some important challenges that AI systems will need to overcome:

  • Training Data:

    AI systems need to be trained to detect specific types of content. This requires large quantities of data containing examples. As a result, AI tends to be better trained at detecting the types of content that are regularly in circulation, which leaves a knowledge gap for rarer types of content.

  • Context and Discretion:

    Content moderation decisions are often very complex, and legal frameworks are different depending on the jurisdiction. Cultural differences can also significantly impact whether content is considered objectionable. AI algorithms can master recognising certain characteristics of content, but the context in which it is shared is often crucial to content moderation decisions. AI systems struggle to understand human concepts such as sarcasm and irony, or that content can be more or less objectionable depending on the identity of the party posting it online.

  • Moderation Versus Press Freedom:

    While technology companies are coming under increasing pressure to moderate their platforms, there is concern that the blanket removal of content based on AI decision-making could, in some circumstances, lead to censorship or a hampering of press freedom.

  • Responsibility and Governance:

    Online platforms are responsible for their own content moderation and governance. However, organisations provide varying degrees of transparency on their algorithms and automated decision-making processes, leading to concern that the use of AI could result in certain types of social bias or inequality. Last year, Google announced a set of AI Principles to guide the responsible development and use of AI. Last month, Google announced the launch of its Advanced Technology External Advisory Council (ATEAC) to consider some of Google’s most complex challenges arising under its AI Principles; the council lasted just one week before Google issued a statement confirming that it intended to “find different ways of getting outside opinions on these topics”.

Taken together, these challenges suggest that while AI has an important role to play in online content moderation, it is unlikely to be a panacea for harmful online content, at least in the short term.
