The Online Harms White Paper: Is technology the answer?

In the next instalment of our Online Harms White Paper ("OHWP") mini-series, Janey Hurran looks at how technology, both existing and emerging, can help keep users safe online.


As discussed previously in this series, the OHWP sets out the government’s proposals to tackle online content and activity that harms individual users or threatens our way of life in the UK. Part 4 of the OHWP details how companies should invest in the development of safety technologies to reduce the burden on users of staying safe online. A new statutory duty of care will be imposed on companies to make them more responsible for the safety of their users and to tackle the harm caused on their services. Compliance with these new duties will be assessed by an independent regulator against accompanying codes of practice, and companies will be expected to provide effective and easy-to-access user complaints functions.

‘Safety by design’

Through these changes, it appears the government is trying to create a new environment of safety built on the promotion of proactive tech solutions. There are currently legal protections for individuals under the E-Commerce Directive; however, these merely place a passive duty on companies to ensure they do not engage in or promote harmful content online. The government appears to be moving away from this passive model towards one in which companies are required to engage actively in online safety and to use their resources to develop new technologies that protect users. The OHWP details how the government and the regulator will work with industry bodies to support innovation and growth in this area and to encourage the development and adoption of safety technology.

This new environment of technical solutions to online harms mirrors the approach set up under the GDPR. The introduction of the concept of ‘safety by design’ for online harms aligns with the concept of ‘privacy by design’ for data protection. Companies should be incorporating safety features into their new apps and platforms so that safety technology becomes an integral part of the culture going forward. Not only will there be legal protections under the yet-to-be-drafted legislation on online harms, but there will also be practical protections provided by technology.

Existing Technologies

The government seems to think that existing and emerging technologies will be fundamental to this new environment and, ultimately, the solution to online harms. In its view, the use of tech fits perfectly with the culture of transparency, trust and accountability that it is trying to achieve.

Commitments have already been made by leading platforms in relation to certain online harms – for example, the Voluntary Code of Conduct on Disinformation, signed up to by Google, Facebook, Twitter and others. The idea is that firms will invest in products, technologies and programmes that help people in Europe make informed decisions when they encounter online news that could be false, prioritise authentic information in search rankings and make diverse perspectives more visible. This kind of investment is exactly what the government is trying to encourage. However, improvements will need to be made: in the eyes of the European Commission, the signatories have so far not fully lived up to their fake news pledge.

Equally, companies are beginning to put their promises into action by implementing new technologies to improve online safety. Facebook has started using AI photo-matching technology to proactively detect child nudity and previously unknown child exploitative content when it is uploaded. Facebook has claimed that in the last quarter it removed 8.7 million pieces of content containing child nudity or the sexual exploitation of children, 99% of which was removed before it was reported.
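By way of illustration only, photo-matching systems of this kind generally work by comparing a compact "fingerprint" of each uploaded image against a database of fingerprints of material that has already been flagged, so that re-encoded or lightly edited copies are still caught. The minimal sketch below uses the open-source Python imagehash library to show the general technique; it is an assumption about how such matching typically works, not a description of Facebook's own (non-public) tooling, and the file names are hypothetical.

```python
# Minimal sketch of perceptual-hash photo matching (illustrative only).
# Assumes the open-source `imagehash` and `Pillow` libraries are installed.
from PIL import Image
import imagehash

# Fingerprints of images that reviewers have already flagged (hypothetical database).
known_hashes = [
    imagehash.phash(Image.open("flagged_example.png")),
]

def matches_known_content(upload_path, max_distance=5):
    """Return True if the uploaded image is visually close to any flagged image."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two hashes gives their Hamming distance: a small distance means
    # the images are visually similar, even after resizing or re-compression.
    return any(upload_hash - known < max_distance for known in known_hashes)

print(matches_known_content("new_upload.jpg"))
```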

The OHWP provides illustrations of tech companies offering new safety tools. SuperAwesome provides tools and technology to protect the digital privacy of children. Crisp provides complex AI-based tools to support the moderation and monitoring of content, helping hundreds of companies worldwide run safer platforms. Yoti, a digital identity provider, has partnered with the social network Yubo to use machine-learning age estimation to check whether users fall within the right age band for the platform.

To date, companies like YouTube have relied on human reviewers to check online content for harmful, false or illegal material, including ‘fake news’. As discussed by Esme Strathcole in a previous article on MediaWrites, 576,000 hours of content are uploaded to YouTube each day, making it virtually impossible for human reviewers to check all of it. Companies are therefore looking at whether AI could be used instead to combat issues such as ‘fake news’. AI systems need to be trained on large quantities of example data, which makes them much better at recognising regularly occurring content; the flip side is that rare content becomes harder to detect. Further issues arise from the use of AI, including bias, censorship and the subtleties of context such as sarcasm. The use of AI also has the potential to open companies up to deeper ethical issues, as previously discussed by Phil Gwyn on MediaWrites. AI is therefore not a foolproof solution as it currently stands, although there is great scope for it to change and improve.
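To illustrate why frequency in the training data matters, the short sketch below trains a toy text classifier on a handful of invented, labelled examples. It is not any platform's real moderation pipeline; the example phrases and labels are assumptions made purely for illustration. Patterns that resemble the training data are recognised readily, while rare or sarcastic phrasings the model has never seen are easy to miss.

```python
# Toy content classifier (illustrative sketch only, not a real moderation system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled training examples (1 = likely false/harmful, 0 = benign).
# In practice a platform would use millions of human-reviewed items, not four.
texts = [
    "miracle cure doctors don't want you to know",    # common harmful-style pattern
    "shocking secret the government is hiding",        # common harmful-style pattern
    "local council announces new recycling schedule",  # benign
    "university publishes annual research report",     # benign
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Phrasing close to the training data is scored with confidence...
print(model.predict(["shocking cure the government is hiding"]))
# ...but rare, novel or sarcastic phrasing the model has never seen is easy to miss.
print(model.predict(["oh sure, drinking bleach is definitely sound medical advice"]))
```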

In addition to the practical issues with technical solutions such as AI, companies have another, potentially more significant, concern regarding their liability. If companies become more proactive in their approach to online safety, investing in new technologies and incorporating these new or existing technologies into their platforms, will this raise the level of liability placed on them when failures of safety do arise? In our final instalment of this mini-series next week, we will look at this and related questions.

You can see all the content from our Online Harms White Paper series here.
