Cyan CEO Ian Stevenson discusses why defining ‘harmful’ online content continues to be a difficult area for regulators

“I think what’s happening in social media is already worse than Chernobyl.” On the face of it, that is a startling claim to make about the technology we all use every day. Yet Stuart Russell, Professor of Computer Science at the University of California, Berkeley, made exactly this claim in the first of this year’s prestigious BBC Reith Lectures.

To state that social media has already had its “Chernobyl event” and is “already worse” is a damning indictment. If that is the case, why has social media not yet seen the level of regulation needed to address the harms it causes?

Comparing social media with the Chernobyl disaster, which caused devastation and direct fatalities through tangible sources of fear in the form of an explosion and radiation, may seem extreme. But a closer look at the damage attributed to social media, in the form of mental illness, self-harm, abuse, suicide and suicide-related behaviour, tells a different story. Research is beginning to show that social media appears to contribute to an increased risk of a range of mental health symptoms and poor wellbeing, especially among young people, while the prevalence of internet use is positively correlated with general population suicide rates.

According to the UK Government’s Online Harms White Paper, two-thirds of adults in the UK are “concerned about content online”, with almost half saying that they have seen harmful content online in the past year. This is against a background of nearly nine in ten UK adults and 99% of 12- to 15-year-olds regularly spending time online.

Concurrently, the Report Harmful Content (RHC) 2021 Annual Report highlighted a 225% rise in reported online hate speech. While a sobering statistic, the same report details improved public understanding of how to identify and report incidents to official bodies, meaning at least some of this increase may be attributed to more people reporting ‘hate’ online. In a poll by Opinium earlier this year, three in five Brits (60%) prioritised the right to be protected from violence and abuse over the right for people to say what they want online (24%). An ECPAT survey of EU citizens found that 76% of respondents considered allowing online service providers to detect and flag signs of child sexual exploitation online to be as important as, or more important than, their own online privacy.

It’s clear then that while most people want the internet to be a safe place for all, defining harmful content continues to be a difficult area for regulators.

Put simply: you can’t regulate against something you can’t describe. And this is the predicament Ofcom, the UK regulator tasked with policing online harms, finds itself in today. How do we define harm or hate, and who decides whether a social media post is racist or merely unconsciously biased?

The safety tech community does not yet have a standardised model for how the various harmful behaviours are defined, which creates challenges for reliable moderation, modelling and evaluation. This was brought to the fore in ‘A Unified Taxonomy of Harmful Content’, research by Banko, MacKeen and Ray of Sentropy Technologies in the US. The paper details the most common types of abuse described by industry, policy, community and health experts, with a view to reaching a shared understanding of how online abuse may be modelled.
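To make the idea of a shared taxonomy concrete, the sketch below shows one way such a schema might be represented in code. It is purely illustrative: the category names, subtypes and escalation rule are assumptions drawn from the harms discussed in this article, not the schema defined by Banko, MacKeen and Ray, and not a recommended moderation policy.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative top-level harm categories, drawn from the kinds of content
# discussed in this article; NOT the definitive schema from the paper.
class HarmCategory(Enum):
    HATE_AND_HARASSMENT = "hate_and_harassment"
    SELF_HARM = "self_harm"
    MIS_AND_DISINFORMATION = "mis_and_disinformation"
    CHILD_SEXUAL_EXPLOITATION = "child_sexual_exploitation"

@dataclass(frozen=True)
class HarmLabel:
    """A single moderation label: a category, a finer-grained subtype,
    and the confidence a classifier (or human reviewer) assigned to it."""
    category: HarmCategory
    subtype: str          # e.g. "slur", "threat", "promotion_of_self_harm"
    confidence: float     # 0.0 to 1.0

def requires_escalation(label: HarmLabel, threshold: float = 0.8) -> bool:
    """Toy policy rule: escalate high-confidence labels in the most severe
    categories to human review. The threshold and severity ranking are
    assumptions for illustration only."""
    severe = {HarmCategory.CHILD_SEXUAL_EXPLOITATION, HarmCategory.SELF_HARM}
    return label.category in severe and label.confidence >= threshold

# Example: a hypothetical classifier output for a reported post.
label = HarmLabel(HarmCategory.SELF_HARM, "promotion_of_self_harm", 0.91)
print(requires_escalation(label))  # True
```

Even a toy schema like this makes the underlying problem visible: every category boundary, subtype and threshold encodes a judgement about what counts as harmful, which is precisely the judgement that regulators, platforms and society have yet to agree on.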

As part of its effort to tackle online harms, the UK Government introduced the Online Harms White Paper in 2019, followed by the draft Online Safety Bill earlier this year. In its current iteration, the bill states that social media sites must tackle content that is “lawful but still harmful”, including the promotion of self-harm, misinformation and disinformation. The UK is also leading G7 efforts in this area, putting technology at the heart of a shared roadmap to tackle the global challenge of online safety.

In November, the UK culture secretary Nadine Dorries stepped up the rhetoric further, announcing that internet trolls who threaten “serious harm” or post harmful misinformation could face jail sentences, a marked escalation of the sanctions in the draft bill. The UK is also considering whether tech executives such as Mark Zuckerberg could face the threat of criminal prosecution if they do not tackle harmful algorithms.

This adds further pressure on social media platforms like Twitter and Facebook to take tougher stances on those spreading hate speech online. But should big tech companies be the ones who decide what is harmful or should the crucial task be the responsibility of society at large, through elected governments? One only needs to look at contentious debates played out in the public eye, such as that between J. K. Rowling and the trans community, to quickly see that one person’s harmful and offensive content can be considered another’s right to free speech.

Addressing online harm is still a relatively new concept, and so the debate is yet to mature. It’s further complicated by the technological domain it operates in. Most lawmakers, citizens and journalists have a limited understanding of how technology works and what is and isn’t possible when it comes to removing harmful content.  That’s completely understandable – it’s a complex and technical area. However, this disconnect seriously hampers the quality of the debate and therefore our ability as a society to make good decisions.

The challenges of online harms are also inherently international. Big tech companies are often based in the US, but their users are all over the world, and laws are made at a national or supranational (e.g. EU) level. The UK has certainly demonstrated the value of engagement, innovation networks, industry associations and events, but this work needs to become international.

Moving forward, collective action and international collaboration will be vital to improving online safety: efforts that will save lives, safeguard children from sexual abuse and protect the functioning of our society and democracy.

These issues are too big and the consequences of getting them wrong are too severe to be left to social media companies or lawmakers in isolation to tackle. As we have seen in the past decade, technology is accelerating cultural changes and it should be up to society to decide what it deems hateful or harmful.

We are approaching the cusp of fundamental change in how we define and regulate the internet to make it a safer place, especially for children and the vulnerable. If we are to make the right decisions moving forward, we need better collaboration across the spectrum to produce clear specifications so that harmful content can be readily identified. Without that, the challenge may be too big to tackle.

If social media has indeed already had its “Chernobyl-type event”, it is critical that the fallout does not continue to pollute people’s lives for much longer.

This article was first published by Digital Forensics Magazine in Issue 47, published in January 2022. You can access the article on p48 here: Digital Forensics #47, Jan 2022 and follow Digital Forensics on Twitter @DFMag. 
