December 17, 2024 - 7:00am

The Online Safety Act introduced us to the slippery term “legal but harmful” — or, more colloquially, “lawful but awful”. Then-technology secretary Michelle Donelan declared that the online world was a “wild west of content” too dangerous for unsupervised adults, never mind children. The role of sheriff fell to Ofcom, which was empowered to recruit as deputies the very technology companies accused of creating this risky new world.

Now, Ofcom has published its first-edition codes of practice and guidance, designed to turn the well-intentioned but sprawling principles of the Act into workable, enforceable regulation. The regulator claims that it is “putting managing risk of harm at the heart of decisions” and demanding proportionate measures from service providers.

In one way, this risk-management approach is sensible. It was never going to be possible to make the online world, any more than the real world, perfectly safe. Services aimed at children should be regulated differently from the spaces where consenting adults get up to whatever they please in digital privacy. Measures that are proportionate when tackling child sexual abuse or terrorism may not be justified against fraud or “hateful content”.

Like all safety-first regulation, however, Ofcom’s guidance faces unavoidable conflicts with other important social values, especially privacy and freedom of expression. For example, its description of hateful content “could include content which may not meet the threshold for illegal hate”. Under this definition, and given the nature of the internet, it is hardly surprising that “one in four online users (adults and children aged 13-17) had seen or experienced content they considered to be hateful, offensive, or discriminatory, and which targeted a group or person based on specific characteristics such as race, religion, disability, sexuality, or gender identity,” in the preceding four weeks.

In these febrile times, it is all too easy to point to online discussion of controversial issues that could meet this description, especially since, as Ofcom notes, such content tends to spike following newsworthy events such as terrorist attacks. Feelings run high, but there is also a genuine desire to debate why these things happen, and how they could be prevented in future.

The problem with a risk-based regulatory framework, backed by the threat of hefty fines for companies that show insufficient zeal in cleaning up online hate, is that the incentives only go in one direction. Freedom of public expression is given lip service, but there seems little likelihood of a Silicon Valley behemoth facing fines from Ofcom for taking down, or selectively muting, what might, in somebody’s eyes, be hateful content. It is far more likely that erring on the side of caution will sanitise digital public squares, removing any human interaction that might cross an ill-defined boundary into the grey zone of online harm.

Alongside the draft guidance, Ofcom promises further consultations followed by further regulation. This will include “crisis response protocols for emergency events (such as last summer’s riots)”. It seems unlikely that such protocols will be designed to protect freedom of online speech, given Ofcom’s emphasis on protecting adults, as well as children, from material which, in itself, breaks no laws.

Under “risk factors” for hateful content that technology service providers are expected to consider, Ofcom lists users’ ability to post content, to respond to others’ material, and to build online communities. We might regard these things as the very essence of social media, messaging apps and the internet. But to Ofcom, they are fertile ground for potential hate crimes.

The Online Safety Act aspired to make the internet a safer place. We might ask: safer for whom, and from whom? What started as an attempt to make it safer for children has become an ever-widening system of control. It will make the world safer for those in authority: safer from controversial public debates, from bottom-up criticism, and from the ability of ordinary internet users to make connections and organise themselves.

It will protect those who fear offensive words and challenging ideas more than they fear the use of legal power to silence them, and that is an invitation to the powerful to silence the powerless. It’s big technology companies that will be playing it safe to avoid punitive fines, and the first casualty will be freedom of expression.


Timandra Harkness presents the BBC Radio 4 series FutureProofing and How To Disagree. Her book, Technology is Not the Problem, is published by HarperCollins.
