The cost of getting it wrong is real with terrorism, too. The process is somewhat different, though. When it comes to the risk of someone being radicalised, there isn’t a number you can read off, as there is with a blood test. That doesn’t mean there couldn’t be one: the people who are passed on to Prevent could be given a quasi-objective score. We do exactly that for, say, suicide risk or autism or happiness, or any one of a thousand psychological measures. You tick boxes on a questionnaire about someone’s isolation, their anger, their ideology, and if the score on the questionnaire adds up to more than 40 or whatever, you declare them a terrorism risk. That is precisely what goes on, in a less obvious and open way, with things like AI parole decisions.
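To make the idea concrete, here is a minimal sketch of what such a quasi-objective score would look like. Everything in it is invented for illustration: the item names, the weights and the cut-off of 40 are hypothetical, not Prevent’s (or anyone’s) actual criteria.

```python
# Hypothetical risk questionnaire. All items, weights and the
# threshold are made up for illustration only.

def risk_score(answers):
    """Sum the weights of every box that was ticked."""
    weights = {
        "isolation": 15,
        "anger": 10,
        "ideology": 20,
    }
    return sum(weights[item] for item, ticked in answers.items() if ticked)

THRESHOLD = 40  # invented cut-off: score above this and you are "a risk"

answers = {"isolation": True, "anger": True, "ideology": True}
score = risk_score(answers)       # 15 + 10 + 20 = 45
flagged = score > THRESHOLD       # 45 > 40, so this person is flagged
```

The point of the sketch is only that a score-and-threshold system is mechanically trivial; the hard part, as the rest of the piece argues, is that no threshold can be right in both directions at once.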
But as it happens, there isn’t an explicit number. The Prevent guidance says: “There is no fixed profile of a terrorist, so there is no defined threshold to determine whether an individual is at risk of being drawn into terrorism.” So there’s no nice straightforward “Terrorism risk: 13.6” readout.
Nonetheless, the same process is going on. A person comes into contact with the counterterrorism services. Of the 6,000 or so who are referred to Prevent each year, about 500 are deemed “vulnerable” to radicalisation and are passed on to a subgroup called “Channel”.
Referral to Channel is not based on an objective score but on a subjective feeling (or “expert judgment”): people will be flagged if the panel judging them feels sufficiently strongly that they are a risk. They might not explicitly say “Terrorism risk: 13.6”, but there is still a threshold of risk, above which someone is considered a threat.
And just like the PSA levels in the blood, that assessment will be imperfect. If you read some young man saying something disturbing online, is it harmless anger and braggadocio, or is he a terrorist? You can raise your implicit threshold and avoid harassing innocent people, at the cost of an increased risk of missing a genuine terrorist; or you can lower it and correctly identify more terrorists, at the cost of labelling a lot of harmless people as potential terrorists.
It might seem, like the cancer test in reverse, that there’s an asymmetry here: a false positive annoys people; a false negative kills people. But, like the cancer test, it’s not as simple as that. The false positives will be a lot more common than the false negatives: simply put, there are more mouthy non-terrorists than actual terrorists in the world. As I said before, some 6,000 people are referred to Prevent every year, and 500 or so are judged to be of sufficiently high risk to be passed on to Channel. But there have been only four actual terror attacks in the last two years.
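The base-rate arithmetic can be run on the article’s own figures. This is a deliberately crude back-of-the-envelope sketch, and it makes one generous assumption that is mine, not the source’s: that every genuine attacker was among the 500 passed to Channel.

```python
# Back-of-the-envelope using the figures quoted in the text.
# Assumption (mine, and a generous one): every actual attacker
# was among those passed to Channel.
referred_per_year = 6000
passed_to_channel = 500
attacks_per_year = 2  # roughly four attacks in the last two years

true_positives = attacks_per_year                      # at most 2
false_positives = passed_to_channel - true_positives   # 498
precision = true_positives / passed_to_channel         # 2/500 = 0.004
```

Even on that generous assumption, roughly 498 of every 500 people flagged are not attackers: the flagged group is about 99.6% false positives. That is the asymmetry-in-reverse the paragraph above describes.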
If you lower your threshold, raise the alarm on more borderline cases, then you will waste more police time, put more innocent people under needless scrutiny and stigmatise more communities. (It seems inevitable that a lowered threshold will mean more young Muslim men, in particular, being picked up by the security services.) That may be a price worth paying — but, let’s be clear, it will be a price that you pay. If you take in everybody suspicious for questioning — every misogynist loser on incel subreddits, every angry racist or radical Islamist on dark-web chatrooms — you may prevent one or two more attacks, but you will undoubtedly fill up your jails and enrage the populace.
This doesn’t mean there’s nothing you can do. Moving your threshold is zero-sum. But you can change your test: if instead of testing for PSA you looked for some other marker, you might be able to tell whether someone had cancer more accurately. In the case of counter-terrorism, you could do things like increasing police funding for surveillance, rather than simply being more strict about your criteria, although of course that would mean less money elsewhere.
An alternative suggestion might be to abandon “expert judgment” and introduce something like I talked about above, an explicit algorithm: human judgment is famously terrible at predicting complex things like how likely a criminal is to reoffend, and algorithms consistently outperform us, as the psychologist Paul Meehl demonstrated way back in 1954. They beat humans at predicting the price of wine, how long a cancer patient will live, who will win a football match, how likely a business is to succeed and dozens of other subjects. It is likely that some fairly simple algorithm could do significantly better than the best experts at predicting who is a terror risk, as well.
But it could only ever be a partial improvement. Humans are stubbornly hard to predict. Part of the reason algorithms can outperform human judgment in those fields is that human judgment is consistently terrible: we are very often wrong about who will reoffend, who will live and die, who will win a football match. It is not that algorithmic prediction is great; the future is still hard to know. However good we make our systems for detecting terrorists, they will never be very good. So terrible things will always happen, and when they do, we will assume our systems are too lax and need to be tightened.
It is tempting to think like that in the wake of an atrocity such as David Amess’s murder: to think that we ought to lower our thresholds of what counts as a risk. Perhaps it’s even true. But just as it is an unavoidable fact of reality that reducing false alarms means missing more real ones, a world in which Sir David’s murderer was caught ahead of time could be grimly authoritarian.