You may feel this is all a bit technical and unimportant, but it’s not. The original “fear of Covid-19 scale” paper has more than 120 citations already. The validation study was carried out in Bangladesh, but the authors have also created English, Turkish, Persian, Spanish and other versions of the scale. A small but non-negligible share of the research into the mental health impacts of Covid-19 is carried out using this scale.
And its authors make serious claims about those impacts: they say in one paper that Covid-19 has led to people “suffering from elevated anxiety, anger, confusion, and posttraumatic symptoms”, to “unusual sadness, fear, frustration, feelings of helplessness, loneliness, and nervousness”, and that in “extreme cases, it may trigger suicidal thoughts and attempts and, in some cases, actually result in suicide”. This is important, non-trivial stuff.
More than that: the authors use the scale to propose policies for national governments, such as banning and removing websites that host “misinformation (e.g., false COVID-19, treatment remedies, COVID-19 conspiracy theories)”, or having government health agencies “make online counseling sessions available to help ease the mental health concerns and worries of the general population”. It is very possible that, somewhere in the world, a government white paper will be written citing the “fear of Covid-19” scale as support for some policy or other. (Stranger things have happened.) And yet other researchers cannot look at it to see how it was validated.
(There are, I should note, other concerns with some of the research of one of the authors, some raised here by psychologist Dorothy Bishop and others here by the psychologist Nick Brown.)
As we said earlier, it seems obvious that Covid-19 is dragging a mental health crisis in its wake. It certainly seems obvious to me. So you might think it doesn’t really matter if some papers cut corners a bit in order to tell us things that we already know.
That’s why I wanted to bring up another piece of work, which seems to me an example of careful research in this fast-moving and scary topic. It looked at 800 adults living alone in the UK and USA, and measured their mental health on well-established, well-validated scales (the standard measures of depression, anxiety and loneliness).
The paper measured their responses at three points: one some time before the crisis had really taken off, and two after lockdown had started. Crucially, it was also a “registered report”: that is, the authors said in advance what they were looking for, and the journal agreed to publish the study as long as it was carried out according to its stated methods, regardless of its results. That rules out a lot of the bad statistical practice that can distort science.
Just as the existence of a Covid-19 mental health crisis seemed obvious to me, it seemed obvious to the authors of this study, Dr Netta Weinstein of the University of Reading and Dr Thuy-Vy Nguyen of the University of Durham. They simply assumed it was true; so much so that they measured depression, anxiety and loneliness only in order to look at other things, such as whether introverted people were coping better.
But to both authors’ surprise, they simply didn’t find any increase at all on any of the three scales. The average level of loneliness, anxiety and depression did not go up. It was — to me, and to the authors — a startling result.
It’s not the last word. There could be all sorts of reasons why it didn’t find any impact. Weinstein wondered if the study, looking at people living alone, missed the stress placed on families. Or it could be that depression, anxiety and loneliness aren’t the main things that went up. “We didn’t look at stress,” she says: “I can remember that moment when I realised I was a full-time mum and a full-time academic, that sense of panic, but I don’t think I felt depressed.”
But the point is: we would all have confidently predicted, like Nguyen and Weinstein, that lockdown would make people living alone more lonely, more anxious and more depressed. They didn’t find that. Psychological research is messy and hard, and this result demonstrates the importance of really good, careful science, especially in messy fields like psychology, and especially in fast-moving situations like this. “This is a good opportunity to practise all the stuff that came out of the replication crisis to make sure that the answers we get are the right ones,” says Etchells. Open data, preregistered hypotheses, well-validated scales: we know this stuff is important.
My stress levels have returned to normal now; partly that’s routine, partly it’s kids being back at school, partly it’s my hands being back to their usual washing-up-liquid-advert softness. But the issues surrounding Covid-19 and mental health have not gone away; if nothing else, we can expect economic hardship and widespread unemployment in the coming years, and we know that is linked to mental health problems. Research into those issues will only get more important, to help mitigate those problems. But it needs to be good research. The lessons of the replication crisis need to be learned.