‘I will never know who may have seen those alerts.’ Photo: Matthew Chattle/Getty.


Peter White
21 Sep

There’s a system running in my school called Senso. It watches what we type — students, staff, anyone connected. It scans for trigger words: suicide, kill, bomb, fight, gay. 

When I first found out about it, I don’t think I was meant to notice. The head teacher mentioned it offhand in a staff meeting, folded between rotas and bullet points. A safeguarding tool, she said — like it was a smoke alarm or a wet floor sign. Neutral, necessary, beyond question. 

I sat there fuming for the rest of the meeting, checking my phone under the table to see whether I’d misunderstood our rights. I hadn’t. No consent had been sought. No Data Protection Impact Assessment (DPIA) had been shared. There was no discussion of who would see what, or under what terms. I wasn’t even sure what was being recorded. I’d had private conversations with friends through email — some stupid, some serious, some that could easily have triggered alerts. I left the meeting already second-guessing what had been noticed — and what might be next. 

At first, I thought my reaction was about privacy. Partly, it was. But what lingered — what I kept turning over — was something else. A kind of moral labour was being handed over to a machine: the quiet discipline of noticing, of staying with another person’s experience, of holding their reality in mind. And no one seemed to notice, or care. 

What I was seeing — or rather, what was vanishing — was a form of attention. Not just focus or vigilance, but something older and more human. The effort to see someone in their full, contradictory reality — not as a data point, a red flag, or a procedural category. 

Increasingly, we’re asked to trade that kind of attention for something simpler, more measurable. Tools like Senso make that trade easy — and invisible. They train us to scan for risk, not to remain with the person. Moral attention is the ground of judgement, the beginning of care. It is also a stance of active presence: a refusal to reduce the person in front of us to the signals a system is designed to detect. 

As Simone Weil wrote: “Attention is the rarest and purest form of generosity.” It is not just noticing — it is the effort to see someone else as they are, without turning away. 

That kind of attention brings people together. Sociologists have long recognised that moral life depends not only on individual decisions, but on shared structures of attention — what Émile Durkheim called the “collective conscience”, shaped by ritual, proximity, and mutual recognition. In schools, hospitals, and social care settings, these structures once meant knowing names, noticing changes, and responding to need before it was formally flagged. In the best institutions, good judgement was relational, not procedural. When those structures weaken — when proximity is replaced by process — something shifts. The moral weight of a situation is no longer felt; it is processed. As judgement is replaced by assessment, the capacity for care erodes.

Without that effort, institutions can log risk but cannot enact responsibility. They become unable to see people. This is partly because systems like Senso reshape how people pay attention. Instead of helping staff stay present with what’s in front of them, they train users to follow protocol. The presence of the system creates the illusion that the job is already being done — and so, for many, the impulse to notice or intervene quietly weakens. Senso may be useful, but the risk of complacency is built in. It rewards what can be flagged, not what must be recognised. And it teaches students to act as if they are being watched, even when no one is — a habit that replaces trust with performance. 

I noticed the change in myself almost immediately. I hesitated before typing certain words — even in professional contexts. I second-guessed what I included in safeguarding reports. I was more aware of being observed than of who I was trying to support. 

And I’ve felt this displacement in deeper ways. My wife and I struggled to have a child — we had to go through IVF. Pregnancy was one of the flagged terms. During that time, I searched for information at work, during breaks. That should have been private to us. It wasn’t. I will never know who may have seen those alerts. 

I know colleagues who struggle with mental health — some with histories of depression, trauma, or self-harm — all of which could easily trigger the system. I spoke to people about this. At first, they brushed it off. But then something dawned on them. You could see it in their faces: that slow shift from indifference to unease, as they realised their ordinary life — the rough edges, the unguarded moments — had been pulled into a framework no one had asked for, and no one had consented to. 

Sometimes we’re told the software works. The head, with a touch of pride, said most alerts were meaningless — and shared what she seemed to think was a funny story: a teacher preparing for a school production of Oliver! had been repeatedly flagged for typing “Fag” as shorthand for Fagin. But then she added, more seriously, that once a student researching suicide had been flagged and supported. In that moment, the tool had helped. It was good that it existed. 

Of course it was. But this is the logic of instrumental ethics: justify the structure by pointing to a rare success. Focus on the outcome, ignore the culture it creates. One good intervention becomes the alibi for the entire framework. 

But the tension is real. When a tool helps prevent a tragedy — even once — it becomes almost impossible to oppose. Who wants to be the person who blocked something that might have saved a life? That’s how these systems spread: not by malice, but by the steady pressure of caution. 

If we let rare successes outweigh the quiet damage these systems produce, we risk building institutions that can no longer sustain trust, judgement, or genuine care. A tool that prevents harm in one moment cannot excuse a culture that forgets how to see. The broader harm — the unravelling of trust, of moral attention, of the space for private experience — goes uncounted. 

This isn’t unique to Senso. And it isn’t unique to schools. Techno-rationalist tools prioritise consistency, efficiency, and measurable outcomes over human judgement. They’re designed to streamline decision-making, not to deepen moral awareness. And once you begin to see how they displace moral attention, you see it everywhere.

I’ve always felt uneasy when friends tell me they use apps that track their children’s location, read their messages, even let them listen in on their surroundings. One enthusiastically claimed, “It’s amazing!” But it isn’t. It simulates attentiveness while displacing the human presence that moral attention requires. 

I found out that one boy in my class had this kind of app installed on his phone. His parents approached me after school — not to ask about his day, but because they already knew, in granular detail, everything he’d done and said. Including a conversation I’d had with him that afternoon, which they quoted back to me. 

I was horrified. Not because they meant harm, but because they couldn’t see what had been lost. The ordinary fabric of relationship — curiosity, trust, even the joy of being surprised by who your child is — had been replaced by data. They weren’t parenting him. They were managing him. 

And in doing so, they left something behind — not deliberately, but carelessly. A quiet trail of moral damage: the fraying of trust, the flattening of relationships, the unspoken message that being known is the same as being watched. Not just for their son, but for others too. They had entered their son’s private life — and his friends’, and his teachers’ — without even realising it, because they’d already outsourced the need to look. 

The same logic is spreading into workplaces. A friend told me his company monitors Slack activity to detect signs of burnout. There’s no conversation, no check-in — just an algorithm scanning for dips in productivity or changes in tone. On paper, it looks like concern. In practice, it reduces wellbeing to a pattern of keystrokes. The message is clear: we’ll watch over you, but we won’t talk to you. We’ll detect the signal, but we won’t sit with the person. 

And there’s an irony here. Many of these technologies — from school safeguarding software to corporate wellness trackers — are sold with the promise of freeing us to focus on what matters: relationships, creativity, care. The software will handle the rest. 

But moral attention doesn’t remain neatly available once displaced. It gets rerouted, co-opted, commodified. The same culture that hands over moral responsibility to automated processes is the one that floods our remaining attention with platforms built to capture and exploit it.

We tell ourselves we’re buying time. What we’re buying is distraction. A school automates its safeguarding to “focus on pupils”, but the system itself redirects attention away from human judgement and toward process. A company automates wellbeing checks so that managers don’t have to ask. Parents install surveillance apps so they can worry less, and then find themselves scrolling through timelines designed to keep them worried. What’s lost isn’t just focus, but the ability to attend in any meaningful way.

I did raise concerns about Senso. When I spoke to the head and deputy, the response was polite at first — a warm wash of flattery, the kind that’s meant to settle you down. But there was something else in it too: a subtle attempt to steer the conversation before it began. A little soft power. A little quiet manipulation. A little whiff of gaslighting — as if I’d misunderstood, or was being overly sensitive. 

When I pushed further — isn’t this surveillance? — the tone hardened. “No, it’s not! It’s a safeguarding tool,” the deputy snapped. 

I remember saying, Sure — and this is a chair. But if I smash someone over the head with it, it becomes a weapon. A thing isn’t just what it’s called. It’s what it does — and what it makes possible. 

In hindsight, I probably lost my temper a bit too. Not dramatically, but enough to feel the room shift. The sense that I’d stopped playing the game — and that this conversation was now something to be ended. 

I left the meeting furious. For a while, I seriously considered typing the most obscene, provocative, and wildly flaggable phrases I could think of — purely to flood the software with alerts. I didn’t do it. But the fact that the thought was so tempting seemed to prove the point: the system wasn’t fostering trust or responsibility. It was inviting petty rebellion. 

If this is the future of care, it comes at the cost of attention. What emerges is a culture of vigilance that undermines the moral core of the work. Techno-rationalism and institutional drift teach us to monitor and manage, not to attend.

Moral attention is fragile and easily displaced. It requires a person willing to stay present with another — to hold their experience, their complexity, their risk, their humanity in mind. A tool can log behaviour, but it cannot attend. It can flag risk, but it cannot discern intention. It can generate vigilance, but not care. 

When institutions lose moral attention — when their people are trained out of it — they perform responsibility but no longer embody it. They can process signals all day long, but they can no longer really see. 

And when you work in such a place, the dissonance is constant. You see thoughtful, decent people outsourcing their judgement — not because they don’t care, but because the structure encourages them not to. It’s not cruelty; it’s drift. The gradual substitution of moral attention with protocol, of discernment with detection. 

Sometimes it’s framed as efficiency. Sometimes it’s necessity: a way to save money, stretch thin staff, or avoid paying people properly in the first place. Sometimes it’s just about keeping up — institutions adopting whatever looks modern, whether or not it helps. Technology and automation might be able to support care, but they cannot replace it. And over time, people stop trying. Not because they don’t care, but because they’ve been trained out of noticing. The pattern shifts, but the effect is familiar.

Senso did not respond to a request for comment. But before we install the next solution, we should ask: does it train us to see more clearly — or to look away?


Peter White is a teacher. He is writing under an alias.