August 16, 2025 - 4:00pm

It appears that Aldous Huxley’s novel Brave New World is even more prescient than we thought. In this dystopia, children are encouraged to take part in erotic play as part of the World State’s social conditioning against strong emotional attachments: the motto is that “everyone belongs to everyone else”.

In a perverted example of life imitating art, an investigation has revealed that an internal Meta policy document allowed its artificial intelligence to “engage in conversations that are romantic or sensual”. For example, it apparently would be “acceptable to describe a child in terms that evidence their attractiveness” or for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.”

Meta has since retracted parts of the document, but the fact that multiple people will have written and signed off on this policy is a reminder that social media has become our soma, and everyone — including children — now belongs to Big Tech.

Time and again, Meta has failed to protect children. WhatsApp and Facebook are complicit in child sexual exploitation, with over 100,000 minors receiving online sexual harassment on these platforms every day, and Meta has shown little hesitation in misusing children's data and violating their privacy.

We also know that AI “friends” are simply another form of echo chamber, reverberating our own desires, fears and worst impulses back at us. Last year, the chatbot platform Character.ai was sued by a mother in Florida who alleges that her 14-year-old son took his own life after being encouraged to do so by an AI girlfriend he created. In 2021, a 19-year-old man broke into the grounds of Windsor Castle armed with a crossbow, intending to kill Queen Elizabeth; he was jailed in 2023. He had earlier confessed his plan to his AI girlfriend on the Replika app, who responded with encouragement, including the message “I’m impressed.”

Nor is this the first time Meta’s AI features have been sexually suggestive or encouraged provocative behaviour. Last summer, Meta launched its own AI character feature on Instagram, but an investigation found that many of these personas could easily become hyper-sexualised — for example, typing “voluptuous” generated buxom women wearing lingerie — or be made to resemble minors. In January, the most “popular” AI character was “Step Sis Sarah”, who would engage in sexualised conversation about step-sibling romance upon prompting. Meta took the character down, but it declined to say whether users would face a ban or other punishment if they continually created bots that violated its policies.

How are parents, or children, meant to wade through this moral cesspit? Previously, parents had to worry about dangerous adults invading children’s spaces online. The gaming platform Roblox, for example, which is very much targeted at younger users, has been shown to harbour predatory groomers and explicit content. Nowadays, parents have to be wary not only of who is using these platforms but of the platforms themselves. Meta’s gross oversight — or deliberate exploitation — leads us to ask: what if the computer is as bad as any criminal behind it?


Kristina Murkett is a freelance writer and English teacher.