
The horror of deepfake nudes

Non-consensual porn isn't a 'woman's issue' — it shows that anyone's identity can be hijacked

Is this picture real? Credit: George Marks/Retrofile/Getty Images



October 28, 2020   5 mins

Martin Scorsese's most recent film, The Irishman, told a story that spanned seven decades. Robert De Niro and Joe Pesci starred, and in order to "de-age" them, Scorsese used a special three-camera rig and employed dedicated special-effects artists for post-production work. The costs ran into the millions, and the results were patchy. Earlier this year, a YouTuber decided to see if he could do any better: using free artificial intelligence software, he bettered Scorsese's attempt in a week.

It is no exaggeration to say that soon almost everything we see or hear online will be synthetic, that is, generated or manipulated by AI. Machines can "learn" to do almost anything when "trained" on the right data, and they have never had access to more data, nor so much power to churn through it all. Some experts estimate that within five to seven years, 90% of all video content online will be synthetic. Before long, anyone with a smartphone will be able to make Hollywood-level AI-generated content.

One synthetic-text generating model can already produce articles that appear to have been written by a human. AI can be trained to clone someone's voice even if they are dead: an old recording of JFK's voice has been used to make a clip of the former president reading the Book of Genesis. AI trained on a dataset of human faces can generate convincing fake images of people who do not exist, and it can be taught to insert people into photographs and videos they were not originally in. One YouTuber is working on a project to insert the actor Nicolas Cage into every movie ever made.

All this sounds weird at worst and hilarious at best, but this technology has a dark side. It will, inevitably, be misused, and for that most obvious of male-driven reasons. It was reported last week that the messaging app Telegram is hosting a “deepfake pornography bot,” which allows users to generate images of naked women. According to the report, there are already over 100,000 such images being circulated in public Telegram channels; considering that they are almost certainly being shared privately too, the actual number in existence is likely much higher. The women who appear to feature in this cache of publicly-shared fake porn are mostly private individuals rather than celebrities. More disturbingly, the images also include deepfake nudes of underage girls.

Henry Ajder, the lead author of the report, told me that "the discovery of the bot and its surrounding ecosystem was a disturbing, yet sadly unsurprising, confirmation that the creation and sharing of malicious deepfakes is growing rapidly." When he wrote to Telegram to request a takedown, he "received no response". The bot and the fake pornography, including that of minors, are still live at the time of writing.

Deepfake pornography first emerged less than three years ago, when an anonymous user started posting it on Reddit. He later revealed his methodology: he was using open-source AI software to insert celebrity faces into pornographic films, by training AI on images and videos of his intended target. The end result was surprisingly convincing. When other Redditors started making their own deepfake pornography and news of this AI-assisted community broke to the world, there was a furore: Reddit shut down the community and banned deepfake pornography. But the genie was out of the bottle.

Since its early days on Reddit, an entire deepfake pornography ecosystem has developed online. With a few clicks, it is possible to access deepfake pornography of every (female) celebrity imaginable, from Ivanka Trump and Michelle Obama to Ann Coulter. And celebrities are not the only targets. AI can clone any woman: all that is needed is some training data, and the rapid acceleration of the technology means that less and less training data is required. These days a single picture, a few seconds of a voice recording, or a single video would be enough.

The tools to make synthetic media are becoming more and more accessible. Anyone can try to create their own by using free software; support communities online explain how to use it. It is even possible to buy "learn to deepfake" courses, or to commission a "deepfake artist" for a bespoke "creation" for as little as $20. Recently, entrepreneurs have begun to wrap the technology in easy-to-use app interfaces, so millions of consumers will be able to experiment with making their own AI-generated fake content.

One such app launched last year, calling itself "DeepNude". It allowed users to upload photos of a woman to generate a deepfake image of her naked, and for a fee of $50 they could remove a watermark so that the image would look authentic. (Because the underlying AI was trained on data of female bodies, the app only worked on women.) When DeepNude was released, demand was so high that the app's servers crashed under a stampede of downloads. Facing a barrage of negative press, its developers eventually took their creation offline, saying "the world is not ready for DeepNude app". Weeks later, they quietly sold their software in an anonymous auction for $30,000.

While deepfake pornography may seem to be a "woman's issue", it provides an early and worrying case study of how synthetic media could be weaponised against us all. It is inevitable, for example, that deepfakes will be used as a potent new tool of identity theft and fraud.

Last week, federal prosecutors in the US revealed the case of a Californian widow who was scammed out of nearly $300,000 by an unidentified overseas con man who romanced her using deepfake videos in which he posed as the superintendent of the US Naval Academy. The widow, only identified as “M.M.”, thought she was building a relationship with an admiral named “Sean Buck”, who told her he was stationed on an aircraft carrier in the Middle East. They communicated for months via Skype, during which “Buck” always appeared dressed in his military uniform. According to the prosecutors, “While M.M. believed she was communicating with [Buck] via live chat on Skype, what she was seeing were actually manipulated [deepfake] clips of preexisting publicly-available video of the real Admiral Buck, and not the live video chats that M.M. believed them to be.”

The costs of identity theft and fraud are vast and rising. According to an annual report by Javelin Strategy & Research, identity fraud-related losses in the US alone grew 15% in 2019, to $16.9 billion. This is largely because financial institutions' methods of identifying and responding to fraud are no match for criminals' high-tech schemes to steal money, which increasingly incorporate deepfakes.

Of course, deepfakes will be used not only against consumers but against businesses too. The first serious case of deepfake business fraud emerged last year, when the Wall Street Journal reported that a British energy company lost €250,000 after scammers used AI to clone the voice of the company's CEO and demand, over a phone call, that a money transfer be made.

Libel, identity theft and fraud are nothing new, but the potency of such ventures will increase exponentially as synthetic media proliferates. Because of its unique ability to "clone", AI presents a dire threat to an individual's right to privacy and security. It is already almost impossible to distinguish between authentic and synthetic media, and the quality of the latter is only improving. This, then, is the alarming reality: a world in which our identities can be "hijacked" by almost anyone and used against us.

We have reached the critical moment to set standards — to create ethical and legal frameworks — to define how synthetic media should be created, labelled and identified. Given that the AI behind synthetic media is still nascent, we still have (a little) time to influence the effect this technology will have on our societies and the individuals within them. Too often we build exciting technology without considering how it might amplify the worst parts of human nature, or hand weapons to the cruellest impulses.


Nina Schick is an author and broadcaster, specialising in how technology and artificial intelligence are reshaping politics. She has advised global leaders on deepfakes, including Joe Biden and the former Secretary-General of NATO.
