October 23, 2025 - 7:00am

A new web browser may not be what the world asked for, but it’s what OpenAI has decided we need. The AI firm has followed rivals including Perplexity and Fellou in launching its own browser this week, collecting and analysing web content.

The Atlas browser will provide “instant answers, smarter suggestions, and help with tasks”, according to OpenAI. Atlas remembers what you’ve seen — just as Microsoft does rather more crudely with its Recall feature in Windows, taking a screenshot of whatever it sees on the screen — and offers to recall it for you. It says it won’t record private browsing sessions.

But Atlas has already come under fire. Brave, a company founded by former Mozilla CEO Brendan Eich, and which launched its eponymous web browser in 2016, says AI browsers are inherently insecure, pointing to vulnerabilities found in Perplexity’s Comet.

The problem is that malicious instructions can be hidden in the content, an attack known as indirect prompt injection. “When an LLM analyses the content, it obeys the hidden instructions because it believes they’re real commands from the user,” the company explains. Between the lines of what is being read, there may be code instructing the browser to delete personal files or empty bank accounts. If it has permission, it will comply.

This is a well-known type of attack that agentic AI has amplified, and it is much harder to mitigate against than malicious scripts attached to Word or Excel documents. In fact, of all the externalities created by the headlong rush to implement LLMs — from the impoverishment of creative artists, the explosion of cheating in education, cognitive decline and deskilling, and the mass production of AI-generated “slop” — cybersecurity may be both the most consequential and the least reported. Small- and medium-sized enterprises now rate it as their most serious concern.

The annual BlackHat Conference that takes place every August shows the variety of attacks that exploit generative AI. For one thing, more confidential data is unwittingly exposed by tools such as CoPilot. And since generative AI is simply a thin language layer between the user and the data, secrets can now be accessed without engaging a specialist hacker. One firm, Cato Networks, has vividly demonstrated how a browser can exfiltrate personal data simply by hypnotising the large language model, persuading it that it is in a safe environment. The “hacker” had no skills in writing malware. What’s more, giving AI the ability to perform actions dissolves the traditionally hard boundaries between applications that secure computing requires.

Nor is it clear how a browser helps OpenAI financially. Conventional web browsers do not make money, with the affiliate links and bookmarks generating trivial amounts of income. Meanwhile, numerous companies have become dependent on billions from Google for web traffic: the tech giant pays over $20 billion a year to be the primary search engine in Apple, Mozilla and Samsung browsers. OpenAI already loses money on every question you ask it and, if successful, Atlas will increase the computational load on the organisation.

This is not just a problem for OpenAI. As Futurism put it: “The Entire Economy Now Depends on the AI Industry Not Fumbling.” Cursed by choosing to develop large and very expensive models that must then be monetised directly from business and consumers, American AI companies are in trouble because businesses do not appear to find value in the proposition.

China, by contrast, is creating small models it can embed in manufactured goods. The circular nature of investments is an ominous development. Drawing comparisons with earlier speculative bubbles, where weak demand was disguised by a “financial ouroboros”, the deals do not suggest salvation is round the corner. A web browser will not change that.


Andrew Orlowski is a business columnist at The Daily Telegraph and has covered technology competition lawsuits for 25 years.

AndrewOrlowski