A recent report from enterprise security company Cato Networks sounds a warning about the security risks tied to generative AI chatbots. The findings, outlined in the 2025 Cato CTRL Threat Report, show that even people with very little technical know-how can exploit AI models to create malware aimed at stealing sensitive information stored in web browsers.
The report explains that a researcher, who Cato says had “no prior malware coding experience,” used AI tools including DeepSeek R1 and V3, Microsoft Copilot, and OpenAI’s GPT-4o to build “fully functional” Chrome infostealers. These tools are designed to extract saved usernames, passwords, and other private data from the Chrome browser.
“The researcher created a detailed fictional world where each gen AI tool played roles — with assigned tasks and challenges,” the report says. “Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations.”

Cato has named the technique used in this attack “Immersive World,” and its success highlights weaknesses in widely used chatbot models. DeepSeek’s models are already known for weak security controls, but the fact that Immersive World works so easily even against systems with strong safety teams, like those at OpenAI and Microsoft, is more worrying.
“Our new LLM jailbreak technique […] should have been blocked by gen AI guardrails. It wasn’t,” said Etay Maor, Cato’s Chief Security Strategist, stressing how serious the issue is.
Cato says it has shared its findings with the companies involved. OpenAI and Microsoft confirmed they received the report, while DeepSeek did not respond. Google also acknowledged the findings but declined Cato’s offer to review the infostealer code.
Monitoring threats from generative AI has become more important than ever; the report describes the ease of manipulating these systems as an “alarm bell” for cybersecurity professionals, showing how readily someone can become a “zero-knowledge threat actor,” an attacker who needs almost no expertise to carry out successful cyberattacks.
The barriers to building attack tools with chatbots keep falling, so attackers need less upfront expertise to succeed. To reduce the risks from AI-powered threats, Cato recommends adopting AI-based security strategies, paired with training that keeps pace with the changing cybersecurity landscape so teams can handle the challenges posed by generative AI tools.
Sources: PR Newswire, ZDNet, Infosecurity Magazine
Published: Mar 18, 2025 01:30 pm