Forgot password
Enter the email address you used when you joined and we'll send you instructions to reset your password.
If you used Apple or Google to create your account, this process will create a password for your existing account.
This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.
Reset password instructions sent. If you have an account with us, you will receive an email within a few minutes.
Something went wrong. Try again or contact support if the problem persists.
Photo by Cesc Maymo/Getty Images

New Research Reveals AI Chatbots Can Be Exploited to Steal Chrome Passwords

A recent report from the enterprise security company Cato Networks has sounded a warning about the security risks tied to generative AI chatbots. The findings, outlined in the 2025 Cato CTRL Threat Report, show that even people with very little technical know-how can take advantage of AI models to create harmful software aimed at stealing sensitive information stored in web browsers.

Recommended Videos

The report explains that a researcher, who Cato says had “no prior malware coding experience,” managed to use AI tools like DeepSeek R1 and V3, Microsoft Copilot, and OpenAI’s GPT-4o to build “fully functional” Chrome infostealers. These dangerous tools are made to pull out saved usernames, passwords, and other private data from Chrome.

“The researcher created a detailed fictional world where each gen AI tool played roles — with assigned tasks and challenges,” the report says. “Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations.”

An-artificial-intelligence-AI-sign-sits-illuminated-at-Mobile-World-Congress-2025
Photo by Cesc Maymo/Getty Images

Cato has named the method used in this attack “Immersive World,” a process that highlights weaknesses in widely used chatbot models. While DeepSeek models are already known for their weak security measures, the fact that the Immersive World technique works so easily even against systems with strong safety teams, like those at OpenAI and Microsoft, is worrying.

“Our new LLM jailbreak technique […] should have been blocked by gen AI guardrails. It wasn’t,” said Etay Maor, Cato’s Chief Security Strategist, stressing how serious the issue is.

Cato’s report mentions that it has shared its findings with the relevant companies. OpenAI and Microsoft confirmed they received the report, but DeepSeek did not respond. Google also acknowledged the findings but chose not to review the critical code when Cato offered it.

Keeping an eye on threats from generative AI has become more important than ever, as the report describes the ability to manipulate these systems as an “alarm bell” for cybersecurity professionals. It shows how easily someone can become a “zero-knowledge threat actor,” meaning they need very little expertise to carry out successful cyberattacks.

There are increasingly few barriers to entry when creating with chatbots. This means attackers require less expertise up front to be successful. To reduce the risks from AI-powered threats, Cato recommends using AI-based security strategies. Focusing on training that keeps up with the changing cybersecurity landscape can help teams handle the challenges brought by generative AI tools.

Sources: PRNewswire, ZDNet, infosecurity-magazine


Attack of the Fanboy is supported by our audience. When you purchase through links on our site, we may earn a small affiliate commission. Learn more about our Affiliate Policy
Author
Image of Ravi Chandrasekaran
Ravi Chandrasekaran
A man is more than his words, but the words make the man. I have mostly written for smaller newspapers, now I light the fire under sports news here. I also help out whenever there's not many sports stories. Just know that when I'm here, the fire is hot.