
Google says attackers hit Gemini with 100,000 prompts, but what they were really after is bigger than spam

Google’s flagship artificial intelligence chatbot, Gemini, recently came under a concentrated attack involving more than 100,000 prompts in what the company described as an effort to clone the system. As reported by NBC News, the activity was not designed to crash the chatbot but to carry out what Google calls a distillation campaign aimed at extracting its underlying logic.


The attack involved repeatedly submitting thousands of queries to probe for proprietary patterns and algorithms. Google characterized the effort as “model extraction,” in which actors attempt to reverse engineer the internal workings of an AI system to strengthen or build competing models.
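To make the concept concrete, here is a toy, hedged sketch of what extraction looks like in principle: a "teacher" function stands in for a proprietary black box, and an observer who can only submit inputs and read outputs recovers the hidden parameters from enough query-response pairs. Everything below is a miniature illustration, not real attack code and not how Gemini works.

```python
# Toy illustration of model extraction ("distillation") at miniature scale.
# The "teacher" stands in for a proprietary black box: the observer sees only
# inputs and outputs, never the internal parameters.
import numpy as np

rng = np.random.default_rng(0)

HIDDEN_W = np.array([1.5, -2.0, 0.7])  # the "proprietary" parameters

def teacher(x):
    """Black-box model; its internals are invisible to the prober."""
    return np.tanh(x @ HIDDEN_W)

# Step 1: probe the black box with many queries
# (the miniature analogue of a 100,000-prompt campaign).
queries = rng.normal(size=(100_000, 3))
answers = teacher(queries)  # only input-output pairs are observed

# Step 2: fit a "student" to the observed behavior. Because tanh is
# invertible on its range, a least-squares fit recovers the hidden rule.
recovered_w, *_ = np.linalg.lstsq(queries, np.arctanh(answers), rcond=None)

print(np.round(recovered_w, 3))  # -> [ 1.5 -2.   0.7], the "stolen" weights
```

Real language models are vastly harder to copy than a three-weight toy, but the mechanism is the same: with enough input-output pairs, behavior that was never published can be approximated from the outside.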

Tech companies have invested billions in developing large language models, and their internal architectures are treated as highly valuable intellectual property. Google said it views distillation attempts as a form of IP theft.

The attack targeted Gemini’s reasoning capabilities

According to Google, many of the prompts were crafted to expose the algorithms that enable Gemini to reason through problems and process information. That reasoning layer is a key differentiator in advanced AI systems and a major competitive advantage.

Google believes the activity was largely driven by private companies or researchers seeking an edge in the rapidly expanding AI market, with attempts originating from multiple countries.

The company said it detected the campaign and implemented adjustments to strengthen protections before the effort could fully succeed. John Hultquist, chief analyst at Google’s Threat Intelligence Group, said the company expects more such incidents across the industry and described Google as an early indicator of a broader threat landscape.

Although major large language models include safeguards to detect and block distillation attempts, they remain accessible to the public, which creates inherent exposure.
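What such safeguards might look like in practice is easier to see with a sketch. The heuristic below is entirely hypothetical, not Google's actual defense: it flags clients whose query volume is high but whose prompts look like a systematic template sweep rather than organic use.

```python
# Hypothetical distillation-detection heuristic: high volume plus low prompt
# diversity suggests a scripted sweep. Thresholds and features are illustrative.
from collections import defaultdict

class DistillationMonitor:
    def __init__(self, volume_threshold=1000, diversity_threshold=0.3):
        self.volume_threshold = volume_threshold
        self.diversity_threshold = diversity_threshold
        self.prompts_by_client = defaultdict(list)

    def record(self, client_id, prompt):
        self.prompts_by_client[client_id].append(prompt)

    def _diversity(self, prompts):
        # Crude proxy: distinct prompt "templates" (digits stripped) divided
        # by total prompts. Scripted probing tends to reuse one template
        # with small substitutions.
        templates = {" ".join(w for w in p.split() if not w.isdigit())
                     for p in prompts}
        return len(templates) / len(prompts)

    def flagged_clients(self):
        return [cid for cid, prompts in self.prompts_by_client.items()
                if len(prompts) >= self.volume_threshold
                and self._diversity(prompts) < self.diversity_threshold]

# Tiny demo with lowered thresholds:
monitor = DistillationMonitor(volume_threshold=3, diversity_threshold=0.5)
for i in range(3):
    monitor.record("bot-1", f"Solve step {i} of this math problem")
monitor.record("user-2", "What's a good pasta recipe?")
print(monitor.flagged_clients())  # -> ['bot-1']
```

Production systems presumably combine far more signals than this, but the tension the sketch exposes is real: any model open to the public can be probed, and defenses can only raise the cost of doing so.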

The risk grows as organizations deploy custom LLMs trained on sensitive proprietary data. Hultquist noted that, in theory, a model trained on decades of confidential financial strategies or trade secrets could be partially distilled through persistent probing.

He said that if a model has been trained on 100 years of secret trading strategies, some of that embedded logic could theoretically be extracted. Google said the scale of the 100,000-prompt campaign underscores how organized and persistent model extraction efforts have become.

