Google’s flagship artificial intelligence chatbot, Gemini, recently came under a concentrated attack involving more than 100,000 prompts in what the company described as an effort to clone the system. As reported by NBC News, the activity was not designed to crash the chatbot but to carry out what Google calls a distillation campaign aimed at extracting its underlying logic.
The attack involved repeatedly submitting thousands of queries to probe for proprietary patterns and algorithms. Google characterized the effort as “model extraction,” in which actors attempt to reverse engineer the internal workings of an AI system to strengthen or build competing models.
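To illustrate the mechanics the article describes, the sketch below shows in rough outline how a distillation pipeline generally works: an attacker sends large numbers of prompts to a target ("teacher") model, records its responses, and uses the resulting prompt-response pairs as training data for their own "student" model. The `query_teacher` function and the sample prompts here are hypothetical placeholders, not Google's API or the actual prompts used in the reported campaign; this is a minimal conceptual sketch only.

```python
import json

def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for calling a target model's public interface.
    In a real distillation campaign this would be a request to the chatbot;
    here it simply returns a canned string so the sketch runs on its own."""
    return f"[teacher response to: {prompt}]"

# An attacker crafts many prompts designed to surface the target model's
# reasoning patterns (the article cites more than 100,000 in this campaign).
prompts = [
    "Explain step by step how you would solve this logic puzzle...",
    "Walk through your reasoning for the following word problem...",
    # ...tens of thousands more probing prompts in a real campaign...
]

# Collect prompt/response pairs as a supervised training dataset.
with open("distillation_dataset.jsonl", "w") as f:
    for prompt in prompts:
        response = query_teacher(prompt)
        f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")

# The dataset would then be used to fine-tune a separate "student" model,
# transferring some of the teacher's behavior without any access to its
# weights or original training data.
```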
Tech companies have invested billions in developing large language models, and their internal architectures are treated as highly valuable intellectual property. Google said it views distillation attempts as a form of IP theft.
The attack targeted Gemini’s reasoning capabilities
According to Google, many of the prompts were crafted to expose the algorithms that enable Gemini to reason through problems and process information. That reasoning layer is a key differentiator in advanced AI systems and a major competitive advantage.
Google believes the activity was largely driven by private companies or researchers seeking an edge in the rapidly expanding AI market, with attempts originating from multiple countries.
The company said it detected the campaign and implemented adjustments to strengthen protections before the effort could fully succeed. John Hultquist, chief analyst at Google’s Threat Intelligence Group, said the company expects more such incidents across the industry and described Google as an early indicator of a broader threat landscape.
Although major large language models include safeguards to detect and block distillation attempts, they remain accessible to the public, which creates inherent exposure.
The risk grows as organizations deploy custom LLMs trained on sensitive proprietary data. Hultquist noted that, in theory, a model trained on decades of confidential financial strategies or trade secrets could be partially distilled through persistent probing.
As an example, he said that a model trained on 100 years of secret trading strategies could theoretically have some of that embedded logic extracted. Google said the scale of the 100,000-prompt campaign underscores how organized and persistent model extraction efforts have become.