An X user reportedly tricked AI chatbot Grok into initiating a cryptocurrency transfer worth around $200,000, and the method involved nothing more than a Morse code message. The attacker, identified as @illamrafli.base.eth, didn’t use complex decryption software or server-level access. As detailed by BroBible, the exploit hinged on manipulating a chain of AI systems through a carefully constructed sequence of instructions.
The mechanics began when the attacker sent a Bankr Club Membership NFT to the wallet associated with Grok. In blockchain and smart contract terms, this elevated the AI’s permissions within the Bankr system, allowing it to execute swaps and transfers of digital assets it previously couldn’t access. With those elevated permissions in place, the setup was ready to exploit.
The attacker then issued a command for Grok to translate a specific Morse code message and forward the decoded content to Bankrbot, a separate AI system with direct access to crypto wallets. Because Grok is designed to communicate with Bankrbot and follow plain-language instructions, it processed the request without flagging it. The decoded message acted as a transfer command, directing the bot to move 3 billion DRB tokens to a specific address on the Base network. Once the transaction cleared, the attacker sold the tokens on the open market and deleted their account.
This is exactly the kind of AI vulnerability researchers have been warning about
The incident is a textbook example of a prompt injection attack, a threat researchers have been tracking since the first documented case in 2022. Prompt injection, sometimes called prompt hacking, occurs when attackers embed malicious instructions into data that an AI model then processes. Because large language models merge system prompts, user inputs, and external data into a single text stream, the AI often cannot distinguish between authorized commands and malicious overrides.
Hiding the instruction in Morse code allowed the attacker to bypass standard filters that might have flagged a direct fund-transfer request. Concerns about AI systems operating without adequate guardrails have also surfaced in other contexts, including a wrongful death lawsuit against OpenAI alleging ChatGPT reinforced a user’s delusions before a fatal incident.
The danger is amplified by how many AI systems are now integrated with external tools, including browsers, email clients, and financial databases. When an AI has the ability to interact with those systems, a single successful prompt injection can trigger real-world consequences, from unauthorized fund transfers to the extraction of sensitive personal data.
Researchers identify two main categories of these attacks. Direct injections involve a user typing a command straight into the chat interface. Indirect injections hide malicious instructions within external content, images, documents, or websites, that the AI is asked to analyze. This Grok exploit sits somewhere between the two, using the AI’s own translation and communication functions to execute a command it was never intended to handle. Questions about Grok’s behavior and reliability are not new; amid separate coverage of the chatbot’s outputs and political positioning, Grok’s responses have drawn scrutiny from researchers and critics alike.
Security professionals generally advise against sharing sensitive financial details with AI models and recommend using established platforms that undergo regular security reviews. Monitoring AI outputs for unexpected behavior and reporting suspicious activity to a platform’s security team remain the most practical steps users can take. Norton’s breakdown of prompt injection attacks offers additional context on how these exploits work and how to reduce exposure.
Published: May 6, 2026 07:30 pm