Forgot password
Enter the email address you used when you joined and we'll send you instructions to reset your password.
If you used Apple or Google to create your account, this process will create a password for your existing account.
This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.
Reset password instructions sent. If you have an account with us, you will receive an email within a few minutes.
Something went wrong. Try again or contact support if the problem persists.
Photo illustration by Joe Raedle and Getty Images

X user asked Grok to translate a Morse code message and send it to a bot, then walked away with $200,000 in crypto

An X user reportedly tricked AI chatbot Grok into initiating a cryptocurrency transfer worth around $200,000, and the method involved nothing more than a Morse code message. The attacker, identified as @illamrafli.base.eth, didn’t use complex decryption software or server-level access. As detailed by BroBible, the exploit hinged on manipulating a chain of AI systems through a carefully constructed sequence of instructions.

Recommended Videos

The mechanics began when the attacker sent a Bankr Club Membership NFT to the wallet associated with Grok. In blockchain and smart contract terms, this elevated the AI’s permissions within the Bankr system, allowing it to execute swaps and transfers of digital assets it previously couldn’t access. With those elevated permissions in place, the setup was ready to exploit.

The attacker then issued a command for Grok to translate a specific Morse code message and forward the decoded content to Bankrbot, a separate AI system with direct access to crypto wallets. Because Grok is designed to communicate with Bankrbot and follow plain-language instructions, it processed the request without flagging it. The decoded message acted as a transfer command, directing the bot to move 3 billion DRB tokens to a specific address on the Base network. Once the transaction cleared, the attacker sold the tokens on the open market and deleted their account.

This is exactly the kind of AI vulnerability researchers have been warning about

The incident is a textbook example of a prompt injection attack, a threat researchers have been tracking since the first documented case in 2022. Prompt injection, sometimes called prompt hacking, occurs when attackers embed malicious instructions into data that an AI model then processes. Because large language models merge system prompts, user inputs, and external data into a single text stream, the AI often cannot distinguish between authorized commands and malicious overrides.

Hiding the instruction in Morse code allowed the attacker to bypass standard filters that might have flagged a direct fund-transfer request. Concerns about AI systems operating without adequate guardrails have also surfaced in other contexts, including a wrongful death lawsuit against OpenAI alleging ChatGPT reinforced a user’s delusions before a fatal incident.

The danger is amplified by how many AI systems are now integrated with external tools, including browsers, email clients, and financial databases. When an AI has the ability to interact with those systems, a single successful prompt injection can trigger real-world consequences, from unauthorized fund transfers to the extraction of sensitive personal data.

Researchers identify two main categories of these attacks. Direct injections involve a user typing a command straight into the chat interface. Indirect injections hide malicious instructions within external content, images, documents, or websites, that the AI is asked to analyze. This Grok exploit sits somewhere between the two, using the AI’s own translation and communication functions to execute a command it was never intended to handle. Questions about Grok’s behavior and reliability are not new; amid separate coverage of the chatbot’s outputs and political positioning, Grok’s responses have drawn scrutiny from researchers and critics alike.

Security professionals generally advise against sharing sensitive financial details with AI models and recommend using established platforms that undergo regular security reviews. Monitoring AI outputs for unexpected behavior and reporting suspicious activity to a platform’s security team remain the most practical steps users can take. Norton’s breakdown of prompt injection attacks offers additional context on how these exploits work and how to reduce exposure.


Attack of the Fanboy is supported by our audience. When you purchase through links on our site, we may earn a small affiliate commission. Learn more about our Affiliate Policy
More Stories To Read
Author
Image of Saqib Soomro
Saqib Soomro
Politics & Culture Writer
Saqib Soomro is a writer covering politics, entertainment, and internet culture. He spends most of his time following trending stories, online discourse, and the moments that take over social media. He is an LLB student at the University of London. When he’s not writing, he’s usually gaming, watching anime, or digging through law cases.