X has moved to restrict Grok’s ability to generate revealing images of real people following widespread backlash over sexualized AI deepfakes. The decision comes after weeks of public criticism and political pressure surrounding the misuse of the tool to edit images of identifiable individuals into explicit or suggestive content.
As reported by the BBC, the company says it has implemented new technical measures designed to prevent images of real people from being edited into revealing clothing. The change is intended to address growing concerns about harm caused by the AI feature, particularly when used without consent.
The announcement follows scrutiny from both UK and US authorities, with regulators warning that the platform could face legal consequences if it failed to act. While officials have welcomed the move, they have made clear that investigations into potential breaches of the law are still underway.
Pressure from regulators is forcing platforms to act
UK officials described X’s decision as a “vindication” of earlier calls to rein in the technology, though they stopped short of calling the issue resolved. A local regulator said its investigation into whether X broke UK laws remains ongoing, and the Technology Secretary stated that while the action was welcome, the facts still need to be “fully and robustly established” through the inquiry.
Similar pressure has emerged in the United States. X’s announcement came just hours after California’s attorney general revealed the state was investigating the spread of sexualized AI deepfakes, including material involving children, adding further urgency to the company’s response.
To limit misuse, X said it will geo-block the generation of images of real people in bikinis, underwear, or similar attire in jurisdictions where such content is illegal. The company also reiterated that only paid users will be able to edit images using Grok, arguing that this makes it easier to identify and hold abusers accountable.
Platform owner Elon Musk previously defended X’s content standards, stating online that with NSFW settings enabled, Grok was intended to allow upper-body nudity of imaginary adult humans, not real people, and that standards would vary depending on local laws. That position has since been tested by the scale of the backlash.
Campaigners argue that while the technical changes are a step forward, they come too late for those already affected. Journalist Jess Davies, whose image was altered using Grok, said the imagery should never have been allowed, while university lecturer Dr. Daisy Dixon described the reversal as a “battle-win” but emphasized that many women had already suffered harm.
Policy researchers and advocates have also raised questions about enforcement, particularly around whether geo-blocking can be easily bypassed using tools like virtual private networks. UK regulators have warned that if X fails to comply with its obligations, they have the power to seek court orders that could ultimately require internet providers to block access to the platform.
Published: Jan 15, 2026 04:45 pm