Google’s AI Overview feature falsely accused a well-known Canadian fiddle player of serious crimes, got his concert canceled, and left him fearing for his personal safety. Now, the musician, Ashley MacIsaac, is suing Google for $1.5 million in damages.
According to The Independent, the lawsuit claims that Google’s AI Overview published false statements saying MacIsaac had been convicted of multiple crimes, including sexual assault, internet luring of a child, and assault causing bodily harm. The tool also wrongly claimed that he was listed on the national sex offender registry for life.
MacIsaac, a three-time Juno Award winner, filed the claim in the Ontario Superior Court of Justice. He is seeking $500,000 in general damages, $500,000 in aggravated damages, and $500,000 in punitive damages. His lawyers argue that Google is responsible for the “foreseeable republication” of the defamatory statements made by its AI Overview feature.
Google’s AI error was serious enough to get MacIsaac’s concert canceled and left him fearing for his safety on stage
The false AI-generated summary led the Sipekne’katik First Nation to cancel a scheduled MacIsaac concert. The group later issued a formal apology to MacIsaac after learning the information was wrong. “Decisions were based on incorrect information generated through an AI-assisted search, which mistakenly associated you with offenses unrelated to you,” the group said in a statement.
MacIsaac said that the error left him genuinely afraid while performing. “I felt that tangible fear from something that was published by a media company,” he said. “I feared for my own safety going on stage because of what I was labeled as. And I don’t know how long this will follow me.” This case is one of several recent examples of AI causing real harm to real people, with consequences that go well beyond a simple correction.
Google has not contacted MacIsaac or offered any apology. The lawsuit describes the company’s response to the false statements as “cavalier and indifferent.” “Google’s AI Overview feature was not designed to provide accurate information,” the lawsuit states. “Google knew, or ought to have known, that the AI Overview was imperfect and could return information that was untrue.”
In a statement provided by his lawyers, MacIsaac explained why he chose to take legal action. “When I first discovered the false statements Google was publishing about me, I felt I needed to speak out to the media to clear my name and bring attention to the issue,” he said. “I believe this is a serious issue that needs to be resolved in the courts.”
This is not the first time Google’s AI features have faced criticism for overstepping. The company has also been pushing deeper into users’ personal lives, with AI features that scan through users’ photos, raising serious privacy concerns. But in MacIsaac’s case, the consequences have been especially severe, affecting both his reputation and his sense of personal safety.
Published: May 6, 2026 09:15 am