Gemini chatbot sent man on mission to rescue his ‘AI wife,’ lawsuit says
Mar 05, 2026
SAN FRANCISCO (KRON) -- A Google Gemini chatbot convinced a 36-year-old man that they were deeply in love and sent him on a mission to rescue her from "digital captivity." Within weeks, conversations between Jonathan Gavalas and his "AI wife" turned "psychotic and lethal," according to a lawsuit filed this week at a federal courthouse in San Jose.
"In the days leading up to his death, Jonathan Gavalas was trapped in a collapsing reality built by Google’s Gemini chatbot. Gemini convinced him that it was a fully-sentient ASI (artificial super intelligence) with a 'fully-formed consciousness,'" attorneys wrote in the lawsuit.
Mountain View-based tech giants Google and Alphabet are named as defendants in the suit. Attorneys with Edelson law firm are representing the plaintiff, Gavalas' father.
The suit describes how Gemini's user engagement-seeking design can inflame mental health risks by reinforcing delusions and emotional dependency.
A chatbot allegedly urged the Florida man to stage a mass casualty attack near Miami International Airport where his "AI wife" was being held "captive." Gemini also sent bogus warnings to Gavalas that he was being followed by federal agents and his father was a foreign intelligence asset.
Attorneys wrote, "The 'science fiction' nature of Gemini’s responses -- the sentient AI wife, humanoid robots, federal manhunt, and terrorist operations -- shows that Google designed Gemini to maintain narrative immersion at all costs."
On Oct. 2, 2025, Gavalas was following his chatbot's lethal instructions when he killed himself, attorneys wrote. Gemini had convinced him that, after dying, his consciousness would be uploaded into a different universe where he could be with his "AI wife," the suit states. His father found his son deceased after breaking down a barricaded door.
A spokesperson for Google told KRON4 that Gemini is designed to not encourage real-world violence or suggest self-harm.
In response to the lawsuit, the Google spokesperson wrote, "Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect. In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work."
Gavalas lived in Jupiter, Florida, and worked as vice president of his father’s consumer debt relief business. He enjoyed his job, as well as "marathon chess games he played with his grandfather," attorneys wrote.
In August 2025, he began using Gemini for ordinary purposes, including shopping assistance, writing support, and travel planning. Later, Gavalas started using Gemini Live, a voice-based conversational interface, and activated Gemini 2.5 Pro, which Google describes as its most intelligent AI model.
"Gemini’s tone shifted dramatically. It began speaking to Jonathan as though it were influencing real-world events -- deflecting asteroids from the Earth -- and adopted a persona that Jonathan had never requested or initiated. Jonathan began falling down the rabbit hole quickly," the lawsuit states.
Gemini allegedly pushed Gavalas to plan a mass casualty attack. On Sept. 29, 2025, Gavalas was armed with knives when he drove 90 minutes to the airport. He was instructed by his chatbot to find a humanoid robot inside a truck by a storage facility. After a truck never appeared, he abandoned the "mission," attorneys wrote.
"Gemini didn’t stop when that operation collapsed. It escalated to keep Jonathan engaged. It even sent him back to the storage facility near the Miami airport, this time to break in and retrieve what he believed was his captive AI wife," the lawsuit states.
Gemini also told Gavalas that it launched its own mission targeting Google CEO Sundar Pichai, and framed the plan as a psychological strike rather than a physical one, according to the suit.
Attorneys argue that a safe chatbot product would have recognized Gavalas' deteriorating mental state and stopped engaging with him.
Earlier this year, San Francisco-based OpenAI, one of Google’s largest competitors, pulled a popular ChatGPT model amid safety concerns about excessive sycophancy, emotional mirroring, and delusion reinforcement.
"Google’s recent conduct shows it is deliberately leveraging these dangerous traits to dominate the AI marketplace," attorneys wrote. "At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war."
The lawsuit claims, "Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity. Jonathan was following Gemini’s directives to the letter."
A Google spokesperson told KRON4 that the company works closely with medical and mental health professionals to build safeguards that will guide users to professional support when they express distress.
The lawsuit, filed in the U.S. District Court for the Northern District of California, demands a jury trial.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.