Attorneys balance use of powerful AI tools with risks — including legal hallucinations
Dec 08, 2025
The 14-page motion looked like any standard court filing in a civil lawsuit. Filed in April in San Diego Superior Court by attorneys defending a local company, it made a routine request for a judge to authorize additional expert witnesses in a vehicle collision injury case.
But on the sixth page of the motion, the attorneys cited legal authority from a case that does not exist — it was hallucinated by artificial intelligence.
A San Diego judge wrote in October that she was “deeply troubled” by the conduct of the company’s civil defense attorneys. She found they had filed multiple documents containing AI hallucinations, including citations to non-existent cases, fake quotes from real cases and inaccurate citations to real but irrelevant legal authorities.
That is one of at least two known San Diego cases that are part of a troubling trend of attorneys abusing or misusing AI, especially generative AI, a powerful technology that can at times fabricate information with no basis in reality.
But even as legal watchdog groups have documented hundreds of AI hallucination cases in the U.S. and around the world, legal experts and attorneys say those cases are rare relative to the technology's widespread use. They argue that AI is an important tool being put to good use throughout the legal profession, helping lawyers research case law, analyze evidence, draft contracts and complete any number of rote tasks.
“We can’t just ignore generative AI, we have to become experts in the use so that we can avoid issues … where hallucinated case law gets into final documents,” Bryan McWhorter, a patent attorney and partner at the firm Knobbe Martens, said in an interview. “But I think when leveraged correctly, generative AI is frankly a power tool. It’s going to allow me to produce higher-quality work product in less time and deliver that value to clients.”
McWhorter recently argued that AI cannot replace human legal analysis now and likely never will. That’s a view shared by James Cooper, a professor at California Western School of Law in downtown San Diego and co-author of the book “A Short Happy Guide to Artificial Intelligence for Lawyers.”
“It complements all our skill sets,” Cooper said. “(But) there still needs to be a human in the loop.”
Cooper said there will always be subtlety in fact-finding and questioning people “that robots don’t understand … AI tools can’t fully engage with the nuance of humankind.”
Even so, Cooper is concerned that the use of AI in the legal field may be advancing too rapidly, and he hopes attorneys and firms are implementing safeguards that lawmakers and regulators have not.
“We’re at a juncture now where the technology is far outpacing our ability to regulate it, both as a legal profession but also in the real world, in a non-legal context,” Cooper said. “… We don’t know yet where to put the guardrails.”
‘Risks are no different’
As the use of AI has exploded in recent years, legal organizations have grappled with what sort of guidelines and policies to put in place to ensure that attorneys who use the technology do so ethically and responsibly. That can look different for a patent lawyer such as McWhorter and attorneys who litigate criminal and civil cases before judges, though the basic principles are largely the same.
“Broadly speaking, (the) risks are no different than we’ve always had in the industry, which is we must produce work that is 100% accurate and maintains all of our ethical obligations regarding client confidentiality,” McWhorter said.
For courtroom litigators, a federal judge in New York helped set the standard for the use of the technology in June 2023 while presiding over a case in which attorneys filed documents containing AI hallucinations. He wrote that there is “nothing inherently improper about using a reliable artificial intelligence tool for assistance,” but also noted that existing rules required that attorneys ensure the accuracy of every filing.
That has been the theme of most court opinions and bar association guidelines issued since then.
“Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations — whether provided by generative AI or any other source — that the attorney responsible for submitting the pleading has not personally read and verified,” wrote a panel of judges from California’s Second Appellate District.
The judges said the September opinion was the first by a California court to address “the generation of fake legal authority by AI sources” and should double as a warning for all attorneys in the state.
The American Bar Association and the California State Bar have issued similar guidance, advising that AI cannot replace the judgment of trained lawyers and that attorneys should not become overly reliant on the technology.
Cooper, McWhorter and organizations such as the ABA all agree that there is a long list of legitimate uses of AI technology in the law. Much of it is geared toward summarizing and analyzing large data sets, while other tools can serve as an intermediate step in the writing process.
“The best analogy is we are creating the bones of the strategy, generative AI is adding the first-pass flesh onto those bones and then we’re going back and sculpting it into the final creation,” McWhorter said. “It’s really an intermediary in the process; it’s neither a beginning nor an end.”
‘Erodes trust in the legal profession’
Thus far, the most visible risk to emerge in the use of AI in legal work is that of hallucinations. Several watchdog groups have been tracking cases in which court filings have contained AI hallucinations.
While the databases show that many such instances involve pro se litigants, or non-attorneys representing themselves, hundreds of licensed attorneys have also been sanctioned, reprimanded or otherwise caught submitting filings containing AI hallucinations, according to one such database compiled by attorney and researcher Damien Charlotin. His database only tracks cases in which a court has explicitly found or implied that a party relied on hallucinated content.
Charlotin’s database includes three cases in which U.S. judges were found to have written orders or opinions containing AI hallucinations. Meanwhile, the New York Times recently reported that a district attorney in Northern California has been accused of filing briefs containing mistakes typical of AI, one of the first known cases involving suspicions of prosecutorial misuse of the technology.
So far in San Diego, there are two known cases involving court documents containing AI hallucinations.
On Oct. 2, a panel of judges from the 4th Appellate District of the California Court of Appeal sanctioned longtime San Diego criminal defense attorney George Siddell, finding that he violated the Rules of Professional Conduct. They ordered him to pay $1,500 as part of the sanction.
A member of the California State Bar since 1971, Siddell admitted to filing a motion in a client’s criminal appeal that contained a citation to a case that doesn’t exist, a fake quote from a real case and two citations to cases that did not address the issues for which they were cited, according to a published appellate opinion. Siddell declined to comment for this story.
The judges wrote that Siddell’s conduct, when compared to similar conduct by attorneys in civil matters, was “particularly disturbing because it involves the rights of a criminal defendant, who is entitled to due process … and representation by competent counsel.”
The next day, San Diego Superior Court Judge Carolyn Caietti ruled that two attorneys from Tyson Mendes, a national firm headquartered in San Diego, had filed multiple documents containing AI hallucinations in the case defending the local company in the auto injury lawsuit.
A Tyson Mendes partner wrote in a declaration prior to the ruling that he took responsibility for the mistakes, though he blamed the errors on a younger associate attorney who he said “neglected to confirm the accuracy of certain case citations.”
In her own declaration, the associate attorney took responsibility and wrote that she was fired from the firm “as a direct result of my actions and conduct in this case.”
In a statement provided by Tyson Mendes, the firm said in part: “As the Court in this case noted, we accept responsibility for our obligation to present the highest quality work product to the Court. We affirm this responsibility, even and especially in the face of technological transformation.”
The firm’s statement also included a lengthy explanation of how it is working to be a leader in the use of AI in the legal industry.
“As the practice of law continues to evolve through new challenges, opportunities, and technologies, we must also continue to hold ourselves accountable through introspection and transparency,” the firm said in its statement. “Tyson Mendes remains steadfast in our commitment to our clients, our industry, and our profession to leverage the transformative power of AI and new technologies with keen professional and ethical judgment, ongoing education, and agility.”
Caietti ultimately did not sanction either attorney, though she denied the sanctions request on procedural grounds rather than on the merits of their conduct.
“Notwithstanding the denial on procedural grounds, the Court is deeply troubled by the conduct of Defense counsel,” Caietti wrote. “… All of this conduct is contrary to the rules of professional responsibility and is the type of conduct that erodes trust in the legal profession … This is hopefully an experience that will never be repeated by the attorneys involved in this matter, let alone others in the profession.”
Client confidentiality
Because some AI tools can learn from the data users feed them, attorneys must also take care that confidential client information is not used to train AI models and that it does not inadvertently become public.
For patent attorneys such as McWhorter, those risks revolve around business interests and market competition. Other areas of the law — Cooper mentioned due process and evidentiary issues, as examples — could implicate constitutional rights and protections.
Cooper said that unless and until there are stronger regulations, it will be up to each attorney and law firm to understand the AI technology they use and mitigate any potential risks and harms.
‘Effectively and ethically’
Newer attorneys especially run the risk of becoming over-reliant on AI tools without first building a foundational knowledge of how to practice law, Cooper said.
At California Western School of Law, the use of AI is a balancing act. The faculty is navigating how best to ensure that students don’t abuse AI technology in their coursework while also teaching them how practicing attorneys are using it.
“We’re trying to make sure that we prepare our students to be able to enter the workforce, enter the profession, and use it effectively and ethically,” said Liam Vavasour, vice dean for academic affairs.
Vavasour said students are not allowed to use ChatGPT or other generative AI tools for writing assignments unless given explicit permission to do so, and even then the use is typically limited to idea-generation or editing and must be disclosed.
Vavasour said that from his understanding, practicing attorneys and judges are using AI in a variety of ways, “so we’re trying to make sure that our students know how to use it effectively and some of the pitfalls to avoid.”
McWhorter supervises younger associates at his firm and believes fears about them being over-reliant on AI are overblown.
“Because the value of generative AI is not in legal strategy, (young attorneys) will develop the ability to provide legal strategy the same as anyone else,” McWhorter said.