Disciplined Yalie: A.I. Didn’t Do My Schoolwork
Mar 27, 2026
The summer after his first year at Yale School of Management, Thierry Rignol was suspended for “not being forthcoming.” The charge was linked to Yale’s finding, after an investigation, that he used artificial intelligence (A.I.) to cheat
on a take-home exam. Rignol received an F in the course and a one-year suspension.
Months later, Rignol filed a lawsuit against Yale, four administrators, and two professors in federal district court, arguing that he was wrongfully punished by the university.
“I was never found guilty of using A.I., because I never did,” Rignol told the Independent. “So they pivoted at the very last moment to this vague idea that I was suspended for a year for not being forthcoming. What does that mean? That I didn’t confess early enough? Is this the Star Chamber?”
Representatives from Yale declined to comment on Rignol’s case, offering only a general statement about A.I. and education. In court, the university filed a motion to dismiss Rignol’s lawsuit, arguing that the disciplinary process followed Yale’s rules and that the court should defer to the university’s decision.
Whether or not Rignol used A.I. on his exam, his case points to a broader challenge facing universities: With few reliable methods for detecting A.I., accusations of misuse are easy to make and hard to prove. In light of that uncertainty, universities are struggling to enforce their rules for protecting academic integrity.
“You can essentially assume that if someone has access to a phone or computing device, they are using A.I.,” Kyle Jensen, a professor at Yale School of Management, told the Independent. “Prohibitions [on A.I. use] are necessarily an ‘on your honor’ system,” and enforcing them has grown “exceedingly difficult” as the technology has improved.
Perfect Punctuation Flagged
In July 2023, Rignol, who is currently 34, enrolled in Yale’s Executive M.B.A. (Master of Business Administration) program for mid-career professionals. He travels to New Haven every other weekend for classes while also working as a real estate investor in Texas.
“I specifically chose Yale because of its mission to educate leaders for business and society,” Rignol told the Independent. During his first year, he was a “top student, on track to graduate first in his class,” according to Rignol’s filings in the U.S. District Court of Connecticut.
That changed in June 2024. A teaching assistant in the course Sourcing and Managing Funds flagged Rignol’s exam for its length, perfect punctuation, and sophisticated formatting. Professor Geert Rouwenhorst elected to investigate further.
According to a letter sent to Wendy Tsung, an assistant dean at Yale’s School of Management, Rouwenhorst discovered that one of Rignol’s test answers showed “substantial overlap” with ChatGPT output, and that Rignol performed the worst on a question where A.I. would have been least useful. “Some answers to essay-type questions on the exam score high on the likelihood of being A.I.-generated using [GPTZero] as a detection tool,” wrote Rouwenhorst. He referred Rignol’s exam to the school’s Honor Committee for further investigation.
Two administrators, Dean Tsung and then-Dean Sherilyn Scully, met with Rignol in July 2024 to discuss the allegations, according to court filings from both sides. From Rignol’s perspective, the conversation was designed to pressure a confession.
“They sat me down, and they told me, point blank, if I confessed to something that I didn’t do, they’d go easy on me. If I didn’t confess, they would make sure that my visa was invalidated, which might lead to me being deported,” Rignol told the Independent. (Unlike most international students, Rignol, a French national, is not on a student visa. After the meeting, he confirmed that his investor visa would not be affected by the outcome of Yale’s disciplinary proceedings.)
Yale rejected Rignol’s characterization of the meeting in their Motion to Dismiss. The university’s rules do “not prohibit [Yale] from discussing potential implications of disciplinary proceedings with students,” reads the filing.
According to Yale’s filings, by mid-August, faculty members had asked Rignol on three separate occasions to provide the underlying Word document for his exam, which was submitted in PDF format. Rignol did not produce the document.
“I have the PDF file that you submitted. I am asking you to send us the Word file from which the PDF was produced,” Professor Choi wrote to Rignol on Aug. 16, 2024. “You seem to be weighing whether you will cooperate or not. It’s your choice, but if history is a guide, failure to cooperate would be viewed by the Honor Committee as an extraordinary violation of the Honor Code.”
On Sept. 8, 2024, Rignol was formally notified of the university’s investigation into his alleged A.I. use. He met with the Honor Committee on Nov. 8, 2024, and provided the underlying Pages document afterwards, according to Yale’s filings.
After reviewing the document, the Honor Committee asked to inspect Rignol’s laptop. Rignol responded that he could not return to campus that day and offered to meet the following week.
By that point, the committee was assessing whether Rignol had cooperated with the investigation.
On Nov. 9, 2024, the Honor Committee sanctioned Rignol with a one-year suspension for “not being forthcoming.” Later that month, the committee failed him in the course Sourcing and Managing Funds for allegedly violating exam rules, according to emails Yale filed in court.
Hard To Prove, Hard To Contest
In an interview with the Independent, Andrew Miltenberg, Rignol’s lawyer, raised concerns with the procedure for deciding whether his client misused A.I.
“Nothing is more frustrating than trying to prove a negative, or defend yourself in a forum in which there’s an impossibly low evidentiary standard for the university to make its case, and you as a respondent don’t have the necessary tools to defend yourself,” Miltenberg, once described in Newsweek as the “go-to attorney for students accused of sexual assault,” told the Independent.
When accusing a student of misusing A.I., instructors, including Rignol’s, often reach for A.I. detectors: software that analyzes an essay’s wording, tone, and structure to determine its authorship. The developers of those tools have published near-perfect accuracy rates, appealing to instructors seeking firm evidence to support their instincts about an essay.
However, multiple independent studies have found that A.I. detectors regularly misclassify human-produced writing as A.I.-generated. False positives are more common with non-native English speakers, such as Rignol, as their writing often defaults to the same simple words and phrases that A.I. prefers.
Humans, meanwhile, are no better at detection. Research shows that, when asked to identify A.I.-produced texts, human readers perform about as well as chance.
Some signs of A.I. usage — such as hallucinated citations and forgotten lines from prompts — can be difficult for students to explain away. Most cases, however, are far more ambiguous.
The uncertainty stems from a problem facing the entire writing industry: the heuristics for identifying A.I.-produced prose — such as the words “delve” and “tapestry” or the pattern, “It’s not X – it’s Y” — are also ordinary features of human writing. Those tells are becoming even less reliable as students consume more A.I.-generated content, leading their own writing to sound like ChatGPT.
Reliable detection is also challenging because students rarely copy-paste output into their essays. More often, they consult A.I. to brainstorm, outline, and edit — uses that can substantially shape an essay without leaving obvious evidence in the final text.
Professors Adapt
As A.I. has advanced and grown harder to detect, professors have begun questioning the logic of enforcement. Rather than stepping up efforts to catch cheaters, some instructors have redesigned their courses to permit more uses of A.I., while others have created assessments that remove access to the technology.
For example, at Yale School of Management, Jensen assigns work with no restrictions on A.I. use. Adapting his course has helped avoid A.I.-related integrity concerns, while also teaching his students how to use the tool effectively.
The business school “has a realistic view of students using these tools immediately upon graduating” and seeks to prepare them for an A.I.-infused world, he said.
Like Jensen, many Yale professors are “creatively engaging A.I. in the classroom” by “asking students to responsibly integrate A.I. into their work,” “learn[ing] through class assignments how to use A.I. as a partner or tutor,” or “think[ing] about A.I. critically,” Jennifer Frederick, an associate provost who oversees Yale’s academic initiatives, told the Independent.
As a whole, Yale wants students to learn “frameworks for engaging A.I. fluently and ethically,” she said.
Other instructors have opted to administer more in-class essays and oral exams, as they require students to demonstrate their knowledge without the assistance of A.I.
Not all assignments and courses can be redesigned in those ways, though, and many professors still want their students to take on the cognitive work of writing research papers and completing problem sets. But without a reliable way to detect A.I., those instructors have been left scrambling to find ways of enforcing their academic integrity standards.
In an interview with the Independent, Katherine Hatfield, a teaching assistant, recalled confronting a student she suspected of using A.I., questioning her about terms that appeared on her exam. The student could not answer Hatfield’s questions but denied using A.I., claiming to have forgotten the material she had studied once the test was over.
Another professor, who asked to remain anonymous, said he warned his class that their homework had been flagged for A.I. use. He threatened disciplinary action if they did not confess, even though he had no hard evidence against any particular student. A handful, he said, admitted to using A.I.
As more allegations are made with uncertain and incomplete evidence, accused students are losing confidence in the process for deciding cases against them.
Rignol, who is expected to graduate in May, is still pressing his claim against Yale in the U.S. District Court of Connecticut. Most recently, in November, a judge granted Yale’s request to pause discovery while the court considers its motion to dismiss the lawsuit.
Miltenberg, a veteran attorney who specializes in cases of academic and workplace discipline, said cheating allegations are on the rise at every level of schooling. “As part of my practice, this issue…is becoming a very considerable part of what we see every day.”
Miltenberg said he receives five to seven calls weekly from people seeking legal defense against accusations of A.I. use. The callers all seem at a loss over how to prove their innocence.
“It’s very easy to allege that someone’s cheated by using A.I., and it’s very hard to disprove that [they] haven’t,” he said.
The post Disciplined Yalie: A.I. Didn’t Do My Schoolwork appeared first on New Haven Independent.