At a seminar I attended recently on “AI for lawyers”, a speaker claimed there was a case in the US where a lawyer got caught filing a brief that contained AI-hallucinated citations, and then, in his submission in the resulting disciplinary proceedings, got caught doing the same thing.
When I spoke to people afterward, some claimed it was an urban legend, but I have a recollection of seeing a media article about it.
I’ve been googling to try to get to the bottom of it and can’t find any support for the claim, but it’s damn hard to come up with search terms that will isolate this circumstance from the million and one stories about lawyers who have been caught using AI-hallucinated cites and disciplined for it.
Have you…done an internet search for “lawyer AI false citations”? Because even a quick search turns up plenty of stories from legitimate journalism sites and lawyer blogs like these:
I can’t see that any of the articles you cite are relevant to my query. You may need to read it again.
Some of the articles you cite describe a situation where AI-hallucinated cites were found in a brief, those citations were corrected, and the corrected brief was then found to still contain further AI-hallucinated cites.
That’s not what the speaker at the seminar claimed. They claimed that at a subsequent disciplinary hearing, the lawyer in question made a submission concerning whether they should be disciplined that itself contained AI-hallucinated cites.
Perhaps one of the reasons that we keep hearing about these completely avoidable catastrophes is that catching your opponent even making a single mistake using an AI tool is an easy way to gain an upper hand in court, so everyone’s on the lookout for them.
That’s what happened here: it was the plaintiff’s legal team that first caught the mistakes, which included inaccurate or completely made up citations and quotations. The plaintiffs then filed a request for the judge to sanction [defense lawyer Michael] Fourte, which is when he committed the legal equivalent of shoving a stick between the spokes of your bike wheel: he used AI again.
In his opposition to the sanctions motion, Fourte’s submitted document contained more than double the amount of made-up or erroneous citations as last time, an astonished-sounding [New York Supreme Court judge Joel] Cohen wrote.
Since the OP has been answered, I’ll note this recent news: two federal judges were caught using AI to generate documents that contained errors. They blamed members of their staff: a law clerk and an intern.
Question for the lawyers: suppose you are a completely lazy bum, so you decide to let AI write an entire brief, and it quotes, say, 10 prior cases. How much trouble is it to check those 10 cites to make sure they aren’t hallucinations?
Isn’t there an easy way to “google” the cite names? I know there’s a thing called “Shepardizing”; is that the same thing?
This is your career on the line, not a sophomore student paper that counts for 15% of the grade one semester and then gets trashed. Why not check the AI cites before you turn in your work?
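Non-lawyer aside on the “easy way to google the cite names” question: the Free Law Project’s CourtListener site offers a free search API, so a first-pass “does this case even exist?” check can be scripted. Here’s a minimal, untested sketch; the endpoint, parameters, and response fields are my assumptions about that API rather than verified details:

```python
import requests

# First-pass existence check against CourtListener's free search API.
# NOTE: the endpoint, query parameters, and response shape here are
# assumptions and should be checked against the API documentation.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def cite_seems_to_exist(case_name: str) -> bool:
    """Return True if an opinion search turns up any hit for case_name."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

if __name__ == "__main__":
    # The hallucinated case from the Avianca matter discussed below:
    name = "Varghese v. China Southern Airlines Co., Ltd."
    print(name, "->", "found" if cite_seems_to_exist(name) else "NOT FOUND")
```

Of course, a case merely existing is not the same as it saying what the brief claims it says, which is presumably where the actual lawyering (and Shepardizing) comes in.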
I was listening to a law podcast (“Serious Trouble”, highly recommended) where they talked about the recent judge case mentioned by @dtilque. One of the hosts speculated that some newly minted J.D.s these days may have gotten away with using AI throughout their degrees, and may not actually know how to check things themselves — or, at least, it’s not a step they regularly took in their law school classes.
Of course, I fully expect that every law school professor who’s paying attention will point to this anecdote and say “this is why you need to learn how to write these things yourselves and not use AI for them”, and rightly so.
A man sued the airline Avianca for being injured by a beverage cart during a flight;
Avianca, a corporation with deep pockets, responded by digging out a bunch of complicated precedent that, they claimed, implied the case should be dismissed;
The plaintiff’s attorney, rather than ponying up the money to access the obscure legal databases discussing these precedents, asked ChatGPT about them and how to refute them;
ChatGPT then came up with some citations, including a fictitious case called Varghese v. China Southern Airlines Co., Ltd., which the plaintiff’s attorney cited in response;
Avianca’s lawyers tried to find Varghese v. China Southern Airlines Co., Ltd. in their databases, couldn’t find it, and effectively said “WTF?”
And the pièce de résistance: in response to Avianca’s questions, the plaintiff’s lawyer went back to ChatGPT, which very helpfully made up whole or partial opinions for Varghese v. China Southern Airlines Co., Ltd. and other non-existent cases, and the plaintiff’s lawyer then submitted them to the court as proof that these cases existed.
When this all came out, the plaintiff’s lawyer eventually admitted that he had used ChatGPT for research, and that he thought it was just a very fancy search engine that was not in the habit of just making stuff up. This was all in 2022–2023, so I can kind of believe that someone not paying attention at that point in history would have been ignorant of the phenomenon of AI hallucinations. But that’s an explanation rather than an excuse IMHO.
Yes, actually reading law. Exercising professional judgment about what you read. Applying your knowledge about the law to the facts of the case you’re dealing with.
AIUI, a lawyer’s signature is required on any legal brief, certifying that what’s submitted is factual and relevant.
I’d certainly expect lazy lawyers to make sloppy use of AI. When they do, I’d expect judges to come down very hard, rejecting any and all attempts to use AI as an excuse. “This court does not tolerate dishonesty, nor any attempt to evade responsibility for such on the basis that the dishonest attorney also happens to be lazy.”
I’d imagine a judge would impose serious sanctions for this: say, formal public admission of dishonesty and a $50k fine.
I mean, it’s probably true that the law clerk and intern were the source of the errors, but a judge, just like a partner at a law firm, has a responsibility to check and corroborate all the work product that their junior attorneys generate.
I imagine this can be enforced to some extent by a professional order in every province?
I’m an engineer, and the Québec Code des professions applies to lawyers and engineers and nurses and… Section II.32 has a list!
There are further lower level professional laws and regulations that apply.
I completely understand the pressure to accept information that suits what you are trying to do, but these claims of legal cases that don’t exist are the equivalent of using “test results” for tests that never took place. I guess some people gamble on not getting caught.