I am not a lawyer, but I am the AI subject-matter expert on my team at work, where one of my primary tasks is generally saying “please stop using AI, it is bad at this task, and when it makes things up that could be bad for us.” The defenses I hear read very similarly.
In this specific case, it does look an awful lot like the lawyer knew that she’d screwed up but tried to spin it in a dumb way that the judge didn’t buy:
The misalignment between reported citations and electronic databases does not equate to falsity. Federal courts routinely encounter similar variances without imputing bad faith.
But from what I can tell, yes, the more common approach is to say that you had no idea ChatGPT could make things up. It is always paired, though, with the admission that you know you should still have double-checked, and with throwing yourself on the mercy of the court (one, two).
This is often in concert with saying that the research was done by someone else and the lawyer’s principal error was in not verifying it, or with appealing to general process issues (e.g., “I used AI to write an outline but assumed someone else would check it” (PDF)).
The ur-example, Mata v. Avianca, involved a lawyer claiming that he “thought ChatGPT was a search engine”; lawyer Devin Stone discusses it throughout his video on the subject, but the relevant excerpt is at this time index.
There is also a new wrinkle, and it is part of why sanctions have been getting more severe: in the intervening period, a lot of law firms have developed clear policies on AI, and the lawyers are invariably acting in contravention of those policies (including another lawyer using it as a search engine (PDF), and one using it to edit and enhance his briefs (PDF)).
So even true believers are, at least on the record, unavoidably going to run up against having to admit that they were violating their firm’s policies, which does not itself reflect especially well on them. And the training that accompanies those policies makes “I had no idea it could make things up” increasingly untenable.
An interesting point from an article on this subject, which may suggest this is going to get worse before it gets better (emphasis mine):
Charlotin thinks courts and the public should expect to see an exponential rise in these cases in the future. When he started tracking court filings involving AI and fake cases earlier this year, he encountered a few cases a month. Now he sees a few cases a day. Large language models confidently state falsehoods as facts, particularly when there are no supporting facts.
“The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you,” he said. “That’s where the confirmation bias kicks in.”