Should lawyers who use AI be severely punished?

No one is being sanctioned for merely using AI. They are being sanctioned for using bogus cites due to (1) relying on AI and (2) not catching/correcting the errors.

They can be true believers in AI all they want, but that doesn’t excuse the use of bogus cites.

Well, if they’re true believers, then it shouldn’t have even occurred to them that the cites could be bogus. That’s what I was wondering about. Do they express shock or disbelief? Do they refuse to believe, at first, that the cites are not real? Do they attempt to argue that they had a good faith belief in the accuracy of AI?

The sincerity of their belief is irrelevant. Intent is not required.

Legally, no, but I’m interested in the question anyway, because I’ve been wondering what exactly the true believers would say (in terms of content) if you asked them directly how they think AI works and how much they trust its output, and the thought of that being tested in an actual court of law is interesting.

That’s why I was wondering about the mindset of these lawyers. Does their using AI for work mean that they trusted the results to be 100% accurate? Have any of them tried to tell the judge, “but the AI couldn’t have been wrong”? What were the ones busted twice in the same case thinking?

I am not a lawyer, but I am the AI subject-matter expert on my team at work, where one of my primary tasks is generally “please stop using AI, it is bad at this task, and when it makes things up that could be bad for us,” and the defenses I hear read very similarly.

In this specific case, it does look an awful lot like the lawyer knew that she’d screwed up but tried to spin it in a dumb way that the judge didn’t buy:

The misalignment between reported citations and electronic databases does not equate to falsity. Federal courts routinely encounter similar variances without imputing bad faith.

But from what I can tell, yes, the more common approach is to say that you had no idea ChatGPT could make things up. But it is always paired with the admission that you know you should still have double-checked, and that you throw yourself on the mercy of the court (one, two).

This is often in concert with saying that the research was done by someone else and the lawyer’s principal error was in not verifying it, or by appealing to general process issues (i.e., “I used AI to write an outline but assumed someone else would check it”) (PDF).

The ur-example, in Mata v. Avianca, involved a lawyer claiming that he “thought ChatGPT was a search engine”—lawyer Devin Stone talks more about it through his whole video on the subject, but the excerpt is at this time index.

It does also appear that a new wrinkle, and the reason sanctions have been becoming more severe, is that in the intervening period a lot of law firms have developed clear policies on AI, and the lawyers are invariably acting in contravention of those policies (including another lawyer using it as a search engine (PDF), and one using it to edit and enhance his briefs (PDF)).

So even true believers are, at least on the record, going to unavoidably run up against having to admit that they were violating their firm’s policies, which does not, itself, reflect especially well on them. And the training that accompanies those policies makes “I had no idea it could make things up” increasingly untenable.

An interesting point from an article on this subject, which may suggest this is going to get worse before it gets better (emphasis mine):

Charlotin thinks courts and the public should expect to see an exponential rise in these cases in the future. When he started tracking court filings involving AI and fake cases earlier this year, he encountered a few cases a month. Now he sees a few cases a day. Large language models confidently state falsehoods as facts, particularly when there are no supporting facts.

“The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you,” he said. “That’s where the confirmation bias kicks in.”

Which is an admission of professional incompetence.

It doesn’t matter if AI was used, or the research was done by an articling student or paralegal, or another lawyer helped write the brief. As I say to our new students every year: if I file something with the court, I’m responsible for it, 100%.

I will be filing a 20-page brief later today in a significant case. The list of authorities (statutes, cases, secondary materials) is 3 pages long. I’ve checked each one of those cites myself to be sure they stand for what I’ve cited them for. Independently, so has my extremely careful co-counsel.

That’s our professional obligation. Our names are going on it, and our professional reputations depend on being careful and accurate, so the court can rely on us.

I don’t know if I would use the phrase “true believer” to describe lawyers who get in trouble for using AI improperly. Corner-cutters? Ignorant of technology? Just plain stupid? I think that’s more accurate.

I attended a legal research session recently with a rep from one of the big Canadian legal research providers. She explained how they have integrated AI into their service. First, it’s been trained solely on their extensive database of court cases and statutes going back 40 years. Their AI is not allowed to go out into the world, so to speak; it can only access that database.

Second, it has been fully integrated into their existing research tools as an aid to finding relevant cases and statutes and to help analyze them, by providing summaries and pointers to potentially similar matters in their database. Those summaries of cases don’t change with each search request: once their AI has summarised a particular case, that’s the summary each researcher gets. That helps ensure that the researchers themselves will collectively note errors and flag them (a rough sketch of this setup follows below).

Third, their system cannot be used to write anything. It is truly a research aid, not a substitute for a lawyer doing the hard part, writing a brief.
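For what it’s worth, here is a rough sketch of what that kind of setup amounts to in code. Every name in it (ClosedCorpusResearchAid, summarize, and so on) is hypothetical, not the vendor’s actual API; it just tries to capture the three properties described above.

```python
from dataclasses import dataclass, field

@dataclass
class ClosedCorpusResearchAid:
    """Hypothetical sketch of a closed-corpus legal research assistant."""
    corpus: dict[str, str]                                   # case_id -> full text, curated database only
    _summary_cache: dict[str, str] = field(default_factory=dict)

    def summarize(self, case_id: str) -> str:
        # Property 1: no open-web access -- anything outside the curated
        # database is refused rather than improvised.
        if case_id not in self.corpus:
            raise KeyError("not in the curated database; no open-web fallback")
        # Property 2: summarize each case once, then serve the identical
        # summary to every researcher, so errors get noticed and flagged
        # collectively instead of varying from query to query.
        if case_id not in self._summary_cache:
            self._summary_cache[case_id] = self._model_summary(self.corpus[case_id])
        return self._summary_cache[case_id]

    def _model_summary(self, text: str) -> str:
        # Stand-in for the vendor's actual model call (not shown here).
        return text[:200] + "..."

    # Property 3: deliberately no draft_brief() method -- it is a research
    # aid, not a substitute for the lawyer writing the brief.
```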

One minor note: the case from the 10th circuit that is mentioned in my post involved a self-rep, not a lawyer. Her additional filing, about the variances in citation systems, reads to me as AI generated.

Any signs that this idiocy is going on outside the US, too?

Yes, there have been cases in Canada.

I think the answer to this is a simple yes. While I can only speculate, my observation is that many, if not most, people don’t understand the fallibility of AI. I like it as a tool and am generally pro-AI, but people genuinely don’t understand how often it is wrong and makes things up.

Which is why I think that lawyers who use AI should be punished: using it is an act of professional incompetence. It’s not up to the job, not in a career where accuracy actually matters. Until somebody comes up with AI that doesn’t lie and hallucinate, it should be just plain forbidden for use in the legal profession.

They might as well be a surgeon using a rubber scalpel. You need to use the right tool for the job.

Agree. I constantly (like twice a week for the last year) have to explain and re-explain to my office colleagues how the technology works. When the response they’ve generated includes bad information, they assume the source corpus didn’t have the answer, or their prompt was poorly formed. They don’t understand that even if the information was present in the source material and even if the prompt is appropriately clear and specific, the statistical nature of LLMs means that some percentage of the response will always be confabulated, and this is unavoidable.

I’ve had to explain this to some people multiple times. They simply do not grasp that LLMs don’t process information as information. There are blobs of language that approximately correspond to information as we recognize it simply by probabilistic association, and LLMs have become spookily good at connecting the blobs in our prompts to the blobs in the source corpus and generating a new set of blobs to represent that connection — but the LLM doesn’t “know” anything. This seems to be an impossible hurdle of comprehension for many people.
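A toy way to make that concrete (numbers invented, no real model involved): generation is weighted sampling over plausible continuations, not lookup of verified facts, so even a well-grounded model will emit some fraction of fluent nonsense.

```python
import random

# Invented probabilities for illustration only: the model "prefers" the real
# continuation, but the wrong one still has nonzero weight, so it will be
# sampled some of the time no matter how good the prompt or corpus is.
continuations = {
    "a citation that actually exists": 0.95,
    "a plausible-sounding citation that does not exist": 0.05,
}

def sample_once() -> str:
    """Draw one continuation according to its probability weight."""
    tokens, weights = zip(*continuations.items())
    return random.choices(tokens, weights=weights, k=1)[0]

bogus = sum(sample_once().endswith("does not exist") for _ in range(10_000))
print(f"confabulated continuations out of 10,000 samples: {bogus}")
```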

On the up side, I’ve gotten to use the word “epistemology” more times in the last year than in my entire prior life collectively.

You can substitute “clerk” or “junior lawyer” for “AI” in your post and it reads pretty much the same.

They might as well be a surgeon getting the nurse or trainee to do the basic stuff subject to supervision. You need to use the right tool for the job.

I spend enough (far too much) time correcting the gross errors of juniors to see your post as anything other than a laughable misunderstanding of how the practice of law works.

Only if they were drunk on the job or otherwise known to be incompetent at their job.

Yeah, you just don’t know what you are talking about, sorry. I do this week in week out and have done for decades and you never have AFAIK.

No, you haven’t. LLMs aren’t anywhere near that old.

You miss the point. I’ve supervised juniors for decades. If using AI should be punished for the reasons you outline, so should using clerks and junior lawyers.

So if I use an AI that has been trained solely on Canadian case law and statutes as a research aid, and I review the cases and statutes it suggests to me, and I write my brief using only those cases and statutes which in my judgment are relevant to the case I’m arguing, I should be severely punished for using AI?

No, because clerks and junior lawyers are sapient beings that can realize they’ve made a mistake and correct it, and learn from it. LLMs are things, and can’t do that. Which is why the comparison I used was somebody drunk on the job.

So presumably until a clerk is more experienced I should be punished for using them? This is the logical endpoint of your position.

You are hallucinating knowledge about what legal practice entails.

But they don’t always. You have no idea.