Should lawyers who use AI be severely punished?

I agree. And while the OP did indeed cite a famous case in which non-existent citations (I presume they mean precedents) were cited in a filing, I was addressing the more general question in the title, “Should lawyers who use AI be severely punished?”. And again my answer is: I don’t care what they use, I care about the results, namely how accurate, relevant, and persuasive the lawyer’s filing is.

On the matter of precedents, the lawyer has a responsibility to ensure that any precedents he cites in support of his case are directly relevant and really do say what he claims they say in a directly comparable situation. That the lawyer didn’t even so much as bother to look up the cites, let alone carefully review them, speaks to a lazy and incompetent lawyer who submitted a filing that was blatantly misleading. It was, in effect, a pack of lies. Of course he should be punished for that. Lawyers don’t get to lie in court and get away with it. But don’t blame the AI, or any other tool he chooses to use as an assistant! Blame the lawyer.

I agree. That’s all that matters. Doesn’t matter if the lawyer on the pleading looked it up themself, assigned it to an associate, a law clerk, a paralegal, some guy in Mumbai, or AI.

For the past several decades, the practice of law (in the US, at least) has become more and more of a bottom-line business, with essentially no job security. I would imagine it would be ever more unpleasant to try to establish oneself in the business as a fresh grad.

I agree with all that. We can generalize to all use of AI, which at the moment should be trust (a bit) but verify (a lot). Anyone who doesn’t verify should be severely punished.

No. AI is at the point where it can give a good framework, but it needs to be proofread.

Creating an AI document and not proofreading it is a problem. But AI does play a labor-saving role in a lot of fields.

Also humans make mistakes all the time with or without AI.

I think lawyers using AI isn’t the issue—it’s how they use it. If someone blindly submits ChatGPT-generated content without verification, that’s negligence, not innovation. Tools are only as good as their operator. Punishment should reflect intent and harm, not just the use of AI. The legal field needs guidelines, not blanket condemnation of technology.

This has come up in a number of other threads, but it’s most appropriate here:

Note, the fines ($3,000 each) are probably noticeable to thee and me, but are almost certainly trivial to any more or less successful attorney. To wit, quoting the article linked above:

Attorney Christopher Kachouroff and the law firm of McSweeney Cynkar & Kachouroff were fined $3,000, jointly and severally. Attorney Jennifer DeMaster was separately ordered to pay $3,000. This “is the least severe sanction adequate to deter and punish defense counsel in this instance,” US District Judge Nina Wang wrote in an order issued yesterday in the District of Colorado.

Again, least severe. I think, entirely IMHO, not severe enough. Especially since they had already corrected it once, and had apparently done the same on a prior case which the Judge cited.

Seriously, if the fines stay this low and the consequences stay minor, this is likely to continue to be abused. I fully acknowledge it’s an evolving situation where judges may be erring on the side of correcting without punishing, but I think that’s exactly the wrong message. As I said upthread, attorneys have been doing an amazingly bad job of self-policing, and a comparative slap on the wrist for risking millions of dollars of their clients’ money (even if I hate the client in question) by throwing their work to an AI is unacceptable, especially when they’ve certified under oath that it’s been properly vetted.

Little late to the game, but I’d just add that I think it’s more a problem of not knowing how to properly use AI. AI is very confident in its well-written answers, and that can be scary if you don’t know what you’re doing. In the near future, lawyers who don’t know how to use AI won’t last too long. AI is here to stay and it’s only getting better, so you’d better figure it out, because there’s no non-AI way to compete with the volume and quality of work.

This is how much I use it. My immediate thought on how to handle a random brief written by some random AI, to determine whether it’s valid or not, would be to…run it through AI to check the citations. LexisNexis AI is really good with cites because it’s a legal research platform and is the source of the actual case law, whereas, say, ChatGPT sucks because it’s basically just scouring the internet, which is not a great source. Different AIs are great at some things and terrible at others. Saying “AI” can’t do this or that is pretty broad; it’s like saying “Apps” can’t do math. Candy Crush would suck at it, but the Calculator app is pretty good. That’s my simplistic way of thinking about “AI”.

Even then, they can be wrong initially. The first time, they are confidently wrong, and if you don’t know better it looks legit. I know that. I’m a lawyer, and I basically treat AI like a witness: you gotta show me how you know, why you’re so confident. Ask AI to verify important details with sources, and it will admit it’s less confident, or change its answer, or actually verify it, and then all good.

The new grads are great at it. It’s amazing. I make interns teach it to me.

With all that said, it’s always on the lawyer who signs the brief to be responsible and to be held accountable for the work product.

I keep hearing this kind of dictum from our corporate management, and can say with confidence that only one half of this statement is true. I find myself spending an increasing amount of time vetting and correcting the quality (and often basic factuality) of what is clearly chatbot-generated ‘work product’, and while I haven’t carefully tracked or plotted my effort, I can see that it will quickly come to overtake the time I spend doing more useful tasks. And this is in engineering, where it is pretty straightforward to demonstrate that a report or a specification is clearly nonsensical ‘analysis’ or a bunch of meaningless word salad. I can only imagine how it goes with legal documents, where if the form of the language looks good it is easy to gloss over the lack of substance or miss subtle counterfactuals.

Stranger

Ask and you shall receive.

[QUOTE=ChatGPT]How lawyers who improperly use AI should be punished depends heavily on the nature and consequences of the misconduct. Legal ethics already provides a framework for handling violations, and AI use is simply a new context within that structure. Here’s a breakdown:


:balance_scale: Types of Misuse and Appropriate Consequences

  1. Using AI to fabricate case law or citations (e.g., “hallucinated” cases)

Examples: Submitting briefs with fake citations generated by ChatGPT.

Consequences:

Mild (e.g., one-time error, no harm): Judicial reprimand, required CLE (continuing legal education) in tech competency.

Severe (e.g., pattern of dishonesty, intentional deception): Sanctions, fines, suspension, or disbarment for violating duties of candor and competence.

  2. Delegating legal judgment to AI without oversight

Examples: Letting AI write legal arguments without human review or misapplying the law due to blind reliance.

Consequences:

Negligence level: Referral for bar discipline, mandated supervision, or retraining.

Willful ignorance or recklessness: Suspension or malpractice liability.

  3. Violating client confidentiality via public AI platforms

Examples: Inputting client data into tools that store or reuse queries (e.g., public ChatGPT).

Consequences:

Unintentional exposure: Private reprimand, client notification, data protection training.

Intentional or repeated breaches: Disbarment, civil liability.

  4. Using AI-generated content to deceive or manipulate (e.g., fake evidence, altered transcripts)

Examples: Falsifying documents with AI tools.

Consequences: Immediate disbarment, possible criminal charges (fraud, obstruction).


:brain: Guiding Principles for Punishment

Proportionality: Punishment should match the harm done and the intent behind the misuse.

Deterrence: Enough to discourage reckless or unethical AI use by others.

Education: Often more appropriate than punishment when the issue is incompetence, not malice.

Responsibility: Lawyers must stay informed about technology they use—ignorance is not a defense.


:books: Relevant Standards

ABA Model Rules of Professional Conduct:

Rule 1.1 (Competence)

Rule 1.6 (Confidentiality)

Rule 3.3 (Candor to the tribunal)

Rule 8.4 (Misconduct)


If you’d like, I can draft model disciplinary guidelines or fictional examples (e.g., how a state bar might handle a lawyer who submitted fake case law using AI). [/QUOTE]

Thanks. That was thorough.

Today’s Chicago Tribune reported that lawyers for the Chicago Housing Authority used ChatGPT to draft pleadings, and cited nonexistent caselaw. (Apologies - the only links I could find require subscriptions.) They identified one lawyer - a partner - as having prepared the pleading, and said the firm fired her. Said she is now in private practice. No mention of court-imposed sanctions.

The first two times this happened, I laughed to myself. I hoped future lawyers would have learned from those earlier lawyers’ mistakes. Why I still hold such an optimistic view of human capacity to learn from example is beyond me. This is just plain stupid now. Throw the book at 'em.

That’s essentially what a federal judge in Alabama did recently. The state government filed a brief defending themselves from a claim brought by an inmate who was stabbed 20 times while in prison. That is, he was stabbed on 20 separate occasions, not 20 times in one fracas.

The judge found the three most senior lawyers on the file, including the state deputy Attorney General, to be in breach of their duty of candour to the court and is disciplining them. She also kicked the law firm off the case, absolved the junior lawyers of any responsibility for the actions of the senior partners, and rejected the “but how could we know that we can’t trust our computers?” defence:

The court has no difficulty finding that Mr. Reeves’s misconduct was more than mere recklessness. In the light of repeated general warnings from federal courts about the risks of bogus citations generated by AI, as well as the persistent specific warnings, policies, and expectations of his colleagues and law firm with respect to AI, Mr. Reeves’s misconduct was particularly egregious. Having been so extensively alerted of the risk that AI will make things up, and having blown through all of his firm’s internal controls designed to protect court filings from counterfeit citations, Mr. Reeves’s repeated decisions to parrot citations generated by AI without verifying even one of them reflect complete and utter disregard for his professional duty of candor. This is recklessness in the extreme, and it is tantamount to bad faith. Accordingly, the court will impose an appropriate sanction under its inherent authority.

And there’s another case brewing in Chicago, where last month a lawyer got thrown under the bus by the other partners on a file, for filing a brief with a hallucinated case. (Ironically, the lawyer there had written papers on the risks of AI and a lawyer’s ethical obligations, but tried the “but how could I have known?” defence, which did not go well.)

Done and dusted, thought the other lawyers from her firm. Painful but necessary to dispose of a colleague that way. Now let’s get back to defending the Chicago Housing Authority from claims that the lead paint in its buildings has contributed to permanent brain damage in children.

Except…

the lawyers on the other side went back and searched the previous filings by the law firm on the case, and found that they were riddled with AI hallucinations, as they explain in their filing with the court, asking for sanctions:

Following the Court’s July 17, 2025 hearing which primarily focused on the CHA’s citation to the fabricated case of Mack v. Anderson, Plaintiffs’ counsel more closely reviewed additional pleadings filed by Goldberg Segalla on behalf of the CHA. Based on that additional review, it is evident that the CHA’s misrepresentations, false claims, and reliance on non-existent case law were not limited to a single citation – or even a single pleading. Rather, now unearthed is a pattern of repetitive and continuous misrepresentations to the Court. The misrepresentations identified to date are addressed below. When applied to the appropriate legal standard, it is clear that severe sanctions are warranted.

Waiting to hear if the Chicago federal judge will follow the lead of the Alabama federal judge on the sanctions motion.

If I were ever to use ChatGPT to write a legal brief, I would (1) write out my own statement of facts, (2) derive and write out my own rule statements from case law that I have researched, and then (3) tell ChatGPT to write a CREAC- or IRAC-style analysis for each issue, marrying facts to law, and explicitly limited to the facts and the law that I have provided in the prompt (hence the importance of steps 1 and 2). And then of course I’d carefully review the whole thing to make sure it made sense, which I would be able to do effectively, having personally done steps 1 and 2 myself.

I’m guessing these people skip step 2. The underpants gnomes of the legal profession.
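
To make step 3 concrete, here’s a minimal sketch (in Python, with invented placeholder facts, rules, and issue) of how that kind of tightly scoped prompt might be assembled. It’s only an illustration of the “use only what I gave you” constraint, not anyone’s actual workflow, and the names in it are made up for the example.

[code]
# Hypothetical sketch of step 3: assemble a drafting prompt that is limited
# to the facts and rule statements the lawyer wrote in steps 1 and 2.
# The facts, rules, and issue below are invented placeholders.

FACTS = """On June 1, the tenant reported the leak to the landlord in writing.
The landlord made no repairs for the next 90 days."""

RULES = """Rule 1 (from cases I actually read): a landlord must repair within a
reasonable time after written notice.
Rule 2 (same): a 90-day delay with no action has been held unreasonable."""

ISSUE = "Whether the landlord breached the duty to repair after notice."


def build_prompt(issue: str, facts: str, rules: str) -> str:
    """Assemble a CREAC-style drafting prompt limited to the supplied material."""
    return (
        "Write a CREAC-style analysis of the issue below.\n"
        "Use ONLY the facts and rule statements provided here. Do not cite any\n"
        "case, statute, or fact that does not appear below. If the material is\n"
        "insufficient, say so rather than filling the gap.\n\n"
        f"ISSUE:\n{issue}\n\nFACTS:\n{facts}\n\nRULES:\n{rules}\n"
    )


if __name__ == "__main__":
    # Send this prompt to whatever model you use (step 3), then review the
    # output yourself against the cases you actually read (the human part).
    print(build_prompt(ISSUE, FACTS, RULES))
[/code]

The point being: everything the model sees in FACTS and RULES came out of steps 1 and 2, which is what makes the review in the last step actually possible.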

To get back to the OP’s question, it’s too broadly framed. If it referred to lawyers mis-using AI, I could certainly get behind it. But using AI properly, as a research tool, just like we use case search functions in WestLaw, and then actually reading the cases from a separate database like WestLaw? Why should you be disciplined for using a useful technology in the way it was designed?

That also assumes that LLMs are even capable of being made reliable, something that I doubt.

As for my opinion, I do think they should be severely punished in order to limit the damage to the profession and society at large.

Nor do I think that it’s “here to stay” as some have said; it’s simply too unreliable. I believe that either the fad will go by the wayside once it becomes obvious the costs outweigh any benefits, or civilization will outright collapse when the amount of AI-introduced error becomes unsustainable. And I don’t think that society is quite suicidal enough to pick the latter option.

Oh my.

A judge had to withdraw an order they made, relying on cases cited by counsel, after an appeals court pointed out that the trial judge was citing non-existent cases. The trial judge will no doubt be having words with the counsel who led them down the garden path and exposed them to correction in a very public way by the appeals court.

That will not go well for counsel.

We shouldn’t just stop in the courtroom with lawyers:

Mississippi Today: “AI ruling? Attorneys baffled by federal judge’s order that lists incorrect parties, wrong quotes”

Stranger