I’m not a law-talkin’ person, but I’ve run across more than one article about lawyers using AI to create their references and such. Is this an infraction that deserves special punishment, or just a referral to whatever body is appropriate to sanction the lawyer, with no added penalty? Here’s an example of such a case, though not the most egregious.
There are lots of ways to use AI that are perfectly legit (so your question is too broad). However, using AI to provide case citations in a brief that the lawyer hasn’t verified and read is a serious offense and should be punished severely.
Oh, are there? I’ve only come across it in articles like this one, so instances like these were the only ones I knew of.
This.
Sure. You could use it for some initial research, to help guide your more detailed research on case law and cites for creating your brief. The issues in your OP arise when AI is used to generate an actual brief without the due diligence of reviewing what the AI spit out, and when the lawyer trusts that the cites an AI tool has provided are legitimate.
Sure, you can ask AI to summarize depositions, or to search an expert’s publications for key words or concepts: “Has this expert ever commented on the standard of care for root canals?” Of course, there wouldn’t be news articles about that. It is newsworthy when a lawyer has AI write a brief, because that’s just bad lawyering.
I think it should, though there’s always been a degree of risk for any lawyer who delegates a share of their work. I am reminded of a story my stepmother told me. She had asked one of her firm’s paralegals to pull up a bunch of precedents for a case she was working on. She did a quick review, and they all looked good, so she went on to work on a number of other cases. When she went back to work on the brief, something was nagging at the back of her mind, so she double-checked.
The paralegal wasn’t wrong, but had (likely from error or inexperience, since they weren’t fired) left off the two most recent precedents for the case. If she had written the brief leaving those out… well, not a good look.
So there’s always been a strong need for “trust but verify” even in paid, professional legal assistance. With AI, take that trust with a massive cube of salt, and take the verify step very carefully.
Back to the punishment. I feel that the various Bar and other professional associations do a pretty terrible job of managing lawyers. Like a lot of predominantly self-policed groups, they seem veeeeeeeerrrrrryyyyy reluctant to do anything unless (duh) it is making the rest of the profession look bad in a highly visible way.
I do wonder, though, if I were being represented by such a firm, what my legal recourse could be against said firm. Given the damage to my reputation, and possible legal consequences (delays, or loss of my case), I think it could be an interesting case where a punitive lawsuit against my (now former) lawyer could yield satisfying results. Sure, they may take years to eventually be sanctioned (if at all), but if they’re forced to close up shop due to the consequences of a financial judgment against them, it’s still a step toward getting them out of circulation.
What punishment is offered for an attorney who evinces reckless disregard for the truth? If an attorney cited a claim that she overheard in a bar, from memory, would that be sanctionable?
Citing AI should be treated similarly. It’s not that the person at the bar is necessarily wrong; it’s that relying on that cite demonstrates reckless disregard for the truth.
Same with citing AI.
Multiply the fines listed in the OP’s link by ten and it might help discourage offenses like this. But fining a global law firm with $1.3 billion in annual gross revenue $31K doesn’t even seem like a slap on the wrist: that’s about 0.002% of revenue, the equivalent of a one-dollar fine for someone grossing $40K a year.
I agree. Personally, I think the big firms are likely too big, too connected, and too wealthy to ever be held accountable. I’m speaking of targeting the low-hanging fruit, as it were: the little guys who have an office in a “professional” strip-mall area. Though that’s probably a bit unfair; they’re the least likely to have the resources to do the best possible job in the first place, and have far more excuses for cutting corners than the big boys.
But I’m not speaking of fines precisely; more whether the hypothetical person being represented by AI-using lawyers has the right to sue said lawyer for gross negligence or other breach of duty, and gut them that way, since, as I said, the Bar does a very poor job of self-regulation.
I agree with the general drift of the thread. Lawyers (and any other professionals) who understand the limitations of current “AI” tools and leverage them responsibly in the use cases for which they are appropriate… fine.
Lawyers who lazily substitute AI for actual work… disbar them immediately.
He should be disbarred.
I have to disagree with this as a general principle. Part of being a lawyer isn’t just to look up and cite caselaw and write briefs but to actually understand the theory and application of law at a deeper level than a layperson can, and this comes from experience, especially doing research. A lawyer who becomes dependent upon AI tools to do essential work like research or formulating a position is not going to develop the ‘legal muscle’ to think deeply about the case they are developing. This is true for other areas of professional expertise as well: physicians need to learn how to identify signs and symptoms to diagnose conditions instead of depending upon a program to feed them answers; a programmer needs to learn how to structure code and how to develop a high-level architecture that solves a complex problem; and an engineer needs to understand the fundamental principles of their discipline and how they apply to solving complex real-world problems.
This isn’t to say that “AI”, i.e. machine learning, isn’t useful and even crucial for improving performance in some kinds of tasks. I can see using a generative AI trained in legalese to do a first-pass copyedit of a brief to assure that the language is consistent and clear, or a doctor using an ‘expert system’ for guidance on atypical medical conditions that are difficult to diagnose, or an engineer or scientist using machine learning to tease out subtle patterns or develop adaptive solutions to problems that defy normal formalistic or statistical approaches. But I’m increasingly seeing people use “AI” (which for most means LLM-based chatbots) as a shortcut to doing the hard work and developing critical knowledge. They then get answers that are not just wrong, but so evidently wrong that they should be immediately recognizable as totally off base from the basic intuition that comes from experience in the field; yet instead of questioning the result, they cut & paste the response into their work product without doing even basic error checking. Relying upon these tools contributes to the erosion of knowledge and functional experience, and often of the basic skepticism that anyone should apply to a result that arrives without evidence of having been rigorously worked out from accepted principles.
Stranger
Another article describing an instance where someone used AI to create legal citations that did not exist. You’d think lawyers would start being more careful.
If you are not a careful enough attorney to actually research and read the citations to establish how they are pertinent to your brief, then you probably aren’t careful enough to check and see if they even exist, either.
I’m seeing increasing use of AI in engineering and technical work, ostensibly just to summarize the analysis or research, but so often the summary is so completely wrong and misleading that it is clear the author not only didn’t check it, they probably didn’t even do the work, because the conclusions are obviously incorrect. I can’t wait until generative AI gets good enough to produce plausible-looking figures and gin up datasets that defy easy means of falsification. I’ve used machine learning ‘AI’ tools for data analysis for years to tease out difficult-to-find patterns, but I’m always diligent about trying to independently verify what I’ve seen because of the potential for ‘overtraining’ (overfitting) that produces bogus patterns, and I can’t imagine ever using an LLM to write a paper or memo for me.
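To illustrate the kind of independent check I mean, here’s a minimal sketch (assuming scikit-learn; the data is a deliberately meaningless stand-in): a flexible model will happily “find” a pattern in pure noise, and only validation on held-out data exposes it as bogus.

```python
# Minimal sketch: a flexible model "finds" a pattern in pure noise.
# Data and model are hypothetical stand-ins; scikit-learn assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))      # 200 samples of 50 noise features
y = rng.integers(0, 2, size=200)    # labels with no relation to X at all

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))          # ~1.0: looks like a pattern
print("cross-validated:", cross_val_score(model, X, y).mean())  # ~0.5: it's bogus
```

If the “pattern” evaporates on data the model never saw, it was never there.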
Stranger
The common thread in all of these is people who use AI to save time, part of the savings being not looking at the result. I’m pretty sure there are tools to look up citations, right? Would using such a reliable tool take more time than using AI and then checking? I wonder how we can enforce checking.
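For what it’s worth, such tools exist: Free Law Project’s CourtListener runs a citation-lookup API built for exactly this. A rough sketch of what checking a draft might look like (the endpoint path and response fields here are from memory and may have changed, so treat them as assumptions and check the current docs):

```python
# Rough sketch: submit a draft's text to CourtListener's citation-lookup
# API and flag citations that match nothing in its database. Endpoint
# and response fields are assumptions from memory -- verify against the
# current API docs (real use likely also requires an API token).
import requests

with open("draft_brief.txt") as f:
    draft = f.read()

resp = requests.post(
    "https://www.courtlistener.com/api/rest/v3/citation-lookup/",
    data={"text": draft},
)
resp.raise_for_status()

for hit in resp.json():
    if hit.get("clusters"):  # one or more real cases matched this cite
        print(f"OK        {hit.get('citation')}")
    else:
        print(f"NOT FOUND {hit.get('citation')}  <- possible hallucination")
```

Even then, “exists” is a low bar: the lawyer still has to read the case to see whether it says what the brief claims it says.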
Here is yet another example, this one of a guy at King Features who used AI to generate a 15-book summer reading list. All of the authors exist; 10 of the 15 books do not.
Link:
A popular feature that you’re likely to see in a major metropolitan newspaper this time of year is a summer reading list.
And if you are a reader of the Chicago Sun-Times or The Philadelphia Inquirer, you recently saw such a syndicated list of new books by famous authors, including Percival Everett, who won the 2025 Pulitzer Prize for fiction, and Andy Weir, who wrote “The Martian.”
One problem: The authors are real, but the books they supposedly wrote are not. Turns out, the list was generated by artificial intelligence. Of the 15 books, only five are real. The rest? Made up by AI.
Has anyone asked ChatGPT how attorneys who improperly use AI should be punished?
Oddly, autocarrot doesn’t recognize ChatGPT…
IMHO, the question is too broad to be answerable. Asking whether a lawyer who used an AI assistant in the preparation of a case should be punished is like asking whether the lawyer should be punished for using any particular tool. Like all professionals, lawyers should be judged on the results they produce, not on the tools they use.
Suppose a lawyer used a legal encyclopedia for reference, and the encyclopedia turned out to be outdated or wrong? Should he be “severely” punished for using the encyclopedia? No, but he definitely should be held accountable for failing to verify the information. Again, judge them based on results, not the tools they use.
The idea that AI should never be used for anything of importance is becoming ridiculously untenable as the technology evolves. IBM’s Watson was developed specifically to be an AI assistant. LLMs are currently in the experimental stage, but will likely become even more advanced assistants as they evolve.
If it were Joe Blow’s legal encyclopedia, probably not a good thing to use. The reliability of the source is vital. It is not enough to cite a scientific paper when you are writing one, you need to cite a paper in a legitimate journal, and one which is adequately refereed. Those skilled in the field know the difference.
You said it yourself, LLMs are experimental. Once they get reliable, I agree using them could be legitimate. Relying on them today without careful checking of the results is not legitimate.
Our publisher did not allow us to use Wikipedia as a reference. I sometimes looked at the entry to get an overview, but I used other sources. Some wiki articles were great, but you can’t blindly use Wikipedia as a source. Ditto for LLM results.
This. Very much.
Lawyers (some, not all, but not a microscopic percentage) blatantly lie and make shit up ALL THE TIME. Even more earn their bread through bluster and intimidation.
So much is made about them being a “profession,” “officers of the court,” and legal ethics. But the majority of the thousands of lawyers I’ve interacted with professionally over the years are just schmucks doing their jobs. Most are not necessarily more or less honest than pretty much any other non-blue collar worker.
I don’t see any problem using AI for just about anything you can figure to apply it to in the practice of law. But you oughtn’t rely on its accuracy and reliability. A responsible lawyer ought to read the very language of the statute, regulation, or caselaw they are citing. I would not rely on ANY third-party representation over the original source. Generally, you want to at least skim the case you are citing to put your cite in context. The question of what constitutes a decision’s “holding” can become quite esoteric.
BUT - it is not all that often that a specific citation - or a chain of cites - is going to carry the day. It is always so stupid in law shows when the lawyer stands up and says, “Smith v. Jones, your honor!” IMO, strings of cites and excessive footnotes are of limited value unless married to a persuasive argument. And if you are arguing common sense, it doesn’t necessarily become more sensible just because someone in a robe wrote something similar once.
In most areas of law, the major precedents are pretty commonly known, and barely need to be cited. If someone is arguing for law in a different jurisdiction to be applied or something, well, the opposing party ought to check those cites.
I don’t think there is a shortage of lawyers, so I generally would support stricter punishments. But that’s never gonna happen. I could imagine some period of suspension for the lawyer in question, and some showing by the firm that they were conducting training. I don’t understand why the defense did not notice the BS cites. Usually briefs are required to be formatted in a manner allowing for computer cite checking (a toy sketch below shows the sort of pattern a checker keys on). And it is kinda stupid for the judge to say he was “nearly persuaded” by the cites before he checked them.
Don’t know what the result will be, or whether the client might bring a malpractice action.
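On that cite-checking point, here’s a toy illustration of what an automated checker keys on. Real tools (eyecite, Shepard’s, KeyCite) handle hundreds of reporter formats and edge cases; this regex is a bare-bones sketch only:

```python
# Toy sketch of automated cite extraction: match "volume REPORTER page"
# patterns so each cite can then be looked up in a real database.
# Real checkers cover hundreds of reporter abbreviations; this covers a few.
import re

CITE = re.compile(
    r"\b\d{1,4}\s+"                                                          # volume
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s?2d|\s?3d)?)"  # reporter
    r"\s+\d{1,4}\b"                                                          # first page
)

brief = "See Smith v. Jones, 123 F.3d 456 (9th Cir. 1997); cf. Roe, 410 U.S. 113."
print(CITE.findall(brief))  # ['123 F.3d 456', '410 U.S. 113']
```

Each extracted cite can then be checked against a citation database, like the CourtListener lookup sketched upthread.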