Lawyer caught using AI-hallucinated cites when defending a disciplinary hearing for using AI-hallucinated cites?

Yes, courts and the law societies are both working on providing guidance for the use of generative AI.

The leading case was one from BC last year, which has triggered a fair bit of work in the area from the BC Law Society. If you Google “BC law society AI” you get a lot of hits.

In the Zhang case, the BC court ordered the lawyer to pay costs for the time thrown away by the opposing firm, but declined major sanctions, accepting the lawyer’s apology and explanation that she did not know about the pitfalls of AI.

In early 2024, that apology made sense. If it were to happen today, not so much.

Here’s a good summary put together by one of the major national firms, McCarthy Tétreault.

https://www.mccarthy.ca/en/insights/blogs/techlex/landmark-decision-about-hallucinated-legal-authorities-bc-signals-caution-leaves-questions-about-requirement-disclose-use-ai-tools

At least from what I have seen, these lawyers are not using AI with the express (or even tacit) purpose of obtaining advantage by having AI hallucinate cases in their client’s favour. Rather they are simply having AI generate submissions to save time and effort, and are recklessly not checking whether any of it is hallucinated.

I honestly don’t know how that could be. My exams (1) were on software that prevented me from using the internet and (2) in any event, answers were expected (required) to cite to and apply cases covered in the course readings.

The exception would be first year legal writing or upper level writing courses (of which I had to take only one). But with legal writing, the prompt was at the direction of the professor, according to a set of facts developed by the professor, and so we were effectively channeled into a specific area of law and the whole point was whether we could find and interpret the cases with the strongest precedent one way or the other.

Now, the upper level writing course was a bit more free-wheeling, but there were also discussions of the proposed topic with the professor and a collaborative development of possible sources. Plus the professor was an expert in the particular area of law, leaving not much room to get by with AI.

There are ways around that. If nothing else, by using the Internet on a different device, like a phone. Which the test proctor should have caught, but might not have, especially if they’re older and less familiar with modern smart devices. Or if they’re also as lazy as the lawyer who got caught, and just didn’t watch the test as closely as they should have.

Law school exams are lengthy. Lengthy prompts (pages and pages of obscenely complex fact patterns) and lengthier answers expected. And, again, the answers all need to cite to cases covered in the coursework itself.

Using AI at all would be impractical to say the least, let alone on another device, which would require typing in the prompt and then copying the answers back into the answer space, all while in front of a proctor and surrounded by other students.