The Reminger Report: Emerging Technologies
AI Hallucinations in the Courts: Lessons from Emerging Case Law
In this episode of the Reminger Report Podcast on Emerging Technologies, Zachary Pyers and Kyle Wallace discuss the growing legal challenges posed by lawyers’ and judges’ reliance on artificial intelligence—particularly the issue of AI‑generated “hallucinated” case citations. The conversation traces the evolution of these problems from early cases like Mata v. Avianca to the more recent Shahid v. Esaam, where an appellate court discovered that a trial court had unknowingly incorporated faulty, AI‑generated citations into its decision. Kyle and Zach explore the ethical obligations of attorneys under rules like Rule 11, the judiciary’s response to AI‑related errors, and why the continued rise of AI calls for vigilance, verification, and responsible use rather than abandonment.
ZBP Zachary B. Pyers, Esq.
KW Kyle Wallace, Esq.
| ZBP | Welcome to this episode of the Reminger Report Podcast on Emerging Technologies. Today I am joined by one of my associates, Kyle Wallace. Kyle, thanks for coming on and talking with us today.
| KW | I appreciate it.
| ZBP | So today we’re going to be talking about what seems to be a recurring theme on this podcast, which is artificial intelligence. Now I know today we’re going to take a slightly different spin on it, not necessarily guided by our own actions but by some of the problems that have arisen. So tell us, from a 10,000-foot overview, a little bit about the problems that we see lawyers having with artificial intelligence as they try to implement it into their practice.
| KW | I think one of the key issues with AI currently is improper citations, and that can mean a multitude of things. We’re seeing a lot of AI hallucination right now, things like fictitious cites or improper holdings attributed to a potentially real case, and that’s starting to have implications in the legal field as attorneys fail to review their citations. It’s starting to mislead judges and courts with citations that just aren’t accurate, which will have a domino effect in our legal system.
| ZBP | Now I note that legal experts have been issuing what I would call extreme warnings on both sides of this. When we talk about artificial intelligence, I know we’ve seen a lot of people indicating that AI is not going to change the profession at all, that we’ll never be replaced, that nothing will ever change. And I’ve also heard the other end of the spectrum, that lawyers will become obsolete in 10 years because artificial intelligence is going to replace all of us. I happen to believe we’re somewhere in the middle, but we’ve seen reports, even from Chief Justice John Roberts back in 2023, warning the profession that it needs to be cognizant of artificial intelligence and the role it will play in our legal practices, and that we need to be aware of and prepared for it. So talk to us about the case that brings us here today.
| KW | So actually there are a couple of earlier cases worth noting, because I think they show the development of how AI is being used. Originally we had some cases coming out of the federal courts, like Mata v. Avianca, one of the original cases of improper citations. That was an instance, which a lot of courts have since seen, of attorneys utilizing artificial intelligence for an improper purpose. Whether it’s taking shortcuts or relying too heavily on the technology, we’re seeing them implement AI into important motions, including dispositive motions. The Mata case kind of stands for that: we had an attorney who used artificial intelligence and relied on the sources and cites that the AI produced. The judge was fortunate enough to catch it while fact-checking the attorney’s cites. They noticed that, for one, they couldn’t find the case, and two, they found cases with a similar caption that were completely out of left field from the proposition being cited. So that started the trend of courts needing to double back on attorney citations given the high use of AI nowadays, which brings us to the case we’re talking about today, Shahid v. Esaam. This is one of the few cases we’ve seen where a court actually adopted AI-produced citations at the trial level, and now it’s on appellate review regarding the improper citations used by the court, which ultimately resulted from adopting a party’s improper citations.
| ZBP | So the difference is, in some of the earlier cases you described, the trial court caught the improper citations, these hallucinations or what have you, but in this situation the trial court didn’t catch it and, in fact, utilized the incorrect citations in its own decision, I’m assuming, and then the case went up on appeal.
| KW | Correct. From my understanding, the trial court actually utilized them in an order dismissing the case. The case was then re-opened on a service issue completely unrelated to the improper citations, but as the appellate court reviewed the case, they said, hold on here, what are these citations? That opened an investigation into not only the appellate cites in the case but the trial court’s cites as well, and the parties had to brief that issue.
| ZBP | So tell me, as the trial court’s decision is before the appellate court, I don’t want to say the trial court is on trial, but what is the fallout from this? What is the appellate court doing or looking at when it reviews the trial court’s use of citations, which may or may not be hallucinated?
| KW | So they reviewed the citations for both the substance of the cite and the cite itself, and then they separated out the issues of hallucinations and improper propositions. We’re starting to see AI hallucinate cites and just make up names, generally generic ones. I think in this case one involved Johnson & Johnson, and there are probably a million Johnson & Johnson cases. That really leaves the court hamstrung as to which Johnson case, if any, the filing is referring to. And then we’re also seeing real citations to real cases but with holdings misstated to an extent that it’s preposterous to try to align the case with the AI-drafted proposition. So the court points out that AI is here, it’s probably going to stay, and it’s going to be utilized. But attorneys, before even representing anything to the court, have an ethical obligation, and a civil procedure obligation, to ensure that what they’re signing their name to is accurate, correct, and stands for the proposition they’re citing it for. That’s what the appellate court is really speaking to. The issue isn’t whether judicial notices and orders will start forbidding AI use; it’s that there is a human factor to using AI, and attorneys can’t just rely on it wholesale. They have to do the due diligence themselves to ensure that what the tool produces actually stands for what they say it does.
| ZBP | It’s one of the things I’ve often discussed with people in this industry: we as lawyers already have an obligation when we sign our pleadings and our discovery responses. Whenever our signature goes on any of these types of documents, especially in the civil litigation context, we have, both in federal court and in state court here in Ohio, Rule 11, which says that we as lawyers and officers of the court, by signing these documents, represent to the court that what we put in them is true and accurate. And so when you find these situations where lawyers, or in this case judges, are signing documents, and I’m going to put the judges aside just for a minute, at least as practitioners, we have always had these obligations to double-check and make sure that the citations, and the propositions of law related to those citations, are accurate. So when we talk about an extra burden of having to verify either that you didn’t use AI, or that you used AI but then checked everything, aren’t these additional verifications redundant, to the extent that we already signed the document indicating it was truthful and accurate under Rule 11, and couldn’t we just be sanctioned under that rule? Obviously that’s a rhetorical question, and I understand the courts’ need, because it sometimes seems people don’t think this AI issue falls under Rule 11, otherwise they would be double-checking. But I agree with your point. Now we’re seeing the fallout from the use of AI affecting the judiciary as well.
| KW | I think this is one of those cases of first impression. Will it be the last? I doubt it, and this might just be the first one caught. AI has caught on like wildfire, and I think it’s here to stay. It’s the elephant in the room that we may as well learn from, because I truthfully don’t think it’s going anywhere. I think it can be a resource, and I think these cases highlight, even in a negative posture, the benefits AI can bring at that preliminary stage of drafting a pleading or doing research. If it’s confined to that use, I think it’s extremely beneficial for the legal system, from a client-service perspective, because it will save time on research and on drafting, and going forward it makes attorneys better, because using technology well makes us more efficient across all facets of the legal profession. So while these cases put AI in a negative light, there is some good to be seen from them: they put attorneys on notice of the proper and improper uses of AI, and we can build from there.
| ZBP | Awesome. Kyle, I appreciate you coming on today and talking to us about this, especially this recent development. Now that we’re seeing, for lack of a better term, the fallout spread, I know it’s super helpful for us to have this background knowledge as both a warning, right, and also to just educate us as practitioners what we need to be doing. So thank you for taking the time to come on today.
| KW | I appreciate it. Thank you.