AI Gone Wrong: The Case of the Fake Legal Citations


In a first for Missouri, a state appeals court has hit a litigant with a hefty $10,000 sanction for using artificial intelligence to generate nearly two dozen entirely fabricated citations in a court filing. It’s a cautionary tale about the potential misuse of AI that could prompt new court rules.


The case involved Jonathan Karlen, an O’Fallon man appealing a $311,000 judgment he was ordered to pay a former employee for unpaid wages and attorney fees. Karlen’s appeal brief cited 24 legal cases and authorities, but 22 of them were completely made up: “AI hallucinations,” as the court put it.


The ruse began to unravel when attorneys for Kruse, the former employee, realized something was off. As one lawyer said, “He was making claims in his argument that just didn’t comport with the law.” Red flags went up: the cited cases couldn’t possibly be saying what Karlen claimed they did.


It turns out Karlen had hired a legal consultant in California who promised low-cost help, not realizing the consultant had simply used AI to auto-generate the citations and content of the brief. Karlen later confessed and apologized, saying it “was absolutely not my intention to mislead,” but the appellate judge wasn’t having it.


In a scathing rebuke, the judge called Karlen’s submission of “bogus citations” for any reason a “flagrant violation” of his duty of candor to the court that simply could not be tolerated. Legal experts say it’s the first sanctions order of its kind in Missouri related to fictitious AI-generated legal citations, which are apparently becoming an issue in other states as well.


According to one former judge, courts may now need to “enact rules that require lawyers and pro se litigants to verify all citations and to identify any portions of their submissions that are generated by artificial intelligence.”


The takeaway: Misrepresenting authorities to a court is a severe ethical breach. While AI presents new challenges, the core principles of accuracy and candor still apply. Lawyers and litigants would be wise to tread carefully with generative AI tools in legal work for now, lest they find themselves in Karlen’s shoes: stuck with a hefty sanctions bill rather than a successful appeal.


What do you think about this case and its implications? Have you seen other examples of generative AI being misused in legal proceedings or other professional contexts? I’d love to hear your thoughts in the comments.

