Lawyer used ChatGPT to do his job, and it spectacularly failed

A lawyer used ChatGPT to do his job for him, relying on the popular AI chatbot to write a 10-page court submission. As you might expect, the experiment did not go well at all.

Steven Schwartz, a lawyer at the firm Levidow, Levidow & Oberman, used OpenAI’s chatbot to help prepare an important court document for his client.

Schwartz’s AI-assisted document was integral to the case of airline passenger Roberto Mata, who is suing the airline Avianca over an injury to his knee after a serving cart struck him in 2019.

The lawyer used ChatGPT to bolster his case before the Manhattan federal court. However, the AI program filled the document with made-up precedents and fake cases in support of Mata’s claim.

In the document, ChatGPT cited six cases, supposedly decided between 1999 and 2019, that related to Mata’s claim. However, when asked to produce these past cases, the legal firm could not.

Furthermore, the defendant’s lawyers also examined the claims within the document. The legal team said it was unable to locate the cited cases “by caption or citation, nor any case bearing any resemblance to it”.

The lawyer who used ChatGPT to write the legal document was heavily criticised by the case’s presiding judge, P Kevin Castel. Schwartz’s document was described as presenting an “unprecedented” circumstance: a filing full of “bogus judicial decisions, with bogus quotes and bogus internal citations”.

In evidence supplied to the court, the ChatGPT-using lawyer had asked the AI program whether the supplied cases were real. When asked if a specific citation was “a real case”, the chatbot insisted that it was.

The lawyer also asked the AI program whether the other supplied cases were real. ChatGPT, presenting its fabrications as fact, replied: “the other cases I provided are real and can be found in reputable legal databases.” They could not.

As with all large language models, ChatGPT may write as if it understands what is being asked, but it has no real knowledge of what it is saying. As a result, such programs routinely produce convincing misinformation, a failure known as “hallucination”, and pass it on to users.

Since the legal document was filed, Schwartz has apologised for his reliance on the technology, describing the AI program as “a source that has revealed itself to be unreliable”.

He has said that he “greatly regrets” using the AI program for his work and that it will not happen again.
