
Artificial intelligence may provide new avenues for legal malpractice

Jun 14, 2023 | Legal Malpractice, Trial Errors

Media reports are full of dire predictions about harm from artificial intelligence (AI) – up to and including an actual, apocalyptic doomsday. Meanwhile, lawyers are sizing up whether and how AI might support or enhance the practice of law. After all, an AI-powered program reportedly passed a bar examination.

But it is becoming apparent that attorneys need to tread carefully regarding AI in their law practices. A high-profile incident illustrates the potential for harm.

AI hallucinations

ChatGPT is an AI-powered chatbot that works through dialog with its user and can generate written content – to the chagrin of teachers and professors everywhere. ChatGPT is a product of OpenAI, which is entirely transparent on its website that ChatGPT is a work in progress still in the research stage. The company broadly invites user feedback for improvement. (As of this writing on June 14, 2023, the application is free to use from the OpenAI website.)

OpenAI explains that ChatGPT may create writing that is “incorrect or nonsensical,” can “exhibit biased behavior” and has produced “harmful and untruthful outputs.” If it cannot identify the correct answer to a user’s question, it may make up a response – and state it in convincingly sophisticated language. When a chatbot writes completely invented, untrue text in this way, the industry dubs it a hallucination.

Lawyer-user beware

After learning about ChatGPT from his adult children, a New York attorney with three decades of experience recently asked ChatGPT to write a brief in support of his injured client’s case. After he filed the resulting brief in federal court in Manhattan, opposing counsel discovered that the filing cited cases that do not exist – and that the language quoted from those cases was equally fictitious. In other words, the bot hallucinated them, a phenomenon unknown to the lawyer.

Because the first lawyer was not licensed to practice in that federal court, his law partner signed the brief – without checking its contents.

At a sanctions hearing, the original lawyer reportedly told the judge that his use of nonexistent cases was unintentional, that he had no idea the bot could create untrue information and that he thought ChatGPT was a “super search engine.” According to The New York Times, he also said that he was “embarrassed, humiliated and deeply remorseful.”

Final thoughts

We do not know whether the court sanctioned the attorney and/or his partner. Nor do we know if the unfortunate incident harmed the client’s case or interests. But the scenario shows how legal malpractice could occur when a lawyer relies on AI for their work on behalf of a client.

In a legal malpractice claim, the attorney must have breached the applicable duty of care to the client, and that breach must have harmed the client’s interests. Setting aside the AI aspect of the New York incident, it is standard practice for an attorney, before filing a pleading with a court, to check all case citations and quotations against the original sources – to confirm their accuracy, that the cases are still good law and that the quotes are correctly worded.

Arguably, the New York lawyer did not follow the accepted practice of cite checking, and that omission breached his duty of care. However, we do not know whether or how the breach harmed the client. Did the judge allow a new brief to be filed, or did he refuse to extend the filing deadline, leaving the AI-composed brief as the client’s official argument?

While this example could probably be resolved under a basic, pre-existing aspect of the duty of care, future AI products and applications could raise thorny ethical and practical issues for lawyers and their clients. Identifying the scope of an attorney’s duty of care in an unforeseen AI scenario could become crucial to future questions of legal malpractice.