A Utah lawyer has been sanctioned by the state court of appeals after a filing he submitted was found to have been drafted with ChatGPT and to contain a reference to a fake court case.
Richard Bednar, an attorney at Durbano Law, was reprimanded by officials after filing a ‘timely petition for interlocutory appeal’ that referenced the bogus case.
The case referenced, according to documents, was ‘Royer v. Nelson’, which did not exist in any legal database and was found to have been made up by ChatGPT.
Opposing counsel said the only way they could find any mention of the case was by using the AI.
They even went as far as asking the AI whether the case was real, noting in a filing that it then apologized and said the citation was a mistake.
Bednar’s attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.
He told The Salt Lake Tribune: ‘That was his mistake. He owned up to it and authorized me to say that and fell on the sword.’
According to documents, the respondent’s counsel said: ‘It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT) and references to cases that are wholly unrelated to the referenced subject matter.’
The court said in its opinion: ‘We agree that the use of AI in the preparation of pleadings is a research tool that will continue to evolve with advances in technology.
‘However, we emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings.’
As a result, Bednar has been ordered to pay the opposing party’s attorney fees in the case.
He was also ordered to refund any fees he had charged clients to file the AI-generated motion.
Despite the sanctions, the court did ultimately rule that Bednar did not intend to deceive the court.
The court did say that the Bar’s Office of Professional Conduct would take the matter ‘seriously’.
According to the court, the state bar is ‘actively engaging with practitioners and ethics experts to provide guidance and continuing legal education on the ethical use of AI in law practice’.
DailyMail.com has approached Bednar for comment.
It is not the first time a lawyer has been sanctioned for using AI in legal briefs: a strikingly similar case arose in New York in 2023.
Lawyers Steven Schwartz, Peter LoDuca and their firm Levidow, Levidow & Oberman were ordered to pay a $5,000 fine for submitting a brief containing fictitious case citations.
The judge found the lawyers acted in bad faith and made ‘acts of conscious avoidance and false and misleading statements to the court’.
Prior to the fine, Schwartz had admitted that he used ChatGPT to help research the brief in the case.