If you have any doubts about both the power and the pitfalls of artificial intelligence, I have a hair-raising trial for you.
It involves the well-known Morgan & Morgan law firm submitting a legal brief that cited nine cases as precedents that appeared helpful to its clients.
The problem: Eight of those cases were completely fictional. They simply do not exist.
Members of the Orlando-based Morgan & Morgan legal team relied on artificial intelligence to help write their legal filings. And when the chatbot couldn’t find case law that might be useful, it apparently just created some.
Talk about truly “artificial” intelligence.
The chatbot not only invented plaintiffs and defendants, like Wyoming vs. the U.S. Department of Energy, but also produced dates and case numbers. It was remarkably specific horse hockey.
I asked the firm’s founder, John Morgan, how that happened.
His response: “One of our lawyers relied solely on AI and got smacked.”
Except he didn’t actually say “smacked.”
I think it’s safe to say that John came up with that statement without assistance from AI.
And to its credit, the firm basically offered a full-throated mea culpa, saying as much in a filing that threw itself on the mercy of the court.
“This deeply regrettable submission serves as a hard lesson for me and our firm as we enter a world in which artificial intelligence becomes more intertwined with everyday practice,” wrote Mike Morgan, John’s son and the firm’s lead counsel on the case, in the filing. “While artificial intelligence is a powerful tool, it is a tool which must be used carefully. There are no shortcuts in law. On behalf of myself and our firm, I apologize, with the deepest humility, to the courts and to opposing counsel.”
The legal drama unfolding in a Wyoming courtroom has yet to attract much mainstream media attention. But in legal and tech circles, it has gone viral.
“Lawyers face judge’s wrath after AI cites made-up cases in fiery hoverboard lawsuit,” blared the tech news site The Register.
“Did a major law firm just get ChatGPT-ed?” asked David Lat, a legal analyst, in his Original Jurisdiction newsletter.
That’s a good question. And it provides a cautionary tale for anyone rushing to embrace and rely on brains other than their own.
The case involves Morgan clients suing a hoverboard maker and the Walmart chain, which sold a hoverboard that allegedly burst into flames. When Walmart’s lawyers tried to look up many of the legal precedents cited by the Morgan team, they discovered that most of them didn’t exist.
I believe the legal term for that is ex post facto malarkey-o.
Naturally, the judge was unamused and asked the Morgan team for an explanation.
That’s when Rudwin Ayala, a Morgan lawyer based in South Florida, admitted the team had been unable to verify the information, said he was embarrassed and took responsibility.
“With a repentant heart, I sincerely apologize to this court, to my firm, and to my colleagues representing the defendants for this mistake and any embarrassment I may have caused,” Ayala said in a Feb. 13 filing. “This past week has been humbling for me, professionally and personally, and I can guarantee it will not repeat itself.”
The judge seemed to appreciate the candor and contrition. U.S. District Judge Kelly Rankin ruled Monday that Ayala could no longer work on the lawsuit, but he fined Ayala and the other supervising Morgan lawyers only $1,000 apiece, praising the lawyers for coming clean, apologizing and offering to pay opposing counsel’s fees. The judge noted that other lawyers caught citing fake AI-generated cases hadn’t done the same.
Yes, other cases, because this sadly isn’t the first. In a 2023 case, lawyers representing a man suing the airline Avianca also submitted a legal brief citing fictitious cases invented by ChatGPT. And when the Reuters news service surveyed lawyers, nearly two-thirds said they had turned to AI for help.
My own experience suggests that many people who rely on AI do so blindly and often end up looking like fools.
Perhaps. Then again, people who unquestioningly rely on anything for their facts (Facebook posts, hot takes from random relatives) are liable to repeat false information that someone else delivered with authority, without ever trying to verify it.
One of the most frightening things about this story is that when the lawyers and others asked the chatbot for details about the fake cases, the chatbot provided them.
Yes, when asked about the cases it had lied about in the first place, AI may simply generate more detailed and elaborate lies.
As we all know, one of the biggest problems with lies is that those who tell them often end up concocting more lies to justify the first one. Well, that’s apparently a problem not just for humans, but also for the technology that’s supposed to have our backs. It’s a good reason to scrutinize everything you hear.
©2025 Orlando Sentinel.