Tampa – The federal case of Tim Burke, the Tampa media figure whose saga has touched on Fox News, the American media and complex questions about free speech, took another decidedly modern turn this week, courtesy of artificial intelligence.
One of Burke’s lawyers relied on AI tools, including ChatGPT, to research and write a recent motion to dismiss some of the charges against him. The result was a legal filing riddled with errors, nonexistent citations and misstatements of the law.
The problem did not go unnoticed by the judge overseeing the case.
Two days after the documents were filed, U.S. District Judge Kathryn Kimball Mizelle ordered them stricken from the case record.
“Burke’s motion contains significant misrepresentations and misquotations of the case law and history,” Mizelle wrote. The document “cites cases for facts and legal propositions that they do not support,” and quotes them as saying things they do not say.
Mizelle allowed Burke’s attorneys to file a new motion with proper and accurate case quotations and citations. She also instructed them to file a separate document explaining why and how the errors occurred.
In their response, Burke’s legal team blamed an ill-advised reliance on AI, along with time constraints and the geographic distance between lawyers in Tampa and Maryland.
Burke’s lead lawyer, Mark Rasch, is a former federal prosecutor with expertise in cybersecurity and computer crime. According to the defense response, it was he who drafted the motion. Burke’s other lawyer, Michael Maddux, was busy with unrelated trials and did not review the motion before it was filed.
Rasch takes full and sole responsibility for the errors, the response states, and Burke bears no responsibility for the inaccuracies.
The judge took no punitive action against the lawyers. But at a court hearing Tuesday afternoon, she gave them a stern warning against future mistakes.
Mizelle said she expects that any citations in future filings will be researched and verified by humans.
In her written order, the judge identified at least nine instances of nonexistent quotations or misstatements of case law.
One she highlighted as the most egregious was a purported quotation from a 2001 opinion of the 11th Circuit Court of Appeals in a case known as United States v. Lewis.
The motion to dismiss includes what is presented as a quotation from the case: “(a) law criminalizing innocent conduct must clearly delineate the line between permitted and prohibited conduct, and cannot constitutionally require the defendant to prove the facts that exempt him.”
The quote does not appear anywhere in that ruling. Furthermore, the Lewis case does not support that proposition.
Mizelle also listed at least seven quotations falsely attributed to cases that she said might otherwise support Burke’s arguments. She also flagged “other issues,” citing examples of real quotations attributed to the wrong court.
In his response, Rasch wrote that he had done “substantial legal research and writing” to prepare the motion. He used the “deep research” feature of the “Pro” version of ChatGPT, along with Westlaw, an online legal research service.
“The combination of these tools produced the final product,” the response states.
Given the size, complexity and scope of the document, Rasch wrote, he should have asked for additional time to file it. He apologized to the court, to Maddux and to Burke.
In court Tuesday, Mizelle said she did not believe the errors resulted from a lack of zealous legal advocacy on the part of Burke’s team. She said she believes Burke has received good representation from both of his lawyers.
Burke, 46, is a nationally recognized media consultant who worked for major companies such as HBO and ESPN. He is renowned for his ability to find and promote obscure online content.
He is charged with 14 federal crimes related to his acquisition and distribution of videos he found online. Federal prosecutors accuse him of intruding into private computer systems to obtain the videos.
His lawyers say Burke found the videos by accessing them with credentials available on public websites. They contend he is a journalist who brought newsworthy material to light for the public good, and that the prosecution violates his First Amendment rights.
The case is set for a jury trial in September.
Like many professions, law has incorporated artificial intelligence tools into its work in recent years. In a survey last year conducted by Thomson Reuters, 63% of lawyers who responded reported having used AI in their work, while 12% said they use it regularly.
Burke’s is far from the first case to feature AI-generated phantom citations and misstatements of the law.
Earlier this year, the Florida-based personal injury giant Morgan &amp; Morgan sent an email warning its lawyers that AI could place false information in court filings. The warning came shortly after a brief the firm filed turned out to contain citations to eight cases that did not exist. A chatbot used to draft the brief had also generated fictitious dates and case numbers.
The lawyer responsible for the errors apologized in court, admitting he had wrongly relied on AI. The judge barred the lawyers from further work on the case and fined the firm $1,000.
Earlier this month, a California judge ordered a plaintiff’s law firm to pay $31,000 after it filed a brief that the judge found contained numerous false and inaccurate AI-generated citations, the MIT Technology Review reported.
Maura Grossman of Buffalo, New York, is an attorney and professor of computer science at the University of Waterloo in Ontario, Canada, who speaks openly about the problems AI is causing in the courts. In an email to the Tampa Bay Times, Grossman wrote that she doesn’t think the technology itself is the problem, but rather lawyers’ heavy reliance on it.
She noted that most lawyers would never submit a brief drafted by a law clerk or junior associate without thoroughly checking its accuracy. The same rigor, she said, must be applied when using AI.
“I’m a bit surprised at the persistence of these errors given how much negative publicity they have gotten in the legal industry,” Grossman wrote. “It’s a little more understandable from a self-represented litigant, who doesn’t have access to the same case law databases lawyers do, but lawyers have no real excuse.”
Grossman said lawyers are taken in by the hype and by the fluency and apparent authority of the technology’s output, without considering its limitations. She believes the solution is more education.
“If you have to check every last word, there may be little (return on investment), but that’s where we are right now,” she wrote. “If you don’t, it’s very dangerous.”