Michael D. Cohen, the one-time fixer for former President Donald J. Trump, mistakenly gave his lawyer false legal citations concocted by Google’s artificial intelligence program Bard, he said in court documents unsealed Friday.
The fake citations were used by the attorney in a motion submitted to a federal judge, Jesse M. Furman. Mr. Cohen, who pleaded guilty in 2018 to campaign finance violations and served a prison sentence, had asked the judge for an early end to the court’s supervision of his case now that he is out of prison and has complied with the terms of his release.
The chain of misunderstandings and mistakes that followed ended with Mr. Cohen asking the judge to exercise “discretion and mercy.”
In an affidavit released Friday, Mr. Cohen explained that he had not kept up with “emerging trends (and associated risks) in legal technology” and failed to realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.
He also said he did not realize that the attorney who filed the motion on his behalf, David M. Schwartz, “would drop the cases into his submission wholesale without even confirming that they existed.”
The episode could have implications for a Manhattan criminal case against Mr. Trump in which Mr. Cohen is expected to be the star witness. The former president’s lawyers have long attacked Mr. Cohen as a serial fabulist. Now, they say, they have a brand-new example.
The ill-fated filing was at least the second this year in Manhattan federal court in which attorneys cited nonexistent rulings generated by artificial intelligence. The legal profession, like many others, is grappling with how to adapt to a new technology that can mimic human language.
Artificial intelligence programs like Bard and ChatGPT generate realistic responses by predicting which chunks of text are likely to follow other sequences. Such programs are trained on billions of text examples taken from across the Internet. Although they can synthesize vast amounts of information and present it convincingly, they sometimes fabricate it.
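The next-word-prediction idea described above can be illustrated with a toy bigram model in Python. This is a drastic simplification: real systems like Bard and ChatGPT use large neural networks over subword tokens rather than simple word counts, and the function names here are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, the words that follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def most_likely_next(model, word):
    """Return the continuation seen most often after `word`, or None."""
    candidates = model.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("the court ruled the court denied the motion")
print(most_likely_next(model, "the"))  # prints "court"
```

Even this crude model shows how a text predictor can produce fluent-seeming output with no notion of whether the result is true, which is the root of the "hallucination" problem the article describes.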
The three citations in Mr. Cohen’s case appear to be hallucinations created by the Bard chatbot, combining bits and pieces of real cases with outright invention. Mr. Schwartz then included them in the motion he submitted to Judge Furman.
Mr. Cohen, in his statement, said he understood Bard to be “a supercharged search engine,” which he had used in the past to find accurate information online.
Mr. Schwartz, in his statement, acknowledged that he had used the citations and said that he had not independently reviewed the cases because Mr. Cohen had indicated that another lawyer, E. Danya Perry, had provided suggestions for the motion.
“I sincerely apologize to the court for not personally vetting these cases before submitting them to the court,” Mr. Schwartz wrote.
Barry Kamins, a lawyer for Mr. Schwartz, declined to comment Friday.
Ms. Perry said she began representing Mr. Cohen only after Mr. Schwartz filed the motion. She wrote to Judge Furman on Dec. 8 that after reading the already-filed motion, she could not verify the case law it cited.
In a statement at the time, she said that “consistent with my ethical obligation of candor to the court, I advised Judge Furman of this issue.”
She said in a letter released Friday that Mr. Cohen, a disbarred lawyer, “did not know that the cases he identified were not real and, unlike his attorney, was under no obligation to confirm as much.”
“It must be emphasized that Mr. Cohen did not engage in any misconduct,” Ms. Perry wrote. She said Friday that Mr. Cohen had no further comment and that he had consented to the unsealing of the court documents after the judge questioned whether they contained information protected by attorney-client privilege.
The episode came to light when Judge Furman said in a Dec. 12 order that he could not find any of the three decisions. He ordered Mr. Schwartz to provide copies or “a thorough explanation of how the motion came to cite cases that do not exist and what role, if any, Mr. Cohen played.”
The matter could have significant implications, given Mr. Cohen’s central role in a case brought by the Manhattan district attorney that is scheduled to go to trial on March 25.
The district attorney, Alvin L. Bragg, accused Mr. Trump of orchestrating a hush-money scheme centered on a payment Mr. Cohen made during the 2016 election to a porn star, Stormy Daniels. Mr. Trump has pleaded not guilty to 34 felony charges.
Seeking to counter claims by Mr. Trump’s lawyers that Mr. Cohen is unreliable, his defenders have said that Mr. Cohen lied on Mr. Trump’s behalf but has been telling the truth since he split with the former president in 2018 and pleaded guilty to the federal charges.
On Friday, Mr. Trump’s lawyers immediately seized on the Google Bard disclosure. Susan R. Necheles, a lawyer representing Mr. Trump in the upcoming Manhattan trial, said it was “typical Michael Cohen.”
“He is an admitted perjurer and has pleaded guilty to multiple felonies, and this is just further evidence of his lack of character and continued criminality,” Ms. Necheles said.
Ms. Perry, the lawyer now representing Mr. Cohen in the motion, said Mr. Cohen’s willingness to have the records unsealed showed he had nothing to hide.
“He relied on his lawyer, as he had every right to do,” she said. “Unfortunately, his attorney appears to have made an honest mistake in not verifying the citations in the document he drafted and filed.”
A spokeswoman for Mr. Bragg declined to comment Friday.
Prosecutors may argue that Mr. Cohen’s actions were not intended to deceive the court, but rather were, by his own admission, the product of an unfortunate misunderstanding of the new technology.
The issue of lawyers’ use of chatbots exploded into the public spotlight earlier this year after another federal judge in Manhattan, P. Kevin Castel, fined two lawyers $5,000 after they admitted to filing a legal brief filled with nonexistent cases and citations, all generated by ChatGPT.
Such cases seem to be rippling through the nation’s courts, said Eugene Volokh, a UCLA law professor who has written about artificial intelligence and the law.
Professor Volokh said he had counted a dozen cases in which lawyers or litigants representing themselves were believed to have used chatbots for legal research that made its way into court filings. “I strongly suspect that this is only the tip of the iceberg,” he said.
Stephen Gillers, a professor of legal ethics at New York University School of Law, said: “People need to understand that generative AI is not the bad guy here. It promises a lot.”
“But lawyers can’t treat AI as their co-counsel and just parrot what it says,” he added.
The nonexistent cases cited in Mr. Schwartz’s motion — United States v. Figueroa-Flores, United States v. Ortiz, and United States v. Amato — came with corresponding summaries and notations that they had been affirmed by the U.S. Court of Appeals for the Second Circuit.
Judge Furman noted in his Dec. 12 order that the Figueroa-Flores citation actually referred to a page from a decision issued by a different federal appeals court and “has nothing to do with supervised release.”
The Amato case named in the motion, the judge said, actually involved a decision by the Board of Veterans’ Appeals, an administrative tribunal.
And the reference to the Ortiz case, Judge Furman wrote, seemed “to add up to nothing.”
William K. Rashbaum contributed reporting.