"The Irony": Can attorneys trust experts in the age of AI?
In 2023, a new law[1] went into effect in Minnesota to regulate the use of deepfake technology to influence elections. In 2024, Christopher Kohls, a YouTube personality known as “Mr. Reagan,” and Rep. Mary Franson of the Minnesota House of Representatives challenged the new law as a violation of the First Amendment and sought preliminary injunctive relief prohibiting its enforcement. Minnesota’s Attorney General (and former Congressman) Keith Ellison filed a response memorandum opposing the preliminary injunction motion. In support of the state’s position, Ellison submitted two expert declarations that provided background on artificial intelligence (AI) and the dangers of deepfakes to free speech and democracy.
Unfortunately, the Plaintiffs discovered that one of the expert declarations, written by Prof. Jeff Hancock of the Stanford University Social Media Lab, included fabricated material. Specifically, the declaration cited two imaginary articles and incorrectly attributed the authors of a third. Prof. Hancock admitted that he had used GPT-4o to help draft his declaration and had not verified the citations. Ellison acknowledged the fake citations and, stating that his office had been unaware of the fabrications, asked the court for leave to file an amended declaration from Prof. Hancock.
On January 10, 2025, U.S. District Judge Laura M. Provinzino of the United States District Court for the District of Minnesota rejected this request and granted the Plaintiffs’ motion to exclude the expert testimony of Prof. Hancock.[2] The court did not hold back when pointing out how grievous this mistake was in light of Prof. Hancock’s alleged expertise. The court wrote, “The irony. Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less.” The court accepted that this was an innocent mistake, but also questioned Prof. Hancock’s diligence in submitting such a document under penalty of perjury.
Additionally, the court took Ellison’s word that he did not know the declaration included fabricated citations. However, the court reminded him that “Federal Rule of Civil Procedure 11 imposes a ‘personal, nondelegable responsibility’ to ‘validate the truth and legal reasonableness of the papers filed’[3] in an action.”
This raises the question: what benefit is an expert whose expertise and credibility cannot be relied upon? Attorneys retain experts to provide in-depth knowledge and support in their matters, as it is unrealistic for any attorney to possess that depth of knowledge across a wide array of subjects. Legal matters frequently involve complex technical, scientific, or specialized information beyond general knowledge. By consulting experts in various fields, attorneys can marshal the necessary support for their positions and arguments. If attorneys were required to validate everything presented by an expert, each attorney would have to become a pseudo-expert as well. At the same time, AI tools such as large language models like ChatGPT allow many to appear to be experts when they are not. (A question for another time: could Prof. Hancock’s actions inadvertently demonstrate that experts are unnecessary for attorneys in the age of AI?)
Luckily, the court did not go to this extreme; instead, it suggested that attorneys “ask their witnesses whether they have used AI in drafting their declarations” and inquire what verification was performed. Judge Provinzino is essentially providing a needed update to the Federal Rules of Civil Procedure for the age of AI. It would be an undue burden to require an attorney to validate the expertise of a heart surgeon. But there is no doubt it would behoove an attorney to confirm whether their expert used AI and, at the very least, to verify the sources.
What sets this matter apart is that it is specifically about truth and misinformation; thus, all parties should have exercised extra diligence to ensure the validity of the information presented. “The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions!” Just remember: if you are submitting a declaration on misinformation, under penalty of perjury, it is probably best to validate the declaration’s contents and avoid “the irony.”
[1] Minn. Stat. § 609.771.
[2] Kohls v. Ellison, 2025 WL 66514 (D. Minn. Jan. 10, 2025).
[3] Pavelic & LeFlore v. Marvel Ent. Grp., 493 U.S. 120, 126–27 (1989).