Attorneys advocating for businesses and the families who own them.

FSOlegal Briefs



SEVENTH CIRCUIT CAUTIONS EVEN PRO SE LITIGANTS AGAINST UNVERIFIED AI USAGE

As officers of the Court, attorneys have an obligation to be sure that citations made to the Court are not simply made up. Lawyers have been sanctioned for relying upon artificial intelligence to prepare legal filings that contained "hallucinated" citations. The Seventh Circuit noted just last month that AI large language models generate "output that is fictional, inaccurate or nonsensical." Jones v. Kankakee Cnty. Sheriff's Dep't, No. 25-1251, 2026 WL 157661, at *2 (7th Cir. Jan. 21, 2026). As a practical matter, this means AI may invent cases or citations that do not exist, or cite a real case for propositions it does not support.

In Jones, the Seventh Circuit examined a case in which a pro se litigant (meaning a litigant who represents himself or herself) filed a brief that the Court strongly believed was full of such AI hallucinations. The Court specifically admonished that even pro se litigants must not rely upon unverified AI results. As lawyers, we often see AI used for research in ways that generate results containing such hallucinations.

While artificial intelligence can be a valuable tool, legal research with it remains fraught with peril. Before relying upon any AI-generated legal research or summary, a reader should review the sources allegedly cited, verify that they exist, and confirm that they actually say what the AI summary claims. Better yet, simply contact your lawyer first.