Concerns are rising in the judiciary as two federal judges acknowledged that artificial intelligence was improperly used to draft court orders, leading to significant errors. The situation unfolded after U.S. District Judge Julien Xavier Neals from New Jersey and U.S. District Judge Henry Wingate from Mississippi responded to inquiries from Sen. Chuck Grassley, R-Iowa, who heads the Senate Judiciary Committee. Grassley noted that the affected court orders, which were unrelated, were “error-ridden,” raising alarms about the integrity of the judicial process.
Both judges confirmed that the flawed rulings did not undergo the scrutiny typically required in legal proceedings. In correspondence released by Grassley’s office, Neals pointed to a June 30 draft decision in a securities lawsuit, explaining that it was “released in error – human error” and promptly withdrawn once the problem was discovered. An intern had used OpenAI’s ChatGPT for legal research without authorization, in violation of both chambers and law school policies. Neals stressed, “My chamber’s policy prohibits the use of GenAI in legal research for, or drafting of, opinions or orders.” He has since vowed to adopt a written policy that clearly sets out the rules for all law clerks and interns going forward.
Meanwhile, Judge Wingate described a similar lapse in his chambers. A law clerk used the generative AI tool Perplexity to assist with drafting, and what Wingate called “a lapse in human oversight” allowed a draft order containing “clerical errors” to be released on July 20; it was subsequently removed and replaced. “This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again,” he said, signaling a commitment to tighter oversight going forward.
Sen. Grassley, observing the serious implications of reliance on AI in judicial rulings, expressed appreciation for the judges’ transparency in acknowledging their mistakes. He stated, “Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again.” Grassley emphasized that each federal judge and indeed the judiciary as a whole has a fundamental duty to protect litigants’ rights while maintaining fairness in the legal system. He called for the judicial branch to formulate robust policies governing the use of generative AI to ensure that such technology does not compromise the integrity of the courtroom.
The implications of this situation extend beyond these individual cases. It raises broader questions about the responsibility of legal professionals and the acceptable use of technology in the courtroom. Judges across the country have been cracking down on AI misuse in court filings, underscoring the growing importance of guarding against inaccuracies that could undermine legal outcomes. Lawyers found to have misused AI in their filings have already been fined and sanctioned.
This controversy signals an urgent need for procedural safeguards in the judicial process, particularly as artificial intelligence continues to permeate various sectors. The technology offers real benefits, but also real risks when human oversight slips. As the legal community grapples with these challenges, the statements from Judges Neals and Wingate, along with Grassley’s response, point toward a future in which accountability and a renewed commitment to traditional review practices will be essential.
"*" indicates required fields
