U.S. Federal Judges Admit AI-Linked Errors in Court Rulings After Senate Inquiry

WASHINGTON — Two U.S. federal judges have acknowledged that court rulings issued from their chambers contained factual and legal errors after staff members used artificial intelligence tools to help draft the decisions. The admissions came in response to an oversight inquiry led by Senate Judiciary Committee Chairman Chuck Grassley.

U.S. District Judge Julien Xavier Neals of New Jersey and U.S. District Judge Henry Wingate of Mississippi confirmed that staffers had used AI platforms, including ChatGPT and Perplexity, to prepare court orders over the summer. The rulings, later flagged for inaccuracies, bypassed the standard review typically required before release.


In letters made public Thursday, both judges admitted the rulings were “error-ridden” and said they have since implemented stricter internal policies governing AI use. “These documents did not receive the level of scrutiny normally applied in chambers,” Judge Neals wrote, adding that the staff acted without formal authorization.

Senator Grassley criticized the lapse, calling it “a troubling breach of judicial responsibility.” He urged the federal judiciary to establish clear guidelines for AI use in legal proceedings, warning that unchecked reliance on generative tools could undermine public trust in the courts.

Legal experts say the incident highlights growing concerns about the role of artificial intelligence in sensitive government functions. While AI tools offer efficiency, critics argue they lack the nuance and accountability required in legal decision-making.

In response, the Administrative Office of the U.S. Courts said it is reviewing the matter and considering broader policy recommendations. “We recognize the need for transparency and safeguards as technology evolves,” a spokesperson said.

Civil liberties groups have expressed alarm over the revelations. “AI should never be used to shortcut justice,” said a representative from the American Civil Liberties Union. “These errors could have real consequences for litigants.”

The controversy has reignited debate over the ethical boundaries of AI in public service, with lawmakers and legal scholars calling for urgent reforms to ensure human oversight remains central to judicial integrity.
