
Panel Takes Steps Toward Regulating AI-Generated Evidence in Courtrooms

November 11, 2024

A U.S. federal judicial panel took a significant step on Friday toward addressing the challenges posed by artificial intelligence (AI) in the courtroom, agreeing to begin developing rules to regulate the introduction of AI-generated evidence. The move follows growing concerns about the impact of technologies like generative AI, which can create text, images, audio and video, including potentially misleading “deep fake” content, Reuters reported.

The Advisory Committee on Evidence Rules of the U.S. Judicial Conference convened in New York, where committee members expressed both urgency and caution regarding the need to adapt judicial procedures to evolving AI technologies. U.S. District Judge Jesse Furman, who chairs the committee, highlighted the risks of leaving the judiciary unprepared to handle emerging AI-related issues. While acknowledging the complexities of crafting new regulations, Furman emphasized the importance of moving forward to avoid being caught off guard by advances in machine learning and AI.

“I think there’s an argument for moving forward to avoid getting caught completely flat-footed,” Furman remarked, noting that the rulemaking process can take years while the technology advances rapidly.

The committee’s deliberations come amid a broader national conversation on the role of AI in the legal system. In his annual report last December, U.S. Supreme Court Chief Justice John Roberts noted the potential benefits of AI tools for both litigants and judges but underscored the need for careful consideration of how these technologies should be used in litigation.

During Friday’s meeting at New York University Law School, the committee reached a consensus to proceed with creating a rule addressing the reliability of AI-generated evidence. This rule would mirror the standards used for expert testimony under Rule 702 of the Federal Rules of Evidence, ensuring that AI-produced data undergoes rigorous scrutiny regarding its reliability and accuracy.

However, while the committee was united in moving forward with this rule, there was less consensus on whether a separate rule should be established to address concerns about “deep fakes,” audio or video evidence that could be fabricated using AI. Some members, such as U.S. Circuit Judge Richard Sullivan of the 2nd U.S. Circuit Court of Appeals, expressed skepticism about the immediate threat posed by deep fakes, questioning whether a flood of such claims is truly imminent.

Still, there was agreement that the committee should prepare for the possibility of future challenges. “It seems like a good idea to have something in the bullpen, as it were, rather than nothing,” said Daniel Capra, a law professor at Fordham University School of Law and reporter to the committee. Capra will help draft the proposed rule, which the committee expects to review for public comment in May.

The ongoing efforts reflect broader concerns within the legal community about how AI technologies, particularly generative models like OpenAI’s ChatGPT, are reshaping the landscape of legal proceedings. As these technologies evolve, the judiciary faces the difficult task of balancing innovation with the need for fairness and accuracy in legal processes.

According to Reuters, the committee’s work is seen as a proactive step toward addressing these challenges and ensuring that the judiciary remains equipped to handle the complexities AI poses in litigation.

Source: Reuters