Supreme Court Chief Justice Roberts Cautions on the Mixed Impact of AI in the Legal Arena
In a thought-provoking year-end report published on Sunday, U.S. Supreme Court Chief Justice John Roberts explored the dual nature of artificial intelligence (AI) within the legal profession. While acknowledging its potential to enhance access to justice and streamline legal processes, Roberts urged “caution and humility” in the face of evolving technology that has both promising benefits and inherent drawbacks.
Roberts, in his 13-page report, adopted an ambivalent stance, emphasizing that AI had the potential to increase access to justice for indigent litigants, revolutionize legal research, and expedite case resolution, all while reducing costs. However, he also highlighted the significant privacy concerns associated with AI and the technology’s current inability to fully replicate human discretion.
“I predict that human judges will be around for a while,” Roberts wrote. “But with equal confidence, I predict that judicial work – particularly at the trial level – will be significantly affected by AI.”
The Chief Justice’s commentary represents his most significant discussion to date on the impact of AI on the legal system. This comes at a time when lower courts grapple with the challenges of adapting to a technology capable of passing the bar exam but prone to generating fictitious content, referred to as “hallucinations.”
Roberts stressed the need for caution in deploying AI, referencing instances where AI-generated hallucinations led lawyers to cite non-existent cases in court papers, a practice he called “always a bad idea.” Although he did not delve into specifics, Roberts noted that the phenomenon had made headlines in the past year.
Recent incidents, such as former President Donald Trump’s lawyer Michael Cohen inadvertently including fake case citations in court filings, have raised concerns about the reliability of AI-generated content. This has prompted a federal appeals court in New Orleans, the 5th U.S. Circuit Court of Appeals, to propose rules regulating the use of generative AI tools such as OpenAI’s ChatGPT by lawyers appearing before it.
The proposed rule aims to ensure transparency and accountability, requiring lawyers to certify that they either did not rely on AI programs to draft briefs or that any text generated by AI underwent human review for accuracy before being included in court filings.