On December 31, 2023, Chief Justice John Roberts of the United States Supreme Court released his end-of-year report, focusing on both the promise and the risks that artificial intelligence poses to the legal system. He urged courts to approach the technology with "caution and humility."
In the 13-page report, Roberts wrote that AI has the potential to increase access to justice for people who cannot afford lawyers, revolutionize legal research, and help courts resolve cases faster and at lower cost. He also raised privacy concerns, noting that current AI technology cannot replicate the discretionary judgments that humans make.
U.S. Courts Are Debating How to Deal with AI "Hallucinations"
On the preparation of legal filings, Roberts observed that some AI tools can simplify applications to the courts and save money. He wrote, "These new tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system."
“Law professors report with a mix of awe and anxiety that AI can apparently get a B on law school assignments and even pass the bar exam,” Roberts wrote. “AI may soon seem indispensable in legal research. AI clearly has tremendous potential to greatly increase access to critical information for lawyers and non-lawyers alike. But it is equally clear that it may infringe on privacy and dehumanize the law.”
“I predict human judges will be around for a while,” Roberts wrote. “But I am equally confident in predicting that judicial work—especially at the trial level—will be significantly affected by artificial intelligence.”
Roberts' comments are his most significant discussion to date of AI's impact on the legal system. They come as many lower courts in the United States debate how best to adapt to a technology that can pass the bar exam yet is prone to fabricating facts, a failure known as AI "hallucination," in which a model confidently asserts things that are not true.
Roberts emphasized that "any use of artificial intelligence requires caution and humility." He pointed to an instance in which AI hallucinations led lawyers to cite non-existent cases in court papers, something he said is "always a bad idea." He did not elaborate on the incident, noting only that it made headlines this year.
For instance, Michael Cohen, Donald Trump's former lawyer, recently admitted in court documents that he had mistakenly passed along fake case citations generated by an AI program, Google Bard, which then made their way into an official court filing. Other instances of lawyers including AI-hallucinated content in legal briefs have also been documented.
Last month, a federal appeals court in New Orleans unveiled a proposed rule to regulate the use of generative AI tools such as OpenAI's ChatGPT by lawyers appearing before it, apparently making it the first of the 13 federal appeals courts in the United States to propose such a rule. The Fifth Circuit's proposal would require lawyers to certify either that they did not rely on AI programs to draft their briefs or that a human reviewed the accuracy of any AI-generated text in their court filings.
Legal Decisions Still Require Human Judgment

Roberts has long been interested in the intersection of law and technology. He has written majority opinions holding that the government generally needs a search warrant to search digital information on a cellphone seized from a person under arrest, and to collect large amounts of location data about cellphone companies' customers.
In 2017, Roberts was asked whether he could "foresee a day when smart machines, driven by artificial intelligence, will assist with courtroom fact-finding or, even more controversially, judicial decision-making." He answered yes. "That day is here," he said, "and it is putting significant strain on how the judiciary operates."
In the end-of-year report, Roberts wrote that this strain continues to grow. "The use of AI in criminal cases to assess flight risk and recidivism and to make other discretionary decisions that involve predictions has raised concerns about due process, reliability, and potential bias," he wrote. "At least for now, studies show that the public consistently perceives a 'fairness gap' between human and AI decisions, reflecting a view that human judgment, for all its flaws, is fairer than anything spit out by a machine."
Legal decisions, Roberts concluded, often involve gray areas and still require the application of human judgment. "For example, judges weigh the sincerity of a defendant's statements at sentencing," he wrote. "Subtle differences matter: a trembling hand, a quavering voice, a change in inflection, a bead of sweat, a moment's hesitation, a fleeting break in eye contact can make all the difference. Most people still trust humans, not machines, to perceive these clues and draw the right inferences."
Roberts wrote that appellate judges, too, will not be replaced anytime soon. "Many appellate decisions turn on whether a lower court abused its discretion, a standard that by its nature involves fact-specific gray areas," he wrote. "Others focus on open questions about how the law should develop in new areas. AI is based largely on existing information, which can inform but not make such decisions."