By: Daan Kingma (European Law Blog)
In this post, Daan Kingma discusses the growing influence of artificial intelligence, particularly ChatGPT, in the legal world. From courts in Colombia issuing rulings drafted with the help of AI, to judges in the UK and China praising or even relying on automated summaries, the use of AI in judicial settings is no longer a distant prospect. While some embrace these tools for their efficiency and accessibility, others raise alarms about their implications. The Netherlands joined this trend when a lower-court judge controversially used ChatGPT to resolve a neighbor dispute, prompting widespread debate among Dutch legal professionals about the tool's compatibility with fundamental rights such as party autonomy and the right to be heard.
Kingma turns his attention to six Dutch court decisions in which ChatGPT was explicitly referenced, either by judges or by litigants, to assess how the technology is being integrated into legal reasoning. These rulings mark a key moment in the ongoing conversation about the role of AI in judicial processes. By analyzing these cases, Kingma aims to uncover whether a pattern, or even a legal reasoning framework, is emerging around the admissibility and reliability of AI-generated information in Dutch courts. He also situates these developments within broader EU debates on the regulation of judicial AI, highlighting the complex legal terrain that must be navigated as these tools become more widely used.
To understand this trend, Kingma outlines the evolution of AI in the legal domain. Early efforts focused on expert systems that mimicked legal reasoning through logical rule structures, but the recent shift toward data-driven AI, fueled by machine learning and massive text corpora, has significantly expanded the capabilities of such tools. ChatGPT, built on Large Language Models (LLMs), exemplifies this shift by offering rapid summaries, legal research, and even simulated legal reasoning. Despite its promise, it carries serious risks: from fabricating sources to embedded bias and a lack of transparency in how its conclusions are reached…