In February, a Colombian judge asked ChatGPT for guidance on how to decide an insurance case. Around the same time, a Pakistani judge used ChatGPT to confirm his decisions in two separate cases. There are also reports of judges in India and Bolivia seeking advice from ChatGPT.
These are unofficial experiments, but some systematic efforts at reform do involve AI. In China, judges are advised and assisted by AI, and this development is likely to continue. In a recent speech, the master of the rolls, Sir Geoffrey Vos, who is the second most senior judge in England and Wales, suggested that, as the legal system in that jurisdiction is digitized, AI might be used to decide some “less intensely personal disputes”, such as commercial cases.
Some say AI judges are the future of justice. AI doesn’t need a lunch break, can’t be bribed, and doesn’t want a pay rise. The argument is that AI justice could be dispensed more quickly and efficiently than the human kind.

Machine learning is a form of AI that improves at what it does over time. It is often very powerful, but its conclusions are essentially highly educated guesses. One of its strengths is that it can find correlations and patterns in data that we don’t have the capacity to calculate ourselves. One of its weaknesses, however, is that it fails in ways that differ from how people fail, reaching conclusions that are obviously incorrect to a human.
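As a loose illustration of what an “educated guess” means here, consider the toy sketch below. It is invented for this article (the data and features are made up): a simple classifier is trained on cases where an irrelevant feature happens, by coincidence, to track the outcome, and the model learns to lean on that coincidence rather than on anything a person would call a reason.

```python
# Toy illustration, invented for this piece: a classifier happily leans on a
# coincidental correlation if it happens to predict the training labels well.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

outcome = rng.integers(0, 2, size=n)                    # the "true" decision
relevant = outcome + rng.normal(scale=1.0, size=n)      # genuinely related, but noisy
coincidence = outcome + rng.normal(scale=0.05, size=n)  # irrelevant in reality, but it
                                                        # tracks the outcome in this sample
X = np.column_stack([relevant, coincidence])
model = LogisticRegression().fit(X, outcome)

# The weight on the coincidental feature typically dwarfs the weight on the
# genuinely relevant one: the model's "reasoning" is correlation, nothing more.
print(dict(zip(["relevant", "coincidence"], model.coef_[0].round(2))))
```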
AI currently has many faults. In one well-known experiment, an image-recognition system was tricked into classifying a model turtle as a rifle. Facial recognition often struggles to correctly identify women, children and people with dark skin. So it is possible that AI could erroneously place someone at a crime scene who was never there. It would be difficult to have confidence in a legal system that relied on technology with these flaws.
When AI is used in legal processes and fails, the consequences can be severe. Large language models, the technology underlying AI chatbots such as ChatGPT, are known to generate text that is completely untrue. This is commonly called an AI “hallucination”, although the term is misleading: it suggests the software is thinking, when in fact it is statistically determining what the next word in its output should be.
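To make “statistically determining the next word” concrete, here is a deliberately crude sketch. It is nothing like the architecture or scale of a real large language model, and the sample text is invented, but it shows how purely statistical next-word prediction can produce fluent output with no regard for truth.

```python
# A deliberately crude stand-in for a language model (nothing like ChatGPT's
# scale): predict each next word purely from how often words follow one another
# in a tiny sample of text.
from collections import Counter, defaultdict

corpus = ("the court ruled in favour of the claimant . "
          "the court dismissed the appeal . "
          "the claimant appealed the ruling .").split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Return the word that most often followed `word` in the sample text."""
    return following[word].most_common(1)[0][0]

word, generated = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    generated.append(word)

print(" ".join(generated))  # plausible-sounding legal prose, with no guarantee of truth
```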
This year, a New York lawyer was found to have used ChatGPT to write submissions to a court, only for it to emerge that the submissions cited cases that do not exist. The episode suggests that these tools are not yet capable of replacing lawyers, and may never be.

It is not clear that legal rules can be reliably converted into software rules, because individuals interpret the same rule in different ways. When 52 programmers were given the task of automating the enforcement of speed limits, the programs they wrote issued very different numbers of tickets for the same sample data, as the sketch below illustrates in miniature.
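A hypothetical miniature of that experiment: the two functions below encode two plausible readings of the same instruction, “ticket drivers who exceed 50km/h”. The tolerance, rounding rule and speed readings are all invented for illustration, yet the two programs issue different numbers of tickets for identical data.

```python
# Hypothetical miniature of the 52-programmer experiment: two readings of the
# same rule, "ticket drivers who exceed 50km/h". All numbers are invented.
speeds = [49.6, 50.0, 50.4, 51.0, 53.9, 54.0, 57.2]  # measured speeds, km/h
LIMIT = 50

def tickets_strict(readings):
    # Reading A: any measured speed above the limit is an offence.
    return [s for s in readings if s > LIMIT]

def tickets_tolerant(readings):
    # Reading B: round to the nearest whole km/h and allow a 3km/h
    # enforcement tolerance before issuing a ticket.
    return [s for s in readings if round(s) > LIMIT + 3]

print(len(tickets_strict(speeds)))    # 5 tickets
print(len(tickets_tolerant(speeds)))  # 3 tickets
```

Neither program is obviously wrong; they simply resolve the rule’s ambiguity differently, which is exactly the problem.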
Automated government systems fail at a scale and speed that is very difficult to recover from. The Dutch government used an automated system, SyRI, to detect benefits fraud; it wrongly accused many families of fraud, destroying lives in the process.
The Australian “Online Compliance Intervention” scheme, commonly known as “Robodebt”, was used to automatically assess debts owed by recipients of social welfare payments. The scheme overstepped its legal bounds, negatively affecting hundreds of thousands of people, and became the subject of a Royal Commission in Australia. (Royal Commissions are investigations into matters of public importance in Australia.)
Judging is not all that judges do, either. They have many other roles in the legal system, such as managing a courtroom, a caseload and a team of staff, and those responsibilities would be even more difficult to replace with software.
