Law School Tests Trial With Jury Made Up of ChatGPT, Grok, and Claude
Briefly

"Looming over the proceedings even more prominently than the judge running the show were three tall digital displays, sticking out with their glossy finishes amid the courtroom's sea of wood paneling. Each screen represented a different AI chatbot: OpenAI's ChatGPT, xAI's Grok, and Anthropic's Claude. These AIs' role? As the "jurors" who would determine the fate of a man charged with juvenile robbery."
"AI's inroads into legal settings continues to be a contested subject as many lawyers leveraging AI tools are blasted for committing egregious errors with the tech. Typically, an AI goes wrong by citing either misquoted or fabricated caselaw, a symptom of the tech's fundamental problem of "hallucinating" misinformation that it presents as fact, which the industry is still nowhere close to solving. Judges have handed out harsh punishments, including fines and sanctions, to attorneys who have turned in shoddy AI-sabotaged work."
Three AI chatbots—OpenAI's ChatGPT, xAI's Grok, and Anthropic's Claude—served as jurors in a mock juvenile robbery trial at the University of North Carolina School of Law. The exercise raised issues of accuracy, efficiency, bias, and legitimacy when AI participates in legal decision-making. Professional use of AI in real court cases has sometimes produced egregious errors through misquoted or fabricated caselaw and 'hallucinated' misinformation presented as fact. Judges have imposed fines and sanctions on attorneys whose AI-assisted filings contained such errors. Despite these risks, AI tools are increasingly used in legal practice.
Read at Futurism