Generative artificial intelligence is rapidly being adopted across sectors such as law and education, but concerns about its reliability are mounting, particularly its tendency to generate inaccuracies known as hallucinations. The consequences can be serious: lawyers have discovered fictitious court cases in AI-generated legal documents. Critics argue that while AI can produce text that sounds credible, it can also mislead professionals. Researchers at the University of Glasgow contend that large language models (LLMs) produce these errors through predictive text generation rather than genuine comprehension, and argue that a more accurate term for these inaccuracies is needed.
LLMs remain prone to casually making up information, a phenomenon known as hallucination, which raises questions about the reliability of AI-generated content.
Hallucinated legal texts appear legitimate, complete with citations and statutes, creating an illusion of credibility that can mislead even experienced professionals.
According to some skeptics, ChatGPT's tendency to assert false information makes it a non-starter for business customers, for whom accuracy is paramount.
Academics argue that these models do not solve problems or reason but merely predict plausible sentences, and suggest that 'bullshit' would be a more accurate term for these errors than hallucinations.
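To make the "prediction, not comprehension" point concrete, here is a minimal toy sketch of next-word prediction. The vocabulary, contexts, and probabilities are invented for illustration and bear no relation to how any production LLM is actually built; the point is only that sampling chooses what is statistically plausible, with no check on whether the result is true.

```python
import random

# Toy "language model": a lookup table of next-word probabilities for a few
# three-word contexts. All entries are invented for illustration only.
NEXT_WORD_PROBS = {
    ("the", "court", "ruled"): {"against": 0.5, "for": 0.4, "yesterday": 0.1},
    ("the", "cited", "case"): {"held": 0.6, "establishes": 0.3, "was": 0.1},
}

def predict_next_word(context):
    """Sample a continuation from the distribution for the last three words."""
    dist = NEXT_WORD_PROBS.get(tuple(context[-3:]))
    if dist is None:
        return None
    words = list(dist)
    weights = list(dist.values())
    # The sample reflects plausibility in the (toy) training statistics,
    # not truth: nothing here verifies whether the resulting sentence
    # describes a real ruling or a fictitious one.
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(predict_next_word(["the", "court", "ruled"]))
```

Real models operate over vast vocabularies with learned neural representations rather than a lookup table, but the same basic objective of predicting a plausible continuation is what the academics cited above are pointing to.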