04/30/2025 / By Willow Tohi
In a groundbreaking study published April 28, researchers from the University of Southampton revealed that non-experts trust legal advice generated by ChatGPT more than counsel from licensed attorneys—if the source remains undisclosed. The findings, presented at the CHI 2025 human-computer interaction conference in Japan, underscore the growing challenges posed by AI’s role in decision-making and the urgent need for public education on artificial intelligence literacy. Conducted across three experiments involving 288 participants, the study demonstrates that users are drawn to AI’s confident tone and concise language over human legal advice, even when errors or “hallucinations” risk misinformed outcomes.
Led by Dr. Eike Schneiders, an assistant professor of computer science at the University of Southampton, the research tested participants’ responses to legal hypotheticals covering traffic law, property disputes and planning regulations. Participants received advice that was either generated by ChatGPT or written by qualified lawyers. When the source remained anonymous, 62% preferred the AI-generated counsel; even participants explicitly told the origin of each response showed no statistically significant preference for the lawyers, despite knowing their credentials.
“The participants who knew the source of the advice still placed nearly equal trust in ChatGPT,” Schneiders told conference attendees. “This suggests a fundamental shift in how people assess authority—algorithmic confidence over human expertise.”
Crucially, the study identified a difference in how the advice was framed. Lawyers’ responses were often longer and used simpler language, prioritizing clarity. ChatGPT’s responses, by contrast, were shorter but more lexically complex, “striking the right balance between brevity and technicality,” Schneiders said. Participants read that complexity as a sign of validity, even when the answers contained inaccuracies.
AI-generated content’s risks—most notably, the so-called “hallucinations”—were a focal point of the team’s analysis. These errors, in which systems invent falsehoods or illogical conclusions, are a chronic flaw in large language models like ChatGPT. In one 2023 court case, a New York attorney’s AI-drafted brief cited nonexistent legal precedents, highlighting how hallucinations can jeopardize justice. The Southampton study noted that participants often failed to detect such inaccuracies.
“When AI advice confidently cites fabricated statutes or misstates procedures, the consequences could be dire,” study co-author Dr. Tina Seabrooke said. “Lawyers may prioritize thoroughness, but that leaves room for ambiguity—ambiguity AI masks with polished phrasing.”
The third experiment evaluated participants’ ability to discern AI-generated from human-written counsel. Guessing randomly would have yielded an accuracy score of 0.5; participants averaged 0.59, indicating a weak but statistically significant ability to tell the two apart. “People can sense machine input,” Schneiders said, “but not well enough to reliably act on that intuition.”
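To see how a 0.59 accuracy can be “weak but statistically significant,” the sketch below runs a simple one-sample binomial test against the 0.5 chance rate. The sample size is a hypothetical placeholder, not a figure from the Southampton study, and the study’s own statistical procedure may well differ.

# Hypothetical illustration only: the counts below are placeholders, not study data.
from scipy.stats import binomtest

n_judgments = 1000                     # assumed total number of source-identification calls
n_correct = round(0.59 * n_judgments)  # 590 correct calls, matching the reported 0.59 average

# Two-sided binomial test against the 0.5 chance rate
result = binomtest(n_correct, n_judgments, p=0.5)

print(f"Observed accuracy: {n_correct / n_judgments:.2f}")
print(f"p-value vs. chance: {result.pvalue:.4g}")
# A small p-value indicates accuracy reliably above 0.5: a weak but
# statistically detectable signal, consistent with how the study describes it.

With numbers on this scale the test returns a very small p-value, but the effect itself remains modest; statistical significance here says the signal is real, not that it is strong.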
The findings amplify calls to balance AI’s utility with safeguards. The EU’s AI Act will require transparency labels for AI-generated content once its relevant provisions take effect, but the researchers argue that labeling alone is insufficient. Improving public AI literacy, they say, is the more urgent need, so that users can critically assess algorithmic outputs.
“Two steps must dominate,” Schneiders emphasized. “First, policymakers should mandate clear disclaimers so people know when AI is guiding their choices. Second, citizens must learn to treat AI as one tool among many—useful for brainstorming but never a final authority.”
The study’s authors advised users to treat AI like a preliminary guide. “It can identify a legal area you need to explore or suggest keywords for further research,” Schneiders noted. “But trust your instincts—then verify with an expert.”
The University of Southampton’s research arrives as AI infiltrates domains once considered off-limits—from courtrooms to doctors’ offices to military drones. While its efficiency is undeniable, the study’s findings about misplaced trust reveal a pressing vulnerability: people’s reliance on machines may outpace their ability to question them.
As institutions grapple to regulate AI’s rapid evolution, Schneiders’ team underscores the stakes. “Hallucinations aren’t harmless if they land the wrong sentence in court or deny Medicare coverage to a senior,” he said. “Protecting public safety requires vigilance—both in policy and in every individual’s critical thinking.”
For better or worse, the age of algorithmic counsel is here. Navigating it safely means learning to distinguish AI’s promises from its perils—a lesson that defines humanity’s next legal battle.