Sorry, students of ChatGPT: Real learning takes legwork

"The researchers concluded that, while large language models (LLMs) are exceptionally good at spitting out fluent answers at the press of a button, people who rely on synthesized AI summaries for research typically don't come away with materially deeper knowledge. Only by digging into sources and piecing information together themselves do people tend to build the kind of lasting understanding that sticks, the team found."
"'In contrast to web search, when learning from LLM summaries users no longer need to exert the effort of gathering and distilling different informational sources on their own - the LLM does much of this for them,' the researchers said in a paper published in October's issue of PNAS Nexus. 'We predict that this lower effort in assembling knowledge from LLM syntheses (vs. web links) risks suppressing the depth of knowledge that users gain, which subsequently affects the nature of the advice they form on the topic for others.'"
More than 10,000 participants took part in experiments comparing the understanding gained from AI-generated summaries with that gained from assembling information via traditional web searches. Participants who used ChatGPT and similar tools developed a shallower grasp of their assigned subjects, produced fewer concrete facts, and tended to echo information found in other AI users' responses. Large language models provide fluent, ready-made answers that reduce the effort users spend gathering and distilling multiple sources. That reduced effort suppresses depth of knowledge and alters the nature of the advice people go on to give. Deeper, lasting understanding emerges primarily when users dig into primary sources and synthesize the information themselves.
Read at The Register