
AI language models like the kind that power ChatGPT, Gemini, and Claude excel at producing exactly this kind of believable fiction when they lack actual information on a topic, because they first and foremost produce plausible outputs, not accurate ones. If no patterns in the training dataset closely match what the user is seeking, a language model will create the best approximation based on the statistical patterns learned during training.
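The mechanism can be illustrated with a deliberately tiny sketch. The model below is not a real LLM; the word-frequency table, the names in it, and the citation it emits are all hypothetical. It only shows the core failure mode: a generator that always picks the statistically most plausible continuation will happily assemble something that looks like a citation, with no notion of whether the source exists.

```python
# Toy illustration (not a real LLM): a generator that emits whatever
# continuation was most frequent in its "training data", optimizing for
# plausibility rather than accuracy. All words and counts are made up.
bigram_counts = {
    "According": {"to": 10},
    "to": {"Smith": 6, "Jones": 3, "data": 1},
    "Smith": {"(2019),": 5, "et": 4},
    "(2019),": {"<end>": 5},
}

def most_plausible_continuation(word, counts):
    """Pick the highest-frequency next word -- plausibility, not truth."""
    nxt = counts[word]
    return max(nxt, key=nxt.get)

def generate(start, counts, max_words=10):
    """Greedily chain the most plausible continuations from `start`."""
    words = [start]
    while words[-1] in counts and len(words) < max_words:
        nxt = most_plausible_continuation(words[-1], counts)
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("According", bigram_counts))
# Emits a citation-shaped phrase even though no such source is a real
# reference -- the table only records which words tend to follow which.
```

Nothing in the generator checks whether "Smith (2019)" refers to anything; real models are vastly more sophisticated, but the same plausibility-over-accuracy objective is why they can fabricate references when no matching pattern exists.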
The irony runs deep: the presence of potentially AI-generated fake citations is especially awkward given that one of the report's 110 recommendations specifically states that the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use." Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations.
Among the report's recommendations for provincial AI education was guidance to teach learners and educators essential AI knowledge, including ethics, data privacy, and responsible technology use, yet multiple citations within those very recommendations were found to be fabricated or erroneous. AI language models can generate plausible but inaccurate content and can fabricate citations even when they are able to search the web. Reviewers and a former advisory member said the fabricated citations demolished trust in the document. A co-chair declined an interview, while education officials acknowledged a small number of potential citation errors and said the references are being investigated and checked. The episode underscores the risks of relying on automated or insufficiently reviewed sources for important educational policy.
Read at Ars Technica