Social media incentivised spread of Southport misinformation, MPs say
Briefly

Social media business models incentivise the spread of dangerous misinformation, MPs concluded in the wake of the 2024 Southport murders. Current online safety laws leave significant gaps, prompting calls for multimillion-pound fines for platforms that fail to address harmful content. Advances in generative AI raise concerns that the next misinformation crisis could be even more dangerous than the last. The committee recommended visible labelling of AI-generated content and accountability for social media companies in how they curate information. It also said state-sponsored disinformation may constitute foreign interference, and warned that the Online Safety Act is inadequate for addressing pervasive misinformation.
The Commons science and technology select committee called for new multimillion-pound fines for platforms that do not set out how they will tackle the spread of harmful content through their recommendation systems.
Rapid advances in generative artificial intelligence could make the next misinformation crisis even more dangerous than last August's violent protests after three children were killed by a man wrongly identified online.
Neither misinformation nor disinformation is a harm that firms are required to address under the OSA, which received royal assent less than two years ago.
Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable.
Read at www.theguardian.com