Flagging Misinformation on Social Media Can Greatly Reduce its Engagement and Toxicity
Briefly

"A large, independent study published in the prestigious journal PNAS suggests that crowd-sourced fact-checking to a platform's users can work spectacularly well at stopping lies from spreading. "We've known for a while that rumors and falsehoods travel faster and farther than the truth," said Johan Ugander, an associate professor of statistics and data science in Yale's Faculty of Arts and Sciences, deputy director of the Yale Institute for Foundations in Data Science, and co-author of the new study."
"But what if the users themselves had a platform to tag what's false? That's the premise behind Community Notes, a fact-checking feature on X (formerly Twitter). Instead of a top-down approach, this system lets regular users propose and rate notes that add context to potentially misleading posts. It uses a clever "bridging-based" algorithm to find consensus, meaning a note only gets promoted if people who typically disagree with each other both rate it as helpful."
Social media is saturated with sensational, misleading posts that spread faster and farther than corrections. Crowd-sourced fact-checking by platform users can substantially reduce the spread of falsehoods when notes are promoted through consensus mechanisms. Community Notes on X lets regular users propose and rate contextual notes instead of relying on top-down fact-checkers. A bridging-based algorithm promotes a note only when raters who typically disagree both find it helpful, which improves its credibility and reach. The system is imperfect and vulnerable to manipulation, but community-driven moderation offers a scalable tool for curbing misinformation.
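The bridging idea can be made concrete with a small sketch. Community Notes' published approach is a matrix factorization that models each rating as a global offset, a rater bias, a note "intercept," and a dot product of rater and note viewpoint vectors; a note is promoted only when its intercept, the helpfulness left over after the viewpoint factors explain away one-sided agreement, is high. The code below is a minimal toy illustration of that idea, not X's implementation; the data, hyperparameters, and the 0.1 cutoff are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bridging_scores(ratings, n_users, n_notes, dim=1,
                    epochs=2000, lr=0.05, reg=0.02):
    """Fit rating ~ mu + user_bias + note_bias + user_vec . note_vec.

    `ratings` is a list of (user, note, value) triples with value 1.0
    for "helpful" and 0.0 for "not helpful". A note's bias (its
    "intercept") is whatever helpfulness remains after each rater's
    viewpoint is factored out, so only notes rated helpful across
    opposing viewpoints end up with a high intercept.
    """
    mu = 0.0
    u_bias = np.zeros(n_users)
    n_bias = np.zeros(n_notes)
    u_vec = 0.1 * rng.standard_normal((n_users, dim))
    n_vec = 0.1 * rng.standard_normal((n_notes, dim))
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + u_bias[u] + n_bias[n] + u_vec[u] @ n_vec[n])
            mu += lr * err
            u_bias[u] += lr * (err - reg * u_bias[u])
            n_bias[n] += lr * (err - reg * n_bias[n])
            # Update both factor vectors from their pre-step values.
            u_new = u_vec[u] + lr * (err * n_vec[n] - reg * u_vec[u])
            n_vec[n] += lr * (err * u_vec[u] - reg * n_vec[n])
            u_vec[u] = u_new
    return n_bias  # note intercepts: the bridging helpfulness signal

# Toy data: users 0-2 form one camp, users 3-5 the other.
# Note 0 is rated helpful by only one camp; note 1 by both.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0),
           (3, 0, 0.0), (4, 0, 0.0), (5, 0, 0.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0),
           (3, 1, 1.0), (4, 1, 1.0), (5, 1, 1.0)]
scores = bridging_scores(ratings, n_users=6, n_notes=2)
for note, s in enumerate(scores):
    # 0.1 is an arbitrary cutoff for this toy; the production system
    # calibrates its own promotion threshold.
    verdict = "shown on the post" if s > 0.1 else "not shown"
    print(f"note {note}: bridging intercept {s:+.2f} -> {verdict}")
```

In this toy setup the cross-camp note should end up with the clearly higher intercept, while the one-sided note's partisan agreement is absorbed by the viewpoint factors instead: the bridging property in miniature.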
Read at ZME Science