
This past July, maybe you were one of the more than 200 million people who watched a video of rabbits bouncing on a backyard trampoline, captured—or so it seemed—on a home security camera. Maybe you were even one of the thousands of people who shared the video. There was just one problem: It was completely fake, generated entirely by AI. Many of us fell for it, but a lot of us likely thought, "That just doesn't seem real."

This example, along with other animal misinformation examples we've previously written about, is relatively trivial. Unfortunately, there is a seemingly constant stream of potentially dangerous misinformation scrolling through our social media feeds. For example, just here at the Misinformation Desk, we've recently written about false information related to Tylenol use in pregnant people, the myth that vaccines cause autism, and conspiracies around storms, all of which can have harmful consequences. When you've spotted this or other false information, have you countered it?
A 2025 survey of over 1,000 U.S. social media users found a gap between attitudes and action: respondents endorsed the idea that others should correct misinformation more strongly than they reported correcting it themselves. Yet with platforms pulling back on moderation, individual users have a growing role to play in filling the fact-checking gap.
 Read at Psychology Today