
"We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated,"
"What if AI wasn't just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That's the future this paper imagines: Russian troll farms on steroids,"
"Because of their elusive features to mimic humans, it's very hard to actually detect them and to assess to what extent they are present,"
"We lack access to most [social media] platforms because platforms have become increasingly restrictive, so it's difficult to get an insight there. Technically, it's definitely possible. We are pretty sure that it's being tested."
Advances in AI enable chatbots to mimic human behavior closely, rendering classic bot-detection methods ineffective. Thousands of coordinated AI chatbots can simulate grassroots support by producing human-like interactions and content, amplifying narratives without obvious human operators. Current monitoring systems, together with increasingly restricted platform access, limit researchers' ability to detect such coordinated inauthentic behavior or assess how prevalent it is. Early deployments may include human oversight during development, and such campaigns are likely to be used to disrupt future high-stakes elections. Large-scale social network mapping can enable precise targeting of specific communities to maximize impact and influence.
Read at WIRED