OpenAI is clamping down on ChatGPT accounts used to spread malware
Briefly

OpenAI recently banned several ChatGPT accounts linked to state-sponsored threat actors from China, Russia, and Iran. The banned accounts were associated with cybercrime activities including social engineering, cyber espionage, and disinformation campaigns. Four campaigns from China were particularly concerning, targeting sensitive geopolitical issues and promoting narratives beneficial to the Chinese state. In addition, a group of accounts operated by Russian actors was found to be developing malware. OpenAI emphasized that it uses AI to enhance its detection and disruption of these malicious activities.
OpenAI took down ten ChatGPT accounts associated with state-sponsored threat actors linked to China, Russia, and Iran as part of its effort to combat malicious AI use.
The accounts were involved in social engineering, cyber espionage, and deceptive employment schemes, illustrating the range of ways AI can be misused.
Four campaigns originated in China, targeting sensitive geopolitical topics and promoting narratives aligned with state interests, while Russian actors developed malware strains.
The accounts had been generating targeted posts on topics such as Taiwan, US politics, and specific individuals, reflecting a coordinated disinformation effort.
Read at IT Pro