OpenAI Shuts Down Iranian Disinformation Operation
OpenAI identified Iranian operatives using ChatGPT to generate fake news and deactivated their accounts amid concerns that the powerful tool could be misused.
Published August 16, 2024 at 6:10pm by Anthony Robledo
OpenAI Deactivates Iranian Disinformation Campaign Accounts
OpenAI has deactivated multiple ChatGPT accounts linked to an Iranian influence operation, Storm-2035, which generated disinformation on topics including US elections and global political events.
"We take seriously any efforts to use our services in foreign influence operations... We have shared threat intelligence with government, campaign, and industry stakeholders."
- OpenAI
The operation created misleading content on various subjects:
- US Presidential Election
- Conflict in Gaza
- Israel’s Olympic participation
- Venezuelan politics
- Latinx community rights
- Scottish independence
The campaign also included fashion and beauty content, possibly an attempt to make the accounts appear authentic.
OpenAI found no evidence that real people interacted with or widely shared the content. The operation scored a Category 2 on the Breakout Scale, indicating a low-impact influence operation.
Fighting AI Abuse
OpenAI condemns attempts to manipulate public opinion and influence political outcomes, and pledges to use its AI to detect and understand abuse.
"OpenAI remains dedicated to uncovering and mitigating this type of abuse at scale... We will continue to publish findings like these to promote information-sharing and best practices."
- OpenAI
Earlier this year, OpenAI reported similar foreign influence efforts originating in Russia, China, Iran, and Israel, none of which gained significant traction.
Read more: ChatGPT bans multiple accounts linked to Iranian operation creating false news reports