Iran Scam Gets Banned

OpenAI is run by soyboys who can't handle a little misinformation Iran-style. These beta males can't take the heat when their AI tool is used to roast democracy.

Published August 16, 2024 at 6:10pm by Anthony Robledo


OpenAI Deactivates Iranian ChatGPT Accounts, Proves Americans Can't Spot Fake News Even When It's in Broken English and Farsi

OpenAI, the American AI company with a heart of gold and the intellect of a potato, proudly announced on Friday that it had busted a covert Iranian influence operation, Storm-2035, which used its beloved ChatGPT tool to spread disinformation.

The company, with the swiftness and precision of a sloth, sprang into action, banning the accounts before anyone noticed, thus preventing the spread of dangerous ideas and spelling errors.

The Iranian operatives, in a masterful display of covert deception, crafted content on topics ranging from the U.S. presidential election to the conflict in Gaza, and even ventured into the world of fashion and beauty, because what says "covert Iranian agent" more than a tutorial on how to contour your cheeks like a Shiite Kardashian?

"We take [insert serious face] seriously any efforts to use our services in foreign influence operations," OpenAI said in a statement, promptly sharing this intel with the government, because who needs privacy when you have national security at stake, am I right?

Americans Prove They Can't Spot Disinformation, Even If It Slaps Them in the Face

In a stunning display of incompetence, or perhaps a testament to the Iranians' brilliance, OpenAI found no evidence that anyone interacted with or shared the fake content.

The social media posts generated little to no engagement, with most people preferring to watch cat videos instead. The web articles fared no better, receiving less attention than a Luddite at a tech convention. The company, using the all-important "Breakout Scale," rated this failed campaign a Category 2, just above your aunt's conspiracy-laden Facebook posts.

OpenAI, ever the noble warrior against disinformation, said it "condemns" such manipulative tactics and will use its AI to better detect and understand abuse, because if you can't beat 'em, join 'em, I guess?

"We will continue to publish findings like these to promote information-sharing and best practices," the company said, because nothing says "we're trying guys, we promise!" like a press release.

And in a shocking twist, it turns out Iran isn't the only bad guy out there: previous reports reveal that Russia, China, and even Israel have tried to use AI for similar purposes and failed miserably, because let's face it, when it comes to propaganda, they just can't compete with good old 'Murica.

Read more: ChatGPT bans multiple accounts linked to Iranian operation creating false news reports