OpenAI Says Russian And Israeli Groups Used Its Tools To Spread Disinformation
August 1, 2024

In its first-ever report on how its AI tools are being used for covert influence operations, issued on Thursday, OpenAI revealed that it had disrupted misinformation campaigns originating in Russia, China, Israel, and Iran.

Malicious actors leveraged the company’s generative AI models to produce, translate, and distribute propaganda content across social media platforms. The report states that none of the campaigns was successful or attracted a sizable audience.

As the field of generative AI has grown rapidly, researchers and legislators have expressed serious concerns about its potential to increase the volume and calibre of misinformation on the internet. With varying degrees of success, AI firms like OpenAI have attempted to alleviate these worries and place guardrails on their technology.

OpenAI report uncovers global misinformation campaigns using AI tools

OpenAI’s 39-page study is one of the most thorough reports yet issued by an AI company on the use of its tools for propaganda. OpenAI said its researchers had uncovered five covert influence campaigns run by governmental and private actors over the previous three months, and had banned the accounts linked to them.

Two operations in Russia produced and disseminated content critical of the US, Ukraine, and several Baltic states. One of the operations used an OpenAI model to troubleshoot the code behind a bot that posted on Telegram. Operatives from China’s influence operation produced text in English, Chinese, Japanese, and Korean, which they subsequently shared on Medium and X.

Iranian actors wrote complete essays criticising the United States and Israel, which they translated into French and English. An Israeli political firm named Stoic managed a network of fictitious social media profiles that disseminated a variety of posts, some of which accused US student demonstrations against Israel’s war in Gaza of being antisemitic.

Many of the misinformation spreaders that OpenAI removed from its platform were already known to authorities and researchers. The US Treasury sanctioned two Russian men purportedly behind one of the campaigns that OpenAI identified, and Meta banned Stoic from its platform this year for breaking its rules.

The growing threat of generative AI in global disinformation campaigns

The report also emphasises that while generative AI is being used in disinformation operations to enhance specific areas of content creation, such as producing more convincing posts in foreign languages, it is not the only propaganda weapon.

The report said that although AI was deployed in all of these operations, none relied on it exclusively. Instead, the groups shared a wide variety of content, mixing AI-generated material with more conventional formats such as manually written text and memes copied from the internet.

Although none of the campaigns had much of an impact, their use of the technology shows that hostile actors are discovering how generative AI enables them to produce propaganda at a larger scale. With AI tools, content creation, translation, and uploading can be done more quickly, lowering the threshold for disinformation operations.

Over the past year, malicious actors have utilised generative AI in attempts to sway politics and public opinion in several nations. With election campaigns disrupted by deepfake audio, AI-generated pictures, and text-based ads, businesses like OpenAI now face greater pressure to limit the misuse of their tools.

In conclusion, OpenAI’s report on the misuse of its AI tools in covert influence operations underscores the growing threat of generative AI in spreading misinformation. Although the identified campaigns from Russia, China, Israel, and Iran had limited impact, they highlight how AI can enhance the scale and sophistication of propaganda. This points to the urgent need for robust safeguards from AI companies to prevent misuse and protect political stability and public trust.

(Tashia Bernardus)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.