China and Russia also use ChatGPT. OpenAI uncovered five covert influence operations

Luc Williams

The start-up put covert "influence operations" under the microscope, defining them as "insidious attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the entities behind them." The activities of influence groups differ from those of disinformation networks, Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said during a Wednesday press conference, because they often present information that is factually accurate but taken out of context, thereby portraying a falsified reality.

The new report from the creators of ChatGPT comes at a time of growing concern about the impact of artificial intelligence on elections scheduled around the world this year. In its findings, OpenAI listed ways in which influence networks used readily available tools to deceive people more effectively. It highlighted the use of artificial intelligence to generate images and longer texts with fewer linguistic errors than humans would produce on their own. At the same time, the start-up reported that propaganda campaigns prepared with the help of OpenAI's services did not significantly increase their reach.

Indoctrination in a new way

"Over the last year and a half, there have been many questions about what might happen if influence operations started using generative artificial intelligence," Nimmo added.

Though propaganda networks have long used social media platforms, their use of generative artificial intelligence tools is relatively new. OpenAI stated that all the identified operations used AI-generated material alongside more traditional formats, such as manually written texts or memes, and then published it on major social networking sites. In addition to using artificial intelligence to generate images, text, and social media bios, some influence networks also used OpenAI's products to increase their productivity, for example by creating article summaries or debugging code for bots.

Unwanted ChatGPT "users"

The five networks identified by OpenAI included the pro-Russian group "Doppelganger", the pro-China network "Spamouflage", and an Iranian operation known as the International Union of Virtual Media (IUVM). OpenAI also identified, for the first time, previously unknown networks originating from Russia and Israel.

A new Russian group, which OpenAI named "Bad Grammar", used the company's artificial intelligence models, as well as the Telegram messaging application, to run a content-spam channel, the company said. First, the covert group used OpenAI's models to debug code that automated posting to Telegram, and then it generated comments in Russian and English that replied to those Telegram posts from dozens of accounts. The comments cited by OpenAI argued that the United States should not support Ukraine. "I'm sick of these brain-damaged fools playing games while Americans suffer. Washington must clearly set its priorities, otherwise they will feel the full force of Texas!" OpenAI quoted one of the entries.
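To make the mechanics concrete: automated posting of the kind described above can be done against Telegram's public Bot API with only a few lines of script. The sketch below is purely illustrative; the bot token and channel name are hypothetical placeholders, and this is not the network's actual code, which OpenAI has not published.

```python
import requests

# Hypothetical placeholders: a real bot would need its own token and channel.
BOT_TOKEN = "123456:EXAMPLE-TOKEN"
CHANNEL = "@example_channel"
API_URL = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"

def post_to_channel(text: str) -> None:
    """Publish one message to the channel via Telegram's Bot API."""
    response = requests.post(API_URL, data={"chat_id": CHANNEL, "text": text})
    response.raise_for_status()  # surface HTTP errors rather than failing silently

# In the reported operation the comment text came from a language model;
# a fixed list stands in for generated content here.
for comment in ["First example comment.", "Second example comment."]:
    post_to_channel(comment)
```

Repeating a loop like this across dozens of accounts is trivial once the glue code works, which is why debugging such scripts was a natural use of a general-purpose model.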

OpenAI identified some of the AI-generated content by noting that the comments contained typical model boilerplate, such as: "As an AI language model, I'm here to help." The company also said it uses its own artificial intelligence tools to identify and defend against such influence operations.
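OpenAI has not disclosed how its internal detection works, but the tell-tale-phrase observation above suggests a crude heuristic that can be sketched in a few lines. The phrase list below is an assumption made for illustration only.

```python
# Illustrative heuristic only: flag comments containing boilerplate phrases
# that language models often emit. Not OpenAI's actual detection method.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i'm here to help",
]

def looks_ai_generated(comment: str) -> bool:
    """Return True if the comment contains a known model boilerplate phrase."""
    lowered = comment.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

comments = [
    "As an AI language model, I'm here to help.",
    "An ordinary human comment about the weather.",
]
print([c for c in comments if looks_ai_generated(c)])  # flags only the first
```

Such string matching only catches the sloppiest cases, which is consistent with the report's point that dedicated AI-based tooling is needed for the rest.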

An exercise for the security services

In most cases, the messages posted online did not gain wide traction, or users identified the published content as AI-generated. Despite the limited reach, "this is not the time for complacency," Nimmo said. "History shows that influence operations that have yielded no results for years can suddenly explode if no one flags them."

Nimmo also admitted that there are likely groups using AI tools that the start-up is not yet aware of. "I don't know how many (influence) operations are still in progress," Nimmo said. "But I know a lot of people are looking for them, including our team."

Other companies, such as Meta, have regularly disclosed similar information about influence operations in the past. OpenAI says it shares threat indicators with industry partners, and one of the goals of the report is to help others detect such operations. The company said it plans to share more reports on this topic in the future.
