Propagandists in China, Iran and Russia are using artificial intelligence to create content designed to deceive Americans ahead of the November presidential election, federal intelligence officials said Monday.
In a conference call about foreign election interference efforts organized by the Office of the Director of National Intelligence, officials said the U.S. intelligence community has concluded that AI has made it easier to create disinformation, but has not fundamentally changed the way those actors operate.
“The IC considers AI a malign influence accelerant, not yet a revolutionary influence tool. In other words, information operations are the threat, and AI is an enabler,” said one ODNI official, referring to the U.S. intelligence community. The official requested not to be named as a condition of participating in the call. “Thus far, the IC has not seen it revolutionize such operations,” he said.
Assessing the impact of such disinformation, the official noted that U.S. adversaries struggle to avoid detection by Western AI companies, have not developed particularly advanced AI models of their own, and struggle to disseminate AI-generated content effectively.
ODNI declined to provide specific examples of the disinformation that it was referring to but said that, in general, the number of election interference efforts was increasing ahead of November.
The ODNI call comes after the National Security Agency said earlier this year it had detected hackers and propagandists increasingly using AI to help them seem like convincing English speakers.
In January, an NSA official said hackers and propagandists around the world were increasingly using generative AI chatbots like ChatGPT when trying to communicate with potential victims. In August, OpenAI, the company behind ChatGPT, said it had banned accounts linked to an attempted Iranian operation that in part aimed to create content aimed at influencing the U.S. election.
Russia has by far the biggest disinformation operation aimed at the U.S. election and correspondingly has created the most AI-generated content, including text, images, audio and video, the official said. Its propagandists also still rely on human actors for some videos, as in one identified by Microsoft and Clemson University researchers in which actors staged a fake attack on a Trump supporter.
As in previous calls, officials reiterated that Iran prefers to hurt Trump’s campaign, while China runs down-ballot and general anti-democracy influence operations but is not pushing one presidential candidate over another. Russia, on the other hand, wants Trump to beat any Democratic candidate, given his policy positions on Ukraine.
Federal officials have formally accused Russia of masterminding two sprawling influence campaigns aimed at influencing American voters: covertly funding a media company that paid right-wing influencers to publish videos, and maintaining fake news sites that appear to have little viewership.
The U.S. has also said Iran is behind an operation in which hackers stole files from Republican nominee Donald Trump’s campaign and sent them to media outlets, which generally have refrained from publishing them.
Russia and Iran have denied wrongdoing.
Russia has a much more sophisticated understanding of American politics than Iran, the intelligence official said Monday. Iranian online propagandists posing as Americans have pushed immigration as a divisive issue. Russia, on the other hand, understands that it is more effective to target voters in swing states.