AI Could Be an Amplifier of Disinformation, Says Arizona Secretary of State

According to Arizona’s Secretary of State, Adrian Fontes, artificial intelligence (AI) could be a “magnifier” of disinformation ahead of the upcoming US elections. Fontes voiced his concerns on Sunday on NBC’s Meet the Press.

Fontes’ statement came in the aftermath of President Biden’s post on X (formerly Twitter) that “AI and the companies working on the tech are going to transform the world, but first, they must earn our trust.” The president said that he is committed to doing anything in his power to promote safe and responsible innovation. He urged companies to join him in that commitment.

AI Disinformation Is a Magnifier

Fontes recalled his training in the Marine Corps, where, in boot camp and other military exercises, recruits had to assess their enemies’ weapons and train against them as much as they could.

Also read: Microsoft Warns of AI-Driven Disinformation Campaigns in Upcoming Elections

Extending that comparison, the Secretary said AI is not a weapon in itself; rather, it is a magnifier and an amplifier of mis- and disinformation.

“What I wanted to do is make sure that our elections officials were familiar with it, we had processes to deal with it and address it within each of our counties, because our elections are run at the county level, as well.”

Adrian Fontes

“We also had a tabletop exercise among several for elections officials for the media so that our media partners could know how to react to it and recognize it,” he added.

AI was not his only concern. Touching on domestic terrorism, the Secretary said that “terrorism is defined as a threat or violence for a political outcome. That’s what this is.”

The “Fake” Joe Biden Call Confirms the Threat of AI Disinformation

Potential AI threats to elections are not new. In January this year, a fake robocall mimicking the voice of US President Joe Biden dominated TV shows and news coverage. The call was carefully tailored, opening with “What a bunch of malarkey,” a phrase Biden has used before.

After this touch of persona, the call continued,

“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”

It was a clear attempt to dissuade voters from going to the polls that day. The robocall made the rounds again two days ago, when Steven Kramer, the political consultant behind the call, was handed a $6 million fine and faces 24 criminal charges.

Barack Obama announces Biden as his vice-presidential running mate in Springfield, Illinois. Source: The White House.

The Federal Communications Commission said the fine against Kramer was its first for offences involving generative AI. The commission also proposed a $2 million fine for Lingo Telecom, which was accused of transmitting the robocalls.

Lingo Telecom said it was not involved in producing the calls and that its actions complied with industry standards and federal regulations. The concerning part, however, is that the calls appeared to come from the personal number of Kathy Sullivan, a former state Democratic Party chair.

Sullivan wrote in an email on Thursday that “there is a steep price for trying to rig an election,” according to the Associated Press.

Disinformation Campaigns Are More Affordable

AI has made scamming cheaper and more accessible for those willing to try it. Kramer, who owns a firm specializing in get-out-the-vote projects, said he paid $500 to send the calls to voters in order to draw their attention to the “AI problem.” He previously told AP that he paid a magician $150 to create the recording.

Also read: Warren Buffett Warns of AI Scams to Become a Growth Industry

Wired reported last year that building an AI disinformation campaign costs only $400. The outlet profiled a developer who used common AI tools to generate anti-Russia tweets and articles, in a project designed to highlight how cheaply and easily it could be done.

According to the outlet, the developer, who goes by the name Nea Paw, designed the campaign to educate people about the dangers of AI-driven disinformation. Paw said in an email,

“I don’t think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering.”

Disinformation researchers say AI could be used to craft highly personalized campaigns that boost disinformation and to run networks of social media accounts. Notably, Facebook and Instagram recently blocked thousands of accounts connected to networks often associated with China’s Communist Party communication apparatus.

Warren Buffett says AI scamming will be the next big ‘growth industry.’

Researchers say fake accounts now use more sophisticated methods to run disinformation campaigns, relying on organic tactics to increase their reach and appear authentic. The warning echoes legendary investor Warren Buffett, who said that AI scamming could be the “fastest-growing industry of all time.”

Scamming and election disinformation are connected in that both are used to mislead people. As Buffett put it, “It has enormous potential for good and enormous potential for harm, and I just don’t know how that plays out.”


Cryptopolitan reporting by Aamir Sheikh
