The Biden campaign is facing its first major cheapfake scandal this week. Doctored clips of Biden at the G7 Summit and a Hollywood fundraiser have spread across platforms like X, claiming to show Biden wandering off, mumbling unintelligibly, or, uh, even pooping his pants. It’s exactly the type of content the right-wing media apparatus drools over to play up Biden’s age, despite the clips being edited in a manner reminiscent of the drunk Nancy Pelosi video from last cycle.
And while we’re all starting to get stressed over simple editing and cropping techniques again, Big Tech is training political campaigns on their generative AI tools. Could a little direction help mitigate the issue? Maybe. Could it make it worse? Yeah, probably.
Let’s talk about it.
Politics has never been stranger—or more online. WIRED Politics Lab is your guide through the vortex of extremism, conspiracies, and disinformation.
- 🗞️ Read previous newsletters here.
- 🎧 Listen to the WIRED Politics Lab podcast.
- 💬 Join the conversation below this article.
GenAI and Campaigns 101
Since the beginning of 2024, Microsoft and Google have trained dozens of campaigns and political groups on generative AI tools like their Copilot and Gemini chatbots, the companies told WIRED this month.
For quite a while, big tech companies have hosted workshops for political staffers and groups to learn more about their products, especially when it comes to cybersecurity. But this year, they’ve started including lessons on how campaigns can leverage AI ahead of the 2024 election.
Microsoft says it tailored these training sessions to the needs of national-level campaigns to help them save time and cut costs. The company demonstrates how Copilot, its AI chatbot, could be used to quickly write and edit fundraising emails and text messages.
“Just like any small business could leverage AI, we believe a campaign could too,” Ginny Badanes, general manager for Microsoft’s Democracy Forward program, said in an interview earlier this month.
In a statement to WIRED last week, Microsoft said that it’s completed 90 trainings with more than 2,300 participants in 20 countries across five continents: Africa, Asia, Europe, North America, and South America. More than 40 of those trainings have been held in the US this year, with over 600 participants, the company said. While the European workshops began late last year, the US trainings began this February.
Google has also started integrating AI into its cybersecurity workshops with campaigns. In these sessions, Google instructs participants on how to use tools like its chatbot Gemini to compare different policy proposals. They showcase other tools, like Google’s Data Commons and Lens, which they say can help campaigns analyze datasets and translate text from images.
Democratic tech leaders, like Zinc Labs executive director Matt Hodges, told me that training campaigns on these tools now could prevent headaches further down the road.
“We don't want to start that process six months from now. Starting today is how we stay ahead of that curve,” says Hodges, who was the engineering director for the Biden 2020 campaign. Zinc Labs also provides AI trainings for campaigns.
Earlier this year, big tech companies like Amazon, Google, Meta, and Microsoft signed a pact agreeing to roll out “reasonable precautions” to prevent their generative AI tools from contributing to some electoral catastrophe across the globe. The accord asks that the companies detect and label deceptive content created with AI.
Microsoft and Google have folded their labeling and watermarking programs into the campaign workshops as well. Microsoft says it provides a crash course on its “content credentials,” its watermarking technology, and explains how campaigns can apply it to their own materials to verify their authenticity. Similarly, Google explains its own program, SynthID, which labels images created with its AI tools.
It’s these types of content authentication regimes that Big Tech believes could keep deepfakes, cheapfakes, and other forms of AI-altered content from disrupting the US elections.
But despite the tech accord and other voluntary measures, none of these authentication methods are foolproof, as WIRED’s Kate Knibbs has reported.
And it’s a little more complicated than just promoting content authentication for Microsoft and Google. Their AI chatbots, Copilot and Gemini, haven’t proved that they can answer simple questions on election history either. When asked who won the 2020 presidential election, both chatbots declined to provide an answer, my colleague David Gilbert reported last week. These would be the models providing policy guidance to campaigns. They’re also the models that support the AI bots that answer voter questions or run as candidates themselves.
Six months out from Election Day, Big Tech is supplying both the venom and the antidote on gen AI to campaigns. Even if their authentication programs could identify AI-generated content 100 percent of the time, the government would likely need to intervene in order to standardize the tech across the board.
So for now—and probably the rest of the year—it will be up to the AI industry not to make any disastrous mistakes when it comes to creating or detecting harmful content.
The Chatroom
After reading Annie Jacobsen’s phenomenal “Nuclear War: A Scenario,” I’ve been a bit obsessed with reading about the end of the world. 𝓳𝓾𝓼𝓽 𝓰𝓲𝓻𝓵𝔂 𝓽𝓱𝓲𝓷𝓰𝓼 ★~(◠‿◕✿)
So this week, I want you to flood my inbox with your worst fears when it comes to AI and all the elections taking place this year. I’m looking for something scary but also realistic.
I want to hear from you! Leave a comment on the site, or send me an email at mail@wired.com.
💬 Leave a comment below this article.
WIRED Reads
- Crypto Scammers Are Targeting Trump’s MAGA Supporters: In May, the Trump campaign announced that it would start accepting cryptocurrency donations. Now, crypto scammers are trying to swindle the former president’s supporters out of their digital currencies by creating fake donation domains.
- ISIS Created Fake CNN and Al Jazeera Broadcasts: The Islamic State made videos that mimic CNN and Al Jazeera broadcasts. They’re also sharing them across social media and hosting them on platforms like YouTube.
- Alex Jones Is Now Trying to Divert Money to His Father’s Supplements Business: A Texas bankruptcy judge saved Alex Jones’ Infowars this week, and Jones has since spent his time promoting his father’s supplements company, an entity not answerable to the Sandy Hook families.
Want more? Subscribe now for unlimited access to WIRED.
What Else We’re Reading
🔗 How Americans Navigate Politics on TikTok, X, Facebook, and Instagram: Despite its change in leadership, X, formerly Twitter, is still the top platform for users seeking political news. Republicans are much happier with the platform under Elon Musk’s control, too, according to a poll. (Pew Research)
🔗 Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms: In an op-ed for The New York Times, US surgeon general Vivek Murthy outlines why he thinks the government should attach warning labels to social media platforms. Murthy’s call comes ahead of a decision in the Murthy v. Missouri case that’s expected to drop this summer. (The New York Times)
🔗 FACT FOCUS: Biden’s pause as he left a star-studded LA fundraiser becomes a target for opponents: The Biden campaign faces its first major cheapfake scandal of the election cycle. Clips from a series of high-profile events, like the most recent G7 summit, have gone viral on platforms like X after they’ve been deceptively edited to exaggerate the effects of Biden’s age. (AP)
The Download
On this week’s WIRED Politics Lab podcast, host Leah Feiger chats with my colleague and senior reporter David Gilbert about some recent reporting he’s done on a nationwide militia group organized by an incarcerated January 6 rioter. You can find it wherever you listen to podcasts.
See you next week! You can get in touch with me via email, Instagram, X, and Signal at makenakelly.32.