Want to hear this audio clip of ‘Donald Trump’ yelling at me?

I was telling one of my fellow activists that I was able to log on to a podcast livestream at the Turning Point USA conference, where Donald Trump was answering questions. I said Trump got mad over a question I asked, so I played the audio clip below.

To be honest, Trump never interacted with me, but I really wanted to hear my friend’s reaction. A “deepfake” (a portmanteau of “deep learning” and “fake”) is a form of synthetic media in which artificial intelligence is used to create a digital copy of a person’s likeness or voice. The audio clip you just heard was created with an app called voice.ai, which deploys a technique called “voice cloning” that overlays a different person’s voice onto your own.

Voice cloning technology was developed by several researchers and organizations over the years, with a notable milestone being the creation of WaveNet in 2016 by DeepMind, a Google subsidiary. The technology was a significant leap in generating natural-sounding speech. Back then, voice cloning could cost thousands of dollars; now it’s free and widely available.

The free version of the app that I used required me to mimic Trump’s speech patterns on my own, and I practiced by watching clips of him fighting with reporters. More advanced artificial intelligence can not only do that for you, but also mimic voice inflection and even the person’s breathing patterns. This is accomplished by training an algorithm on a small sample of the targeted individual’s voice, and for well-known national figures, there is copious footage online to train on.
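To give a sense of how low the barrier has gotten, here’s a minimal sketch of the same idea using the open-source Coqui TTS library rather than the voice.ai app I used (which has no public API). The model name follows Coqui’s documented XTTS workflow; the file names and reference clip are illustrative assumptions.

```python
# pip install TTS  (the open-source Coqui TTS package; a stand-in for this
# sketch -- the article's voice.ai app is a closed product with no public API)
from TTS.api import TTS

# XTTS v2 clones a voice from a short reference clip of the target speaker.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This is a cloned voice reading whatever you type.",
    speaker_wav="reference_sample.wav",  # a few seconds of the target's voice
    language="en",
    file_path="cloned_output.wav",
)
```

A few seconds of clean audio is often enough for a passable clone, which is exactly why public figures, with hours of footage online, make such easy targets.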

By the way, my friend did fall for it. But when she asked to listen to it again, she said it sounded a little “off.” Considering I spent less than five minutes on the recording, and it was only my third take (not to mention that it was free), I think it worked pretty well. (I wanted to keep improving on it, but my family was getting annoyed at my Trump impressions and ever so politely but firmly requested that I put a sock in it.)

If you put a little more time and effort into this than I did, you can get something better, like what “The Daily Show” did for Joe Biden.

Then, to take your deepfake to the next level once you have the audio down pat, you can use another AI program to make the person being imitated appear to say or do something they did not. The technology behind AI-generated video also uses advanced algorithms and machine-learning techniques, analyzing not just speech patterns but also the individual’s facial movements and gestures, which are then simulated. The result looks as if the person were really speaking and behaving on live video.

There are much better deepfake videos out there than the one of Biden I decided to show you below, but the reason I find this one so impressive is that it’s a “real-time deepfake.” The deepfake Biden you will see was responding to live questions as they came in.

Already, there are deepfakes of the major presidential candidates for 2024. While this has serious implications, and one can easily think of nefarious uses for these tools, AI-generated deepfake videos don’t have to be harmful. When done right, they are a lot of fun, like deepfake Simon Cowell singing on his own show or a remake of “Pulp Fiction” starring Jerry Seinfeld. Even AI politicians can be fun, such as Biden talking about a magical pistachio or this clip, widely shared on social media (including by Don Jr.), of Florida Gov. Ron DeSantis playing the role of Michael Scott.

(That was spot on, including his “deer-in-the-headlights” look.)

RELATED STORY: Is the Trump-bot apocalypse nigh? Tech company releases AI version of each presidential candidate

Creating your own personalized avatar is now possible as well. You can send a two-minute video of yourself speaking to a company called HeyGen and get back a digital avatar of yourself that will say whatever you type into a text box.

The concern arises when AI is abused. Right now, one of the biggest concerns in the ongoing actors strike is that a studio could recreate a performer’s image, likeness, or voice without the actor’s consent. Although performers are protected against commercial appropriation under current contracts, they want an “informed consent” clause added to future contracts, and the studios are balking.

RELATED STORY: Background actors say fears of being replaced by AI come from their experiences

There is also a particularly acute danger in politics. Campaigns move at a rapid pace, and stories can explode overnight. With a large segment of the population distrustful of national media and little fact-checking of sources, deepfaked candidates saying and doing outrageous things might become the norm, especially now that anyone with a laptop or iPhone can make one. Speaking of bad actors, it appears the GOP has already gotten a head start.

One of the very first ads the Republicans put out this campaign season already contained fake AI images: a smear ad against Joe Biden featuring a dystopian America that only exists in Republicans’ heads. DeSantis’ team also used a fake photo of Trump hugging Dr. Anthony Fauci in one of its ads. Of course, Trump’s kids aren’t above malicious use of AI, either. Eric Trump tweeted a fake AI image to pretend his dad’s arraignment in New York was met with adoring crowds, and Donald Trump Jr. shared a deepfake of Anderson Cooper on CNN that he knew was fake, lying about it anyway.

The real problem comes in a close race, where anything can push a candidate over the top. These tools then become dangerous weapons, according to Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation.

“It’s going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad.

There could be things that drop right before the election that nobody has a chance to take down.”

Could it happen this year? Not only are people expecting it, it already has. On the night of this year’s mayoral election in Chicago, a Twitter account masquerading as a real news organization tweeted out a video of candidate Paul Vallas. The video was completely AI-generated and racked up thousands of views before it was finally taken down. Vallas was defeated by four points in the runoff election this past April.


The 2016 election was the first to feature an egregious amount of online, social media-driven misinformation. Entire troll farms were deployed to take advantage of loose oversight by tech companies more interested in views and ad revenue than accuracy. The memes and fake stories fooled millions, with claims such as that Hillary Clinton had an FBI agent killed, that #Pizzagate was real, or that Trump personally arranged to have his private plane rescue stranded Marines. (Trump not only pushed that last fake story, but Sean Hannity even shared a doctored photo of the event.) Next year’s election, however, will make all the time we spent on fake news stories seem quaint. Now you’ll hear the fake words coming out of politicians’ own mouths.

As for how media outlets are handling this new threat, most still haven’t figured out their strategy, and they are running out of time. Vanity Fair asked several mainstream news outlets, such as The Wall Street Journal and The Washington Post, how they are preparing to deal with the avalanche of AI-generated content expected in 2024. No one responded.

Some sites have responded by enacting a blanket ban on AI content. Reddit and even Pornhub have banned deepfakes from their platforms because they classify them as “non-consensual content.” (Seriously, how is Pornhub taking the lead on this?) That hasn’t been the case for other outlets.

OpenAI, the company behind the infamous ChatGPT, recently showed off a new AI model dubbed DALL-E 3, building on its successful DALL-E 2 from 2022. This new version can draw virtually anything in any style, and the results can look hyperrealistic. For the older version, OpenAI banned the uploading of photos that depict real people. That is no longer the case with the new version. The company promises its “new detection and response techniques” will prevent misuse. Critics are skeptical. Earlier this year, a user utilized another AI image generator, Stable Diffusion by Stability AI, to create a pornographic deepfake of actress Emma Watson.
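To show just how accessible this class of tools is, here’s a minimal sketch of text-to-image generation with the open-source diffusers library and a publicly hosted Stable Diffusion checkpoint. The model ID and prompt are illustrative assumptions, not a recreation of any specific misuse.

```python
# pip install diffusers transformers torch
# A minimal sketch of open text-to-image generation; the model ID and prompt
# are illustrative assumptions, and a CUDA-capable GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly hosted SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Any text prompt in, a photorealistic-looking image out.
image = pipe("a photorealistic portrait of an astronaut on a beach").images[0]
image.save("generated.png")
```

The point isn’t this particular model; it’s that the entire pipeline runs on a consumer graphics card with a dozen lines of code and no gatekeeper.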

Then there are companies like Midjourney, which emerged onto the scene last year and has assumed a dominant role in AI-generated imagery, boasting a staggering 16 million users on its Discord server. According to Reuters, the company’s guidelines contain no restrictions on political subject matter. The application, whose pricing runs from free access to monthly subscriptions of up to $60 depending on image volume and processing speed, has become a favorite among AI researchers, and users have attested to its proficiency in generating remarkably lifelike portrayals of public figures and politicians.

According to experts who talked with NPR, state and local elections are most in danger of being exploited with this new technology, since the minimal guardrails that sites do employ mostly apply only to national figures. Furthermore, the few state laws proactively passed to protect citizens have proven toothless. California was the first state to ban deepfakes in political ads, but it soon discovered the ban was nearly impossible to enforce. Criminal laws require intent, which is hard to prove with deepfakes. In many cases, it’s nearly impossible even to find the right party to sue, and that’s assuming law enforcement is cooperative. The law also applied only in California, so if an ad was developed elsewhere, it had no effect.


Minnesota is trying a different tactic: going after people who share deepfakes. The state is currently trying to pass a law criminalizing the sharing of fake, nonconsensual sexual images as well as the sharing of deepfakes designed to hurt a political candidate. There is concern that this will run into First Amendment issues, especially the provision about harming a political candidate.

There’s no single effective measure to combat malignant deepfakes, but there is a holistic approach that includes education, lobbying, and collaboration. Organizations and state agencies should run free education campaigns teaching the public and law enforcement how dishonest actors use deepfake media, as well as methods to identify it.

Probably the most effective strategy would be to partner with industry leaders in technology to verify audio and video. Digital watermarks embedded within the video, identifying the device used to create it, can provide authentication. There’s already an open technology standard that can cryptographically sign any content created by a device, such as a phone or video camera, and document who captured the image, where, and when. The cryptographic signature is then held on a centralized, immutable ledger.
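To make the signing idea concrete, here’s a minimal sketch of the core hash-then-sign mechanism using Python’s cryptography library. It’s a toy illustration, not an implementation of the actual standard; the file name, metadata fields, and key handling are assumptions.

```python
# pip install cryptography
# A toy sketch of sign-at-capture provenance; NOT the real standard.
# "clip.mp4" and the metadata fields below are illustrative assumptions.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On a real device, this key would live in secure hardware from the factory.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

# At capture: hash the recording plus its provenance metadata, sign the hash.
video_bytes = open("clip.mp4", "rb").read()
metadata = b"device=phone-1234|time=2023-09-01T12:00:00Z|gps=41.88,-87.63"
digest = hashlib.sha256(video_bytes + metadata).digest()
signature = device_key.sign(digest)

# Later, anyone with the public key can check the clip is untouched.
try:
    device_pub.verify(signature, hashlib.sha256(video_bytes + metadata).digest())
    print("Authentic: matches what the device signed.")
except InvalidSignature:
    print("Altered: the content no longer matches the signature.")
```

Change a single byte of the video or the metadata and verification fails, which is the entire point: authenticity becomes checkable math instead of a judgment call.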

Another promising tool is called Amber Authenticate. The tool operates passively on a device, recording video in the background. At set intervals chosen by the user, the platform creates “hashes,” which are cryptographic fingerprints of the data. These hashes are permanently stored on a public blockchain. If the same video segment is processed again, the hashes will differ if any alterations to the audio or video occur. This helps detect potential tampering or manipulation of the file’s content.
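Here’s a simplified sketch of that hashing step. It assumes fixed-size byte chunks standing in for Amber’s timed video segments, skips the blockchain anchoring entirely, and uses made-up file names.

```python
# A simplified illustration of Amber-style tamper detection.
# Assumptions: fixed-size byte chunks stand in for timed video segments,
# and the "public blockchain" anchoring step is skipped entirely.
import hashlib

CHUNK_BYTES = 10_000_000  # stand-in for a user-chosen recording interval

def segment_hashes(path: str, chunk_bytes: int = CHUNK_BYTES) -> list[str]:
    """Hash a recording chunk by chunk; editing any part changes that chunk's hash."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

# At capture time, the original hashes would be anchored somewhere immutable.
original = segment_hashes("dashcam.mp4")

# Later, re-hash a copy of the file and compare segment by segment.
candidate = segment_hashes("dashcam_copy.mp4")
altered = [i for i, (a, b) in enumerate(zip(original, candidate)) if a != b]
print("altered segments:", altered or "none")
```

Comparing per-segment hashes, rather than one hash of the whole file, also tells you where an edit happened, not just that one happened.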

The Brennan Center has proposed ideas to get ahead of the problem on the disinformation front, including having the executive branch designate a lead agency to coordinate governance of AI issues in elections; having the Cybersecurity and Infrastructure Security Agency create and share resources with election offices; and having the Federal Election Commission update its ad disclosure requirements on the use of AI.

Although this issue should not be partisan, like everything else, it appears that one side, the Democrats, is taking it more seriously. That’s likely because most Democrats don’t need AI to make their opponents say hateful, stupid, or racist things; Republicans sure do. Think about it: What ridiculous thing could Trump say that hasn’t already been said in one of his Truth Social rants? Although I can’t imagine Biden even trying to use deepfakes, I can certainly imagine his enemies using them.

Lately, the targets of deepfakes seem to be primarily Biden or Democratic stars like Elizabeth Warren. A deepfake Marjorie Taylor Greene, Donald Trump, Ted Cruz, or Lauren Boebert would probably just be a waste of time. To be fair, someone did make deepfake images showing an arrest of Trump that never happened, but Trump actually seemed to relish the photos, as they played right into his martyred hands. In fact, he even sold his supporters T-shirts featuring a fake mugshot that also never happened.

However, for candidates who don’t have a cult-like following, can’t grift off federal indictments, and don’t run on a platform of bigotry and vitriol, deepfakes pose a real threat to our elections and need to be addressed sooner rather than later.

RELATED STORY: Iowa school district is using ChatGPT to generate a list of books to ban. What could go wrong?
