
AI is no joke when it causes $500 billion in market losses


Though the market recovered almost as quickly as it shed those billions on Monday morning, it's unclear how much effect the fake images, and the mostly fake accounts that spread them, had on the overall market results for the day. It's also unclear just how much money may have vanished in the form of fees applied to funds, including many retirement funds, whose investors are charged each time the fund buys or sells stock.

Most of the change in the stock market was probably not generated by human beings hitting the panic button out of concern over some possible catastrophe; most stocks aren't traded by human beings at all. Massive movements, like the one that sent the S&P plunging and then rebounding on Monday, are driven by a different kind of AI: automated trading systems that continuously sweep up and evaluate information from every direction.
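
To see why fake images can move real money, a toy illustration may help. Below is a deliberately crude Python sketch of the kind of rule such a system might apply to a feed of headlines. The keyword list, threshold, function names, and sample headlines are all invented for illustration; real trading systems use far more sophisticated models and data feeds, but the failure mode is the same: alarming-looking inputs can flip an automated decision with no human in the loop.

```python
# A minimal, hypothetical sketch of a headline-driven trading rule.
# Everything here (keywords, threshold, sample headlines) is invented
# for illustration; real systems use far richer models and data feeds.

NEGATIVE_KEYWORDS = {"explosion", "attack", "crash", "collapse", "default"}

def headline_score(headline: str) -> int:
    """Count alarming keywords; a crude stand-in for a real sentiment model."""
    words = {word.strip(".,!?\"'").lower() for word in headline.split()}
    return len(words & NEGATIVE_KEYWORDS)

def trading_signal(headlines: list[str], panic_threshold: int = 2) -> str:
    """Aggregate scores across recent headlines into a toy SELL/HOLD decision."""
    total_alarm = sum(headline_score(h) for h in headlines)
    return "SELL" if total_alarm >= panic_threshold else "HOLD"

if __name__ == "__main__":
    # Two fabricated headlines, one echoing a fake image, are enough
    # to push this crude rule over its panic threshold.
    feed = [
        "Reports of explosion near major landmark",
        "Futures crash as photo spreads on social media",
        "Tech earnings beat expectations",
    ]
    print(trading_signal(feed))  # prints "SELL"
```

The point is not that real systems are this naive; it's that nothing in the chain from headline to trade requires a human being to pause and ask whether the underlying image is real.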

But this situation was not completely free of human beings. Someone ordered those images from Midjourney or a similar AI-based image generator. Someone put them on social media. Someone probably started the market tumble.

But none of those human beings were critical to this event. With half a day’s coding or less, it would be perfectly possible to create a crisis bot that would sift through the current news, order up images of a plausible disaster, mount them on social media, boost them with thousands or tens of thousands of retweets and links, push them with apparently authoritative accounts, and pitch them in a way tailor-made to trigger a response from the bots that operate the stock market, the bond market, the commodities market, or just about any other aspect of the economy.

They could do it regularly, randomly, or on targeted occasions. They could do it much more convincingly than these two images—and in ways that were much more difficult to refute. Whether what happened on Monday was a trial balloon, cyber warfare, or someone just farting around, we should be taking the results of that action very, very seriously.

Two fake, easily refuted images made $500 billion vanish. Next time, the images could be more plausible, the distribution more authoritative, and the effect more lasting.

There’s also nothing that says any future AI-created damage will be limited to the economy. Despite some dire warnings in 2016 and 2020, those elections remained largely free of “deepfake” videos and audio recordings using altered voices. That will not be the case in 2024. You can bet on it.

Everything that previously took at least a modicum of knowledge and a few hours of effort is much, much easier now. In fact, it's so easy that ordinary scammers can spoof not just a phone number, but the voice of a friend or relative when they call to explain why they desperately need an infusion of cash.

The next time someone produces a tape like the one in 2012 where Mitt Romney spilled his guts to millionaire donors, or even Donald Trump’s 2016 “Access Hollywood” video, how will you know if it is real? Candidates will just declare any unflattering revelation to be fake. If someone sent Fox News a video today that purported to show Joe Biden making a deal with China to abandon Taiwan in return for a billion dollars, do you think they wouldn’t show it? Imagine the fictions they could create and source to Hunter Biden’s laptop.

Given enough time, experts can determine whether an image, video, or audio recording is a fake, but not before it has spread widely. Every refutation can be countered by more fakes. And all the debunking in the world won't sway people who have an ideological interest in believing those fakes, or stop those fakes from spreading.

What happened on Monday went by so fast that it was easy to miss, and even easier to dismiss. We can’t afford to do either.

When the leaders of AI companies appeared before Congress last week, they practically begged to be regulated.


Right now, human beings both write and understand the code behind the large-model, limited-purpose AIs that dominate the news cycle. Even so, it's impossible for humans to trace the decisions these systems make, because those decisions emerge from interactions among the millions, or billions, of documents the systems have been fed. Very soon, our understanding won't even extend to the code itself, because that code will be written and modified by other AI systems.

The threat from these systems isn’t some far-future concern. It’s not a science fiction scenario that involves Skynet or the robot uprising. This is a right here, right now problem in which these systems are already powerful enough to eliminate millions of jobs, change the direction of the economy, and sway the outcome of an election. Like a hammer, they are tools. Like a hammer, they can also be weapons.

Until we put some regulations on these systems, we are all part of the experiment, like it or not. If we don’t put that regulation in place almost immediately, there’s a very good chance that it will be too late.


