
‘We have no moat’: Big tech prepares to lose AI race to open-source


As the Google engineer laments, individuals using open-source tools are “doing things with $100 … that we struggle with at $10M.” Google’s only option, he insists, if it wants to rejoin the cutting edge, is to stop treating AI as its exclusive domain and start learning from the people who are already outpacing big in-house projects.

“We have no secret sauce,” writes Sernau. “Our best hope is to learn from and collaborate with what others are doing outside Google.”

On Thursday evening, the popular AI imaging tool Midjourney (which was used to generate the imagery for this video about Donald Trump’s plans for 2024) received another of its frequent updates. The latest version produces images of even more startling realism, making it possible to create what appear to be photographs of actual events, images that can’t be distinguished from the real thing without the most careful scrutiny.

On Friday, CNN Business reported that a portfolio of stocks selected by the latest version of OpenAI’s ChatGPT had “far outperformed some of the most popular investment funds in the United Kingdom,” gaining 4.9% over a period of six weeks while the average investment fund lost 0.8%.

Tools like Midjourney and ChatGPT have dominated the national conversation about the rapid changes in AI over the past year. As Business Insider reports, major companies, including the creators of these tools, have been scrambling to gobble up software talent in a race to propel their AI efforts into the lead. But as Sernau’s note makes clear, those efforts are failing, and the tools being advanced by major companies are far from the “bleeding edge” of what’s out there.

What may be most astounding is how quickly this happened, because the open-source community didn’t really get its hands on a large proprietary AI model until the beginning of March, when the weights behind Meta’s LLaMA (Large Language Model Meta AI) leaked online. That model landed with absolutely no instructions and no clues on how to modify it into something other than the sort of language model Meta had created. It didn’t matter.

“A tremendous outpouring of innovation followed,” wrote Sernau, “with just days between major developments.”

COVID-19 may not have escaped from a lab, but that LLaMA leak certainly went viral. The number of iterations circulating across the internet now measures in the thousands, if not millions. A month after the leak, there are versions of LLaMA that not only match the tools at Google, OpenAI, and Microsoft, but greatly exceed them in how readily they can be customized and “tuned” for specific purposes.

All of this makes clear just how futile efforts to slow the rapid pace of change in this field really are. Take the open letter published at the end of March, in which more than 1,000 experts called for a “pause” in AI development.

That letter warned that AI experiments “pose profound risks to society and humanity” and that AI labs were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one–not even their creators–can understand, predict, or reliably control.” But the call for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4” was futile from the start, addressed as it was to a collection of companies all seeking to gain an edge over their competitors.

Now those “AI labs” are anyone, anywhere. The barrier to entry is only some knowledge of coding and an idea—and the idea alone will do in a pinch.

Mass shootings in the United States have averaged one per day throughout 2023, but the last time a nuclear weapon was used in anger was in 1945. There are many reasons for that difference, but one big one is simply this: There are several orders of magnitude more people who are capable of initiating gun violence than there are who can order the release of a nuclear weapon. The more people who are capable of doing something, good or bad, the more likely it is to be done.

The sheer number of people now iterating on AI tools like Llama is exactly why they are running rings around companies with massive resources. It’s also why it’s likely that the next AI breakthrough won’t come with a press release.

Right now, large language models have proven themselves astoundingly capable at certain tasks. However, many experts believe they are not on the path to the kind of “artificial general intelligence” that could produce systems profoundly more capable than their creators. We had better hope those experts are right.

But as that March letter, an attempt to put the cork back in a bottle that had already spilled, makes plain, it’s not just Google that is falling behind. There are a lot of fingers out there, and we have no good way of knowing how close any of them are to building a big red button.


