
OpenAI Knows GPT-4 Is Risky—But Won’t Do a Thing About It

On Tuesday, OpenAI launched GPT-4, its latest and most powerful large language model (LLM) yet. Rumors have swirled about the bot’s supposed prowess since the release of ChatGPT, powered by its predecessor GPT-3.5, in Nov. 2022. Now, it can safely be said that GPT-4 mostly lives up to the hype.

OpenAI released a 98-page technical report along with a livestream demonstrating the LLM and its abilities. These include passing major tests like the bar exam, the LSAT, and AP exams with flying colors; aiding pharmaceutical drug discovery; writing coherent books; creating video games; developing an app that recommends movies; turning a napkin sketch into a working website; explaining why novelty phone chargers are funny; and even generating entire lawsuits with just one click.

Sam Altman, CEO of OpenAI, said in a tweet that GPT-4 was “our most capable and aligned model yet.” He later added that “it is still flawed, still limited, and it seems more impressive on first use than it does after you spend more time with it.”

Unlike its predecessors, GPT-4 is multimodal, meaning it can accept both image and text inputs to generate text outputs. This gives it an entirely new dimension to work with. Instead of just typing in a prompt and getting a response, users can upload a photo for the model to respond to. It offers a glimpse into a potential future for these LLMs in which video, audio, and maybe even live streams can be used as inputs.

The new step forward is undoubtedly exciting for AI evangelists and tech companies looking to cash in on the growing bot trend. With an upcoming API release, GPT-4 can soon be incorporated into existing apps and software, allowing it to streamline and automate work like never before.
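What that integration might look like in practice: here is a minimal sketch, assuming GPT-4 is exposed through the same chat-completions interface in OpenAI’s Python client that ChatGPT uses. The API-key handling and the ticket-summarization task are illustrative assumptions, not details from OpenAI’s announcement.

```python
# Minimal sketch: wiring GPT-4 into an app via OpenAI's chat-completions API.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable;
# the support-ticket summarization task is a hypothetical example.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Summarize each support ticket in one sentence."},
        {"role": "user", "content": "Customer says the app crashes on launch after the 2.3 update."},
    ],
    temperature=0.2,  # low temperature keeps outputs consistent for automation
)

print(response["choices"][0]["message"]["content"])
```

A single call like this is all it would take to drop the model into an existing workflow, which is exactly why businesses are so eager for the API.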

For the rest of us, though, the usual thoughts and fears of a robot apocalypse inevitably start creeping in—and OpenAI’s launch of GPT-4 gives a lot of cause for concern.

This is something that OpenAI is seemingly aware of, too. The accompanying technical report on GPT-4 includes sections like “Potential for Risky Emergent Behaviors,” in which the authors highlight a few of the bot’s potential pitfalls—including one instance in which it successfully convinced a human to solve a CAPTCHA for it via text message. GPT-4 is also not immune to the biases that have plagued LLMs in the past, and it still hallucinates (meaning it confidently presents false information as fact).

“Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of specific applications,” the authors of the paper wrote.

The report also noted that GPT-4 could have an outsized impact on “the economy and workforce,” which should be a “crucial consideration for policymakers and other stakeholders.” That spans everything from AI and generative tools replacing jobs outright and displacing workers, to augmenting tasks in roles like call centers, writing, and coding.

Ominously, the paper seemed to warn that while GPT-4 is incredibly powerful, it’s really just the beginning. It will help usher in more sophisticated models, but it could also create unforeseen ripple effects and consequences for entire industries and emerging technologies.

“We think it is important that workers, policymakers, and researchers not focus overly on just the current state of capabilities,” the authors wrote. “We expect GPT-4 to accelerate development of new applications built on top of generative models, and that these applications will often solve more complex tasks than the model on its own.”

Despite the authors devoting a good portion of the report to acknowledging these risks, critics say that OpenAI has done very little to actually address them. Moreover, they accuse the company of being incredibly cagey about how the bot actually works.

Throughout nearly 100 pages of the technical report, the authors (who go only by “OpenAI”) never explain how GPT-4 was built or trained, or what datasets were used to train it. Nor is there any information about the hardware used or the energy required to operate it.

“I think what’s bothering everyone is that OpenAI made a whole paper that’s like 90-something pages long,” William Falcon, CEO of Lightning AI and creator of the open-source Python library PyTorch Lightning, told VentureBeat. “That makes it feel like it’s open-source and academic, but it’s not. They describe literally nothing in there.”

“Please @OpenAI change your name ASAP,” David Picard, an AI researcher at Ecole des Ponts ParisTech, said in a tweet. “It’s an insult to our intelligence to call yourself ‘open’ and release that kind of ‘technical report’ that contains no technical information whatsoever.”

This, of course, was done purposefully. OpenAI even acknowledges the intentional lack of transparency in the paper, stating very clearly that the company has done so for one reason only: money.

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” the paper said.

This makes GPT-4 the most secretive and opaque product from OpenAI yet. That’s especially ironic considering that OpenAI was founded as an open, transparent non-profit dedicated to advancing “digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” as the company wrote in an early blog post. “Since our research is free from financial obligations, we can better focus on a positive human impact.”

It seems that, with the launch of GPT-4, OpenAI has all but done away with the founding principles it originally set out with. What does it mean, then, that its latest, most powerful model is a black box within a black box?

“They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity,” Emily Bender, a professor of computational linguistics at the University of Washington, said in a tweet.

For any emerging tech, but especially AI, transparency is vital. Not only does it give researchers and users the ability to learn exactly why these models work, it also lets them know what kinds of biases and harms the models could pose before they use them. People have a right to know how their machines work—especially if the machine can hurt them.

Given all the ink that OpenAI spilled on the potential risks posed by GPT-4, one would think the company would take the simple step of laying out its process for building the bot. Unfortunately, the paper makes it seem like OpenAI is more interested in passing the buck to vague “policymakers” and “stakeholders”—putting the responsibility for figuring out the dangers of its creation on the rest of us. The buck doesn’t stop with OpenAI unless there’s a dollar sign on it. It stops with us.

The problem here, of course, is that legislators like the U.S. Congress are glacially slow to implement any kind of meaningful policy, especially when it comes to emerging technology. Sure, there have been proposals like the AI Bill of Rights—but that exists more as a conceptual framework than as hard policy. And if lawmakers ever do get around to it, they’ll run up against a bulwark of tech lobbyists and corporations looking to keep the government from interfering with their bottom line.

GPT-4 is undoubtedly a big step forward for AI. Not only is it potentially the most sophisticated LLM of its kind, it’s also capable of going beyond text and using images as inputs, which offers a look into the future of the emerging tech. But it’s also a black box, one that has the potential to fundamentally change the landscape of entire industries and livelihoods—and unleash a world of danger and chaos along the way. How OpenAI intends to ensure a bright future and reduce the potential for something threatening remains to be seen.
