How Congress Fell for Sam Altman’s AI Magic Tricks

OpenAI CEO Sam Altman made a bold demand in front of the Senate Judiciary Committee on Tuesday: Regulate the very industry that his company is at the forefront of.

“I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that,” Altman said at the hearing. “We want to work with the government to prevent that from happening.”

Altman’s testimony included his thoughts on the dangers of artificial intelligence, and warnings that tools like ChatGPT—developed by OpenAI—had the power to destroy jobs, even if the hope was that they could create new ones. Altman went as far as to recommend that lawmakers create a separate agency to regulate AI.

Compared to the pugilistic congressional tech hearings of the past, from Facebook to TikTok, Tuesday’s rendition was surprisingly cordial—likely buoyed by a dinner Altman shared with around 60 lawmakers the evening before, where he reportedly demonstrated ChatGPT’s capabilities.

Many who were present reacted as if they had just witnessed a magic show, rather than a tech demo.

“I thought it was fantastic,” Rep. Ted Lieu (D-CA) told CNBC. “It’s not easy to keep members of Congress rapt for close to two hours, so Sam Altman was very informative and provided a lot of information.”

“He gave fascinating demonstrations in real time,” Rep. Mike Johnson (R-LA) told the broadcaster. “I think it amazed a lot of members.”

But, despite the intense spotlight centered on Altman, his hearing wasn’t the only one happening that day. In fact, just three floors up in the same building, the Senate Committee on Homeland Security & Governmental Affairs was holding another AI hearing at the exact same time, featuring witnesses such as U.S. Government Accountability Office chief data scientist Taka Ariga, Stanford law professor Daniel Ho, University of Tennessee AI researcher Lynne Parker, and journalist Jacob Siegel—and it was arguably more important.

“It got maybe 1/100th of the attention,” Suresh Venkatasubramanian, director of Brown University’s Center for Tech Responsibility, told The Daily Beast. “It was looking at all the ways in which AI is being used across the board to actually impact our lives in ways that have been going on for years, and that people still aren’t talking enough about.”

That sounds pretty important, but people remain hyper-fixated on the Altmans and OpenAIs of the world. The conversation has zeroed in almost entirely on generative AI—machine learning systems that create content when prompted (e.g., chatbots like ChatGPT and Bard, or image generators like Midjourney and DALL-E). In doing so, we might be missing the actual dangers of AI—and opening ourselves up to a lot of harm in the process.

Derailing the AI Hype Train

Since OpenAI released ChatGPT in November 2022, generative AI has dominated the headlines and the discourse surrounding all things machine learning. It has created a whirlwind of hype and lofty promises—incredibly attractive to executives and investors, but ultimately detrimental to workers.

We’ve already seen instances of this playing out. Media companies like Insider and BuzzFeed announced they would employ large language models (LLMs) to help write content—before laying off swaths of their workforces. The Writers Guild of America went on strike in April partly over disputes with the Alliance of Motion Picture and Television Producers regarding the use of AI in the writing process. And scores of businesses have already begun using LLMs and image generators to replace copywriters and graphic designers.

In reality, though, generative AI is quite limited in what it can accomplish—despite what Altman and other tech executives might say. “My worry is that, with these systems that are powerful at completing words or pictures for us, we will feel like we can replace human creativity and ingenuity,” Venkatasubramanian said.

Venkatasubramanian’s concern is that employers will be drawn in by the allure of cutting costs with AI—and if they aren’t, then companies might feel pressure from shareholders and competitors. “Then you get this rush to the bottom to try and do cost savings,” he explained. “I’m almost certain there will be this rush, and then a reaction where people realize that it doesn’t work very well and they made a mistake.”

Emily M. Bender, a professor of linguistics at the University of Washington, largely agrees. “The thing about hype is it generates a sense of FOMO,” she told The Daily Beast. “If everybody else is on board with this magic thinking machine, then I have to figure out how to use it, right?”

This is what makes Altman’s time on the Hill particularly eyebrow-raising to Bender, Venkatasubramanian, and many other AI experts. Not only did Altman get an incredible amount of media coverage from his testimony, but he also spent the evening before wining and dining the very lawmakers he was about to appear before. He made astonishing and frightening statements about the power of AI and, more specifically, his company’s technology.

Meanwhile, lawmakers welcomed his recommendations with open arms and ears. During the hearing, Sen. John Kennedy (R-LA) implored Altman: “This is your chance, folks, to tell us how to get this right. Please use it. Talk in plain English and tell us what rules to implement.”

Bender didn’t mince words when describing her reaction to the hearing. “It’s marketing,” she said. “When you have people like Sam Altman saying, ‘Oh, no! What we’ve built is so powerful, you better regulate it’ or the people who signed that AI pause letter saying, ‘This stuff is so powerful, we’ve got to stop for a little while,’ that is also marketing.”

While Altman has said he welcomes regulation to rein in generative AI, the company’s refusal to be more transparent about the dataset used to train ChatGPT—and its rejection of requests to open up API access to third-party apps—suggests OpenAI isn’t as warm to regulation as it claims.

“His company is the one that put this out there,” Venkatasubramanian said. “He doesn’t get to opine on the dangers of AI. Let’s not pretend that he’s the person we should be listening to on how to regulate.”

He added, “We don’t ask arsonists to be in charge of the fire department.”

Meanwhile, Venkatasubramanian noted, Congress and the rest of the world have paid much less attention to other forms of AI that have already actively harmed people for years. “It’s the tools used to decide if someone’s committing fraud when asking for benefits, whether someone should be put in jail prior to their hearing, whether your resume should let you go on to the next stage of an interview, whether you should get a particular kind of medical treatment. All of that is using AI right now.”

Now You See It, Now You Don’t

Altman may not be serious in his call for more regulation over AI, but he’s not wrong—and there are already things that lawmakers can do. Venkatasubramanian co-authored the Blueprint for an AI Bill of Rights, which provides a set of guidelines for deploying machine learning algorithms safely and protecting the data and privacy of everyday people. While the framework has yet to gain traction in Congress, states like California are already pushing bills inspired by it, Venkatasubramanian said.

And while Altman’s suggestion to create a separate agency dedicated to AI regulation isn’t a bad idea, Bender noted that existing governing bodies already have the power to regulate the companies behind these AI technologies.

In fact, the Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission released a joint statement last month stating that there is no AI exemption and that “the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

Creating a separate governing body for the technology would be “a move towards misdirection,” Bender argued. “It sounded like [Altman] was advocating for a separate regulatory agency that will be concerned with ‘AI’ separate from what this technology is actually being used to do in the world. That feels like a move towards effectively denying the jurisdiction of existing laws and agencies.”

Whether any meaningful policy will materialize remains to be seen. Congress’ history with emerging technologies suggests it will move at a similarly glacial pace, but Tuesday’s hearing indicates lawmakers are at least cautiously open to the idea of regulating AI.

Yet, during the hearing and at the dinner before it, what lawmakers and the world at large saw was effectively a magic show, Bender said—keeping people’s attention with one hand while the other moved to do something else. From her perspective, the entire testimony underscored a lesson she has been hammering since well before ChatGPT first crashed onto the tech scene.

“Resist the urge to be impressed,” Bender warned. “These things are impressive, especially at first blush. But it’s important to keep a critical distance because the people building it are trying to sell you something.”
