Why We Should Regulate AI Like We Do Drugs and Guns

Whether you like it or not, an AI probably brought you here.

You might have found this article on your Facebook feed, or stumbled on it in Google News. A friend of a friend of a friend might have posted it to Twitter, where you caught it in a retweet. No matter how you got here, this article was uniquely curated and delivered straight to you on a digital platter.

Mind you, the AI didn’t do this by luck. It did it because it knows you. In fact, it’s been watching you for years: every website you visit, every Google search, every Amazon purchase, every click and keystroke that makes up the pages of your digital story. In a lot of ways, algorithms probably know more about you than you do.

Sounds creepy, right? And yet that’s the reality of our everyday lives, online and offline. AI is constantly being used to automate and streamline tasks to make things easier for us, but that convenience comes at a cost. At its best, AI can help us find a new job online or chart the most direct route for a shipment of medicine. At its worst, it can amplify some of society’s worst biases, denying home loans to people of color or spouting outright racist, bigoted statements.

While AI technology has come a long way, we’re still in the Wild West. Right now, it can easily be weaponized and exploited by bad actors for everything from invasive facial recognition to deepfaking people into compromising videos and images, and there’s woefully little regulation to prevent any of it from happening.

That’s not to say lawmakers are doing nothing about it. Some nations are beginning to take a harder look at regulating these algorithms, to vastly varying degrees.

In October, the White House unveiled its Blueprint for an AI Bill of Rights. The framework laid out principles for how machine learning algorithms should be used and accessed, while ostensibly trying to protect the data and privacy of Americans.

“In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased,” the White House wrote on the AI Bill of Rights web page. “Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”

Meanwhile, the European Union proposed the AI Act in 2021. While it hasn’t been passed into law yet, the act would require things like automated chatbots to be clearly labeled, and it would outright ban AI systems deemed an “unacceptable risk,” such as China’s social credit system. That’s ironic, considering Beijing is leading the pack when it comes to AI regulation.

China recently announced a new law requiring AI-generated media to be clearly marked as such, declaring that “services that provide functions such as intelligent dialogue, synthesized human voice, human face generation, and immersive realistic scenes that generate or significantly change information content, shall be marked prominently to avoid public confusion.”

This builds on recent AI regulation from Beijing that required all algorithms to be recorded on a registry and that punishes algorithms spreading misinformation. None of this should come as a surprise given China’s authoritarian government, but it does offer a glimpse at a potential pathway for the U.S. to pursue.

China’s aggressive moves raise the question of whether the U.S. should do more to regulate machine learning algorithms as well. If so, might this require establishing a whole separate agency dedicated to such efforts, a kind of U.S. Department of AI tasked with ensuring that AI is developed and deployed safely and equitably?

It might seem extreme at first blush, but a new governing body to regulate and provide guardrails for emerging technologies might be completely necessary, especially considering how rapidly these technologies are advancing. Ten years ago, chatbots were fairly primitive. Now they’re so sophisticated that some people even claim they’ve gained sentience. With each year they’ll grow more advanced, opening the door to a lot of potential harm that regulation could prevent.

But when it comes to emerging tech, that’s a lot easier said than done.

“I think the challenge with regulation of new technologies is whether governments act too quickly or fail to act quickly enough,” Sarah Kreps, director of the Cornell Tech Policy Institute at Cornell University, told The Daily Beast. She pointed to the recent FTX debacle as a good example of this.

“It’s easy to conclude from the FTX implosion that the government should have done more to intervene and regulate the crypto market,” she said. “And now we’re seeing, after the implosion, that customers lost billions of dollars and that the government didn’t act quickly enough.”

Therein lies the tricky balance with AI, and a perennial issue in regulating emerging technology: When should the government act, and what should that regulation look like?

The AI Bill of Rights offers a glimpse, with principles that would protect people from being discriminated against by an algorithm. For example, an AI used to sort through CVs and résumés for a job opening would have to be certified as unbiased against applications from people of color, a kind of discrimination that has happened before. That’s good as far as it goes, but new and emerging AI is quickly moving beyond these narrowly specified use cases.

“My concern is we’re now beginning to see AI systems that are more general that could be used across different sectors,” Baobao Zhang, a tech policy researcher at the Centre for the Governance of AI, told The Daily Beast. “ChatGPT is a good example of this. Students can use this software to write essays. That is a use within the educational context. But this could impact journalism. Whoever wants to can create news stories, whether it’s true or fictitious. It could be used in academic research. It could be used in other unexpected contexts.”

Then there are powerful text-to-image generators such as Lensa and Google’s Imagen, or even Meta’s text-to-video generator. These AIs have limitless potential applications across virtually any industry or sector, and therefore an enormous potential for harm. And yet the AI Bill of Rights fails to address them at all, a massive blind spot.

“This is an area that particularly worries me,” Zhang added. “As the government is thinking about regulating AI, it needs to not only think about the sector-specific uses and the harms that can arise, but anticipate more powerful AI systems that are applicable to multiple sectors.”

But it’s hard to anticipate problems in new technologies, especially when lawmakers aren’t exactly well-versed in tech. So perhaps the most impactful thing regulators can do to address these issues is simply get educated. After all, they don’t know what they don’t know. If lawmakers were more aware of the dangers and pitfalls of AI, as well as its potential for good, they could propose more appropriate regulation.

“Government officials tend not to really understand these technologies very well, and I think that’s understandable because they have a lot on their plate,” Kreps said. “So I think at the very least, there needs to be more effort to understand both how these technologies work and what the implications of them could be. That needs to be the first step. From there, they can decide how best to make these regulatory decisions.”

Kreps doesn’t necessarily believe that there needs to be a U.S. Department of AI or strict regulation. However, she is in favor of “light regulatory touches” when it comes to these emerging technologies.

The reality is that AI regulation is going to happen whether we like it or not. Both Kreps and Zhang said they believe the U.S. will eventually enact some sort of regulatory framework; the only questions are when it happens and how heavy a hand the government takes. Of course, governing bodies shouldn’t limit the development of these systems so much that they stifle innovation. But then again, what happens if they don’t do enough?

“The first step is to at least acknowledge and try to frame the problem. I don’t even think we’ve reached that stage,” Kreps said. “Let’s try to first understand what we’re looking at—and then we can go from there. But we’re not even taking that first step.”
