President Joe Biden yesterday issued a sweeping executive order aiming to impose federal regulation on the development of artificial intelligence technologies, such as large language models like ChatGPT. The executive order cites the emergency powers of the Korean War-era Defense Production Act as the justification for imposing federal regulation on AI technologies. As my Reason colleague Eric Boehm has pointed out, “the Defense Production Act has become a license for central planning.” Taken as a whole, the new order amounts to federal central planning for artificial intelligence.
Among other things, the order will “require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government,” according to the White House. Specifically, the new federal AI regulators are supposed to oversee any “foundation model” that purportedly “poses a serious risk to national security, national economic security, or national public health and safety” by requiring that developers report to the secretary of commerce the results of extensive “red-team safety tests.” Roughly speaking, foundation models are large language models like OpenAI’s GPT-4, Google’s PaLM 2, and Meta’s Llama 2. Red-teaming is the practice of creating adversarial squads of hackers to attack AI systems with the goal of uncovering weaknesses, biases, and security flaws. As it happens, the leading AI tech companies—OpenAI, Google, Meta—have been red-teaming their models all along.
The National Institute of Standards and Technology is charged with setting up the additional safety standards with which AI developers are supposed to comply. Complying with such reporting requirements will likely slow down the safety and security testing already undertaken by Big Tech developers while driving out smaller competitors who cannot afford the costs of dotting regulatory i’s and crossing bureaucratic t’s. An even bigger worry is that the new AI safety testing orders will quickly evolve into the digital equivalent of the deadly slow, hyper-precautionary FDA drug safety approval scheme.
It’s hard to see how U.S. national defense can be enhanced by slowing down domestic AI innovation. After all, U.S. regulations will not apply to foreign competitors who will be able to catch up and surpass U.S. artificial intelligence developers hampered by bureaucratic fetters.
In addition, the executive order directs the Department of Commerce to develop techniques for watermarking the outputs of AI technologies. This means embedding information into photos, videos, audio clips, or text to let users know that they were generated by AI. As it happens, AI companies like OpenAI and Google are already doing that. Of course, scammers and propagandists will simply ignore watermarking when they create their misleading deepfakes.
Biden’s order also directs various federal agencies to address the problem of AI “job displacement” and “job disruption.” And doubtlessly, such a powerful suite of technologies will affect nearly everyone’s work activities and prospects. But keep in mind the dire prediction back in 2014 that robots would steal one in three human jobs by 2025. There’s only just over a year to go, folks, and the U.S. unemployment rate is the lowest it’s been since 1969.
On the plus side, Biden’s executive order does instruct the Department of Homeland Security to “modernize immigration pathways for experts in AI and other critical and emerging technologies.” This is always a good idea since such immigrants significantly boost U.S. technological progress, employment, and economic growth.
“White House executive order threatens to put AI in a regulatory cage,” is how the free market R Street Institute characterized the Biden administration’s regulatory proposals. In a statement, Carl Szabo, vice president and general counsel for the technology lobbying group NetChoice, warned that Biden’s new executive order amounts to an “AI red tape wishlist” that “will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation.” He added that the executive order “puts any investment in AI at risk of being shut down at the whims of government bureaucrats.”
Over at Forbes, Competitive Enterprise Institute Senior Fellow James Broughel glumly warns, “Biden’s AI safety order could well be the biggest policy mistake of my lifetime.”