
AI experts warn against abdicating regulation to Big Tech

The director of MIT’s artificial intelligence program implored Congress to move quickly on establishing boundaries for the emerging technology.

WASHINGTON (CN) — When it comes to advancements in artificial intelligence technology and its integration into everyday life, “the genie is out of the bottle,” and Congress should act now to regulate it, a panel of experts told the House Oversight Committee.

Aleksander Mądry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology, told the lawmakers Wednesday that the growth of easy-to-use artificial intelligence, such as the predictive language model ChatGPT, has helped to bolster the technology’s popularity and speed its development.

“AI is no longer a matter of science fiction, nor is it confined to research labs,” Mądry said. “AI is being deployed and broadly adopted as we speak.”

Eric Schmidt, chair of an AI-focused nonprofit called the Special Competitive Studies Project, remarked that the recent growth of artificial intelligence could represent an inflection point in the technology’s rollout.

“I’m used to hype cycles,” Schmidt said, “but this one’s real, in the sense that enormous amounts of money are being raised to implement and build these systems. The sense to me is that this moment is a clear demarcation of before and after.”

Congresswoman Nancy Mace said Wednesday that there are “serious concerns” about the ethical implications of artificial intelligence.

Delivering an opening statement that she later revealed had been written by ChatGPT, the South Carolina Republican urged: “As we explore the potential of AI and generative models, it’s essential that we consider the impact they may have on society. We must work together to ensure that AI is developed and used in a way that is ethical, transparent and beneficial to all of society.”

There are plenty of benefits to AI assistance, Mądry said, but the technology also poses significant risks. The MIT professor noted that, in its current form, artificial intelligence isn’t completely reliable. Models like ChatGPT tend to “hallucinate” and give users incorrect answers to questions. AI also has a “propensity for promoting bias and enhancing social inequities,” and can be used to disseminate convincing manipulated media, he said.

Given those serious drawbacks, Mądry pressed the federal government to “step up” and regulate emerging artificial intelligence companies. Congress “cannot abdicate AI to Big Tech,” he said. “As capable as they are, they have different use cases, they have different priorities.”

Merve Hickok, senior research director at the Center for AI and Digital Policy, added that the U.S. does not “have the guardrails in place, the laws that we need, the public education, or the expertise in the government to manage the consequences of these rapid technological changes.”

Hickok recommended that Congress hold more hearings on artificial intelligence to explore the technology’s risks and benefits for the public. She also suggested that the Office of Management and Budget move ahead with proposed rulemaking that would govern the use of AI in the federal government — guidance the agency was directed to develop under the Artificial Intelligence in Government Act of 2020.

“We need to establish guidelines for AI development and use, we need to establish a clear legal framework to hold companies accountable for the consequences of their AI systems,” Representative Mace said, via ChatGPT.

Schmidt sees the U.S. government as having a particular role to play in setting the tone on artificial intelligence for the rest of the world.

“What we are doing is working on systems that will affect the way people perceive their world,” Schmidt said, “and I think the best thing for America to do is follow American values, which include robust competition, government funding of basic research and using innovators to deliver on this.”

Meanwhile, Mądry said the government should not make the same mistake it made with social media companies and allow AI companies, which he called “social media on steroids,” to self-regulate.

“The hope here is that we don’t use the same playbook we used for social media,” Mądry said. “If that doesn’t change, I’m extremely worried.”
