WASHINGTON (CN) — While lawmakers scramble to reach a consensus on regulating the growth of artificial intelligence, new research reveals most Americans are more concerned than excited about the accelerating deployment of the technology.
The emergence of AI-powered language models, image generators and other tools has sparked conversation on Capitol Hill about how to balance the benefits of artificial intelligence with its potentially serious or unforeseen drawbacks. Such congressional scrutiny comes as AI use is becoming ever more visible in daily life. Teachers and university professors are grappling with students who use ChatGPT to write essays. Lawyers are using language models to prepare court briefs. The Writers Guild of America, months deep into an industry-shaking strike, has cited Hollywood’s flirtation with artificial intelligence as a central grievance.
These issues appear to be resonating with Americans, according to data published Monday by the Pew Research Center.
The survey of more than 11,000 respondents found that 52% felt more concerned than excited about the increased use of artificial intelligence in daily life. That figure represents a 14-percentage-point jump from the 38% of Americans who responded similarly in research conducted last year.
Just 10% of respondents in this year’s survey said they were mostly excited by developments in AI tech, down 5 percentage points from 15% the year prior. Around 36% of respondents said they were equally excited and concerned by the prospect.
While the majority of respondents were wary of AI’s impact on daily life as a whole, they were more divided on how the technology would affect specific aspects of society. Nearly half of Americans agreed that artificial intelligence will help people find products and services on the internet. A slim majority said that AI would also make cars and trucks safer and improve health care outcomes. Respondents were similarly split on whether the technology would help people find accurate information online or assist police.
However, Americans were more unified in their pessimism about privacy, with 53% of respondents saying the increased use of AI would hurt the ability to keep personal information private.
In contrast to that negative outlook, a second Pew survey published Monday found that just 24% of Americans who had heard of the generative language model ChatGPT had ever used the tool. Respondents were also divided on whether such AI models would affect their job security. The majority of people said that chatbots would have a minor impact on their jobs or none at all. However, more than 50% of respondents said that AI tools would have major effects on software engineers, graphic designers and journalists.
In Washington, lawmakers have heard directly from Americans who are worried about the negative consequences of rapidly developing artificial intelligence. The Senate in June heard from an Arizona woman who was nearly extorted out of $50,000 by scammers using an artificial intelligence tool to spoof her young daughter’s voice. In May, meanwhile, copyright experts warned a House committee that AI tech could be used to skirt intellectual property laws and recreate the work of human artists.
As it deliberates on how best to tackle the task of regulating artificial intelligence, Congress has secured the support of some of the largest AI companies. Sam Altman, CEO of ChatGPT developer OpenAI, told lawmakers that he supports federal intervention to “mitigate the risks” of unchecked AI expansion.
“[W]e can and must work together to identify and manage the potential downsides, so that we can all enjoy the tremendous upsides,” Altman told the Senate Judiciary Committee.
The push for increased AI guardrails has enjoyed bipartisan support in Congress. Connecticut Democrat Richard Blumenthal and Missouri Republican Josh Hawley — who both head up the Senate Judiciary Committee’s technology subpanel — have worked closely in recent months to develop legislation that would require artificial intelligence vendors to license their products with the federal government. The lawmakers also hope to ensure that AI companies cannot invoke Section 230, the statute that shields some internet-based companies from legal action over third-party content.
A spokesperson for Blumenthal did not immediately return a request for comment on Monday’s survey results. However, the Connecticut senator has said that Congress should learn from its refusal to regulate nascent social media companies as it navigates the developing world of AI.
“Congress failed to meet the moment on social media,” Blumenthal said in May. “Now, we have the obligation to do it on AI before the threats and the risks become real.”