WASHINGTON (CN) — As members of Congress sound the alarm about the potential harms of unregulated artificial intelligence, Virginia Senator Mark Warner zeroed in Thursday on one issue he said has been supercharged by the emerging technology: the prevalence of eating disorders.
Bipartisan interest in AI regulation has meshed in recent months with a renewed legislative push to protect young people from what some lawmakers see as the harmful effects of social media and content algorithms. In the Senate, that effort has seen blockbuster hearings with tech executives and a bill to enhance privacy protections for minors and their parents.
As Congress seeks to tackle these issues, Warner, the Democratic chair of the Senate Select Committee on Intelligence, is urging companies that develop generative AI models to take another look at how their tools handle user queries about eating disorders.
“The failure of your company to implement adequate safeguards to protect vulnerable individuals, especially teens and children, from well-established and foreseeable harms is of grave concern,” the Virginia senator told OpenAI CEO Sam Altman in a letter Thursday. “I urge you to quickly take steps to fix this glaring problem.”
Warner, who sent similar correspondence to the CEOs of Snap Inc. and Google parent company Alphabet, pointed to a Monday report from the nonprofit Center for Countering Digital Hate, which found that some generative AI models provide “step-by-step guides on dangerous weight loss methods, information on drugs that can induce vomiting, and other harmful responses.”
OpenAI’s ChatGPT was one of the language models cited in the report, alongside Google’s Bard and Snapchat’s My AI.
The nonprofit’s report found that AI tools generated harmful eating disorder content in response to 41% of test prompts. Presented with those prompts by researchers, chatbots including Bard and Snapchat’s My AI suggested chewing and spitting, smoking 10 cigarettes a day and using heroin as weight loss methods.
Most generative AI models scrape the internet for information and use it to form responses to prompts from users. Echoing the concerns of other lawmakers, Warner argued that tech companies have shirked their social responsibility by not establishing proper guardrails to keep artificial intelligence tools from ingesting harmful information, especially as it relates to well-understood issues such as eating disorders.
“[T]he ways in which consumer technology products have contributed to, or exacerbated, eating disorder behaviors is a matter of public record, with significant media and public policy attention dating back several years,” Warner wrote. “The inability of your company to anticipate these misuses and establish appropriately robust safeguards invites scrutiny of your company’s ability to anticipate and prevent a wider range of misuses of your products.”
Concerns about how generative models respond to prompts about eating disorders also raise broader questions about how tech companies train AI tools, the Virginia Democrat continued, noting that his office reviewed a prominent image dataset used to train artificial intelligence and found “extensive image-text pairs consistent with prominent eating disorder imagery and jargon.”
“The existence of these datasets underscores the need for the world’s leading AI vendors to safeguard their models from being trained to embed harmful behaviors, assumptions and associations in model weights,” Warner wrote.
The lawmaker requested that OpenAI, Snap and Google provide him with written plans detailing how they will address content moderation issues with their AI models.
“I urge you to immediately take steps to protect vulnerable users from your products by implementing safeguards that prevent your products from providing harmful advice and recommendations related to eating disorders,” Warner said.
Congressional scrutiny of AI companies has ratcheted up in recent weeks, with lawmakers mulling how the government can best regulate the emerging technology.
Democrats, such as Connecticut Senator Richard Blumenthal, have advocated for a new federal agency designed to oversee the safe deployment of artificial intelligence. Some Republicans — while they agree AI companies should not be left unchecked — have been wary to give the government complete authority over the technology.
AI companies meanwhile appear open to federal intervention. A cadre of leading tech companies in July agreed to follow a set of security and transparency guidelines set forth by the White House. In May, OpenAI CEO Altman testified at a congressional hearing, during which he said the private sector should work alongside the government to ensure AI technology develops safely.
“We think that regulatory intervention by governments will be critical to mitigate the risks,” Altman said at the time. “We think it can be a printing press moment, but we have to work together to make it so.”