Senate Democrats eye new AI transparency rules | Courthouse News Service

Senate Democrats eye new AI transparency rules

A new, sweeping piece of legislation would force companies using artificial intelligence tools to review the impacts of automating critical business decisions.

WASHINGTON (CN) — Senate Democrats on Thursday unveiled a hefty bill that they said would give Americans a better idea of how companies employ artificial intelligence and would hold businesses accountable for the technology’s flaws.

The new legislation, sponsored by a coalition of Democratic lawmakers including Oregon Senator Ron Wyden and New Jersey Senator Cory Booker, comes amid reports that AI tools can amplify bias in major business decisions, such as hiring new employees, approving loans or reviewing housing applications. Those issues and others have in recent months spurred discussion on Capitol Hill about regulating the nascent AI industry.

“AI is making choices, today, about who gets hired for a job, whether someone can rent an apartment and what schools someone can attend,” Wyden said in a statement. “Our bill will pull back the curtain on these systems to require ongoing testing to make sure artificial intelligence that is responsible for critical decisions actually works.”

If passed, the measure — known as the Algorithmic Accountability Act — would require companies that use AI algorithms or other automated processes to make critical business decisions to submit an impact assessment of those tools to the Federal Trade Commission.

The bill defines a critical decision as a judgment that “has any legal, material, or similarly significant effect on a consumer’s life” related to the access or cost of an array of services such as education, employment, housing or health care.

The proposed impact assessment would require companies to describe their need for AI decision-making tools and the intended benefits of employing such technology. Firms would also need to lay out “any known harm, shortcoming, failure case, or material negative impact” their use of artificial intelligence may have on consumers.

In particular, the bill directs companies to determine whether their automated processes exhibit “differential performance” based on consumers’ race, gender, age or religion, among other criteria.

If any negative effects are identified, companies would have to take steps to mitigate the drawbacks — a process that could include “removing the system or process from the market or terminating its development,” according to the measure.

Further, the Democrats’ bill would direct the FTC to publish an annual report on trends in automated decision-making programs and would require the agency to establish a public data repository allowing consumers to review business decisions that companies have made using artificial intelligence. The measure would also give the trade regulator resources to hire as many as 75 new staff members and would stand up a new technology-focused sub-office within the FTC.

Senate Democrats touted the legislation as an effort to protect civil liberties and ensure safe development of AI tech.

“We know of too many real-world examples of AI systems that have flawed or biased algorithms,” Booker said. “The Algorithmic Accountability Act would require that automated systems be assessed for biases, hold bad actors accountable, and ultimately help to create a safer AI future.”

Senator Jeff Merkley, Wyden’s Oregon colleague and one of the bill’s cosponsors, said the measure was “step one” in boosting the transparency of artificial intelligence deployment.

“Algorithms play a significant role in the way society interacts and behaves every single day, whether we like it or not,” Merkley said. “Big tech companies and developers must be transparent with the public about how they use our data to shape every aspect of our lives.”

The legislation’s House counterpart is also sponsored by a cadre of Democrats, including Representatives Ayanna Pressley of Massachusetts, Jamaal Bowman of New York and Pramila Jayapal of Washington state.

With AI in its infancy, but growing fast, Congress has been intent on exploring ways to regulate the emerging technology — with some lawmakers resolving not to repeat what they see as a failure to set effective guardrails for social media companies.

So far, some of the biggest companies and figures in artificial intelligence have expressed willingness to work with Capitol Hill to grow the industry responsibly. A group of tech executives, including Meta CEO Mark Zuckerberg and billionaire mogul Elon Musk, discussed AI regulation last week during a panel with members of Congress.

Musk told reporters after the meeting that the government should establish a federal agency to oversee artificial intelligence, a sentiment echoed by some lawmakers.

Meanwhile, Americans are increasingly concerned about the role AI plays in their daily lives. An August survey conducted by Pew Research Center found that more than half of respondents were more worried than excited about the technology’s growing presence. A similar number of Americans said AI tech would make their private information less secure.

Follow @BenjaminSWeiss
Categories / Government, National, Politics, Technology
