LOS ANGELES (CN) – E-commerce giant Amazon is under fire from U.S. lawmakers after independent tests revealed Thursday that the company’s facial recognition technology falsely matched 28 current members of Congress with images in a mugshot database.
The controversial artificial intelligence software, called Rekognition, scans images and videos to identify objects, emotions, people, text and activities, according to the Seattle-based company’s website.
Amazon, which runs Rekognition on its servers, is paid each time the software processes an image. The results are used by clients to verify online users, flag inappropriate content and in public safety activities.
Police departments and amusement parks have used the software to search social media and video archives for persons of interest and lost children, a practice that has drawn condemnation from over 70 privacy rights and civil liberties advocates who claim the technology threatens First Amendment-protected activity.
Democratic and Republican lawmakers in Congress were falsely matched with photos in a database of 25,000 publicly available mugshots, the American Civil Liberties Union said Thursday.
Congress members of color, including Georgia lawmaker and civil rights icon Rep. John Lewis, were “disproportionately” matched in the database.
In a May 24 letter to Amazon CEO Jeff Bezos, the Congressional Black Caucus expressed concern about the “profound negative unintended consequences” facial analysis could have for black people, undocumented immigrants and protesters.
Lewis and Rep. Jimmy Gomez, a California Democrat who was also falsely matched in the test, wrote a sharply worded letter Thursday to Bezos demanding a meeting over concerns Rekognition could strip people of their “constitutionally-protected rights.”
California lawmakers Mark DeSaulnier, Steve Knight and Norma Torres were also falsely matched.
The ACLU, which spent $12.33 to run the test, said it used default image match settings that Amazon sets for Rekognition.
In a statement to Courthouse News, an Amazon Web Services spokesperson said the ACLU should have used “higher threshold settings” in its test, recommending a 95 percent confidence threshold as opposed to the 80 percent level used by the group.
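The dispute turns on how a confidence threshold works: each candidate match Rekognition returns carries a similarity score, and results below the threshold are discarded, so raising the cutoff from 80 to 95 percent prunes weaker matches. The sketch below illustrates that filtering logic in general terms; it is not Amazon's code, and the candidate names and scores are hypothetical.

```python
# Illustrative sketch of confidence-threshold filtering; not Amazon's
# implementation. Candidate names and similarity scores are hypothetical.

def filter_matches(candidates, threshold):
    """Keep only candidate matches whose similarity score (0-100)
    meets or exceeds the confidence threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

# Hypothetical candidate matches returned for a single probe image.
candidates = [
    ("mugshot_0412", 96.2),
    ("mugshot_0077", 84.5),
    ("mugshot_0911", 80.3),
]

# At the 80 percent default the ACLU used, all three clear the bar;
# at the 95 percent level Amazon recommends, only the strongest remains.
print(len(filter_matches(candidates, 80)))  # 3
print(len(filter_matches(candidates, 95)))  # 1
```

The same principle applies to any score-based matcher: a higher threshold trades fewer false matches for a greater chance of missing true ones.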
Rekognition “can be a driver for good in the world” when it is used to prevent human trafficking or package theft, or to build educational apps, the statement said, adding that the software is used “almost exclusively” for rapid reviews and not for autonomous decision-making.
“Amazon seems to have missed, or refuses to acknowledge, the broader point: facial recognition technology in the hands of government is primed for abuse and raises significant civil rights concerns,” ACLU attorney Jacob Snow said in a statement. “It could allow – and in some cases has already enabled – police to track protesters, ICE to continuously monitor immigrants, and cities to surveil their own residents.”
The civil rights organization is asking Congress to pass a law barring law enforcement agencies from using facial recognition technology.
While the test was a hypothetical exercise, Amazon faces pressure from its employees and shareholders for providing Rekognition technology to police departments.
The ACLU said Amazon is “aggressively marketing” its face surveillance technology to police with claims it can identify up to 100 faces in a single image, track people in real time surveillance, and scan video from officer-worn body cameras.
Snow said changing the matching threshold could exacerbate the danger the technology poses to the more than 117 million American adults in facial recognition databases that can be searched in criminal investigations.
“An identification — whether accurate or not — could cost people their freedom or even their lives,” the ACLU statement said. “People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.”