SAN FRANCISCO (CN) – As attendees of the Ninth Circuit Judicial Conference arrived for the Wednesday morning panel discussion, two large state-of-the-art security robots wandered slowly and menacingly around the perimeter of the conference room.
William Santana Li, CEO of Knightscope, which developed the robots, described some of their technical features: the machines are trained to patrol designated areas and flag unusual activity to law enforcement – presumably of the human variety.
“These robots can predict and prevent crime and help make the United States one of the safest countries in the world,” Li said.
Li’s presentation kicked off a full day of programming in San Francisco on Wednesday, where legal analysts, technological innovators and sociologists explored the emerging intersection of technology and the law.
The panelists carefully avoided weighing in too heavily on the cybersecurity controversies of the day, including Russia’s apparently successful hack of Hillary Clinton campaign chair John Podesta’s emails, which some say unduly influenced the last election.
Instead, the panelists focused on the rise and predominance of the internet and its influence on human communication, along with emerging trends in the field of artificial intelligence.
“Are robots going to take over the world? The short answer is no,” said Yann LeCun, director of AI research at Facebook. “Machines are still really stupid.”
The assertion seemed to relax the audience, many of whom were eyeballing the security robots still on patrol.
LeCun said that while AI has advanced to the point where computers are nearly unbeatable against humans in games like chess, machine learning is still in its infancy.
Nevertheless, judges and attorneys must grapple with the implications of AI and designer algorithms sooner rather than later.
For example, panelists envisioned a world where judges would be given a score about a given defendant that rates the likelihood he or she would commit another crime.
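The scoring tools the panelists described are typically statistical models that weigh a handful of factors about a defendant into a single probability. The sketch below is purely illustrative – the features, weights and formula are invented for this article and do not reflect any real risk-assessment instrument:

```python
import math

def risk_score(features, weights, bias=0.0):
    """Toy logistic model: maps weighted defendant features to a
    0-to-1 'likelihood of reoffense' score. Illustrative only."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented weights -- not drawn from any actual instrument.
weights = {"prior_arrests": 0.35, "age_under_25": 0.8, "employed": -0.6}

defendant = {"prior_arrests": 2, "age_under_25": 1, "employed": 0}
print(round(risk_score(defendant, weights, bias=-1.5), 2))  # prints 0.5
```

The notice questions Crawford raises follow directly from a design like this: the weights are usually proprietary, so a defendant scored by such a model may have no way to see, let alone contest, the factors that produced the number a judge receives.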
“The question becomes how does notice fit,” said Kate Crawford, a senior fellow at New York University’s Information Law Institute. More specifically: how and when should people be made aware that information is being collected about them, and how can they seek recourse if that data is wrong or biased?
These are some of the questions Crawford said sociologists and law professors must wrestle with as they move forward.
“We don’t have all the answers, so this is the beginning of hard work,” she said.
On the issue of cybersecurity, California Supreme Court Justice Mariano-Florentino Cuellar said he was wary of any claim by a company or individual to have designed a fail-safe program or system.
“There is no such thing,” he told the audience.
Instead, society needs to focus on minimizing risk and then educating human users of computer systems on how to avoid falling into traps, he said.
“Ninety-five percent of cybersecurity problems come down to a human clicking on a link they shouldn’t click on,” Cuellar said. “The problem is that humans are easy to hack.”
Humans will divulge secrets, click on links and fall prey to all manner of tricks – which is far more common than a computer-science mastermind finding a glitch in the system to gain entry and wreak havoc.
As AI improves and cybersecurity threats multiply, the panelists said, another problem will be how the benefits of new technology are distributed.
Crawford said studies show the benefits of technology most often accrue to the wealthy first, then trickle down to lower-income strata of society. The risks and problems, meanwhile, fall disproportionately on those with lower incomes – a point Cuellar emphasized.
“The question becomes who will win and who will lose, and who will get stuck when something goes wrong,” he said.