WASHINGTON (CN) — Intensifying the country’s investment in artificial intelligence, President Donald Trump signed an executive order Monday that would open up access to government data and computing resources.
Known as the American AI Initiative, the executive order also calls for the creation of safety guidelines and directs federal agencies to use fellowships, apprenticeships, and computer-science education to give workers in the field relevant experience.
Trump’s push today follows a 2017 announcement by China that it intended over the next 10 years to advance its standing on the world economic stage by becoming a global leader in artificial intelligence.
Jeffrey Popyack, a professor of computer science at Drexel University, said it is important for the United States to invest in the technology.
“The rest of the world is doing so,” he said Monday. “We don’t want to be left behind.”
Tom Mitchell, interim dean of Carnegie Mellon University’s School of Computer Science, suggested that one area where the government can make big improvements at low costs is in establishing standards for capturing and sharing data.
“Imagine the advances we could make in medical diagnosis and treatment if all electronic medical records were stored in a common format that allowed the Centers for Disease Control and Prevention to aggregate and mine that data for new insights,” Mitchell said.
Mitchell also highlighted the education of the U.S. workforce as an area where the government can step in.
“We have a severe shortage in the supply of AI experts, relative to the demand from industry, and this is slowing down the development of AI-based improvements across the economy,” he said. “What can the government do? We might consider providing tax incentives or tuition support for individuals who are already programmers or engineers to become AI experts by taking courses designed for this purpose.”
Mitchell suggested pushing K-12 education in STEM fields — in particular by exposing high school students to machine-learning methods.
Popyack said there is significant potential for AI growth wherever there is a need for protection, efficiency, discovery or safety.
But Popyack also noted that even well-intended artificial-intelligence projects can go awry because of inherent human biases.
Popyack cited as one example Amazon’s recent testing of software designed to evaluate job applicants. The system proved biased against women, consistently rating male applicants higher than their female counterparts.
Popyack also highlighted the potential for privacy intrusions and other ethical dilemmas inherent in machine learning.
He noted, for example, that law enforcement can use AI to predict who will be involved in a crime — “and that’s both who is a danger and who is the victim.”
Ethically gray situations will also arise as the country invests more in self-driving automobiles, Popyack said, offering the scenario of a car forced to choose between harming its driver or several pedestrians.
“There’s just so many perils out there, there are ethical challenges, and there’s a lot of worry that AI can run amuck in a way that can be very dangerous to deal with,” he said.