
Researchers Mix Satellite Photos and Machine Learning to Find Poverty Zones

(CN) — Logistical problems in identifying impoverished communities may become relics of the past, as researchers are now combining satellite data with advanced computer algorithms to bypass traditional hurdles.

In a study published Friday in the journal Science, Stanford University researchers proposed a way to use machine learning — the science of designing computer algorithms that learn from data — to interpret data acquired from high-resolution satellite imagery.

Accurate and reliable information on the location of impoverished zones is sorely lacking, forcing aid groups and other international organizations to conduct door-to-door surveys to supplement existing data, an expensive and time-consuming process.

Using these machine-learning methods, the team found pockets of poverty across five African nations that had previously lacked reliable survey information.

"We have a limited number of surveys conducted in scattered villages across the African continent, but otherwise we have very little local-level information on poverty," said study co-author Marshall Burke. "At the same time, we collect all sorts of other data in these areas — like satellite imagery — constantly."

Satellite imagery on its own provides relatively limited data, making it difficult for researchers to use the information for concrete scientific findings. However, high-resolution satellites are capable of showing how much electric light is used in specific areas, which correlates with higher or lower economic resources.

While machine learning works best when fed vast amounts of data, the limited poverty data available at the outset presented a challenge.

"There are few places in the world where we can tell the computer with certainty whether the people live there are rich or poor," Neal Jean, a doctoral student at Stanford's School of Engineering and the study's lead author, said. "This makes it hard to extract useful information from the huge amount of daytime satellite imagery that's available."

Since areas that are brighter at night tend to be more financially developed, the solution involved combining images of the Earth at night with high-resolution daytime imagery. The team used the "nightlight" data to identify features in the higher-resolution daytime imagery that are correlated with economic development.
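The study's actual pipeline is more elaborate, but the general idea it describes can be sketched in a few lines: train a small model to predict binned nighttime-light intensity from daytime image patches, then keep its learned internal representation as a set of image features. The network, the patch size, and the random placeholder data below are all illustrative assumptions, not the researchers' code.

```python
# A minimal sketch (not the authors' implementation) of using nightlights as a
# training signal: a small convolutional network learns to predict binned
# nighttime-light intensity from daytime image patches. Its penultimate layer
# can then serve as a feature extractor. All data here are random placeholders.
import torch
import torch.nn as nn

class NightlightProxyNet(nn.Module):
    def __init__(self, n_bins=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(32, n_bins)   # predicts low/medium/high nightlight

    def features(self, x):
        return self.backbone(x)             # reusable image features

    def forward(self, x):
        return self.head(self.backbone(x))

# Placeholder "daytime patches" and nightlight bins (0=dark, 1=medium, 2=bright).
patches = torch.randn(64, 3, 64, 64)
bins = torch.randint(0, 3, (64,))

model = NightlightProxyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                          # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), bins)
    loss.backward()
    optimizer.step()
```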

"Without being told what to look for, our machine learning algorithm learned to pick out of the imagery many things that are easily recognizable to humans — things like roads, urban areas and farmland," Jean said.

The team then used these features from the daytime imagery to predict village-level wealth, as measured in the available survey data.
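This second step amounts to fitting a regularized regression from image features to surveyed wealth and checking how well it predicts villages the model has not seen. The snippet below is a standalone illustration under that assumption; the feature matrix and wealth values are random placeholders, not study data.

```python
# Illustrative only: regress a village-level wealth measure on image features
# and evaluate out-of-sample fit with cross-validation. Placeholders stand in
# for the features and survey data used in the study.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))   # placeholder: features extracted from daytime imagery
y = rng.random(200)              # placeholder: wealth index from household surveys

ridge = Ridge(alpha=1.0)
scores = cross_val_score(ridge, X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```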

The method did a surprisingly good job of predicting the distribution of poverty, outperforming existing approaches, according to the authors.

Study co-author Stefano Ermon said that these enhanced poverty maps could help aid organizations and policymakers distribute funds more efficiently and evaluate policies more effectively.

"Our paper demonstrates the power of machine learning in this context," he said. "And since it's cheap and scalable — requiring only satellite images — it could be used to map poverty around the world in a very low-cost way."
