A.I.’s Role In Agriculture Comes Into Focus With Imaging Analysis

The imaging technologies scanning farms today trace their roots to the space race. In 1965, the U.S. Geological Survey proposed using satellites to observe the planet. Seven years later, NASA launched Landsat 1. Among that satellite’s accomplishments: an estimate of the corn and soybean acreage stretching from Iowa to Indiana.

The eighth version of Landsat continues to snap pictures of Earth’s terrain, and Landsat 9 is on the way. In the meantime, farmers can now supplement satellite imagery with pictures captured by planes, aerial drones, or other sensing systems.

But even though satellite data are still useful, they are less helpful for immediate insight about a farm field, says Raju Vatsavai, associate professor of computer science at North Carolina State University. By the time data are downloaded and analyzed, they are no longer timely. To remedy that problem, some researchers are trying to improve image analysis with artificial intelligence-related technologies.

“A.I. [in agriculture] is moving toward real-time analytics,” Vatsavai says.

Vatsavai was among the speakers at the Ag Biotech Professional Forum, a quarterly event held by the North Carolina Biotechnology Center. The topic for the forum held last week was the convergence of technologies giving rise to “artificial intelligence” in agriculture.

Compared to the satellite images from Landsat 1, visuals from today's technology have much higher resolution, providing plant details down to the millimeter. Powerful software also accompanies these images. These systems use machine-learning techniques to improve how they capture and analyze visual images, Vatsavai says. That means as these systems collect more data, they should get better at providing insights. Vatsavai's research uses software to simulate real-world agricultural scenarios, such as drought. He can then try to predict outcomes and, for example, suggest where and when to apply more water to best support a crop.
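The idea of simulating a scenario and using the result to recommend an action can be illustrated with a toy sketch. This is not Vatsavai's actual model; the moisture values, loss rate, and wilt point below are all invented for illustration.

```python
# Illustrative sketch (hypothetical values, not Vatsavai's model): track soil
# moisture day by day under a drought scenario and flag the days on which
# irrigation would be needed to keep a crop above its wilting point.
def simulate_moisture(rainfall, start=0.50, daily_loss=0.08, wilt_point=0.30):
    """Return the days (1-indexed) on which moisture falls below wilt_point."""
    moisture = start
    irrigation_days = []
    for day, rain in enumerate(rainfall, start=1):
        moisture = min(1.0, moisture + rain - daily_loss)
        if moisture < wilt_point:
            irrigation_days.append(day)
            moisture = start  # assume irrigation restores moisture that day
    return irrigation_days

# Ten days with no rainfall: the model recommends watering every third day.
print(simulate_moisture([0.0] * 10))  # -> [3, 6, 9]
```

A real crop model would be far richer, but the decision pattern is the same: run the scenario forward, then read the recommendation off the simulated outcome.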

Machine learning techniques are part of the scanner systems sold by LemnaTec, a Germany-based company that has U.S. operations in Research Triangle Park, NC, and St. Louis. The scanners harvest visual crop data and convert them into digital form, says Solmaz Hajmohammadi, image analytics and algorithm software developer for LemnaTec. Analyzing that visual data helps the LemnaTec system improve over time.

“We are trying to program the machines to predict and learn from the experiment, and improve it,” Hajmohammadi says.

Universities and agribusiness companies use LemnaTec's technology for research. One of the company's publicly disclosed projects involves studying sorghum for biofuel applications. In 2015, the Donald Danforth Plant Science Center was awarded an $8 million grant from the U.S. Department of Energy for the project. A robotic field scanner from LemnaTec captures crop data from the test field in Arizona. The scanner is attached to a 30-ton steel frame that glides on a track running alongside the 1.5-acre field.

The LemnaTec scanner measures physical characteristics, such as the height of the sorghum plants. But the data-gathering efforts go further. The system can capture details such as the amount of water and chlorophyll in the plants, Hajmohammadi says. Each day, the system captures between 5 and 6 terabytes of data. By analyzing the data, researchers hope to learn how to grow sorghum in ways that improve its potential as a source material for producing biofuels. The project has attracted interest from the Bill and Melinda Gates Foundation, which sees the research as having applications in sub-Saharan Africa, a region where sorghum is a key food crop. Earlier this year, the foundation awarded the project a $6.1 million grant to research ways to improve sorghum breeding.

Some of LemnaTec’s scanners use imaging technology from Fitchburg, MA-based Headwall Photonics. Carson Roberts, senior applications engineer for Headwall, says that the company’s systems take photos at many different wavelengths. Collectively, those images form a spectral library. Through machine learning, the system can be trained to analyze the images and classify the components in them, Roberts says.
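One simple way to classify a pixel against a spectral library is to compare its measured reflectance at each wavelength with reference spectra and pick the closest match. The sketch below is a minimal nearest-match illustration; the labels, wavelengths, and reflectance values are hypothetical, and production systems like Headwall's use trained machine-learning models rather than this bare distance rule.

```python
import math

# Hypothetical spectral library: reflectance at four wavelengths per material.
# (Illustrative numbers only, not real measured spectra.)
spectral_library = {
    "healthy_leaf": [0.05, 0.09, 0.45, 0.50],
    "dry_soil":     [0.20, 0.25, 0.30, 0.32],
}

def classify_pixel(spectrum, library):
    """Return the library label whose reference spectrum is closest
    to the pixel's spectrum (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda label: dist(spectrum, library[label]))

pixel = [0.06, 0.10, 0.43, 0.49]
print(classify_pixel(pixel, spectral_library))  # -> healthy_leaf
```

The same comparison runs for every pixel in the image, which is how a hyperspectral frame becomes a map of classified components.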

Spectral analysis of agricultural products can automate some of the inspection processes that have been done by humans, doing so faster and running 24 hours a day, Roberts says. As an example, Roberts describes scanning systems used to separate light almonds from dark ones, which is important for certain food products. As almonds travel down a conveyor belt, a robot picks up and removes the unsuitable ones.

Scanning capabilities also figure into apple inspection. Analysis of the light reflected from the apples can be used to detect bruises, Roberts says. By analyzing fluorescence from the apples, the system detects contamination by fecal matter. (Apples that fall on the ground can come into contact with animal waste.) The system can then classify the fruit, learning to accept or reject apples based on the analysis.

“You can take some bruised apples, but you never want any with fecal contamination,” Roberts says.
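Roberts's accept/reject logic can be expressed as a simple rule: any fluorescence signal above a detection threshold rejects the apple outright, while bruising is tolerated up to a limit. The function below is a hypothetical sketch of that rule; the parameter names and threshold values are invented, not taken from Headwall's system.

```python
# Hypothetical inspection rule (illustrative thresholds): fecal fluorescence
# is never acceptable, while some bruising is tolerated.
def inspect_apple(bruise_fraction, fluorescence_signal,
                  bruise_limit=0.15, fluorescence_threshold=0.02):
    """Classify an apple as 'accept' or 'reject' from two image-derived scores."""
    if fluorescence_signal > fluorescence_threshold:
        return "reject"  # any detected contamination is disqualifying
    if bruise_fraction > bruise_limit:
        return "reject"  # too much of the surface is bruised
    return "accept"

print(inspect_apple(bruise_fraction=0.10, fluorescence_signal=0.00))  # -> accept
print(inspect_apple(bruise_fraction=0.10, fluorescence_signal=0.50))  # -> reject
```

In a deployed system the two scores would come from reflectance and fluorescence analysis of each apple's images, and the thresholds would be tuned for the product line.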

Photo by Flickr user Muhammad Ali via a Creative Commons license.

Frank Vinluan is editor of Xconomy Raleigh-Durham, based in Research Triangle Park. You can reach him at fvinluan [at] xconomy.com.