Embedded Vision

Researchers Create Model for Surface Inspection of Peanuts

The model uses a YOLO variant to inspect and classify the surface appearance of peanut pods. It is suitable for embedded computing platforms.
March 4, 2025

Researchers in China proposed a convolutional neural network (CNN) model designed to inspect and classify peanut pods in real time using an embedded computing platform.

Because a peanut pod's appearance affects the crop's market value, the model classifies pods as good, loss, mechanically damaged, moldy, or showing signs of germination.

Peanuts traditionally have been inspected and classified manually. Numerous scholars have proposed automating that task with CNNs, particularly YOLO (You Only Look Once) models.

However, the researchers write that existing models for automated inspection have shortcomings. “Many existing methods achieve high detection accuracy but come with significant computational costs, or vice versa,” write Zhixia Liu, Xilin Zhong and other authors from the College of Engineering, Shenyang Agricultural University, Shenyang, China, in the journal Frontiers in Plant Science (https://bit.ly/3F5N8FF). 

The goal of this research is to find the sweet spot that balances detection speed and accuracy against computational load, producing a model suitable for space-constrained embedded computing platforms operating in real time.


How Researchers Acquired and Processed Images

To acquire images of in-shell peanuts for their experiments, the researchers built a box enclosure. Inside, they mounted a 2 MPixel webcam with a maximum resolution of 1280 × 720 above a conveyor belt/operating platform set on top of a desk. A light source was positioned 220 mm above the desktop. Image data was processed on a Raspberry Pi 4B (Cambridge, UK) with a Broadcom BCM2711 SoC, powered by a 64-bit, 1.5 GHz quad-core CPU; an NVIDIA GeForce RTX 3060 Ti graphics card was also used.

A total of 1,600 images of three types of peanuts were acquired between March 10 and 15, 2024.


The researchers used image-data augmentation methods, such as adding Gaussian noise (unwanted signal whose intensity follows a normal distribution), to expand the collection to 8,000 images. They sliced and segmented those images to create a uniform collection with a resolution of 640 × 640 pixels. After annotation, the images were randomly divided into three data sets: 5,600 for training, 1,600 for validation, and 800 for testing.
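The augmentation and split described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' code; the noise level and helper names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(image, sigma=10.0):
    """Add zero-mean Gaussian noise to an 8-bit image (sigma is assumed)."""
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def split_dataset(n_images, fractions=(0.70, 0.20, 0.10)):
    """Randomly split image indices into train/validation/test subsets."""
    indices = rng.permutation(n_images)
    n_train = int(n_images * fractions[0])
    n_val = int(n_images * fractions[1])
    return (indices[:n_train],
            indices[n_train:n_train + n_val],
            indices[n_train + n_val:])

# 8,000 augmented images -> 5,600 train / 1,600 validation / 800 test
train, val, test = split_dataset(8000)
print(len(train), len(val), len(test))  # 5600 1600 800
```

The 70/20/10 proportions match the counts reported in the study; shuffling before splitting keeps each subset representative of all peanut categories.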

The researchers created an enhanced version of YOLOv5s, an object detection model that they selected because of its relatively small size of 16.3 MB, which makes it suitable for embedded computing platforms. The model also places greater emphasis on detecting small objects than other YOLO models do, they write.

YOLOv5s also is faster than other YOLOv5 iterations. "YOLOv5s' superior inference speed positions it as an excellent choice for real-time detection scenarios and applications requiring quick response," they explain. 


To further lighten the neural network architecture, the researchers used ShuffleNetV2 as the backbone network to extract features from the image data.
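Much of ShuffleNetV2's efficiency comes from grouped convolutions followed by a channel-shuffle operation that mixes information across the groups. A minimal NumPy sketch of that operation, illustrating the published technique rather than the researchers' implementation:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels of an NCHW tensor across `groups` groups.

    Reshape (N, C, H, W) -> (N, groups, C // groups, H, W),
    swap the two channel axes, then flatten back to (N, C, H, W).
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# With 4 channels and 2 groups, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3]
x = np.arange(4).reshape(1, 4, 1, 1)
print(channel_shuffle(x, 2).flatten())  # [0 2 1 3]
```

Because the shuffle is just a reshape and transpose, it adds negligible compute while letting each group's output see channels from every other group, which is what keeps the lightweight backbone accurate.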

Evaluation of the Computer Vision Model

They evaluated their model's ability to accurately predict one of five categories of peanuts: good, loss, germinant, moldy, or soil contaminated. The model achieved high accuracy for all categories, ranging from 98.6% to 99.6%. Detection speed was 192.3 frames per second, fast enough for real-time inspection and classification of peanuts.
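A throughput of 192.3 frames per second corresponds to a per-frame latency of roughly 5.2 ms. Figures like this are typically obtained by timing inference over many frames, excluding a few warm-up runs. A generic timing sketch follows; the inference callable is a placeholder, not the authors' model:

```python
import time

def measure_fps(run_inference, frames, warmup=5):
    """Average frames-per-second of `run_inference` over a list of frames."""
    for frame in frames[:warmup]:          # warm-up runs are excluded
        run_inference(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Placeholder "model": any callable taking a frame works here.
fps = measure_fps(lambda frame: sum(frame), [list(range(100))] * 105)
print(f"{fps:.1f} FPS")
```

Warm-up runs matter on embedded platforms because caches, frequency scaling, and lazy model initialization can make the first few inferences unrepresentatively slow.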

They also compared the model's performance to that of other one-stage and two-stage object detection models, concluding that "the method presented in this study demonstrates significant advantages in detecting small targets compared to other object detection methods." For example, spots of mold on a peanut pod, itself a relatively small object, can be hard to detect reliably.


To further improve the model in future studies, the researchers suggest optimizing its background-suppression properties to boost performance in production environments. "In real-world scenarios, peanut pods may be in complex backgrounds, such as mixed with other impurities and background colors similar to peanut pods. These complex backgrounds may cause the model to mistakenly detect the background objects as peanut pods or miss the detection of real peanut pods," they write.

 

About the Author

Linda Wilson

Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
