Objective Accurate counting of peanut (Arachis hypogaea L.) samples is a crucial step in determining the 100-seed weight and 1 000-seed weight during seed testing. To tackle issues such as missed detections caused by overlapping peanut seeds in practical measurements, this study explored precise image recognition and counting of peanut pods and kernels using an improved YOLOv8n model.
Method The MLCA (mixed local channel attention) mechanism was integrated into the backbone network of the original YOLOv8n model to reduce background noise interference, enhance detection of overlapping peanut samples, and thereby lower the missed detection rate. The SCConv (spatial and channel reconstruction convolution) module was added to the C2f module to strengthen the model's ability to learn the distinct boundary features of peanuts in overlapping regions and highlight the true boundaries of individual peanut pods and kernels. The detection head was replaced with an LSCD (lightweight shared convolutional detection) head to reduce the model's parameter count, enhance global information fusion between feature maps, optimize the extraction and fusion of feature maps, and improve detection speed.
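To illustrate the kind of operation such attention mechanisms perform, the sketch below shows a simplified SE-style channel-attention step, of which MLCA's mixed local-and-global channel attention is an extension. This is not the authors' implementation; the function name, tensor layout, and the use of a plain sigmoid gate are illustrative assumptions, written in NumPy for self-containment:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_map):
    """Simplified channel attention (illustrative, not MLCA itself).

    MLCA additionally mixes in local, patch-wise pooled descriptors;
    here only the global branch is sketched.
    feature_map: array of shape (C, H, W).
    """
    # Global average pooling over the spatial dimensions -> (C,)
    descriptor = feature_map.mean(axis=(1, 2))
    # Gate each channel with a sigmoid of its pooled descriptor
    weights = sigmoid(descriptor)
    # Reweight the channels (broadcast over H and W)
    return feature_map * weights[:, None, None]

x = np.random.rand(8, 16, 16).astype(np.float32)
y = channel_attention(x)
```

Because each channel is scaled by a weight in (0, 1), channels whose pooled response is weak (e.g. background clutter) are suppressed relative to channels that respond strongly to peanut boundaries.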
Result The improved MSL-YOLOv8n model contained 3 383 663 parameters and achieved mean average precision (mAP50-95) values of 90.9% and 91.7% for peanut pod and kernel counting, with precisions of 98.1% and 99.8% and recalls of 97.2% and 99.7%, respectively, at 245.8 frames per second. Compared with the original YOLOv8n model, mAP50-95 improved by 1.7 and 1.1 percentage points, and the improved model clearly outperformed SSD, YOLOv10n, and other models.
Conclusion The improved model offers high accuracy, fast real-time processing, and strong robustness, and can provide technical support for accurate counting during peanut seed testing.