Citation: ZHANG Liyin, ZHANG Ji, YANG Qinglu, et al. Detection of dairy cow feeding behavior based on video and BCE-YOLO model[J]. Journal of South China Agricultural University, 2024, 45(5): 782-792. DOI: 10.7671/j.issn.1001-411X.202404009
Animal feeding behavior is an essential indicator of animal welfare. This study addresses the poor recognition accuracy and insufficient feature extraction of cow feeding behavior detection in complex farming environments, with the goal of achieving automatic monitoring of cow feeding behavior.
This paper proposed a recognition method based on an improved BCE-YOLO model. Three enhancement modules (BiFormer, CoT, and EMA) were added to strengthen the feature extraction capability of the YOLOv8 model. The detector was then combined with the Deep SORT algorithm, which outperformed the Staple and SiameseRPN trackers, to track the head trajectory of cows during feeding. A total of 11 288 images were extracted from overhead and frontal videos of cows during feeding and divided into training and test sets at a ratio of 6∶1 to form the feeding dataset.
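The 6∶1 train/test split described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the file names are hypothetical placeholders, and only the 11 288-image count and the 6∶1 ratio come from the text.

```python
import random

def split_dataset(image_paths, seed=42):
    """Split image paths into training and test sets at roughly 6:1."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # reproducible shuffle
    n_test = len(paths) // 7            # 1 part test, 6 parts train
    return paths[n_test:], paths[:n_test]

# Hypothetical placeholder names standing in for the 11 288 extracted frames
frames = [f"frame_{i:05d}.jpg" for i in range(11288)]
train, test = split_dataset(frames)
print(len(train), len(test))  # 9676 1612, roughly 6:1
```

Because 11 288 is not divisible by 7, an exact 6∶1 split is impossible; flooring the test share keeps the ratio within a fraction of a percent of 6∶1.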
The improved BCE-YOLO model achieved precisions of 77.73% and 76.32% on the frontal and overhead datasets, respectively, with recalls of 82.57% and 86.33% and mean average precision (mAP) values of 83.70% and 76.81%. Compared with the YOLOv8 baseline, overall performance improved by six to eight percentage points. The Deep SORT algorithm likewise improved comprehensive performance by one to four percentage points over the Staple and SiameseRPN algorithms. Combining the improved BCE-YOLO model with the Deep SORT tracking algorithm achieved accurate tracking of cow feeding behavior and effectively suppressed cow identity (ID) switches.
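For readers unfamiliar with the reported metrics, precision and recall are derived from true positives (TP), false positives (FP), and false negatives (FN). The sketch below shows the standard formulas; the counts are invented for illustration and do not reproduce the paper's results.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts only (not from the paper)
p, r = precision_recall(tp=830, fp=170, fn=120)
print(f"precision={p:.2%} recall={r:.2%}")  # precision=83.00% recall=87.37%
```

Mean average precision (mAP) extends this idea by averaging the area under the precision-recall curve across classes and confidence thresholds.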
The proposed method effectively resolves the poor recognition accuracy and insufficient feature extraction of cow feeding behavior detection in complex farming environments, providing a useful reference for intelligent animal husbandry and precision farming.