    ZHANG Liyin, ZHANG Ji, YANG Qinglu, et al. Detection of dairy cow feeding behavior based on video and BCE-YOLO model[J]. Journal of South China Agricultural University, 2024, 45(5): 782-792. DOI: 10.7671/j.issn.1001-411X.202404009


    Detection of dairy cow feeding behavior based on video and BCE-YOLO model

      Abstract:
      Objective Animal feeding behavior is an essential indicator of animal welfare. This study aims to address the poor recognition accuracy and insufficient feature extraction of dairy cow feeding behavior in complex farming environments, and to achieve automatic monitoring of cow feeding behavior.
      Method This paper proposed a recognition method based on the improved BCE-YOLO model. Three enhancement modules (BiFormer, CoT and EMA) were added to strengthen the feature extraction capability of the YOLOv8 model. The detector was then combined with the Deep SORT algorithm, which outperforms the Staple and SiameseRPN algorithms, to track the head trajectory of cows during feeding (an illustrative pipeline sketch follows the abstract). A total of 11 288 images were extracted from overhead and frontal videos of cows during feeding and divided into training and test sets at a ratio of 6∶1 to form the feeding dataset.
      Result The improved BCE-YOLO model achieved precisions of 77.73% and 76.32% on the frontal and overhead datasets, respectively, with recalls of 82.57% and 86.33% and mean average precisions of 83.70% and 76.81% (metric definitions are given after the abstract). Compared with the YOLOv8 model, the overall performance improved by six to eight percentage points. The Deep SORT algorithm improved comprehensive performance by one to four percentage points over the Staple and SiameseRPN algorithms. The combination of the improved BCE-YOLO model and the Deep SORT tracking algorithm tracked cow feeding behavior accurately and effectively suppressed cow ID (identity) switches.
      Conclusion The proposed method effectively addresses the poor recognition accuracy and insufficient feature extraction of dairy cow feeding behavior in complex farming environments, and provides an important reference for intelligent animal husbandry and precision farming.
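
      The detect-then-track pipeline described in the Method can be illustrated with a minimal sketch, assuming the open-source ultralytics (YOLOv8) and deep-sort-realtime packages. The stock yolov8n.pt checkpoint, the video file name and the drawing logic are illustrative placeholders, not the authors' BCE-YOLO implementation; the BiFormer, CoT and EMA modules are not reproduced here.

```python
# Minimal detect-then-track sketch: YOLOv8-style detector feeding Deep SORT.
# Assumptions: "ultralytics" and "deep-sort-realtime" packages; a stock
# yolov8n.pt checkpoint stands in for the paper's (non-public) BCE-YOLO weights.
import cv2
from ultralytics import YOLO
from deep_sort_realtime.deepsort_tracker import DeepSort

detector = YOLO("yolov8n.pt")      # placeholder for the BCE-YOLO detector
tracker = DeepSort(max_age=30)     # Deep SORT with appearance embeddings

cap = cv2.VideoCapture("cow_feeding.mp4")  # hypothetical frontal/overhead video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # 1) Detect cows (or cow heads) in the current frame.
    result = detector(frame, verbose=False)[0]
    detections = []
    for box, conf, cls in zip(result.boxes.xyxy.tolist(),
                              result.boxes.conf.tolist(),
                              result.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        # Deep SORT expects ([left, top, width, height], confidence, class).
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))

    # 2) Associate detections across frames so each cow keeps a stable ID,
    #    which yields the head trajectory during feeding.
    for track in tracker.update_tracks(detections, frame=frame):
        if not track.is_confirmed():
            continue
        l, t, r, b = map(int, track.to_ltrb())
        cv2.rectangle(frame, (l, t), (r, b), (0, 255, 0), 2)
        cv2.putText(frame, f"cow {track.track_id}", (l, t - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("feeding-behavior tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

      In the paper's setting, a detector fine-tuned on the 11 288-image feeding dataset would replace the stock checkpoint, and the confirmed track IDs provide the per-cow trajectories whose ID switches are evaluated.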
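
      For reference, the reported metrics presumably follow the standard object-detection definitions, which the abstract does not restate. Here TP, FP and FN denote true positives, false positives and false negatives, and N is the number of detected classes:

```latex
% Standard detection metrics assumed to underlie the reported values.
\begin{aligned}
P   &= \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN},\\
AP  &= \int_{0}^{1} P(R)\,\mathrm{d}R, \qquad
mAP  = \frac{1}{N}\sum_{i=1}^{N} AP_i
\end{aligned}
```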

       
