FAN Shengzhe, GONG Liang, YANG Zhiyu, et al. A lightweight I2I deep learning method for on-panicle grain in-situ counting and occluded grains restoration[J]. Journal of South China Agricultural University, 2023, 44(1): 74-83. DOI: 10.7671/j.issn.1001-411X.202202008

    A lightweight I2I deep learning method for on-panicle grain in-situ counting and occluded grains restoration

    More Information
    • Received Date: February 11, 2022
    • Available Online: May 17, 2023
    • Objective 

  To address the functional and efficiency limitations of the conventional grain phenotype analysis algorithms used in seed analyzers, a lightweight, deep learning-based general algorithmic framework was designed for two tasks: in-situ counting of on-panicle grains and restoration of occluded grains.

      Method 

  The two complex tasks, in-situ counting of on-panicle grains and restoration of occluded grains, were each decomposed into two stages, and their core stages were modeled as image-to-image (I2I) translation problems. A lightweight network architecture capable of solving the I2I problem was designed based on MobileNet V3, and dataset generation methods were devised according to the characteristics of the two tasks. The network was then trained with appropriate optimization strategies and hyperparameters. After training, the model was deployed and tested with the TensorFlow Lite runtime on a Raspberry Pi 4B development board.
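  As a minimal sketch of the non-network stage of the counting pipeline, the snippet below counts connected foreground blobs in a binarized output image, on the assumption (not detailed in the abstract) that the I2I network maps a panicle photo to a mask in which each grain appears as one isolated blob. The function name and the plain flood-fill implementation are illustrative, not the paper's actual post-processing.

  ```python
  import numpy as np

  def count_grains(mask: np.ndarray) -> int:
      """Count connected foreground blobs in a binary mask.

      `mask` is assumed to be the thresholded output of the I2I network,
      with each grain rendered as one 4-connected blob of True pixels.
      """
      visited = np.zeros_like(mask, dtype=bool)
      h, w = mask.shape
      count = 0
      for i in range(h):
          for j in range(w):
              if mask[i, j] and not visited[i, j]:
                  count += 1
                  # Flood-fill the whole blob so it is counted only once.
                  stack = [(i, j)]
                  visited[i, j] = True
                  while stack:
                      y, x = stack.pop()
                      for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          ny, nx = y + dy, x + dx
                          if (0 <= ny < h and 0 <= nx < w
                                  and mask[ny, nx] and not visited[ny, nx]):
                              visited[ny, nx] = True
                              stack.append((ny, nx))
      return count
  ```

  In practice a library routine such as `scipy.ndimage.label` would do the same labeling far faster; the explicit flood fill is shown only to make the counting step concrete.
  
  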

      Result 

  The algorithm achieved good accuracy, speed and a degree of generalization in the on-panicle grain counting task. In the occluded grain shape restoration task, the evaluation accuracies of the restored images in terms of area, perimeter, length, width and color score all exceeded 97%.
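  One plausible reading of the per-metric "evaluation accuracy" (an assumption on our part; the abstract does not define the formula) is the relative agreement between a trait measured on the restored image and the same trait measured on the ground truth:

  ```python
  def metric_accuracy(restored: float, truth: float) -> float:
      """Relative agreement between a trait value measured on the restored
      image and its ground-truth value, expressed as a percentage.
      Assumes truth > 0 (area, perimeter, length, width, color score)."""
      return 100.0 * (1.0 - abs(restored - truth) / truth)

  # Hypothetical example: a restored grain area of 98.5 px^2 against a
  # ground-truth area of 100.0 px^2 would score 98.5%, above the 97%
  # level reported for all five metrics.
  ```

  Under this reading, the reported >97% figures mean the restored grain shapes deviate from ground truth by less than 3% on every measured trait.
  
  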

      Conclusion 

  The algorithm proposed in this paper can effectively accomplish on-panicle grain counting and occluded grain restoration, and it also has the advantage of being lightweight.

