In-field Blueberry Fruit Phenotyping with a MARS-PhenoBot and Customized BerryNet

Authors: Zhengkun Li, Rui Xu, Changying Li*, Patricio Munoz, Fumiomi Takeda, Bruno Leme

*Corresponding author

Affiliation: BSAIL Research Group, Department of Agricultural and Biological Engineering, University of Florida

Abstract

This research presents an automated in-field blueberry fruit phenotyping system that integrates the MARS-PhenoBot mobile robotic platform with a customized BerryNet deep learning framework. The system employs the Segment Anything Model (SAM) for automated pixel-wise label generation and a customized YOLOv8 architecture for robust fruit detection and segmentation. The BerryNet framework incorporates three major enhancements: (1) an enhanced P2 layer to better capture small-object features, (2) a BiFPN for improved multi-scale feature fusion, and (3) C2f-faster blocks for accelerated inference. Comprehensive validation in commercial blueberry fields demonstrates robust performance across varying lighting conditions, fruit maturity stages, and blueberry varieties, offering a scalable solution for precision agriculture applications.

1. Introduction

Accurate yield estimation and fruit phenotyping are critical for agricultural planning, resource management, and economic optimization in commercial blueberry production. Traditional manual counting and phenotyping methods are labor-intensive, time-consuming, and subject to human error. Recent advances in computer vision and deep learning, particularly foundation models like Segment Anything Model (SAM), offer promising alternatives for automated fruit detection, segmentation, and phenotyping.

This project addresses the challenge of in-field blueberry phenotyping by developing an integrated robotic system that combines:

  • MARS-PhenoBot platform: A custom-designed mobile robotic platform for in-field data collection
  • SAM-based pixel-wise labeling: Automated annotation generation using Segment Anything Model
  • Customized BerryNet architecture: Enhanced YOLOv8 framework optimized for blueberry fruit detection and segmentation
  • Multi-view imagery: Comprehensive fruit coverage through synchronized multi-camera systems

2. Methodology

2.1 System Workflow

The proposed blueberry fruit phenotyping workflow involves four main stages:

  1. Data Collection: In-field image acquisition using the MARS-PhenoBot platform
  2. Training Dataset Generation: Automated pixel-wise label generation using SAM
  3. Model Training: Training of customized BerryNet model
  4. Phenotyping Traits Extraction: Automated extraction of fruit characteristics and yield estimation

Fig. 1: Diagram of the proposed blueberry fruit phenotyping workflow involving four stages: data collection, training dataset generation, model training, and phenotyping traits extraction.
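The four stages above can be sketched as a simple pipeline. All function names and data structures below are illustrative placeholders, not the authors' actual implementation:

```python
# Hypothetical sketch of the four-stage phenotyping workflow; each stage
# is a stub standing in for the corresponding system component.

def collect_images(field_rows):
    """Stage 1: simulate MARS-PhenoBot image acquisition (one image per row)."""
    return [{"row": r, "image": f"img_{r}.jpg"} for r in field_rows]

def generate_labels(images):
    """Stage 2: stand-in for SAM-based pixel-wise label generation."""
    return [{**im, "mask": f"mask_{im['row']}.png"} for im in images]

def train_model(labeled):
    """Stage 3: placeholder for training the BerryNet model."""
    return {"model": "BerryNet", "n_train": len(labeled)}

def extract_traits(model, images):
    """Stage 4: placeholder for trait extraction (e.g., fruit counts)."""
    return {im["row"]: {"fruit_count": 0} for im in images}

images = collect_images(range(3))
labeled = generate_labels(images)
model = train_model(labeled)
traits = extract_traits(model, images)
```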

2.2 SAM-based Pixel-wise Label Generation

A key innovation of this work is the automated generation of pixel-wise labels using the Segment Anything Model (SAM). The process involves:

  1. Bounding Box Initialization: Using detections from a previous dataset (Li et al., 2023)
  2. Maturity Classification: Re-classifying bounding boxes into three categories:
    • Immature (yellow)
    • Semi-mature (red)
    • Mature (blue)
  3. SAM-based Segmentation: Generating pixel-wise mask labels using SAM for precise fruit segmentation

This automated labeling approach significantly reduces annotation time while maintaining high-quality training data.


Fig. 2: Illustration of the proposed automated pixel-wise label generation process for blueberry fruits at different maturity stages. (a) Bounding boxes from previous detection dataset; (b) Bounding boxes re-classified into maturity categories; (c) Pixel-wise mask labels generated using SAM.
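A minimal sketch of steps 1–2 is shown below: re-classified bounding boxes are grouped by maturity class and packed into the (N, 4) box-prompt format that SAM's predictor expects. The detection tuple layout is assumed for illustration, and the SAM call itself (shown as a comment) follows the `segment_anything` predictor API:

```python
import numpy as np

# Assumed detection format: (x0, y0, x1, y1, maturity_class)
MATURITY = ("immature", "semi-mature", "mature")

def boxes_to_prompts(detections):
    """Group detections by maturity class and pack their boxes as
    (N, 4) float arrays, the box-prompt shape SAM's predictor expects."""
    prompts = {name: [] for name in MATURITY}
    for x0, y0, x1, y1, cls in detections:
        prompts[cls].append([x0, y0, x1, y1])
    return {name: np.asarray(b, dtype=float).reshape(-1, 4)
            for name, b in prompts.items()}

# With a loaded predictor, each box would then seed one mask, e.g.:
#   from segment_anything import sam_model_registry, SamPredictor
#   predictor = SamPredictor(sam_model_registry["vit_h"](checkpoint=...))
#   predictor.set_image(image)
#   mask, _, _ = predictor.predict(box=box, multimask_output=False)

dets = [(10, 10, 30, 30, "mature"), (40, 12, 60, 32, "immature")]
prompts = boxes_to_prompts(dets)
```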

2.3 BerryNet Architecture

The customized BerryNet framework builds upon YOLOv8 with three major enhancements:

Key Architectural Improvements:

  1. Enhanced P2 Layer: Modified to better capture features of small objects, crucial for detecting small or partially occluded blueberries
  2. BiFPN Implementation: Bidirectional Feature Pyramid Network for improved multi-scale feature fusion, enhancing detection accuracy across different fruit sizes
  3. C2f-faster Block: Replaced standard C2f blocks with optimized C2f-faster blocks to accelerate inference while maintaining accuracy

Fig. 3: Illustration of the BerryNet framework incorporating three major enhancements: (1) enhanced P2 layer for small object detection, (2) BiFPN for feature fusion, and (3) C2f-faster block for accelerated inference.
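The BiFPN's core operation is fast normalized fusion of multi-scale features. The NumPy sketch below shows only that fusion rule in isolation; in the actual network the weights are learnable parameters and the inputs are resampled feature maps:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN fast normalized fusion:
    O = sum_i(w_i * F_i) / (eps + sum_j(w_j)), with w_i clamped to >= 0,
    so the fused map is a normalized weighted combination of the inputs."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps w_i >= 0
    w = w / (eps + w.sum())                                # normalize weights
    return sum(wi * f for wi, f in zip(w, features))

# Fusing two same-resolution feature maps with equal weights
# yields (approximately) their element-wise mean.
f1, f2 = np.ones((4, 4)), 3 * np.ones((4, 4))
fused = fast_normalized_fusion([f1, f2], [1.0, 1.0])
```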

2.4 MARS-PhenoBot Platform

The MARS-PhenoBot is a custom-designed mobile robotic platform specifically developed for in-field agricultural phenotyping:

  • Multi-camera system: Synchronized cameras for comprehensive fruit coverage
  • Navigation capability: Autonomous or semi-autonomous field navigation
  • Real-time processing: Onboard computing for immediate results
  • Robust design: Weather-resistant construction for field deployment

3. Results and Validation

The system was extensively validated in commercial blueberry fields with comprehensive performance evaluation:

Key Performance Metrics

  • Detection Accuracy: High precision and recall across diverse field conditions and fruit maturity stages
  • Segmentation Quality: Accurate pixel-wise segmentation enabled by SAM-generated training labels
  • Real-time Performance: Optimized inference speed suitable for field deployment on mobile platforms
  • Yield Estimation Accuracy: Strong correlation with manual ground truth measurements
  • Robustness: Consistent performance across different blueberry varieties, growth stages, and environmental conditions
  • Labeling Efficiency: Significant reduction in annotation time through automated SAM-based labeling
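As a concrete example of how detections feed the yield-estimation and trait-extraction metrics above, per-class fruit counts and a mature-fruit ratio can be derived directly from the model's output; the detection record format here is an assumption for illustration:

```python
from collections import Counter

MATURITY_CLASSES = ("immature", "semi-mature", "mature")

def summarize_detections(detections):
    """Return per-maturity-class fruit counts and the mature-fruit ratio,
    a simple proxy for harvest readiness in yield estimation."""
    counts = Counter(d["class"] for d in detections)
    total = sum(counts.values())
    mature_ratio = counts["mature"] / total if total else 0.0
    return {c: counts.get(c, 0) for c in MATURITY_CLASSES}, mature_ratio

dets = [{"class": "mature"}, {"class": "mature"}, {"class": "immature"}]
counts, ratio = summarize_detections(dets)
```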

4. Publications

Journal Article (2025)

Li, Z., Xu, R., Li, C., Munoz, P., Takeda, F., & Leme, B. (2025). In-field blueberry fruit phenotyping with a MARS-PhenoBot and customized BerryNet. Computers and Electronics in Agriculture, 232, 110057.

DOI: 10.1016/j.compag.2025.110057

Conference Paper (2023)

Li, Z., Li, C., & Munoz, P. (2023). Blueberry Yield Estimation Through Multi-View Imagery with YOLOv8 Object Detection. Proceedings of the ASABE Annual International Meeting, Paper No. 2300129. St. Joseph, MI: ASABE.


5. Technical Implementation

  • MARS-PhenoBot: Custom mobile robotic platform with a synchronized multi-camera system for comprehensive in-field fruit coverage
  • SAM-based Labeling: Automated pixel-wise label generation using the Segment Anything Model, significantly reducing annotation effort
  • BerryNet Architecture: Customized YOLOv8 with an enhanced P2 layer, BiFPN, and C2f-faster blocks for improved accuracy and speed
  • Field Validation: Extensive testing in commercial blueberry fields across multiple seasons, varieties, and maturity stages

6. Key Contributions

This research makes several important contributions to the field of agricultural robotics and computer vision:

  • SAM-based Automated Labeling: Novel approach for generating pixel-wise training labels using Segment Anything Model, dramatically reducing annotation time
  • Customized BerryNet Framework: Three major architectural enhancements (P2 layer, BiFPN, C2f-faster) for improved small object detection and inference speed
  • Integrated Robotic System: Complete MARS-PhenoBot platform for end-to-end in-field phenotyping workflow
  • Multi-maturity Segmentation: Comprehensive fruit segmentation across different maturity stages (immature, semi-mature, mature)
  • Real-time Performance: Optimized deep learning pipeline suitable for mobile field deployment
  • Validated System: Extensive field validation demonstrating practical applicability in commercial settings


For more information about this project, please refer to the publications or contact:

zhengkun.li3969@gmail.com