Conceptual Design

Data Flow

[Figure: Image Repository.jpg]

Batch Upload

BenchBot operators upload batch folders of images to the “uploads” blob container using a custom app.

Upload batches include:

  1. Images

  2. Potting area and species location metadata

Images are uniquely labeled using UNIX timestamps.
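
The gallery later on this page shows names like NC_5_3_1647017847000.0_1.png. A minimal sketch of such a naming scheme, assuming (this is an assumption, not the documented schema) that the prefix fields encode site and position:

```python
import time

def image_label(site: str, row: int, pot: int, index: int) -> str:
    # UNIX timestamp in milliseconds makes each capture's label unique;
    # the site/row/pot prefix fields here are illustrative assumptions.
    ts_ms = time.time() * 1000
    return f"{site}_{row}_{pot}_{ts_ms:.1f}_{index}.png"

print(image_label("NC", 5, 3, 1))  # e.g. NC_5_3_1713041234567.0_1.png
```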

Metadata includes:

  1. Ground marker locations (.csv), measured in meters from a single point of origin.

  2. A species map that describes where each species group is located in the potting area.

Inputs:

  1. Batch folder images and metadata from BenchBot operators

Outputs:

  1. Raw images and metadata in the “uploads” blob container

Preprocessing

Raw images are preprocessed using a color calibration card.

Inputs:

  1. Raw images from “uploads” blob container

Outputs:

  1. Processed JPGs

  2. Masks that remove the blue BenchBot area to reduce AutoSfM alignment error (a minimal masking sketch follows)
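
A minimal sketch of how the blue BenchBot area might be masked out, assuming an HSV color threshold with OpenCV; the pipeline's actual method and thresholds may differ:

```python
import cv2
import numpy as np

def benchbot_blue_mask(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask that is 0 over blue BenchBot surfaces, 255 elsewhere."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Hue range for blue; these bounds are illustrative assumptions.
    lower, upper = np.array([100, 80, 40]), np.array([130, 255, 255])
    blue = cv2.inRange(hsv, lower, upper)
    # Close small holes so the masked-out region is contiguous.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    blue = cv2.morphologyEx(blue, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_not(blue)

img = cv2.imread("example.jpg")  # any developed BenchBot image
cv2.imwrite("example_mask.png", benchbot_blue_mask(img))
```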

Mapping and Detection


Preprocessing

Preprocessing involves moving images from Azure to SUNNY, using RawTherapee to perform color calibration, running a detection model for general “plant” localization, and uploading the images back to Azure, this time to the semif-developed blob container. Object detection identifies plant locations and creates local bounding box coordinates; more detailed pixel-wise segmentation is performed within those bounding box areas later in the pipeline.

Outputs:

  1. Calibrated JPGs

  2. Detection results for each image in normalized xyxy format (.csv)

AutoSfM


The AutoSfM process takes in developed images and ground control point metadata to create a global coordinate reference system (CRS). An orthomosaic, or collage of stitched images, and detailed camera reference information are generated, the latter being used to convert local image coordinates into global potting area locations.

For example, an image 2000 pixels high and 4000 pixels wide has a local center point at (1000, 2000), half its height and half its width, measured in pixels. Camera reference information allows us to project this local center point to a geographical potting area location in meters, (1.23 m, 4.56 m) for example. Knowing the general location of species pot groups, we assign a species label to each general “vegetation” detection result. We relate the global potting area locations to local image bounding box coordinates, allowing us to fill in the missing species label.
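
A minimal sketch of this local-to-global projection, assuming a pinhole camera model and a flat potting area at height z = 0; in practice the camera references come from Metashape and may also model lens distortion:

```python
import numpy as np

def pixel_to_world(u, v, K, R, C):
    """Project pixel (u, v) onto the z = 0 potting-area plane.

    K: 3x3 intrinsic matrix; R: 3x3 world-to-camera rotation;
    C: camera center in world coordinates (meters).
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project pixel
    ray_world = R.T @ ray_cam                           # rotate ray into world frame
    s = -C[2] / ray_world[2]                            # intersect plane z = 0
    return C + s * ray_world

# Center pixel of a 4000x2000 image (illustrative intrinsics and pose):
K = np.array([[3000.0, 0, 2000.0], [0, 3000.0, 1000.0], [0, 0, 1]])
R = np.diag([1.0, -1.0, -1.0])     # nadir-looking camera (z-axis points down)
C = np.array([1.23, 4.56, 2.0])    # 2 m above the bench
print(pixel_to_world(2000, 1000, K, R, C))  # ~ (1.23, 4.56, 0.0)
```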

Inputs:

  1. Developed images

  2. Masks

  3. Ground control point information (.csv)

Outputs:

  1. Camera matrix and orientation information for each image (.csv)

  2. Field of view (FOV) for each image (.csv)

  3. Processing accuracy assessment (.csv)

  4. Orthomosaic of the potting area (.tif)

Detection

Object detection is performed to identify plant locations and create local bounding box coordinates. More detailed pixel-wise segmentation is performed within the bounding box areas.

  • Model - YOLOv5

  • Single class - “plant”

  • Trained and tested on 753 images captured and labeled for the 2021 OpenCV AI competition.

  • mAP_0.5 = 0.93, mAP_0.5:0.95 = 0.67, recall = 0.9, precision = 0.93
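
Inference with the trained weights might look like the following sketch, using YOLOv5's torch.hub interface; the weight path and filenames are placeholders:

```python
import torch

# Load the single-class "plant" detector; 'best.pt' is a placeholder path.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

results = model("developed_image.jpg")
# xyxyn: one tensor per image with rows [x1, y1, x2, y2, conf, class],
# coordinates normalized to [0, 1] — the format written to the .csv outputs.
detections = results.xyxyn[0]
print(detections)
```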

Inputs:

  1. Developed images

  2. Trained model

Outputs:

  1. Local detection results for each image in normalized xyxy format (.csv)

  2. Metashape (.psx) project for projecting image coordinates to real-world 3D coordinates


Remap

Local plant detection results are remapped to global potting area coordinates.
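
Remapping can reuse the pixel_to_world helper sketched in the AutoSfM section: each normalized detection is scaled back to pixels and its corners are projected onto the potting-area plane. A sketch, under the same flat-plane assumptions as before:

```python
def bbox_to_world(xyxyn, img_w, img_h, K, R, C):
    """Map a normalized xyxy detection to global potting-area coordinates."""
    x1, y1, x2, y2 = xyxyn
    corners = [(x1 * img_w, y1 * img_h), (x2 * img_w, y2 * img_h)]
    # Project the top-left and bottom-right corners with the camera pose,
    # keeping only the (x, y) ground coordinates.
    return [pixel_to_world(u, v, K, R, C)[:2] for u, v in corners]
```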

Inputs:

  1. Images

  2. Camera reference information (.csv)

Outputs:

  1. Detailed metadata with camera information and detection results (.json)


WHY?

Image annotations need species-level information, but the detection model only provides the location of “plants”. We also have many overlapping images, resulting in duplicate segments that can lead to an imbalanced or homogeneous dataset.

Species mapping: We can infer the species of each detection result with a user-defined species map and geospatial data. If we know what row or general geographic area each species is located in, we can label each bounding box appropriately.


Unique detection result: The BenchBot takes 6 images along a single row of 4 pots. These images overlap considerably, and the same plant is often detected, and thus segmented, multiple times at different angles. While multiple angles are useful, it's important to identify the unique, or primary, detection result (when the camera is directly over the plant); a selection heuristic is sketched after this list. Doing so allows us to:

  1. Maximize synthetic image diversity and avoid using the same plant segment (albeit at slightly different angles) multiple times, which could lead to homogeneous data and thus poor model performance.

  2. Identify unique plant/pot positions throughout their growth stages, leading to detailed phenotypic profiles.
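
One plausible heuristic for picking the primary detection, assuming detections of the same plant have already been grouped by their global coordinates (the pipeline's actual criterion may differ): choose the detection whose bounding box center is closest to its image center, i.e. closest to the camera's nadir.

```python
import math

def primary_detection(group):
    """group: list of dicts, each with a normalized 'xyxyn' box from one overlapping image."""
    def offset_from_center(det):
        x1, y1, x2, y2 = det["xyxyn"]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        # Distance of the box center from the image center (0.5, 0.5).
        return math.hypot(cx - 0.5, cy - 0.5)
    return min(group, key=offset_from_center)
```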

Monitoring: Monitor for inconsistencies and errors in image capture across sites using detailed reporting of camera reference information.

Segment Vegetation and Cutout Data


Assign Species

At the start of each "season," shapefiles are generated to delineate the boundaries of the different species potting groups. These shapefiles are then used to assign species labels to individual bounding boxes based on intersection: if a bounding box's global coordinates overlap a shapefile feature, the label of that feature is attributed to the bounding box. This assignment follows the "Remapping" phase, in which bounding box coordinates were transformed from pixel representations to real-world global coordinates.
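
A minimal sketch of the assignment step, assuming geopandas/shapely and a shapefile whose features carry a species attribute; the file path and column name are placeholders:

```python
import geopandas as gpd
from shapely.geometry import box

# Season shapefile delineating species potting groups; path is a placeholder.
groups = gpd.read_file("season_species_groups.shp")

def assign_species(global_bbox):
    """global_bbox: (xmin, ymin, xmax, ymax) in potting-area meters."""
    bbox = box(*global_bbox)
    hits = groups[groups.intersects(bbox)]
    # Label with the overlapping feature's species; None if no overlap.
    return hits.iloc[0]["species"] if len(hits) else None

print(assign_species((1.1, 4.4, 1.4, 4.7)))
```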

Vegetation Segmentation and Plant Cutout Generation

Digital image processing techniques, including index thresholding, unsupervised classification, and morphological operations, are used to separate vegetation from the background within the pre-identified bounding boxes. The specific segmentation approach may vary depending on the plant species and the size of the bounding box. The resulting plant cutouts serve as building blocks for generating synthetic data.
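
A minimal sketch of one such approach, combining an excess-green (ExG) index with Otsu thresholding and morphological cleanup; the pipeline's actual indices and classifiers may differ:

```python
import cv2
import numpy as np

def segment_vegetation(image_bgr: np.ndarray) -> np.ndarray:
    """Binary vegetation mask via excess-green index thresholding."""
    b, g, r = cv2.split(image_bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b  # excess green index
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Opening removes speckle; closing fills small holes inside leaves.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```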

[Gallery of example plant cutouts: NC_5_3_1647017847000.0_1.png, NC_1_4_1647016528000.0_2.png, NC_3_4_1647017271000.0_7.png, NC_3_1_1647017214000.0_2.png, NC_5_6_1647017903000.0_5.png, NC_4_4_1647017680000.0_3.png, NC_6_3_1647018188000.0_12.png]

Synthetic Data


TODO:

  1. Implement species class labels

  2. Add transformations to cutouts, pots, and backgrounds (a compositing sketch follows)

  3. Make more pots
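
A minimal compositing sketch in the spirit of item 2, pasting a transformed RGBA cutout onto a background with Pillow; paths and transform ranges are illustrative:

```python
import random
from PIL import Image

background = Image.open("background.jpg").convert("RGBA")
cutout = Image.open("NC_5_3_1647017847000.0_1.png").convert("RGBA")

# Random rotation and scale as simple cutout transformations.
cutout = cutout.rotate(random.uniform(0, 360), expand=True)
scale = random.uniform(0.5, 1.5)
cutout = cutout.resize((int(cutout.width * scale), int(cutout.height * scale)))

# Paste at a random position, using the alpha channel as the mask
# (assumes the transformed cutout fits inside the background).
x = random.randint(0, background.width - cutout.width)
y = random.randint(0, background.height - cutout.height)
background.paste(cutout, (x, y), cutout)
background.convert("RGB").save("synthetic.jpg")
```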


Models

TBD

Sub-Image Data (Cutouts): Details and Metadata

Cutouts are cropped sections from full-sized images, each containing a single plant instance. Each cutout's metadata includes:

  • Parent image ID

  • Unique cutout ID and number

  • Species classification

  • Primary status

  • Whether the cutout extends beyond the cropped image border

  • Camera position

  • Bounding box from which the cutout originated within the full-sized image

  • Specific cutout properties like area, perimeter, and color statistics, which are valuable for in-depth analysis.

Furthermore, the metadata inherits EXIF data from the parent image and incorporates additional details.
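
The metadata might be modeled as a record like the following sketch; the field names are illustrative, not the pipeline's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CutoutMetadata:
    parent_image_id: str
    cutout_id: str
    cutout_number: int
    species: str
    is_primary: bool              # primary status
    extends_border: bool          # exceeds the cropped image border
    camera_position: tuple        # (x, y, z) in potting-area meters
    bbox: tuple                   # source bounding box in the full-sized image
    properties: dict = field(default_factory=dict)  # area, perimeter, color stats
    exif: dict = field(default_factory=dict)        # inherited from parent image
```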

Review

For each batch, a weed scientist or agronomic expert selects a random set of 50 images for visual examination. During this process, they review the species detection outcomes, along with the semantic and instance masks. Should any discrepancies arise, the batch's log files are scrutinized for any errors or anomalies. A batch that successfully clears the inspection stage, free of mislabeling or significant errors, proceeds to the next step. Upon approval, the full-sized images and their corresponding sub-images are transferred to the semif-developed and semif-cutouts blob containers in Azure. Additionally, all data products are securely backed up to the NCSU storage facilities, ensuring data integrity and availability.