...

Benchbot operators will manually upload (by dragging and dropping) each batch folder into the “Uploads” blob container using Azure Storage Explorer.
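For reference, a minimal sketch of the equivalent programmatic upload, assuming the azure-storage-blob Python SDK; the connection string and batch folder name are hypothetical placeholders for what an operator would supply:

```python
# Sketch only: programmatic equivalent of the drag-and-drop upload,
# assuming the azure-storage-blob SDK. Connection string and folder
# name are hypothetical placeholders.
from pathlib import Path
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="uploads")

batch_folder = Path("NC_2022-03-11")  # hypothetical batch folder
for file_path in batch_folder.rglob("*"):
    if file_path.is_file():
        # Preserve the batch folder structure inside the container
        with open(file_path, "rb") as data:
            container.upload_blob(name=file_path.as_posix(), data=data)
```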

Metadata includes:

  1. Ground control point locations (.csv), measured in meters.

  2. Species map (.csv) that lists the species in each row.

Running the pipeline and storage

...

Raw images are preprocessed using a color calibration card.
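A minimal sketch of one common card-based approach, fitting a linear color-correction matrix from sampled card patches; the patch values below are hypothetical placeholders, not the pipeline's actual calibration routine:

```python
# Sketch only: card-based color correction, assuming the 24 patch colors
# have already been sampled from the card in the raw image. The `measured`
# and `reference` arrays are hypothetical placeholders.
import numpy as np

measured = np.random.rand(24, 3)   # RGB patch means sampled from the raw image
reference = np.random.rand(24, 3)  # published RGB values for the card patches

# Fit a 3x3 linear correction matrix M minimizing ||measured @ M - reference||
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def calibrate(image_rgb):
    """Apply the fitted correction to a float RGB image in [0, 1]."""
    h, w, _ = image_rgb.shape
    corrected = image_rgb.reshape(-1, 3) @ M
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)
```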


Mapping and Detection

...

The AutoSfM process takes in developed images and ground control point metadata to create a global coordinate reference system (CRS). An orthomosaic, or collage of stitched images, and detailed camera reference information are generated; the latter is used to convert local image coordinates into global potting area locations.

For example, an image 2000 pixels high and 4000 pixels wide has a local center point at (1000, 2000), half its height and half its width, measured in pixels. Camera reference information allows us to project this local center point to a geographical potting area location in meters, (1.23 m, 4.56 m) for example.
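A minimal sketch of this conversion, assuming a per-image 3x3 homography derived from the camera reference information; the matrix values are placeholders:

```python
# Sketch only: local pixel to potting-area projection, assuming a 3x3
# homography H has been derived from the autoSfM camera reference
# information for this image. Matrix values are placeholders.
import numpy as np

H = np.array([[1.2e-3, 0.0,    0.5],
              [0.0,    1.2e-3, 3.0],
              [0.0,    0.0,    1.0]])

def pixel_to_potting_area(x_px, y_px, H):
    """Project a pixel coordinate to potting-area coordinates in meters."""
    x, y, w = H @ np.array([x_px, y_px, 1.0])
    return x / w, y / w

print(pixel_to_potting_area(2000, 1000, H))  # image center point, in pixels
```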

Inputs:

  1. Developed images

  2. Ground control point information (.csv)

Outputs:

  1. Camera matrix and orientation information for each image (.csv)

  2. Field of view (FOV) for each image (.csv)

  3. Processing accuracy assessment (.csv)

  4. Orthomosaic of the potting area (.tif)

Detection

Object detection is performed to identify plant locations and create local bounding box coordinates. More detailed pixel-wise segmentation is performed within the bounding box areas.

  • Model - YOLOv5

  • Single class - “plant”

  • Trained and tested on 753 images captured and labeled for the 2021 OpenCV AI competition.

  • mAP_0.5 = 0.93, mAP_0.5:0.95 = 0.67, recall = 0.9, precision = 0.93

Inputs:

  1. Developed images

  2. Trained model

Outputs:

  1. Local detection results for each image, in normalized xyxy format (.csv).
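A minimal sketch of producing these results, assuming the ultralytics/yolov5 torch.hub entry point and a hypothetical checkpoint path:

```python
# Sketch only: running the trained single-class detector on a developed
# image. The checkpoint and image paths are hypothetical placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="plant_detector.pt")
results = model("developed_image.jpg")

# Normalized xyxy detections: xmin, ymin, xmax, ymax, confidence, class, name
df = results.pandas().xyxyn[0]
df.to_csv("detections.csv", index=False)
```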

[Gallery: detection results from 2022-03-11]

Remap

Local plant detection results are remapped to global potting area coordinates, using the autoSfM camera reference information to infer global bounding box positions.

Inputs:

  1. Images

  2. Camera reference information (.csv)

Outputs:

  1. Detailed metadata with camera information and detection results (.json)
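A minimal sketch of remapping one normalized xyxy detection to potting-area meters, again assuming a placeholder per-image homography derived from the camera reference information:

```python
# Sketch only: normalized xyxy bounding box to global corner coordinates.
# H is a placeholder per-image homography from the camera reference data.
import numpy as np

H = np.array([[1.2e-3, 0.0,    0.5],
              [0.0,    1.2e-3, 3.0],
              [0.0,    0.0,    1.0]])

def remap_bbox(xyxyn, img_w, img_h, H):
    """Convert a normalized xyxy box to global corner coordinates (meters)."""
    x1, y1, x2, y2 = xyxyn
    corners_px = np.array([[x1 * img_w, y1 * img_h, 1.0],
                           [x2 * img_w, y2 * img_h, 1.0]])
    projected = corners_px @ H.T
    return (projected[:, :2] / projected[:, 2:]).tolist()

print(remap_bbox([0.25, 0.25, 0.30, 0.35], 4000, 2000, H))
```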

...

WHY?

Image annotations need species-level information, but the detection model only detects plants and cannot differentiate between species; each bounding box detection provides only the location of a “plant.” To create species-level annotations, each detection therefore needs a species label. We also have many overlapping images, resulting in duplicate segments that can lead to an imbalanced or homogeneous dataset.

Species mapping: Species-level detection for this project (24 species) is unrealistic at this early stage. Instead, we can infer the species of each detection result by combining a user-defined species map with the geospatial information provided by the AutoSfM results. If we know which row or general area each species is located in, then we can label each bounding box appropriately.
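A minimal sketch of this lookup, assuming a hypothetical species_map.csv with row and species columns and a fixed row spacing; both are placeholders for illustration:

```python
# Sketch only: assign a species label from the detection's global position.
# species_map.csv (columns: row,species) and the row spacing are
# hypothetical placeholders.
import csv

ROW_SPACING_M = 0.75  # hypothetical distance between pot rows, in meters

with open("species_map.csv") as f:
    species_by_row = {int(r["row"]): r["species"] for r in csv.DictReader(f)}

def label_detection(global_y_m):
    """Infer species from the detection's global y position (meters)."""
    row = round(global_y_m / ROW_SPACING_M)
    return species_by_row.get(row, "unknown")
```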

...

Unique detection result: Provides unique (primary) bounding box information. The benchbot takes six images along a single row of four pots. These images overlap considerably, so the same plant is often detected, and thus segmented, multiple times at different angles. While multiple angles are useful, it is important to identify the unique, or primary, detection result (when the camera is directly over the plant; see the sketch after this list). Doing so allows us to:

  1. Maximize synthetic image diversity by avoiding reuse of the same plant segment (albeit at slightly different angles), and monitor the distribution of primary vs. non-primary data used for training models. A dataset with many non-unique duplicates, while large, will not be diverse and could lead to homogeneous data and thus poor model performance.

  2. Identify unique plant/pot positions, allowing us to monitor individual plants throughout their growth stages and build detailed phenotypic profiles.
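A minimal sketch of primary-detection selection, under the assumption that the camera is most nearly overhead when a bounding box center falls closest to the image center; all field names are illustrative:

```python
# Sketch only: pick one primary detection per plant from overlapping views.
# Assumes detections were already remapped to global coordinates; the
# dictionary field names are illustrative, not the pipeline's schema.
from collections import defaultdict

def pick_primary(detections):
    """Return one primary detection per plant position."""
    groups = defaultdict(list)
    for d in detections:
        # Group duplicates by global position rounded to the nearest 10 cm
        key = (round(d["global_xy"][0], 1), round(d["global_xy"][1], 1))
        groups[key].append(d)

    def offset_from_center(d):
        # Squared pixel distance from bbox center to image center
        cx, cy = d["img_w"] / 2, d["img_h"] / 2
        bx, by = d["bbox_center_px"]
        return (bx - cx) ** 2 + (by - cy) ** 2

    return [min(g, key=offset_from_center) for g in groups.values()]
```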

Monitoring: Monitor for inconsistencies and errors in image capture across sites using detailed reporting of camera reference information.

...

Segment Vegetation and Cutout Data

...

A combination of digital image processing techniques, including index thresholding, unsupervised classification, and morphological operations, separates vegetation from the background. The resulting plant cutouts will be used for generating synthetic data.

Process Bounding Box Area

  • Crop the image to the bounding box

  • Compute a vegetation index (VI)

  • Multiply the VI by some factor X

  • Perform unsupervised classification

  • Apply a combination of morphological opening and closing operations

  • Resulting cutout (sketched below)
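A minimal sketch of these steps using OpenCV and NumPy; the scale factor and kernel size are illustrative:

```python
# Sketch only: bounding-box segmentation following the steps above,
# assuming OpenCV and NumPy. Scale factor and kernel size are illustrative.
import cv2
import numpy as np

def segment_cutout(image_bgr, bbox_xyxy, scale=1.1):
    x1, y1, x2, y2 = bbox_xyxy
    crop = image_bgr[y1:y2, x1:x2].astype(np.float32)

    # Excess green vegetation index: ExG = 2g - r - b on normalized channels
    b, g, r = cv2.split(crop / 255.0)
    exg = 2 * g - r - b

    # Multiply the VI by a factor to exaggerate vegetation contrast
    exg_scaled = exg * scale

    # Unsupervised classification: 2-cluster k-means on the scaled index
    samples = exg_scaled.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, 2, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    # The cluster with the higher ExG center is vegetation
    veg_cluster = int(np.argmax(centers.ravel()))
    mask = (labels.reshape(exg.shape) == veg_cluster).astype(np.uint8) * 255

    # Morphological opening then closing to remove speckle and fill holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Resulting cutout: vegetation pixels kept, background zeroed
    roi = image_bgr[y1:y2, x1:x2]
    return cv2.bitwise_and(roi, roi, mask=mask)
```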

...

Excess green VI

[Gallery: example plant cutout images]

...