
Conceptual Design

Batch Upload

BenchBot operators upload batch folders of images to the “uploads” blob container with a custom app.

Inputs:

  1. Raw images collected from the BenchBot

  2. Marker locations measured in meters

  3. Species group locations

Outputs:

No processing occurs during upload. Images and metadata are kept in blob storage for further processing.
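
For illustration, a minimal upload sketch using the azure-storage-blob Python SDK; the connection string, batch folder layout, and blob naming scheme are assumptions, not the custom app's actual implementation.

from pathlib import Path

from azure.storage.blob import BlobServiceClient

CONN_STR = "<azure-storage-connection-string>"  # assumption: supplied via config
service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("uploads")

def upload_batch(batch_dir: str) -> None:
    """Upload every file in a batch folder, preserving relative paths."""
    root = Path(batch_dir)
    for path in root.rglob("*"):
        if path.is_file():
            blob_name = f"{root.name}/{path.relative_to(root)}"
            with path.open("rb") as data:
                container.upload_blob(name=blob_name, data=data, overwrite=True)

upload_batch("benchbot_batch_2022-06-01")  # hypothetical batch folder name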

Preprocessing

Raw images are preprocessed using a color calibration card; a sketch of this step follows the outputs list below.

Inputs:

  1. Raw images from “uploads” blob container

Outputs:

  1. Processed JPEGs

  2. Masks that remove the blue BenchBot area, reducing AutoSfM alignment error
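
A minimal sketch of the two preprocessing outputs, assuming a linear color-correction matrix fit from the calibration card and an HSV range for the blue BenchBot area; the matrix and HSV thresholds below are placeholder assumptions.

import cv2
import numpy as np

def color_correct(img_bgr, ccm):
    """Apply a 3x3 color-correction matrix (e.g. fit by least squares
    against the calibration card patches) to a BGR image."""
    flat = img_bgr.reshape(-1, 3).astype(np.float32)
    corrected = flat @ ccm.T
    return np.clip(corrected, 0, 255).astype(np.uint8).reshape(img_bgr.shape)

def benchbot_mask(img_bgr):
    """Return a mask that is 0 over the blue BenchBot area, 255 elsewhere."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 80, 40), (130, 255, 255))  # assumed HSV range
    return cv2.bitwise_not(blue)

img = cv2.imread("raw_image.jpg")           # hypothetical input
ccm = np.eye(3, dtype=np.float32)           # identity placeholder for the fitted matrix
cv2.imwrite("developed.jpg", color_correct(img, ccm))
cv2.imwrite("mask.png", benchbot_mask(img))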

Mapping and Detection

AutoSfM

The AutoSfM process takes in developed images and ground control point metadata to create a global coordinate reference system (CRS). An orthomosaic (a collage of stitched images) and detailed camera reference information are generated; the latter is used to convert local image coordinates into global potting area locations.

For example, an image 2000 pixels high and 4000 pixels wide has a local center point at (1000, 2000), half its height and half its width, measured in pixels. Camera reference information allows us to project this local center point to a potting area location in meters, (1.23 m, 4.56 m) for example. Knowing the general location of each species' pot group, we assign a species label to each generic “vegetation” detection result. Relating global potting area locations to local image bounding box coordinates lets us fill in the missing species label, as sketched below.
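
A minimal sketch of this projection, assuming a nadir (straight-down) camera and a known ground sampling distance (GSD); the real AutoSfM camera model also accounts for rotation and lens distortion, and the parameter names here are illustrative assumptions.

def pixel_to_global(px, py, img_w, img_h, cam_x, cam_y, gsd_m_per_px):
    """Map pixel (px, py) to potting area meters under a nadir camera."""
    dx = (px - img_w / 2.0) * gsd_m_per_px   # meters right of the camera center
    dy = (img_h / 2.0 - py) * gsd_m_per_px   # meters "up" (image y grows downward)
    return cam_x + dx, cam_y + dy

# The example above: a 4000 x 2000 image has its local center at half the
# width and half the height; projecting it returns the camera's own location.
print(pixel_to_global(2000, 1000, 4000, 2000, cam_x=1.23, cam_y=4.56,
                      gsd_m_per_px=0.0005))  # -> (1.23, 4.56)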

Inputs:

  1. Developed images

  2. Masks

  3. Ground control point information (.csv)

Outputs:

  1. Camera reference information for each image (.csv)

  2. Field of view (FOV) for each image (.csv)

  3. Error statistics (.csv)

  4. Marker reference assessment (.csv)

  5. Orthomosaic of the potting area (.tif)

  6. Digital elevation model (.tif)

Detection

Object detection is performed to identify plant locations and create local bounding box coordinates; a minimal inference sketch follows the outputs list below. More detailed pixel-wise segmentation is then performed within the bounding box areas.

  • Model - YOLOv5

  • Single class - “plant”

  • Trained and tested on 753 images captured and labeled for the 2021 OpenCV AI competition.

  • mAP_0.5 = 0.93, mAP_0.5:0.95 = 0.67, recall = 0.9, precision = 0.93

Inputs:

  1. Developed images

  2. Trained detection model

Outputs:

  1. Local detection results for each image, in normalized xyxy format (.csv)
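
A minimal inference sketch using the public YOLOv5 torch.hub interface; the weights filename and image path are assumptions. The single “plant” class and normalized-xyxy output format come from this page.

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="plant_detector.pt")
results = model("developed.jpg")

# results.xyxyn holds one table per image with rows of
# [x1, y1, x2, y2, confidence, class], coordinates normalized to [0, 1].
detections = results.pandas().xyxyn[0]
detections.to_csv("detections.csv", index=False)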

Remap

Local plant detection results are remapped to global potting area coordinates; see the sketch after the outputs list below.

Inputs:

  1. Images

  2. Camera reference information (.csv)

Outputs:

  1. Detailed metadata with camera information and detection results (.json)
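
A minimal remapping sketch under the same nadir-camera assumption as the AutoSfM example above; the camera record fields (x, y, gsd) are stand-ins for values from the camera reference .csv.

def remap_bbox(bbox_n, img_w, img_h, cam_x, cam_y, gsd):
    """bbox_n: (x1, y1, x2, y2) with coordinates normalized to [0, 1]."""
    def to_global(nx, ny):
        dx = (nx * img_w - img_w / 2.0) * gsd
        dy = (img_h / 2.0 - ny * img_h) * gsd
        return cam_x + dx, cam_y + dy
    (gx1, gy1), (gx2, gy2) = to_global(bbox_n[0], bbox_n[1]), to_global(bbox_n[2], bbox_n[3])
    return gx1, gy1, gx2, gy2

print(remap_bbox((0.40, 0.45, 0.60, 0.55), 4000, 2000,
                 cam_x=1.23, cam_y=4.56, gsd=0.0005))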

Vegetation Segmentation

Inputs:

Outputs:

WHY?

Image annotations need species-level information, but the detection model only provides the locations of “plants”. We also have many overlapping images, resulting in duplicate segments that can lead to an imbalanced or homogeneous dataset.

Species mapping: We can infer the species of each detection result with a user-defined species map and geospatial data. If we know what row or general geographic area each species is located in, we can label each bounding box appropriately.
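
A minimal sketch of such a species map, assuming each species occupies a known x-range (row) of the potting area; the ranges and species names are illustrative assumptions.

SPECIES_MAP = [
    ((0.0, 1.0), "palmer amaranth"),
    ((1.0, 2.0), "common ragweed"),
    ((2.0, 3.0), "velvetleaf"),
]

def species_for(global_x):
    """Return the species whose row contains this potting area x-coordinate."""
    for (lo, hi), name in SPECIES_MAP:
        if lo <= global_x < hi:
            return name
    return "unknown"

# Label a remapped detection by its bounding-box center.
gx1, gy1, gx2, gy2 = 1.2, 4.5, 1.4, 4.7
print(species_for((gx1 + gx2) / 2))  # -> "common ragweed"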

Unique detection result: The BenchBot takes 6 images along a single row of 4 pots. These images overlap considerably, so the same plant is often detected, and thus segmented, multiple times at different angles. While multiple angles are useful, it is important to identify the unique, or primary, detection result (the one captured when the camera is directly over the plant), as sketched after this list. Doing so allows us to:

  1. Maximize synthetic image diversity and avoid using the same plant segment (albeit at slightly different angles) multiple times, which could lead to homogeneous data and thus poor model performance.

  2. Identify unique plant/pot positions throughout their growth stages, leading to detailed phenotypic profiles.
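
A minimal sketch of selecting the primary detection, assuming each detection carries its global center and the normalized offset of its bounding-box center from the image center; grouping by rounded global position is an illustrative heuristic, not the project's actual method.

from collections import defaultdict

def pick_primary(detections):
    """Keep, per plant, the detection whose bbox center is closest to its
    image center (camera most nearly overhead)."""
    groups = defaultdict(list)
    for det in detections:
        key = (round(det["gx"], 1), round(det["gy"], 1))  # ~10 cm position bins
        groups[key].append(det)
    return [min(dets, key=lambda d: d["center_offset"]) for dets in groups.values()]

dets = [
    {"gx": 1.23, "gy": 4.56, "center_offset": 0.02},  # nearly nadir
    {"gx": 1.21, "gy": 4.58, "center_offset": 0.41},  # same plant, oblique view
]
print(pick_primary(dets))  # keeps only the near-nadir detection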

Monitoring: Monitor for inconsistencies and errors in image capture across sites using detailed reporting of camera reference information.

Segment Vegetation and Cutout Data

A combination of digital image processing techniques, including index thresholding, unsupervised classification, and morphological operations, separates vegetation from background. The resulting plant cutouts are used for generating synthetic data.
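
A minimal sketch of the index-thresholding and morphological steps, using an Excess Green (ExG) index with Otsu thresholding; the unsupervised classification step is omitted here, and the kernel size is an assumption.

import cv2
import numpy as np

def segment_vegetation(img_bgr):
    """Return a binary vegetation mask from a BGR image."""
    b, g, r = cv2.split(img_bgr.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b                              # Excess Green index
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

img = cv2.imread("developed.jpg")                  # hypothetical input
cutout = cv2.bitwise_and(img, img, mask=segment_vegetation(img))
cv2.imwrite("cutout.png", cutout)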


Synthetic Data

TODO:

  1. implement species class labels

  2. add transformations to cutouts, pots, and backgrounds

  3. make more pots
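
A minimal compositing sketch for this step: paste a masked plant cutout onto a background at a random position with a random horizontal flip. File names and the (deliberately small) transformation set are assumptions.

import random

import cv2

def composite(background, cutout, mask):
    """Paste a masked cutout onto a copy of the background; assumes the
    background is larger than the cutout."""
    bg = background.copy()
    if random.random() < 0.5:                  # simple augmentation: horizontal flip
        cutout, mask = cv2.flip(cutout, 1), cv2.flip(mask, 1)
    h, w = cutout.shape[:2]
    y = random.randint(0, bg.shape[0] - h)
    x = random.randint(0, bg.shape[1] - w)
    region = bg[y:y + h, x:x + w]
    region[mask > 0] = cutout[mask > 0]        # writes through to bg
    return bg

bg = cv2.imread("background.jpg")              # hypothetical inputs
cutout = cv2.imread("cutout.png")
mask = cv2.imread("cutout_mask.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("synthetic.png", composite(bg, cutout, mask))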

Models

TBD
