
SemiField

Below are the steps involved in processing the semi-field data and the estimated person-hours needed to accomplish them:

Assumptions: there will be 9 batches per week, 3 batches from each partner site (MD, NC, and TX)

Glossary

  • Batch = Collection of images collected during one BenchBot run.

  • Season = Group of plants which are set up at the same time (see Task #3 for examples).

Task #1: Pre-processing

  • Skills needed: basic command line and python skills, attention to detail.

  • Hours required to complete this task on average per batch: 0.5 h (30 min)

  • On average, 8 batches can be processed per day, due to the time it takes for a batch to go through the pipeline once the manual inspection has been done.

Subtask #1-1: Pre-processing Backlog

There are currently 179 batches that need to be processed to get us to the point where we can start maintaining this pipeline on a weekly basis.

  • Labor estimation:

    • Total: ~90 h (179 batches × 0.5 h)

    • Per person per day (limited to): 1.5 h

    • Weeks needed to complete the task: 12 (if one person works on it 5 days a week)

  • Possible candidates for this role: Zack, Jordan, Courtney

  • Goal: finish processing the backlog by the end of Jan.

  • Each person is assigned ⅓ of the total batches (179/3 ≈ 60)

  • Who gets which batches will be on a shared spreadsheet (Preprocessing Backlog Sheet)

  • Minimum of 16 batches/week/person
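As a sanity check, the backlog arithmetic above can be reproduced with a short Python snippet (the numbers are the figures quoted in this section, not measured values):

```python
import math

# Figures quoted above for the pre-processing backlog.
BACKLOG_BATCHES = 179
HOURS_PER_BATCH = 0.5
PEOPLE = 3                         # Zack, Jordan, Courtney
BATCHES_PER_WEEK_PER_PERSON = 16
WORKDAYS_PER_WEEK = 5

total_hours = BACKLOG_BATCHES * HOURS_PER_BATCH                               # ~90 h
batches_per_person = math.ceil(BACKLOG_BATCHES / PEOPLE)                      # ~60
weeks_one_person = math.ceil(BACKLOG_BATCHES / BATCHES_PER_WEEK_PER_PERSON)   # 12
# ~1.6 h/day, consistent with the ~1.5 h/day cap above.
hours_per_day = BATCHES_PER_WEEK_PER_PERSON * HOURS_PER_BATCH / WORKDAYS_PER_WEEK

print(total_hours, batches_per_person, weeks_one_person, hours_per_day)
```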

Subtask #1-2: Pre-processing weekly

This happens on a weekly basis to keep the incoming data moving through the pipeline as it gets uploaded to the Azure storage.

  • Labor estimation: 

    • Estimated batches per week: 9 (3 from each site)

    • Hours/week: 4.5 h (the hours have to be split across at least 3 days)

    • Candidates for this role: Zack, Jordan
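The weekly load follows directly from the assumptions at the top of the page; a minimal sketch:

```python
# Assumed weekly intake: 3 batches from each of the 3 partner sites (MD, NC, TX).
BATCHES_PER_WEEK = 3 * 3
HOURS_PER_BATCH = 0.5
MIN_DAYS_SPREAD = 3    # the hours have to be split across at least 3 days

hours_per_week = BATCHES_PER_WEEK * HOURS_PER_BATCH   # 4.5 h
hours_per_day = hours_per_week / MIN_DAYS_SPREAD      # 1.5 h at most per day

print(hours_per_week, hours_per_day)
```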

Task #2: SfM bench reconstruction

This is the main bottleneck of the pipeline! Each time the QR codes are moved, a lot of resources need to be pulled in, first to identify the issues and then to adjust parameters accordingly and rerun the reconstruction. Markers MUST stay fixed at all sites moving forward. With BBot 2.0's RTK capability, it's possible that the QR codes won't be needed anymore; until then, we have to guarantee that marker positions stay the same.

  • Skills needed for Tasks #2-#5: each task demands a mix of technical skills (such as command line usage, Python programming, and GIS knowledge) and soft skills (like attention to detail and problem-solving):

  1. Programming and scripting

  • Command line/bash

  • Python

  • Git

  2. Logging

  3. Visual inspection

  • Labor estimation: 

    • If QR codes are not in the same position (Auto SfM needs to be rerun):

      • Identify issues: 2 hours

      • Rerun: 2 hours

    • If QR codes are in the same position

      • 5 min/batch

      • 45 min/week (Assuming 9 batches per week)

  • Candidates for this role: Nav and Matthew

It takes ~48 h to get the output of the Auto SfM each time a rerun is needed. If everything goes perfectly, it takes 4 h on average to process 500 to 700 images (a full bench).
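For planning purposes, the per-image throughput and the cost of a moved marker implied by the figures above work out roughly as follows (a back-of-envelope sketch using the quoted numbers, not a benchmark):

```python
# Quoted figures: ~4 h per clean run on a full bench of 500-700 images,
# plus a ~48 h wall-clock wait for Auto SfM output whenever a rerun is needed.
HOURS_PER_CLEAN_RUN = 4
IMAGES_LOW, IMAGES_HIGH = 500, 700
RERUN_TURNAROUND_HOURS = 48

sec_per_image_best = HOURS_PER_CLEAN_RUN * 3600 / IMAGES_HIGH    # ~20.6 s/image
sec_per_image_worst = HOURS_PER_CLEAN_RUN * 3600 / IMAGES_LOW    # 28.8 s/image

# A single moved QR code costs at least identify (2 h) + rerun (2 h)
# plus the 48 h turnaround, versus 5 min/batch when markers stay fixed.
rerun_cost_hours = 2 + 2 + RERUN_TURNAROUND_HOURS                # 52 h

print(sec_per_image_best, sec_per_image_worst, rerun_cost_hours)
```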

Task #3: Make shapefile

The shapefile contains the information on which set of pots corresponds to which species. It is made manually, using QGIS to draw a shapefile over the completed orthomosaic of the potting area.

A season here refers to a group of plants which are set up at the same time. For example, cover crops planted in fall 2023 and killed in 2024 are one season; weeds planted in spring 2023 and killed at the end of summer 2023 are another.

The labor estimations below are valid only if the QR codes are always in the same position. If the QR code position changes, the shapefile has to be redone.

  • Skills needed: see task #2

  • Labor estimation:

    • 1.5 h per season per location

    • 3-4 seasons/year

    • 3 locations

    • 13.5-18 h total per year

Task #4: Put all previously processed and generated data through the rest of the pipeline

This step includes plant detection, labeling, segmentation, and reviewing results on a random sample of ~10 output images.

  • Skills needed: See task #2

  • Labor estimation:

    • Putting data through pipeline and reviewing results: 15 min/batch

    • Moving final images to local storage and Azure, and checking the upload: 10 min/batch

    • 3.75 h/week total

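The Task #4 weekly total can be verified the same way (figures as quoted above, assuming 9 batches per week):

```python
# Per-batch minutes quoted above for Task #4.
MIN_PIPELINE_REVIEW = 15   # putting data through the pipeline + reviewing results
MIN_STORAGE_UPLOAD = 10    # moving final images to local storage/Azure + checking upload
BATCHES_PER_WEEK = 9

hours_per_week = (MIN_PIPELINE_REVIEW + MIN_STORAGE_UPLOAD) * BATCHES_PER_WEEK / 60  # 3.75 h

print(hours_per_week)
```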