...

NAIR is made up of three broad plant datasets: Weeds (WIR), Cover crops (CCIR), and Cash crops (CIR). Each can be further divided into two main scene types, Semi-field and real-world Field, each of which provides a unique set of advantages and use cases.

The images have been annotated with information about the location and type of agronomic plant present in the image. The annotations were created using a combination of manual labeling and computer vision algorithms, ensuring high accuracy and consistency.

The datasets also include metadata about the conditions under which each image was captured, including the soil type, weather conditions, and the growth stage of the weeds. This information can be used to better understand the diversity of the dataset and the range of conditions that the algorithms are exposed to.

In summary, the weed image dataset is a comprehensive collection of images and metadata designed to support research in the field of weed management. The diverse and large dataset provides a wealth of data for computer vision algorithms to learn from, helping to improve the accuracy and efficiency of automatic weed detection in agricultural fields.

Semi-Automatic Labeling

A major component of NAIR and the semi-automatic labeling approach is the development of plant segments, or cutouts, that can be used to build temporary datasets of synthetic images for training annotation-assistant detection and segmentation models. Synthetic data, along with weak image labels, is used to iteratively refine whole-image labels.
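The cutout-based synthesis step described above can be sketched as alpha-compositing a plant cutout onto a background image, which yields a synthetic training image together with a free bounding-box label. The function and array shapes below are illustrative assumptions, not NAIR's actual pipeline code:

```python
import numpy as np

def composite_cutout(background, cutout, alpha, x, y):
    """Alpha-composite an RGB plant cutout onto a background at (x, y).

    background : HxWx3 uint8 scene image (e.g. bare soil)
    cutout     : hxwx3 uint8 plant segment extracted earlier
    alpha      : hxw   uint8 mask (255 = plant pixel, 0 = transparent)

    Returns the synthetic image and the cutout's bounding box
    (x, y, w, h), which serves as a weak detection label.
    """
    synth = background.copy()
    h, w = cutout.shape[:2]
    region = synth[y:y + h, x:x + w].astype(float)
    a = alpha[..., None].astype(float) / 255.0
    synth[y:y + h, x:x + w] = (a * cutout + (1 - a) * region).astype(np.uint8)
    return synth, (x, y, w, h)

# Hypothetical example: a 100x100 soil background and one 20x20 cutout.
rng = np.random.default_rng(0)
background = rng.integers(60, 90, (100, 100, 3), dtype=np.uint8)
cutout = np.full((20, 20, 3), 200, dtype=np.uint8)  # bright "plant"
alpha = np.full((20, 20), 255, dtype=np.uint8)      # fully opaque mask

image, bbox = composite_cutout(background, cutout, alpha, x=40, y=30)
```

Repeating this with many cutouts, backgrounds, and random placements produces a large synthetic dataset whose labels come for free, which is what makes it practical to bootstrap the annotation-assistant models.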

...