Purpose:
The potential biomass from agricultural land is not a constant value but a dynamic one, influenced by changes in many factors and characteristics. By monitoring the productivity of their plants, farmers can manage the production process. Identifying which parts of their cover-cropped fields are performing better or worse is crucial for determining how much nitrogen to apply for the next cash crop. Knowing when and what to harvest can also result in higher yields, as every day matters. Therefore, a biomass calculator is an essential part of the decision support tool to aid farmers in their work.
Overview of Steps from User Perspective:
On the home page, ask the user if they have sampled biomass or if they would like to use satellite data to estimate biomass
If they have sampled biomass, the flow is exactly as it currently is in the tool
If they want to use satellite data to estimate biomass, the flow will be as shown below
Location tab
Stays the same as the current develop branch: name field, select polygon.
Remove the calculate biomass button from this tab
Biomass will be displayed at the end
Soil tab
Add text saying
This model will use the SSURGO soil data from your field to estimate cover crop decomposition
No sliders to adjust soil data
Eventually fetch SSURGO data for each pixel
Display the range of Organic Matter, Bulk Density, and Soil Inorganic N
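As a rough illustration of what the Soil tab could display, the Python sketch below computes the min/max range of each soil property across per-pixel SSURGO values. The property names and the assumption that per-pixel values have already been fetched are placeholders, since the SSURGO fetch itself is still to be designed.

    def soil_property_ranges(pixels):
        """Return the (min, max) range of each displayed soil property.

        `pixels` is a list of dicts of per-pixel SSURGO values; the keys
        below are placeholder names, not an existing schema.
        """
        props = ["organic_matter_pct", "bulk_density_g_cm3", "inorganic_n_ppm"]
        return {p: (min(px[p] for px in pixels), max(px[p] for px in pixels))
                for p in props}

    # Example: ranges shown read-only on the Soil tab (no sliders).
    example_pixels = [
        {"organic_matter_pct": 2.1, "bulk_density_g_cm3": 1.35, "inorganic_n_ppm": 9.0},
        {"organic_matter_pct": 3.4, "bulk_density_g_cm3": 1.28, "inorganic_n_ppm": 12.5},
    ]
    print(soil_property_ranges(example_pixels))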
Cover crop tab
Add cover crop planting date
Remove the dry biomass, fresh biomass, and water content fields → water content will always be assumed to be around 80%
Select cover crop species, planting date with year, and termination date with year (add some error handling; the termination date could be in the future at this point?)
Two scenarios.
Past mode → running from planting until a date in the past or today.
Past + future mode → running from planting to a date in the future.
If the date is in the future, estimate today's biomass with the HLS API using real weather, then use the 5-year-average route of the NCALC API, seeded with the current biomass (from today), to make a future prediction (see the sketch after this list).
Quality values come from a lookup table and should not be editable via sliders (page 2 of the cover crop tab goes away) (this will need the lookup table to exist, ignore for now)
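A minimal Python sketch of the mode selection and date error handling described above, assuming the dates arrive as `date` objects; the function name and return values are placeholders, not the tool's actual code.

    from datetime import date
    from typing import Optional

    def select_run_mode(planting: date, termination: date,
                        today: Optional[date] = None) -> str:
        """Decide between the two scenarios and validate the dates."""
        today = today or date.today()
        if termination <= planting:
            raise ValueError("Termination date must be after the planting date.")
        if termination <= today:
            # Past mode: planting until a date in the past (or today), using
            # observed HLS imagery and real weather.
            return "past"
        # Past + future mode: estimate today's biomass from HLS, then call the
        # NCALC 5-year-average route seeded with it to project forward.
        return "past_future"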
Cash crop tab
No changes
Output
Two tabs
One is just like the current output (this will need the lookup table to exist, ignore for now)
Calculate the average biomass over all pixels and then use that to generate the existing graphs (see the sketch below).
The other shows the geospatial biomass and nitrogen variability at the point of termination
Display a map of biomass for the selected polygon
As another layer display nitrogen released at a specific date (this will need the lookup table to exist, ignore for now)
Have a text box to select a new date to run the model on
Have an export button that produces a file to feed to a tractor for spatial nitrogen application recommendations (this will need the lookup table to exist, ignore for now)
Aside note → only use the polygon centroid for weather data
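The sketch below illustrates two output-side computations mentioned above: averaging the per-pixel biomass map for the existing graphs, and taking the polygon centroid for weather data. It assumes the biomass map is a NumPy array with NaN outside the field and that the geometry is GeoJSON handled with shapely; both are assumptions about the eventual implementation, not settled choices.

    import numpy as np
    from shapely.geometry import shape

    def field_summary(biomass_map: np.ndarray, field_geojson: dict):
        """Average biomass plus the centroid used for weather lookups."""
        # Average biomass over all valid pixels feeds the existing graphs.
        avg_biomass = float(np.nanmean(biomass_map))
        # Per the aside note, weather is pulled for the polygon centroid only.
        centroid = shape(field_geojson).centroid
        return avg_biomass, (centroid.y, centroid.x)  # (lat, lon)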
Overview of Steps in Calculation:
For a given region representing a farm field polygon, we will do the following.
Receive polygon.
Calculate a series of NDVI values over that polygon using satellite images from Sentinel-2 and Landsat-8 with a pixel size of 30m x 30m.
This series of NDVI values will have approximately one sample per month in the fall and winter, and then approximately one sample per week in the spring, when the crop starts growing more rapidly.
By interpolating and applying regression analysis to those NDVI values, a map of estimated total biomass for each pixel at the point of termination will be generated (see the sketch after this list).
Use a lookup table to get nitrogen, lignin, and other physical property estimates for the cover crop.
Feed the species, planting date, termination date, biomass map, and physical properties into the NCALC API and receive a nitrogen release map at that point in time.
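A rough per-pixel sketch of the NDVI-to-biomass portion of this chain: NDVI from the red and near-infrared bands, linear interpolation of the sparse NDVI time series to the termination date, and a placeholder NDVI-to-biomass regression. The regression coefficients shown are purely illustrative; the real ones come from the remote sensing team's calibration.

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """NDVI from the near-infrared and red reflectance bands."""
        return (nir - red) / (nir + red + 1e-10)

    def biomass_at_termination(ndvi_stack, obs_days, term_day, a=5000.0, b=-500.0):
        """ndvi_stack: (time, rows, cols) NDVI images; obs_days / term_day are
        days since planting. a and b are placeholder regression coefficients."""
        t, rows, cols = ndvi_stack.shape
        flat = ndvi_stack.reshape(t, -1)
        # Linearly interpolate each pixel's NDVI series to the termination date.
        ndvi_term = np.array([np.interp(term_day, obs_days, flat[:, i])
                              for i in range(flat.shape[1])])
        # Placeholder linear regression: biomass (kg/ha) = a * NDVI + b.
        return (a * ndvi_term + b).reshape(rows, cols)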
Why we need AWS:
The datasets used come from a NASA-maintained server under an initiative called HLS (Harmonized Landsat Sentinel-2). The benefit of using this service is the harmonization and calibration performed across both satellite datasets. NASA provides an API endpoint with free access to query and download the satellite images. The API can only be accessed from an AWS instance in the US West (Oregon) region, hence the need for an AWS VM instance to build this tool. Our pipeline starts with a user request containing their farm geometry; using Python, we request the images covering that specific farmland from the HLS server via our server on the AWS VM instance. The requested imagery is then processed according to our remote sensing team's algorithm, and the biomass values are provided to the user through our API endpoints.
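For context, a minimal sketch of what the image-search part of this pipeline might look like in Python, using NASA's CMR STAC search endpoint for the LPCLOUD catalog. The endpoint URL and collection IDs are as currently documented by LP DAAC but should be verified, and downloading the returned assets additionally requires Earthdata authentication, which is omitted here.

    import requests

    STAC_URL = "https://cmr.earthdata.nasa.gov/stac/LPCLOUD/search"

    def search_hls(field_geojson: dict, start: str, end: str, limit: int = 100):
        """Find HLS scenes intersecting the farm polygon in a date range."""
        payload = {
            "collections": ["HLSL30.v2.0", "HLSS30.v2.0"],  # Landsat + Sentinel-2
            "intersects": field_geojson,
            "datetime": f"{start}T00:00:00Z/{end}T23:59:59Z",
            "limit": limit,
        }
        resp = requests.post(STAC_URL, json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()["features"]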
Server architecture:
1x t3a.xlarge instance running Ubuntu 20.04.
1x Elastic IP to maintain a constant IP address for DNS.
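A hedged boto3 sketch of provisioning this setup; the AMI ID is a placeholder for an Ubuntu 20.04 image, and region, networking, and key-pair details are assumptions left out for brevity.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    # Launch the single t3a.xlarge instance (placeholder Ubuntu 20.04 AMI ID).
    instance = ec2.run_instances(
        ImageId="ami-XXXXXXXXXXXXXXXXX",
        InstanceType="t3a.xlarge",
        MinCount=1,
        MaxCount=1,
    )["Instances"][0]
    instance_id = instance["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Allocate an Elastic IP and attach it so DNS can point at a constant address.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId=instance_id, AllocationId=eip["AllocationId"])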
Access to University Resources
This EC2 instance will not need any access to university data, so no additional configuration (VPN, etc.) will be needed.