Meetings
User testing - list of users (Dec 9 - 9-12)
collated dev tasks
November 19th
metadata:
primary: date, img type (rgb/multispec)
secondary: 'research_station', 'cloudiness' (can expect buckets of 0-20, 20-40, etc), 'camera_make', 'camera_model', 'pilot_name', 'comments' (truncated to 256 characters) - see the example document below
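A minimal sketch of what one upload's metadata document might look like in MongoDB, assuming the primary/secondary split above; the exact key names and values are illustrative, not final:

    # hypothetical metadata document for one upload (names/values illustrative)
    flight_metadata = {
        # primary
        "date": "2024-11-19",
        "img_type": "rgb",               # "rgb" or "multispec"
        # secondary
        "research_station": "sandhills",
        "cloudiness": "20-40",           # bucketed dropdown value
        "camera_make": "DJI",
        "camera_model": "ZenmuseP1",
        "pilot_name": "example pilot",
        "comments": "free text comment"[:256],   # truncated to 256 characters
    }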
site title: “product name TBD” (get user input)
logo: ncstate + usda logo
favicon: currently nc state logo, can be replaced with a simple drone
add description/caption to the video to show the user that the map is on the right? but we already use an “arrow” in the text above the video to indicate that
November 5th
devs ready for minor ui changes
ci/cd - dev prod + dev local done. prod left
superusers to use app 1 on separate VM
@jinam talk to jevon - get srs server on campus
all superintendents attending, some breeders - total 10 so far
akshat - code for UI changes done
OL (OpenLayers) existing tools for alleyways/manipulation of plots + does Mapbox have anything?
dropdown to add alleyway height/width - initial
+ allow user to edit it
October 15th
Dec 9 - 9-12 in-person user testing
setup new VM with OIT for app 1 - on-prem for superusers
OIT - will in the future create new VLAN for research computing
jinam - debugging issues in HPC processing
October 8th
chris submitting ticket to cals_it to install app 1 on the lenovo server for ‘superusers’. Jinam work on setting upload timeout to many hours to allow for the uploads.
the December 9th week is #1 choice for user testing with GameTheory. User testing will be a face to face 2-3 hour workshop like event. Fikret has a graduate student class that week that involves folks like Joe and Jeff. Chris check with Fikret for schedule and possible cancellation due to low registration.
setting up development environment for app 2 on the same VM. Will use containers to separate development and production environments. Jinam working on it.
Continued bugs on sfm code that prevents use of the gpu’s and subsequent timeouts on the cpu’s. Possible use of Vyshnavi given all the competing demands on Jinam. Chris checking with Mikah and Steven about using her time.
GameTheory coming into town for face to face user testing.
September 24th
Question: what is the resolution of images for each sensor
GT - Jinam gave overview, Susan getting unityID access
Research station - geojson files for field names
Dev env for app 2 - scope of the development? app 1 + 1.5 + 2 or app 2 in a silo
CI/CD or separate prod and dev env → @Jinam next step
Edit puppet files → stop setenforce 0 + nginx default config for app2
Request service account for app1.5 → Jacob talking to Eric
One more flight for central and sandhills in the next 2-3 weeks
Chris - move metashape + pdata + raw folders from sandhills locker to a “free-ish” locker
September 17th
separate scripts for moving Sandhills 2023 data into the MongoDB. Jinam has already started script. Not a high priority.
Jacob - try using multiple GPUs + using whole node
long term looking at what in ODM will not run on H100’s
We are using whole node because LSF was using lots of cores and causing concern. LSF requests
Chris getting GameTheory onto Unity ID with Susan
Sep 10th
Rob - processing of data
User testing
delete flights? - delete processed data or delete the raw images as well. interface to manage flight data - database + files.
get display name from the pilot
sat img for spatial query / use mapbox?
use Rob’s geotiff of field names to overlay on sat img
tooltips etc for better ux
Sep 3rd
Amanda - ontology stuff update
Catchup on adding data for 2024 - running through the whole pipeline
Jacob - talk to HPC team about preferred queue
NoSQL db - dashboarding + other insights/saved queries tool
Cloudiness dropdown - code done, need to merge
Jinam - update HPC codebase (add cleaned up code)
Jinam - working on adding research stations to DPP
Additional data product - download one image for the whole grid too along with images for each plot
User testing in October
Plant breeders or crop production
Anna locke
Ben
Vasu
susana
ruben alvarez
matt krakowsky
rachel -
dominic
jeff dunne
nonoy bandillo
jim holland
reza
craig
hamid
carlos
jessica spencer
superintendents
phillip winslow
jeremy martin
john Garner
keith starke
Superusers
jeremy davis -
jason ward
brynna
Involve GameTheory for user testing
UI updates: Tooltips for users - shift + cursor for moving map, etc.; feedback + home button on last page
Eric Sills question for Chris. GitHub actions - service account idea.
Aug 27th
Jevon - going to sandhills to figure out networking problems by Friday
Jinam - change orthophoto resolution (parameter updates for singularity)
Multi folder upload - does it do recursive traversal?
Rob - flying over sandhills and central
Both P1 sensors - error - stops taking pictures midflight
Col width and row width + grid should get you 80% of the way as far as accuracy of the plots is concerned
Need ability to add multiple grids on the same map
Next steps discussion on version 1 and 2
Joe- just delivering plot images
Rob- threshold masking should be in scope but not ai enabled. Use HUE slider or something similar to mask out soil.
App 2 running on vmfarm - http://dronepilotapp.psi.ncsu.edu/
Next dev steps for scoping
Alley way stuff
Slider for index/rgb values to create a mask
Changing the display name (maybe can change to date and research station with a smaller component with some other info)
Change spatial query map to satellite imagery
Aug 20th
Ask users in app 2 for the breedbase details required - trialDBID, studyDBID
Pop-up for user authentication of breedbase/BrAPI endpoint
Does it make sense to have a separate page for exports - create a decision tree to direct users through different flows?
QR code for trial details has info in breedbase - https://ncsu-maizebase.net/user/login?goto_url=%2Fbreeders%2Ftrial_phenotyping%3Ftrial_id%3D207
Get ontology info from imagebreed - https://github.com/nickmorales/imagebreed/tree/master/ontology
Get Study Name from the user in info modal
Jinam - taking over app1.5 problems from OIT + working on script to import previous data into db
Akshat - working on multiple folder upload for app1
Aug 13th
HPC - code problems fixed, ortho intelligence moved to singularity
HPC - db networking problem still exists
Using azure db as the prod db for now. Will move to dronepilotdb when networking fixed
Chris talking to Andy at 1 about storage locker creation
Better UX - allowing uploads of multiple folders
Cloudiness - add dropdown instead of text - fully sunny, fully cloudy, 0-20, 20-40, 40-60, 60-80, 80-100% cloudiness
Aug 6th
Limitation - app1 only able to upload a single file, requires user to select a single folder
App1.5 networking issue should be resolved by Gregory, Jinam will test it today
Add last year’s processed orthos to app2 - direct add into the database
Adding folders for specific users + multiple locations on OIT (recheck next week - Andy said would be done by Tuesday)
Plan to work on adding authentication
App2 - we have schema for exporting, can interact with drawing plot boundaries
Next step → reconfigure app1.5 and app2 to deal with HPC structure
Manually will add the already processed (2023) data to App2 - Jinam
Maybe have Akshat test the BrAPI export in sending data to ImageBreed endpoint
July 30th
Jinam + jacob meeting for app1.5 done. Jinam updated code for Jacob’s hpc processing scripts
Jacob - to do unit testing + pipeline testing with small dataset by end of week
Jinam - make test database entry for small dataset used by Jacob
cron will run at midnight every day
Jinam - working on adding dropdown + maintaining folder structure for separate folders for research stations
code changes to app1.5 will be required when folders are separated
After testing of app1.5, Rob’s uploaded data will be processed
July 16
app 1.5 is still not running; Jinam is meeting with Jacob July 30th
Joe tested installation of app 1 and it wasn’t bad. Still concerns that the 20-30 people using drones might struggle with the installation. Rob and Joe more experienced with software.
Looking at installing app 1 on a server on campus. Either the VM farm or a stray server in CALS. Cannot be on the Sandhills workstation since it will soon be at the station. Chris following up with Jevon.
Follow up from meeting. Chris and Jinam met with Andy Kurth from research storage. We are going ahead with creating folders for ALL stations and one to store the miscellany of super users.
July 9th
Rob has installed app locally on his computer. Was relatively simple.
Needed Git to install front and back
nginx installed
exif tools
node.js
Chris will find a server to install on for the Raleigh based researcher that doesn’t fit our model of stations first.
Amanda making a generic poster of drone project for asa
Brynna and Rob doing a turf specific poster at asa that also talks about drone software
June 11th
question for a user like Rob and Brynna. Could we bypass app 1 and use Globus? Is anything fundamental happening in app 1 like rewriting file names. How would we get the metadata in there. Answer: Use app1 installed locally.
Intentions with Phillip. Does he have the updated network? Even without the updated network, can we install App1. Ask Jevon about how to install there if no linux server in the closet. Add some storage.
Timeline for testing.
Update from Jacob. Jobs can be submitted now from DTN. Only piece is missing is th
June 4th
cron - check flights from yesterday and check their status (db read) - check num_files (see the cron sketch after this workflow)
flight(s)*sensor (UUID) needs to be processed
each one is submitted as their own "job"
job title should be same as flight*sensor
flight a(sensor1) - generate_ortho
status = processing (db update)
hpc processing for ortho generation
generate cog using rio
hpc processing generate veg index files else docker is ready
status = processed (db update)
status = processed, export ortho/dsm/pt cld
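A rough sketch of the nightly cron logic above, assuming a pymongo connection and a 'flights' collection with date/status/num_files/uuid fields; the helper functions and database name are placeholders, not the real pipeline:

    from datetime import datetime, timedelta
    from pymongo import MongoClient


    def count_files_on_storage(flight):
        # placeholder: count files in the flight's storage folder
        return flight.get("num_files", 0)


    def submit_hpc_job(job_name, steps):
        # placeholder: submit the HPC job for ortho + COG + veg-index generation
        print(f"submitting {job_name}: {steps}")


    def run_nightly_check(db_uri="mongodb://localhost:27017"):
        db = MongoClient(db_uri)["dronepilot"]       # hypothetical database name
        yesterday = (datetime.utcnow() - timedelta(days=1)).strftime("%Y-%m-%d")
        # one job per flight*sensor that has uploaded but not been processed
        for flight in db.flights.find({"date": yesterday, "status": "data uploaded"}):
            if count_files_on_storage(flight) != flight["num_files"]:
                continue                              # upload still incomplete
            submit_hpc_job(flight["uuid"],
                           ["generate_ortho", "generate_cog", "veg_indices"])
            db.flights.update_one({"_id": flight["_id"]},
                                  {"$set": {"status": "processing"}})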
Action Items:
Jinam and Rob will examine a few of the example outputs produced by Jacob to ensure we are happy with the outputs, and then we will start doing this en masse.
outputs at W:\transfer\benchmark\HPC\gpu-a100\code\images
What GPU is in the Sandhills machine? Chris asking Jevon
Do we need to buy a node with Eric to get at the top of the queue?
Run the exact same benchmark on the sandhills that Jacob has run on HPC.
Loading multiple folders - app1?
- drag and drop a folder does not work - opens new tab
May 28th
Gregory’s team can no longer make VMs; they could still make VM1 (MongoDB). Gregory finally got permission to make VM2.
DTN cannot access the database. Jacob working on opening a port or reverse proxy to make it happen. Or we just go back to the file method with a cron job checking a file every hour or so. It's a bit of a hack, so hoping the database works. Chris checking with Andrew about other triggering options.
Purdue plot- alley system. Jinam get the software up and running.
Export idea - Brapi compliant talk to Akshat
May 21
Data transfer node (DTN) special policy in the firewall for large data transfers.
Triggers: in theory globus flows could serve as the trigger.
Cron job has to watch the research storage locker for new files that have not been stitched.
number of files, status of the job flag
flags: data uploaded, image transfer complete, processing, processed
App 1 which is a local app will count files, checksum and write to database.
DTN cannot communicate directly with the database. Hence, app1 will also write the metadata to a file in OIT storage. DTN will access OIT storage to get metadata (file count, checksum, etc) and check the folder for the flight to verify. Once the data transfer is completed, orthomosaic processing will be triggered.
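A minimal sketch of that hand-off, assuming App 1 writes a JSON manifest (file count + checksums) alongside the upload and the DTN re-reads it to verify the transfer before triggering stitching; the paths and manifest name are assumptions:

    import hashlib
    import json
    from pathlib import Path


    def write_manifest(flight_dir):
        # App 1 side: count and checksum the uploaded files, write a manifest to storage
        root = Path(flight_dir)
        files = sorted(p for p in root.rglob("*")
                       if p.is_file() and p.name != "manifest.json")
        manifest = {
            "file_count": len(files),
            "checksums": {str(p.relative_to(root)): hashlib.md5(p.read_bytes()).hexdigest()
                          for p in files},
        }
        (root / "manifest.json").write_text(json.dumps(manifest))


    def transfer_complete(flight_dir):
        # DTN side: verify the folder matches the manifest before triggering processing
        root = Path(flight_dir)
        manifest = json.loads((root / "manifest.json").read_text())
        present = [p for p in root.rglob("*")
                   if p.is_file() and p.name != "manifest.json"]
        return len(present) == manifest["file_count"]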
Jacob/Jinam - also look into trying to access azure db server using DTN
Chris
May 14th
App 1 available for testing at https://srs-uav.cals.ncsu.edu/ This is running on the Sandhills workstation. NOTE: you have to be on campus or on the vpn for it to work.
Rob playing around with web mapping - http://152.7.196.7/cc/
Collected images at Central Crops on 8th - Planning Sandhills for this Thursday(16th)
Discussing possibility of using Purdue ‘Data for Science’ on App2
Can we elevate our status in the queue for certain nodes.
May 7th
Chris checking on clayton IT infrastructure. Question about where app 1 api lives. On a linux server in the closet? Or on a windows desktop? For now, using Sandhills workstation while it's still in Raleigh. This is a problem just for Rob’s group at this point. Chris checking with Jevon on this plan.
trigger ideas for app 1.5. Add a file count feature to Mongo.
Cron job on one of our VM’s that checks if all the files are present yet and if so start the stitching pipeline.
Phillip Winslow asking about app 1 along with others
VM1 - just mongo DB, cron on login node
service center
VM1
Just the Mongo DB
VM2
app 2
Login node -
where cron job will live
cron job runs a mongo query that checks file count for all flights where an ortho has not been created yet
has to have read access to MongoDB
April 30th
Help with App1 install on local computers
Starting data collection at sandhills tomorrow (May 1st)
Setting up flight plans for Central Crops - prioritizing trials (High, Mid, Low, priority)
Have Jevon set up the Sandhills workstation at Sandhills
Gregory - working on database deployment and then will work on app 2 server
We should probably start documenting the processes involved in OIT communication. It should make it easier for the rest of the team
Jinam/Jacob - look at app1.5 singularity code cleanup and deployment
Jinam - work on app 1 multiprocessing while checking and writing files
Jinam - start exploring github actions for app 2 CI/CD
Jinam - talk to Mikah about integrating a feedback button for app 2
April 23rd
Purdue has a github open source project for downstream data processing. Beyond what we have done.
https://www.ag2pi.org/workshops-and-activities/field-day-2024-04-24/
Jacob’s experiments. Not sure the comparison with workstation processing times is using the same dataset.
Asking Trevor to set up github actions for VM farm. cc Jinam in slack. Normal Docker for app 2. For app 1.5, the ODM and our scripts that create the orthomosaics.
Our scripts will only run on the A10 gpu’s right now. Singularity containers will not run on the A100’s and H100’s as of now. They are still being tested.
Rob and Brynna are starting now with field data collection. Wants to install app 1 on a campus computer in their lab.
April 16th
Jacob - coordinate with Gregory on getting the MongoDB server started
Jinam - debug exiftool issue for 10K files
Joe, Rob, Anna have volunteered to be the early testers of the tool
Set a date for early testing
Jinam - start working on deployment script of app 2
CI/CD thoughts - git actions (commercial git) / azure devops (enterprise git)
Jinam and Jacob will meet on their own to work on setting up the deployment pipeline.
April 9th
Clayton expansion
Joe has some fields in Clayton that he would like imaged. We need the names circulated soon for other programs. Anna doesn’t have anything at Clayton this year, she has one field at Sandhills. Ben has been in touch about another field they have.
Workstation moving
Jevon is moving the workstation next week.
Serving COG’s locally
Jinam experiencing cog problems but should be overcome soon.
April 2 2024
DJI just announced ‘DJI Dock 2'
How many digits needed to store vegetation indices
three significant digits in general for NDVI
Jinam making executive decision for now
Proposed Data Processing workflow
Store the index layers for simple/quick access
calculate the plot-level statistics after user draws plot boundaries - store in mongo with plot-id’s
Mar 26, 2024
Finished code for generating veg index values but it has high read and write latency.
Jinam to explore multithreading at plot level index values (for read latency) and using HPC to generate veg index values for a flight (for write latency).
Jacob to check HPC processing and getting a service account for HPC.
GPU behavior seems erratic. Jinam - look at ODM’s usage of GPU. Jacob - look at continuous (graph like) usage of resources when running on HPC.
Orthophoto generated using ODM to compare CRS with Metashape.
Mar 19, 2024
Keith Starks - very enthusiastic and looking forward to this work
Talk with Jeremy Davis (OVT dropping enterprise drone deploy)
who could I talk to about setting up web ODM for extension agents to process imagery?
talk to Rachel about setting up web ODM on OIT for extension agents to use
List of veg indices we’re calculating and a list of sensors we’ll support for the indices.
(See Jan 30th notes for indices)
List of zonal stats
Red, Green, Blue, Red Edge, and Near IR reflectance, and some common vegetative indices
(See Jan 30th notes for indices)
Comparison between ODM and metashape CRS
Version 1 release should have a media release with it (article) - Dee
Should we start the process of applying for an exemption that leads to an exemption for drone in a box
Tom and Evan (NCDOT - ITRE)
Mar 12, 2024
Meeting with Keith Starks (Central Crops) this Friday (last Friday was cancelled) to discuss plans
possibly flying Strawberry field for Jing Zhang and Gina Fernandez
How do we have others access the data on OIT storage via GLOBUS (possible demo for Rob?)
ODM/Metashape Comparison
DJI Care Renewal for P1 ($750) and Matrice 300 ($1200): $3900 Total
How do we calculate the different vegetation indices?
Demo: plot ordering and plot numbering
Discussion around data products to export - csv of plots, plot images, orthos, etc.
Domain Names? Should be made according to a new product name (instead of “drone pilot”)
Thoughts around Tile server/ways to serve COGs (will need URLs instead of direct blob files) - we solved this right?
Send Jinam a date that contains visible GCPs for comparisons
Mar 5, 2024 | IAM discussion: Drone Pilot Project
Attendees: Jinam Shah Jacob Fosso Tande Chris Reberg-Horton Billy Beaudoin
Interfacing NCSU IAM system with Apps being developed
Notes
Identity needed to access data on the OIT storage. Before that happens, the researcher has to authenticate; this is on App2.
Researchers are importing their plot plan and doing the editing. There is a need to protect the editing that happens.
Application is needed to load data from the instrument. To use application 1, user must have a unity ID.
Stack: react frontend and backend is python
Do a reverse proxy on the frontend
NCSU servers use Apache and not nginx; for nginx, a reverse proxy will be needed to access Shibboleth
https://github.ncsu.edu/OIT-IWS/ncsu-shib-sp-proxy - Example of doing a Shib SP
It would be great if Billy can provide documentation on how to do the implementation.
Request form: https://go.ncsu.edu/shib-access-request
Action items:
March 5th
PRIORITY 1: Test run image processing on HPC (ODM singularity instance) on full set of images
- compute time, start determining resources needed
PRIORITY 2: Plot order and plot labeling
Install web server on sandhills work station (already installed)
- Rob may install tile server and play around some
Timeline for moving workstation to Sandhills - Second week of April (waiting on VPN stuff)
February 27th
Web server on VM - nginx
Get Jacob all the software needed for the VM
nginx
mongo DB - community edition (set up as production)
Central Crops - see if they already have desktop for UAV (Jevon)
- not a priority as we (Rob & Brynna) will be doing the flying and can upload from NCSU campus
Talk to Keith Stark about flying at Central Crops (Chris will reach out)
Hold back on figuring out GPU + multispectral with ODM issues
Uploading Plots plans
discussed with Chris/Jinam
Get Toughbook/Fieldbook from Slater
February 20th
Determine the VM size (memory, disk size), configuration (ports to open, static IP, network, OS to use)
How long will this VM stand (likely duration of the project)
Can the existing configuration management be used
Figure out why Jinam can't use the GPU with ODM on the sandhills workstation (CUDA)
Priority 1: ODM processing with GPU + multispectral image processing when some are corrupt
Priority 2: Enable spatial query on mongodb/python for varying CRS output from ODM
@jinam: connect with Jevon and mount OIT storage to sandhills
@jinam: convert past TIFFs to COGs
Keep Sandhills workstation at PSB until first beta testing is complete
February 13th
Can people see other peoples plots? Yes. They won’t know the treatments etc. because that info lies with the researcher.
Authentication? Not necessary to filter for a PI. But would be necessary to protect plot layouts which are a time investment for the researcher to input. Jacob will approach the OIT group involved in identity management.
Study level metadata. Chris taking the lead on traits.
Visualization of the orthos and the ability to query them. Priority #1
Ability to download the ortho from what they defined as their polygon (study in BrAPI speak). Names will be BrAPI compliant. Version 1. Version 2 maybe ability to push data directly into a breeding database? Still thinking.
Get all of the plot tool components working.
All rectangular plot shapes possible. Dimensions of each plot, alleys, etc. (see the plot-grid sketch below)
alleyway dimensions
number of plots
Labeling: Serpentine vs dead head etc. Copying logic from Juniper systems. Esleyther has a ‘Toughbook’ with it installed.
ability to modify an individual plot, not just the whole study. This is to accommodate oopsies!
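A small sketch of the row/column plot grid with serpentine numbering described above; coordinates are in local field units and everything here (dimensions, start number) is illustrative, since real plots will come from the map tool and be georeferenced:

    def build_plot_grid(n_rows, n_cols, plot_w, plot_l, alley_w=0.0, alley_l=0.0,
                        serpentine=True, start_number=1001):
        # returns {plot_number: (x_min, y_min, x_max, y_max)} in local field coordinates
        plots = {}
        number = start_number
        for r in range(n_rows):
            cols = range(n_cols) if (not serpentine or r % 2 == 0) else reversed(range(n_cols))
            for c in cols:
                x0 = c * (plot_w + alley_w)
                y0 = r * (plot_l + alley_l)
                plots[number] = (x0, y0, x0 + plot_w, y0 + plot_l)
                number += 1
        return plots


    # e.g. 4 rows x 3 columns of 5 ft x 20 ft plots with 2 ft alleys, serpentine order
    grid = build_plot_grid(4, 3, plot_w=5, plot_l=20, alley_w=2, alley_l=2)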
Zonal stats (summary of pixel values based on plot polygons) based on user defined plot boundaries
https://pythonhosted.org/rasterstats/
Associate and display stats (e.g. veg indices) at the plot level
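A minimal zonal-stats sketch using the rasterstats package linked above, assuming plot polygons are available as GeoJSON and the band/index raster as a GeoTIFF; file names are placeholders. Plot-level values like these could then be written back to Mongo keyed by plot ID:

    from rasterstats import zonal_stats

    # user-drawn plot polygons (GeoJSON) against an index raster (e.g. an NDVI GeoTIFF)
    stats = zonal_stats(
        "plots.geojson",      # placeholder path to plot polygons
        "ndvi.tif",           # placeholder path to the raster
        stats=["mean", "median", "percentile_90"],
    )
    for plot_number, s in enumerate(stats, start=1):
        print(plot_number, s["mean"], s["percentile_90"])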
February 6
add ability to get single image per plot
export whole ortho
add numbers
screen show plot layout
include number of rows, number of columns, plot width, plot length in feet or meters.
Other metadata for ‘study’
traits taken for study
how are we getting dtm?
Jeff: plot level analysis as a way to inform him of when to go and sample, plot level value is mainly what they want. Use occasional picture for a presentation. Raw images could be for training a ML model.
add detail to trait data as a string
no one on campus that Susan is aware of is archiving data at scale
need more names, PI’s program, grab unity ID as they interact with app 2
We could ask station staff to fill out an end of year thing asking what crop, what PI? Additional feature to app 2 for a superuser to add in metadata.
Add in field numbers.
January 30th
Stars below mean version 1.
For mockup - possible vegetative indices (see the formula sketch after this list)
Common RGB indices
GRVI (Green Red Vegetation Index)
VARI (Visible Atmospheric Resistance Index)
GLI (Green Leaf Index)
TGI (Triangular Greenness Index)
Multispectral Vegetation Indices
NDVI (Normalized Difference Vegetation Index including red-edge)
NDRE (Normalized Difference Red Edge)
SAVI/MSAVI (Modified Soil Adjusted Vegetation Index)
EVI (Enhanced Vegetation Index)
LAI (Leaf Area Index)
NDWI (Water Index)
CHI (Chlorophyll Index)
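A sketch of a few of the indices above using their standard band-math definitions; these formulas come from the usual literature, not from these notes, and the bands are assumed to be reflectance arrays (e.g. NumPy) or scalars:

    def grvi(green, red):
        # GRVI = (G - R) / (G + R)
        return (green - red) / (green + red)


    def ndvi(nir, red):
        # NDVI = (NIR - R) / (NIR + R)
        return (nir - red) / (nir + red)


    def ndre(nir, red_edge):
        # NDRE = (NIR - RE) / (NIR + RE)
        return (nir - red_edge) / (nir + red_edge)


    def savi(nir, red, L=0.5):
        # SAVI = (1 + L) * (NIR - R) / (NIR + R + L)
        return (1 + L) * (nir - red) / (nir + red + L)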
Study Metadata
Crop type/Species
Planting Date
Traits collected in study
yield, insect damage, quality data
Biotic/abiotic stress ratings; phenological; yield
January 23rd
Debrief of 2024 Superintendents meeting - general interest, concerns, thoughts on including new stations this upcoming season
current capabilities vs equipping additional station(s) and training staff?
What is the interest in a Hiphen visit
February 19-23
Additional Tower/workstation (check with Jevon)
Targeted and Non-targeted cost estimates for deploying on station
Meeting with Jeremy on Sandhills (Rob/Chris/Amanda) within next week
Move forward option of drone in the box (start thinking about funding)
Concerns over DJI and FAA requirements
Expand locally at Sandhills with existing technology, expand 2024 activities to additional station
Where to expand for the next station?
discussion on need for money/resources to do some of this
Philip Winslow (just needs an RTK drone) - could run both the stations in Kinston
January 16th
(Pushed to next meeting since folks weren’t available) Start conversation on plans (flights) for this upcoming year (sites and equipment)
Mostly Jinam answering Brian’s catch-up questions on the background of the project and the architecture, where each file lives at what step, how we’ll handle authentication, etc.
Authentication: Current plan is that only App 2 gets auth, since App 1 is served from the local research station. Does that mean that we’re essentially setting up user groups where everyone from Research Station X is a viewer of the same data? Should there be narrower scoping to assign each image on a per-user/per-group basis? We decided that any auth decisions shouldn’t happen until after the Feb demo, since we’d rather have more functional features to show then and get feedback.
Next step we identified: make sure that serving tiles from Globus via URL is fast enough for App 2 front end.
Followup items: Brian will make sure Alyssa knows to get a tablet with Fieldbook to Jinam.
January 9th
The front end is now sensor agnostic by just choosing a very high resolution for the configuration file and letting ODM go from there. As long as someone doesn’t fly crazy low we are ok.
Jinam has started on how to define plot overlays on the images. The rotation of the image is already implemented. We will use ‘Fieldbook’ terminology in version one to deal with concepts of row column layout of plots. The ImageBreed videos are also a resource.
Rob recommends against rotating the map, but rather rotate the plots based on the users interaction with the map.
We debated how to assign plot numbers to the row, column designations for each plot. Decided that feature will be done by the user via file import? Or is that still being debated?
Rob thinks they should be stored internally as row-column but provide the users some options for naming plots (row/column versus serpentine, different start number (e.g. 1001), etc.).
Waiting for more feedback from Globus on whether our application can access the unique URL’s for every image. If so, we will not need a tile server and so our locally installed app can function how COG’s function in the cloud. If that effort fails, we will just insert a tile server into the application.
Jan 2nd, 2024
PRIORITY 1 (App 1):
- Fix https upload issue
- Add functionality to App1 to upload multiple folders
After upload, compress pictures (lossless) into a single .tar file
transfer to campus via globus
Decompress and store RAW on central storage in decompressed state
Stitching into the orthomosaic
Store all data products uncompressed
Should we store raw images in an ungeoreferenced way or after georeference do we store only that. This question can be answered after launch of version 1. The only reason to ask this question is to conserve storage space. If we store both its double the space. Check in with Francisco for his opinion at some point.
- https://community.opendronemap.org/t/extracting-orthorectified-images/1372/3
PRIORITY 2 (App 2)
- work on drawing COGs using Globus generated URLs
- setup App2 on webserver for demonstration purposes
Send out Doodle poll for larger group meeting. Chris sending by tomorrow at latest.
December 12th, 2023
On App 1: Force the pilot to separate the uploads where 1 upload is data collected from one sensor. When flying multiple sensors, one flight would require two separate uploads. Consider changing the mongo structure from ‘flight’ to 'upload'
PRIORITY: Work on converting existing GeoTIFFs to COGs and having one display in a web browser using either leaflet or OpenLayers (see the COG sketch below)
PRIORITY: Look into JavaScript based vector drawing 'plugins' for leaflet and/or openlayers
Leaflet Draw API: https://leaflet.github.io/Leaflet.draw/docs/leaflet-draw-latest.html#l-draw-toolbar
Possibilities:
- https://leaflet.github.io/Leaflet.draw/docs/leaflet-draw-latest.html
- https://github.com/geoman-io/leaflet-geoman
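Related to the COG priority above, a hedged sketch of the GeoTIFF-to-COG conversion using rio-cogeo (one common tool for this; input/output paths are placeholders):

    from rio_cogeo.cogeo import cog_translate
    from rio_cogeo.profiles import cog_profiles

    # convert an existing orthomosaic GeoTIFF into a Cloud Optimized GeoTIFF (COG)
    # so it can be displayed in Leaflet/OpenLayers without a full tile server
    cog_translate("ortho.tif", "ortho_cog.tif", cog_profiles.get("deflate"))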
December 5th, 2023
PRIORITY 1: Setup App1 on Sandhills workstations
→ setup landing page where image files are uploaded, get upload working with db backend, have metadata written to db, and get Globus transfer to trigger after image upload
PRIORITY 2: figure out reasons for ODM application failure with high res images/compression settings
Web server installed on sandhills workstations. Setup port 80, Jevon will do this.
Setup a VM on the VM farm (VCL? - is it stable enough)
- short-term: setup on sandhills workstation and get up and running
- long-term: App 2 lives on VM farm
→ decided this will be setup when we are ready for production server (hold now)
- COGs on central storage or VM storage
November 28th, 2023
Deploy app 1 on research station workstation and get working
Setup VM1 as ('Manager VM' Cron job, Mongo DB) on VM farm
VM2 will host application 2
Set up singularity/docker instance on sandhills workstation to get processing pipeline up and running (testing)
setup same structure on SH workstation (VM, singularity, etc...) for ease of migration to HPC and VM
decide on singularity/docker for SH workstation (not an overly big decision)
Run an ODM process on HPC for initial resources testing (work with Jacob on setting up on HPC )
ask Jevon to open up port 80 on SH Workstations
Meeting with Micasense on Friday to figure out Altum-PT skyport connection issues
→ figured out the issue (2TB CF Express card was incompatible with Altum-PT)
November 21st, 2023
Discussion of backend server needed to serve UAV images/orthos (App 2) or not?
Maptiler as backend tile server vs TileServer vs TiTiler vs Terracotta
right now, experiment with serving COGs directly with OpenLayers
Discussion on frontend Leaflet vs OpenLayers (App2)
leaning toward OpenLayers, larger codebase, but lots of existing plugins
need to look into drawing plugins
November 14th, 2023
November 7th, 2023
A config file will store the sensor parameters outside EXIF for obtaining our desired final resolution.
Exploring Mongo as the DB. Yes it stores geographic data. What are the advantages of using Mongo vs using the ‘JSON’ field in PostGres
API that will serve the querying functions in app 2. Will also handle the write back into the DB of plot coordinates and plot names etc.
Still working on the calibration methods, not a huge priority for now. Get the rest of the pipeline working and go back to what the calibration equations should be.
Oct 24, 2023
resolution still not matching. That is just settings
Make the compression algorithm the same
Color checking not really there. Time to just do this with OpenCV
Jinam has mentioned to Rob the EXIF file containing the metadata is not the same for Altum PT that we have. So how do we perform the color calibration?
Where (see the vignette-correction sketch after these definitions):
r is the distance of the pixel (x,y) from the vignette center, in pixels
(x,y) is the coordinate of the pixel being corrected
k is the correction factor by which the raw pixel value should be divided to correct for vignette
I(x,y) is the original intensity of pixel at x,y
Icorrected(x,y) is the corrected intensity of pixel at x,y
In the radiance calculation above, V(x,y) is equal to 1/k
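A hedged reconstruction of the vignette correction these terms describe, assuming the usual MicaSense-style model where k is a polynomial in r whose coefficients come from the image's XMP metadata (the polynomial form and single-band image layout are assumptions, not from these notes):

    import numpy as np


    def vignette_correct(image, cx, cy, poly_coeffs):
        # I_corrected(x, y) = I(x, y) / k, with k a polynomial in r and
        # r the distance of pixel (x, y) from the vignette center (cx, cy) in pixels
        ys, xs = np.indices(image.shape[:2])
        r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        k = np.ones_like(r, dtype=float)            # k = 1 + c1*r + c2*r^2 + ...
        for i, c in enumerate(poly_coeffs, start=1):
            k += c * r ** i
        v = 1.0 / k                                 # V(x, y) in the radiance calculation
        return image.astype(float) * v              # dividing by k == multiplying by V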
Is the vignetting model always the same for each sensor or is it for each image? Are there additional tags beside exif we are missing? XMP?
For now we will assume first two images and last two are the calibration checker. We will have to build a color checker auto detector for this sometime soon. Could be just one image or multiple that has a checker in it.
October 17th, 2023
Most of the data has been processed manually through metashape for the year
Automating that process with Open Drone Map continues to go well
We have about 12 TB of data sitting in our locker in Research Storage
Jacob helped us manually move all of the MetaShape files to the locker today (another 10TB). We will keep those there for a few months but will likely move or discard in a few months based on how our new pipeline is working.
Jinam is putting his code into a couple of GitHub repos this week so that we can start porting more of the pipeline pieces onto the HPC cluster
Rob is starting to share some of the data products with users via Globus. While not our long term way of sharing, we want to show value now.
Sep 26, 2023
How would project members be given access to the collected data?
Add members to the OIT storage for access to all the data?
Use Globus collection to grant access to specific data in a specific folder?
Project members actively moving data will have access to the OIT Storage.
Everyone else will be given access to the data through globus API
APP2 once developed should give users access to data.
September 19, 2023
Switching to using Azure as our development environment. Rob and Brynna using the workstation too much to use it for development for Jinam’s work.
Can we put a globus endpoint in Azure? Should be possible, asking Jacob to work on it.
Maybe instead of trying to divorce breedbase from the image part we can just cherry pick the bits we need for app2. This will be much faster. The separation of breedbase will have to be another team’s task. The database code is all written in Perl. The plot/alley bits are all python and should be easily followed.
BrAPI compliance at the export layer in app2.
Sep 12, 2023
For processing the calibration images, which include the reflectance card, we need to autodetect which files contain the calibration cards. No metadata coming from operator SHOULD be used in determining which files. We SHOULD examine first five and last files in every flight for every band to identify these calibration images. If ODM does not contain this functionality we will find a path with opencv or other libraries.
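Not a detector, just a sketch of the candidate-selection rule described above: treat the first few and last captures per band as the likely calibration-panel images, to be confirmed later with OpenCV or another library; the path layout and file naming are hypothetical:

    from pathlib import Path


    def calibration_candidates(flight_dir, band_suffix, head=5, tail=2):
        # return the first `head` and last `tail` images for one band of a flight;
        # these are the files most likely to contain the reflectance panel
        files = sorted(Path(flight_dir).glob(f"*{band_suffix}.tif"))
        return files[:head] + files[-tail:]


    # e.g. band 1 of a multispectral flight (naming convention is an assumption)
    candidates = calibration_candidates("/data/flight_001", "_1")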
Aug 22, 2023
To include in design document. What data products will be computed by default as part of pipeline. And which data products would be on demand.
Do we have an internal meeting to create the design document as our ‘Hackathon’? Or just invite Francisco? We just go forward with our own scheduling and then add some from Cornell, but it is not necessary for them to attend. Francisco is the most useful
Chris circulating design spec documents for us.
August 15th
Jinam will start exploring open drone map sooner than expected since MetaShape floating license is proving slow.
We plan on assisting Cornell with separating ImageBreed and BreedBase and getting ImageBreed away from Perl.
Microservices envisioned
1. Stitching
2. Web front end to draw in where your plot boundaries are. MapBox and javascript front end
3. Plot level data products produced. Python/R backend that generates plot-level values/statistics based on the locations of plots within the georeferenced plot map (from service 2) and the outputs from the #1 (orthos, dsm, and point clouds)
4. Data analysis portal - probably not or later version. Could install Jupyter Hub and create an analysis portal but overall we don’t think this is the right crowd.
August 1st
Charging stations arrived and we will take down to Sandhills tomorrow
Sandhills flights planned for 8/2 - looks like a nice day, will plan to fly both Multispec/thermal and RGB
Metashape did not recognize the GPU on sandhills workstation. Working with Jevon, Jacob, Jinam on getting drivers installed and Metashape setup to use GPU - hoping will reduce processing time significantly (Complete - Jevon)
Needed to change mount point from /mnt/data/raw to /mnt/data
This created a need to restructure the folders on the sandhills workstation as seen below
/mnt/data/transfer/raw/ (raw imagery from drone flights) - included in Globus
/mnt/data/transfer/pdata/ (output; processed data) - included in Globus
/mnt/data/metashape/scripts/ (processing scripts)
/mnt/data/metashape/processing/ (metashape projects) (Complete - Jevon)
Transferring last two weeks of sandhills imagery to workstation today - will start to process as soon as GPU is setup and ready
Timeline for moving workstation to sandhills?
We should start to think about metadata. A good place to start would be to make a two column table. The first column could be metadata extracted from the images and column two would be what we want to see as metadata.
July 25
Continue to collect Sandhills imagery weekly and transferring to sandhills workstation
When weather permits, we collect both RGB and MS/Thermal in high priority areas
Processing manually, about 1 week behind
Processing on workstation can still take 6-8 hours on larger missions. We typically queue up 3 flights and run overnight, and most mornings the processing is finished, but not always (typically ~8 missions collected in a day)
We should look into the cluster based processing capability of metashape
Still waiting on charging stations and Altum-PT sensors (asked Benchmark for an update yesterday, have not yet heard back)
Currently charging extra batteries in Raleigh and switching out with Jeremy in Sandhills
Jevon is helping change the 20TB mount point of the sandhills workstation from /mnt/data/raw to /mnt/data
We needed space for ./metashape, ./pdata (processed data), and ./scripts to sit at the ./data level
May need to do some reconfig with Globus after moving mount point
Once setup, we’ll start moving and processing data again
Purchased additional SD and CFExpress cards for sensors. Altum-PT collects A LOT of images, 4 flights is ~180GB of images
Effort to Automate Metashape pipeline (Metashape Hackathon?)
https://github.com/ucdavis/metashape
June 27th
Plan to fly this Thursday - looks like nice weather and hope to collect multispectral data
Setup M300s with new KML flight plans - new flight plans are designed based on priority (will discuss in detail at next meeting)
Developing map that will illustrate missions and priorities
Still waiting for battery charging stations - hopefully next week
Globus setup last week - will transfer raw and processed data this weekend
Jacob Fosso Tande: could you set up a ./pdata/ folder on the sandhills workstation for the processed imagery (as well as give proper permissions for transfer)
/sandhills/raw and /sandhills/process/
You should set up the same file structure at source then copy sandhills and everything will map correctly.
I do not have access to your source system, but I have a globus script that can do the setup with very little interaction. After the first run, the transfer will happen automatically. However, I do have another script that you can use anytime provided you give it the correct input.
If this needs a separate globus collection, could you set this up as well
No, a collection for a station. Beneath the root folder, other folders can be added as needed
Brynna will post a processing report from one of the larger metashape projects
Jinam is still hung up with his visa. Susan working with him closely
June 20th
Poor weather - perhaps Friday Flight
M300 setup and transferred to Sandhills
Setup standardized flight missions and will copy to Sandhills Drone
Working on processing UAV imagery
Ran out of memory on local machine running 3 missions
Charging stations have yet to arrive
Tested SFTP transfer from williams to PSI for a single date - stopped @ 6+ hours and about 50% transferred
Jacob will show us how to setup Globus later today and we will test
Thoughts:
Do we need to set up a tracking sheet to record flight dates? Google sheet?
Where do we stand on metadata collection?
June 13th
UAV equipment setup and registered with FAA
Charging stations have yet to arrive (hopefully later this week)
Driving down UAV equipment to Sandhills today
Collecting data at Sandhills today
Training Brynna on M300
Flying both UAVs with both pilots
Will join today’s meeting remotely
We will start transferring imagery to Sandhills workstation ASAP
Need the IP address and login credentials (unity?)
June 6th
Brynna starting work today - will start training her ASAP
Drone equipment has arrived!!
Multispectral sensor still 8-10 weeks out
Charging station 2-3 weeks out, but we have backups (thanks Jeff D!)
Will take a few days to get setup, registered, and firmware updated - should be ready to fly next week
Planning Wednesday or Thursday Trip to Sandhills for image collection
Will start pushing images over to workstation as soon as everything is setup
Also plan to manually process until front pipe is setup
Globus configuration (see the transfer sketch below):
/data/raw : Path accessible to globus automated transfer
Data is automatically transferred every day at 12:00am
Data differing in checksum is transferred
Preserved timestamp at source
Should the shared directory be owned by a service account?
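A sketch of this scheduled transfer using the Globus Python SDK, with checksum-based sync and preserved timestamps; the endpoint UUIDs, paths, and authorizer are placeholders to be filled in for the real collections:

    import globus_sdk

    SOURCE_ENDPOINT = "SOURCE-ENDPOINT-UUID"   # placeholder: station workstation collection
    DEST_ENDPOINT = "DEST-ENDPOINT-UUID"       # placeholder: research storage collection


    def nightly_transfer(authorizer):
        tc = globus_sdk.TransferClient(authorizer=authorizer)
        tdata = globus_sdk.TransferData(
            tc, SOURCE_ENDPOINT, DEST_ENDPOINT,
            label="sandhills nightly raw transfer",
            sync_level="checksum",        # only move files whose checksum differs
            preserve_timestamp=True,      # keep timestamps from the source
        )
        tdata.add_item("/data/raw/",
                       "/rs1/shares/cals-research-stations/sandhills/raw/",
                       recursive=True)
        return tc.submit_transfer(tdata)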
May 30, 2023
/rs1/shares/cals-research-stations/sandhills/raw read only, even for developers
/rs1/shares/cals-research-stations/sandhills/processed read only for users, write access for developers only
Ask Jevon to setup accounts on workstation
May 23, 2023
ImageBreed was built to serve Kelly’s lab. Was built on top of BreedBase.
They use Pix4D, but there is an open source option that can be used, though it is slower.
Have to store your trial info in BreedBase.
Images get pushed into BreedBase.
Some design choices that need to be fixed. ImageBreed changes were not pushed back to the BreedBase repo.
Want to take the image analysis pieces and make it a stand alone application.
May 16th
First Flights Conducted May 11th
4 flights - Zenmuse P1 sensor
Blocks D, AB, E, and F (flown @200ft)
97 GB of images (5700 RGB images)
Can fly 30 minutes at 200ft with P1 sensor (0.30in/px - 0.7cm/px) ~40 ac/flight
40 minutes to transfer from SD card to local storage
Order extra SD cards
Need to decide on storage space
Chris will have Andy setup folder
Wednesday meeting - agenda/thoughts?
Schedule interview with Jinam Shah
Planting Update (Jeremy)
Setup Github repository (Chris’s team will handle setup - public account)
May 9th
additional batteries are ordered. All told, each UAV will have 8 available batteries (4 flights)
Cloud City Drones was required to cancel their bid, bid issued to Benchmark (local vendor) May 8th. We were told 2-3 weeks for DJI equipment (platform and P1 sensor), 8-10 weeks for Multispectral sensor. (They have been strung along about delivery dates for the Altum-PT sensor - see Joe Gauge)
Need to
Update from Joe about flying multiple sensors
Prioritizing Flights
‘Non-objective’ flights
Goal is weekly flights
‘Objective’ Flights
best to identify crops and objectives as clear as possible before last minute (scheduling)
Cotton (A3/A5/D2B)
Soybeans (A4)
Prioritizing Sensors
If conditions are not conducive to useful data, is there any value in flying the sensor?
On partly cloudy days, you cannot ‘easily’ radiometrically correct for shadows within the flights and data - as such, is there any reason to fly the sensor in those conditions?
First Flights Planned for Thursday (May 11th)
Update from Jeremy on Planting
Matrice 300 with 8 batteries
4 flights possible
Need to prioritize areas and sensor (s)
Value of multispectral with little-to-no plant material?
https://phenome-force.github.io/PhenomeForce/
May 16th
Developing plot boundaries - when/responsibilities
May 2nd
Kim has started working for Rob this week (Pilot #2)
Lucidchart link broken
Get update from Jeremy about current planting schedule
Workstation update
Workstation will arrive next week. Looking to complete installation of network rack the week of May 15th-19th.
Let’s target our initial flights for 2 weeks after planting.
Check with Chris on Account to use for equipment, travel, etc..
Please go through Susan Wassmer for all purchases. We have 5 different accounts actively supporting this effort and the interplay with fiscal years is complicated.
How to handle initial flights until equipment arrives
PO received by Cloud City Drones May 2nd
Can only fly one sensor at a time (must prioritize flights)
Joe is experimenting with both sensors on the drone but uncertain if it will work.
Single Matrice 300 available (Alex Woodley’s)
Altum from Alex, P1 from Joe
Batteries. We have enough to borrow from Joe and Alex but need to double our current battery order. Doesn’t have to be through cloud city, can be done on a PCard.
Should we fly non-RTK drones (need to consider GCPs)
- We are really trying to just fly RTK and we think we have it through Joe and Alex.
Dividing out two groups of flights. Fields with a clear objective to be flown at 150 ft, and those without an a priori objective get flown at 250 ft.
Without-objective flights should be at least weekly.
An objective flight like “stress flights” might even be daily? Decision of when to fly
April 25th
Drone purchase will be from Cloud City Drones ($79,807)
- comes with 1 year enterprise shield warranty
- drone and P1 sensor immediately available, multispec 2-3 weeks out - hopefully
Rob Join Developer meetings when needed or by invite
Hard pass on UAV-pilot from durham-tech
April 18
Meeting to discuss flight plans in Sandhills
Stand counting
Establishing nutrient research system
Drought stuff - plot level data
Crop growth/monitoring
Jeremy & Denise - created a google form
Usually do get a plot map of most fields
General Updates/ToDo
Get permission from Alex Woodley to use to match what sensor suites are coming in - Rob
UAV-pilot from Durham-tech
More batteries (4 more) + charging stations (3k)
Flight Planning Discussion
Cotton [RED]
D block: D2 D3, D5, D7 (dependent on Todd), maybe A3 (dry cotton)
B - late planted cotton test
Canopy Temp, NDVI, Canopy Height
Planting first week in May
Soybeans [GREEN]
A41 (irr/non irr trial) - thermal (Anna Locke)
Perhaps time series during drought
Gas exchange focus
Can you replace wilting ratings? (again, ML approach opportunity)
Planting first week in May (more like single crop)
F Block: F4A F4C (Ben Fallen)
F2D, F4D - large fields (wilting ratings)
Irrigation is cut, ratings start
Planting first week in June (needed because of double crop)
Ballpark July/August wilting starts (3-4 days without rain)
Wilting ratings (perhaps ML opportunity?)
Peanuts (Jeff Dunne) [ORANGE]
A43
Planting (maybe) mid May
Corn (Chad Pool) [BLUE]
A42
Soil moisture meters and compare to weather station, works with Jason Ward
Planting this week
Wheat [PINK]
C Block (harvested in June)
Turf
E2
Hort Crops - Sweet Potatoes, muscadine, others [YELLOW]
B2B/B2F
B4A - Peach yield test
Tobacco/Soy - herbicide trial
(B4B)
Planting week of May 15, plant both the same day
Sweet Potatoes
B2B2 (plant in June) - not exactly decided on field
Initial Plan (Amanda & Rob)
Blocks D, A, E - specific targeted flights, weekly, varied resolutions based on objectives
(Brynna & Rob)
Need to refine goals and objectives crop specific
Blocks F & B - SH staff handles weekly collections plus one-offs as possible uses-cases emerge
More batteries?
April 11, 2023
- Brynna signed to join Rob’s program as student (should be showing up ~June 1)
- Joe and Rob are going over equipment bids with Susan Wassmer
Looks like equipment will be available 2-3 weeks after they’re received
- Rob will start collecting data and cover until Brynna is on board
Chris - AerPaw company (open source ecosystem) that runs the 5g out at Lake Wheeler interested in us running their drones
Mobile node that runs 5g tower, maintain wireless systems
Wants to know if we can run their drones at some point in the future
Let them know interested in the future but not necessarily right now to alter our purchasing
Might be worth looking at meta-data to make sure our systems work with their schema as well
Jevon - Dell workstation in queue to arrive ($13K) for Sandhills
Ubuntu OS default
Amanda and Joe will ask Cornell to attend a future meeting to discuss their road map and ours.
May 18th, 1-3 as the likely time to meet with the larger group. Kirtan can be here. Need to have him here earlier in the week to prepare.
March 28, 2023
- docker onto the workstation at sandhills
- Stitching - we need someone to explore licensing on HPC’s. Charges based on users or cores could be a problem. Jevon Smith do you have time to look at licensing from Metashape vs Pix4D?
- downstream metadata to add post-ingestion (additional columns)
sandhills or other station, field, crop, researcher
- 30TB available in HPC 'storage’ for storage and processing (central storage)
-> can request more as needed
- drone out for bid (closes Thursday - hoping for Mid-april to May)
Mar 21, 2023
Kirtan available to start mid-May
Set up to meet rest of the project participants at the next meeting 3/28
Discussion on GPS connected equipment available at the stations
What will be used for the particular target projects?
Drone update - going thru purchasing
Workstation update - Jevon submitting request today, probably arrive in 2-3 weeks
Should plan on starting data collection soon as needed then informatics downstream
March 16th Design session
Decisions made for version 1
Drone operator will load images into a workstation at Sandhills. No metadata will be human input at this stage of data ingestion.
Plan A will involve moving this raw data via Globus back to the Research Storage on campus where it will land in a storage locker named Sandhills.
Metadata embedded within the images will be scraped and put into a database and the images file names will be changed to some combination of date and gps location. The metadata embedded in exif files:
Data/Time Collected
assuming sensor is set correctly
Image properties
(size, resolution, bit depth, etc.)
Camera parameters
Make, model, etc.
GPS Location of image
Latitude, longitude, altitude
All raw data will be automatically stitched and processed into data products and written into the Sandhills directory. Subdirectories could be used to delineate types of data products. Note: someone must run some tests on HPC to determine the optimal number of cores, GPU’s, and RAM to run this.
Data products and raw data will be archived after 12 months. On prem tape or cloud cold storage from the large providers are acceptable.
Researchers may ‘claim’ data by copying data to their individual lockers.
Web tool
A web based application will allow users to find their data and to compute plot level values. This tool will be relatively simple and it is not meant to be a total analysis and visualization tool. We still anticipate that researchers will do the majority of those tasks themselves within the software of their own choosing. Our app will do the following:
Allow researchers to search for all data corresponding to a particular set of coordinates and year combination. They can do this via a polygon selection tool that allows them to identify individual fields.
Allow researchers to upload a field map (with RTK level plot corners) so that plot averages (or percentiles) for any band can be calculated. Plots values should be downloadable. See commercial provider Solvi for indications of intended scope.
User defines plots in at least 10 polygons
Additional metadata will be written back to the database based on use of the web tool. Data such as crop, researcher and likely more can be appended to the metadata.
Miscellaneous notes and barriers
Our database will have to live outside of HPC on the university’s VM farm, which is technically separate from the HPC.
The web tool will have to run on the university’s VM farm too.
While HPC is our ‘plan A’, we will use the workstation at Sandhills as our development environment. Work will be moved from that computer to HPC as soon as possible during each iteration.
March 14th
Position Updates: UAV Pilot (Brynna) and Geospatial Developer (Kirtan)
Looked at Joe’s Metashape performance
Jacob suggests - run with/without GPU and test/tune performance
Can't work directly off SMB mounts, etc.
Post-processing options
Plan A: Go through HPC (bit understaffed) (production server)
Plan B: Sandhills server rack (development server)
Strategically involve HPC center, “we can only do X offsite because HPC does not offer Y”
Discussed pulling metadata from images, storing in DB, query DB for requests surrounding the raw images
Design Systems Buckets
PHASE I: UAV Image Collection, Transfer, and Post-Processing
Design, develop, and test pipeline from image collection to post-processed data
Focused effort at Sandhills Research Station (11) - and breeding efforts
Assess resources and needs (people, hardware, equipment)
PHASE II: Analytics and Visualization of UAV-derived data products
Focus is on developing tools and systems to analyze processed imagery
PHASE III: “Become the daily UAV-imagery provider for all research stations”
March 7th
Position Update: UAV Pilot
PSI Drone architectural design session (Thu Mar 16, 2023 3pm – 5pm)
Overview of current pipeline
Research Station Hardware Discussion
Adding compute capabilities to the research station hardware (Jevon)
benefits/limitations/cost?
Decided to purchase additional workstation dedicated to this project and housed at research station
Rob’s recommendation ~4TB NVME for processing, then perhaps 4-8TB SSD for short-to mid term storage and/or overflow.
GeForce RTX 3000 series
University/PSI/commercial post-processing compute resources available for post-processing imagery (did not discuss)
Not a lot of GPU’s
HPC is poorly administered, but should get better; cover all our bases for now with workstation (plan B)
Flights (targeted versus broad farm-level)
February 28th
Position Update: Developer Position KIRTAN DESAI
Position Update: UAV Pilot
General Expectations
# of flights, how many programs/fields?
First year ~1/week at Sandhills
Turf, Cotton, and Peanuts?
Training and training material development
Lower priority at this point, but may be able to work in with some extension efforts
Timing-first flights?
How to incorporate prior work done on ‘leveling data‘ - Rob
Bring in design group - leaning toward ‘unstructured’ uploads
UAV Equipment updates?
Do we know the expected delivery for sensors and platform
Things to check - with dual mount both sensors RTK capable?
Vision for first-year use (1 w/ UAV Position, 1 w/ Jeremy @ Sandhills)
Hardware Discussion
Before any hardware is ordered, need to consider if image post-processing will occur ‘off-site’, at a remote HPC Center, or ‘in-the-cloud’
Need to ask Jevon about added costs if done remotely (GPU, etc)
Perhaps spec out two options
Arch-Design - need to decide when to bring in Chris’s ‘Geospatial Guy’ (Milad)
Design Group Meeting (two weeks)
To Do
Jevon - additional costs for hardware at remote site if off-site post-processing images
Email susan about Equipment status
Follow-up with potential UAV Pilot for year 1
2 weeks - do the demo (lead off - then talk with design team afterward, 2 hours)
Setup meeting to discuss Sebastiano Busato, Daniel Perondi involvement
Demo complete workflow with data (could be a long meeting)
Include Milad
Call Amanda about Developer position and if she is okay with Kirtan (Chris)
To Do (Longer Term)
Setup meeting with Cornell Group to discuss lessons learned, their vision for the ImageBreed Project, collaborative development opportunities
Feb 14, 2023
Here is a dumping of our chats. Will clean up later.
12:05:30 From Chris Reberg-Horton to Everyone:
Notes if you will. First thing to know frequency, flight height, RTK enabled drone so we don't have to have ground control points.
12:10:25 From Chris Reberg-Horton to Everyone:
Need to make sure RTK correction is applied to both sensor systems. SD card removed and loaded. Use MetaShape or other similar product where processing occurs on site. On a local server rack. Is this totally automatable? point clouds? Rasterization. return to this topic
12:11:04 From Chris Reberg-Horton to Everyone:
at some point, putting in a plot map overlay to create the plot level outputs.
12:17:42 From Jeremy Martin to Everyone:
Replying to "at some point, putti..."
Who's responsible for onsite processing if server rack located at SRS?
12:18:27 From Chris Reberg-Horton to Everyone:
Replying to "at some point, putti..."
The automated pipeline is responsible
12:32:37 From Chris Reberg-Horton to Everyone:
Globus for data movement. A decision made.
12:49:29 From Chris Reberg-Horton to Everyone:
Version 1 decision. Save all the data. we will study what percentage of the data gets used.
12:59:21 From Chris Reberg-Horton to Everyone:
design to around 100' of elevation for flight