User Testing Notes

06-03-2025

Attending WCCC e-board meeting: Nick Andrews, Juliet Norton, Victoria Ackroyd, Anna Morrow, Nate Stacey, Elizabeth Seyler (note taker), Clair Aiken, Mikah Pinegar, Adam Smith

Testing Update

  • Elizabeth provided stats on responses now that the deadline has passed. We met our goals for regions and have 38 total responses from facilitators and respondents. (See more details under 05-30-2025 notes.)

  • Elizabeth briefly described her data analysis process and what she’ll present to the e-board on June 11.

  • We’ll meet again on June 17 for the regular WCCC e-board meeting to discuss any questions that have arisen.

  • We’ll meet finally on June 23 to wrap up discussion of testing results and next steps for the developers.

Development Update

  • Adam showed tool changes made so far in response to Sarah Light’s and the whole e-board’s requests, including: Alaska and Hawaii now visible on the map of the Welcome page; better UI to select either Browse or Get Recommendations; and the irrigation filter now in two places, including on the Crop Calendar.

  • We discussed planting windows and accuracy; Juliet will do a bit more on that. (Noting this after the meeting; I didn’t get the details.)

  • Adam and co will draft a description of the tool from Sarah Light’s text and share it with the WCCC e-board for approval. This will go in the About section and in a Wizard on the PSA website to direct people to the various CC DSTs.

 

05-30-2025

Attending: Elizabeth Seyler, Victoria Ackroyd, Mikah Pinegar, Adam Smith

WCCC Selector Testing Update

  • Numbers to date: 29 Facilitator Forms completed with respondents, 24 Respondent Surveys completed, 8 facilitators did own testing

  • 6 respondents yet to test, and one facilitator yet to do own testing; Eliz doing another Hawaii test on Monday

  • New deadline: Mon, June 2, 5p

WCCC Selector Data Analysis

Eliz shared spreadsheets of the Facil Form data and the Resp Survey data and her analysis processes to date.

  • Facilitator Form process:

    • Eliz color coded fields for positive and negative feedback, suggestions, and bugs/data issues.

    • Nearly finished grouping/condensing data by positive, suggestions, missing info, confusing parts, and bugs/data issues.

    • Next she will identify themes and the number of testers who commented similarly. She will also differentiate between bugs and data problems.

    • Will deliver bugs and data problems to PSA team at Wed 6/4 meeting. These will be grouped by themes so Adam can make single tickets. Adam, Mikah, and team will begin working on those ahead of 6/11 presentation of testing results to WCCC e-board. Adam also has other good news to report to them--something WCCC requested has been completed.

  • Respondent Survey process:

    • Creating pie charts and other visuals to represent respondent demographics

  • Plans to merge data from Facil Form and Resp Survey to see whether themes emerge by demographics.

    • Mikah can easily do that merge (matching the email fields) for whatever columns Eliz wants to insert into Facil Form spreadsheet. Choices are region, state, irrig/nonirrig, perenn/ann/grazing, exper with cov crops, exper with technology, farm acres, farm size description. Eliz will likely send to Mikah for merging after testing ends Mon 6/2.

    • Victoria recommended not spending much time looking for themes in demographics unless Eliz wants to explore the reason for an outlier, e.g., were people with little cover crop experience similarly stymied by certain tool elements?
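
The merge Mikah described--matching on the email field to pull chosen demographic columns from the Respondent Survey into the Facil Form spreadsheet--could be sketched in pandas. The column names and values below are hypothetical placeholders, not the actual survey fields:

```python
import pandas as pd

# Hypothetical stand-ins for the two spreadsheets; real column names differ.
facil = pd.DataFrame({
    "email": ["a@example.org", "b@example.org"],
    "feedback": ["liked the map", "filter was confusing"],
})
resp = pd.DataFrame({
    "email": ["a@example.org", "b@example.org", "c@example.org"],
    "region": ["Pacific NW", "Southwest", "Hawaii"],
    "irrigation": ["irrigated", "non-irrigated", "irrigated"],
})

# Left-merge on email so every Facilitator Form row keeps its feedback
# and picks up only the chosen demographic columns from the survey.
merged = facil.merge(resp[["email", "region", "irrigation"]],
                     on="email", how="left")
print(merged)
```

A left merge keeps all Facil Form rows even when a respondent never filled out the survey; those rows would simply show blank demographic cells.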

Eliz is creating a PowerPoint presentation patterned after Game Theory’s VegSpec testing one. She will present it (with some input from Adam) at the 6/11 meeting.

  • Eliz will do any more analysis WCCC e-board requests

  • Eliz will upload all raw data, analysis tools, the presentation, and other relevant items to the WCCC shared drive and the PSA shared drive before she leaves PSA at the end of June

 

05-20-2025

Attending: Sarah Light, Nick Andrews, Juliet Norton, Victoria Ackroyd, Anna Morrow, Doug Collins, Nate Stacey, Elizabeth Seyler (note taker)

WCCC Selector Testing

  • Victoria will help with outreach meetings over the summer, so both Clair and Victoria will be at the e-board and outreach meetings. The first outreach meeting is tomorrow; Sarah L will share the latest update from Steven M.

  • Sarah is working on a budget for outreach and will be in touch with Kayla Driver

  • The testing numbers to date: 37 Total respondents, 18 tests completed (plus 7 facilitators' tests = 25 completed); 11 tests to be completed and a few more facilitators' tests

  • Testing ends May 23, but a few more tests will straggle in the following week. Then Elizabeth, Victoria, Adam, and Mikah will review results and discuss recommended tool improvements.

  • At June 11 meeting, Elizabeth and PSA team will present testing results and tool recommendations. At the June 23 meeting, we’ll discuss snafus or questions developers have.

  • Victoria will ask Steven who will play Elizabeth’s role when she’s gone. Juliet could help with that, especially because she was on the AirTable end. Victoria could be the liaison if Juliet isn’t available. Clair and the WCCC e-board can communicate with the developers directly, too. Adam is on the West Coast and can be available on Pacific time. The questions will be more specific in this round of changes, based on the feedback.

  • Juliet is supposed to be writing manuscripts but is happy to train on what we’re doing and can hold onto it all until PSA gets up and running again (funding dependent). Juliet confirms that she can play this role, given her scope of work. Anna Morrow confirms.

Addition of British Columbia

  • Sarah L: some people from BC reached out to us about being in the tool.

  • Juliet: the time commitment for us depends on how much data review they want from us; if they want new parameters, it’s a lot of work.

  • Victoria: Ontario and Manitoba are in the tool, but Canadian data have to be in English and French. This probably falls under the purview of new work; this may not be the best time. Once we finish the WCCC selector testing, Victoria will work on reports, and tool work will be on ice until PSA funding starts up again.

  • Sarah L: Raelani Kesler and Deiter are the Canadians who expressed interest, but WCCC is going to wait on adding BC bc of funding freeze and PSA projects on ice.

Sarah on Sabbatical

  • Sarah has a new cell phone and will put WhatsApp on it, so WCCC can communicate with her. She will not check email more than weekly.

05-06-2025

Attendees: Adam Smith, Juliet Norton, Nick Andrews, Anna Morrow, Sarah Light, Victoria Ackroyd (note taker), Doug, Nate

WCCC Selector Testing

Agenda:

  • Changes due to staffing/funding issues

    • Cora has started another job

    • Nick Sirovatka is still with WCCC but no longer with NRCS, so we lost that federal contact

    • Steven Mirsky is now Sarah’s only “direct” line to USDA

    • Game Theory’s contract is complete

    • Elizabeth Seyler’s contract ends at the end of June (not July)

      • Elizabeth has worked out a beta-testing timeline to get it done with the WCCC

    • If anyone hears anything about funding/etc, please let Sarah know

    • How solid is tool developer funding?

      • Adam: the dev team has a 2 yr budget, bare bones group, but WCCC tool is at top of list

      • Adam and Ted will be doing most of the updates

      • Victoria has nothing to add, does not know

      • WCCC is keen to finish the project, is trying not to be annoying

      • Discussion of nature of funding - we don’t think this funding is from the IRA, but rather NRCS/ARS/university NACAs

      • Question about being able to do the SRC?

  • Timeline for next 4 months

    • Loss of Game Theory. 🙁

    • Elizabeth got info from them on how to analyze user testing results (Juliet also has professional experience and training in this area).

    • May 23rd is user testing end date.

    • Elizabeth is tracking respondents/demographics and will send the WCCC updates (by email, twice a week, while Sarah is on sabbatical).

    • Elizabeth and WCCC will encourage them to recruit more participants.

    • Cora will send out a MailChimp email recruiting participants soon.

    • Elizabeth will work with devs in early June to outline tool changes.

    • Sarah will join a few meetings during the end of user testing even though she will be on sabbatical, so meetings need to be early AM Pacific/afternoon Eastern/late evening Spain.

    • Then the devs can take the info and make the needed changes.

    • Clair Akin has agreed to be a liaison between Doug, Nate, and Nick (WCCC e-board) and this group to keep things moving.

    • The WCCC will then do outreach over the summer and once Sarah is back.

    • The outreach committee and exec board of the WCCC have combined and meet the last Wed of every month from 4-5 PT. Nick S and Clair Akin are involved.

    • Who will keep ownership of this meeting re: meeting invite?

      • Nick will maintain it, then Sarah will take over after her sabbatical.

      • Cora will delete the old meeting request going forward.

    • Sarah will ask Elizabeth for demographic/respondent numbers update today.

    • How long will dev changes likely take? Depends on changes needed, but 4-6 weeks is likely.

    • Feedback from other tools that are relevant to this tool will be incorporated. 

    • WCCC responses from testing will take priority for updating the tool over basic Game Theory suggested updates.

    • The devs are working on the termination info sheet changes, but termination in calendar view will come in a future version of the tool. They’re waiting for WCCC to give the go-ahead to work on the latter. With Game Theory “gone”, design will fall to the devs and the work that was previously done.

    • Likely an August “release” for the tool ahead of the fall 2025 cc planting and winter conference season.

    • Elizabeth’s facilitator/respondent sheet will help people recruit participants.

    • Elizabeth is managing the overall keeping track of respondents.

    • Nick found some respondents in HI. Sarah found a few CA respondents. 

    • Elizabeth will be the point of contact.

    • Victoria is available to facilitate if someone can make initial contact with potential respondents. Sarah and Nick’s people will need to be paired with a facilitator.

  • Termination info implemented in calendar discussion

    • Adam pulled up the ticket with the image Juliet sketched out for starting design: WCCC Implementation Requirements - Termination Windows · Issue #556 · precision-sustainable-ag/dst-selector

    • The trick will be changing this from a planting date calendar view to how to show termination.

    • It’s a data visualization “problem”.

    • Game Theory would have been useful here, but the devs can make a start on it, and Game Theory or someone like them can polish it

    • Plans to reconvene to make some mock-ups next Mon or Tues.

    • Sarah L would like this mocked up before the “final review”; it’s the last big piece to put in place.

04-22-2025

Attending: Adam Smith, Sarah Light, Nick Andrews, Cora Rose Bobo-Shisler, Juliet Norton, Victoria Ackroyd, Anna Morrow, Marguerite Dibble, Elizabeth Seyler (note taker)

WCCC Selector Testing

  • Selecting key traits (with Juliet): 10 minutes Airtable | Everyone's app platform

    • Key traits are showing up on the Crop List of the Selector; we need the same for the West. Juliet created an Airtable of them to facilitate the conversation. Up to about five fit if they’re small enough. She shares the Airtable so WCCC folks can look through it; Sarah L reviews them.

    • Add Acceptable Termination Methods? Adam: entries that long would limit the number you could have. Sarah L: trying to focus on things that are not goals. Adam: other CCCs have used things that would be polarizing for species, yes/no options. Victoria: dry matter because of weed suppression; nitrogen rules out adding so much; those were used in the NECCC.

    • Sarah L: add drought tolerance, soil drainage, and biomass range (I think). Nick A: life cycle and ease of establishment, too? Sarah L: nitrogen and biomass definitely. Adam: salinity is already a filter. Sarah: salinity and drought will be good West key traits.

    • Nick: C:N ratio vs nitrogen content--do we have the latter? Juliet: yes, we have a goal. Victoria: no other nitrogen info; same for Anna. Juliet: we have nitrogen scavenging and nitrogen fixation. Nick A: do we need C:N ratio as a key trait? It’s already covered in the goals. Sarah: agree that it’s not needed.

    • So we have: life cycle, typical dry matter, ease of establishment, and two together: salinity tolerance and drought tolerance.

  • Review termination window updates and other changes to the tool

    • Nick A: make it clear when the filters refer to cover crop selection. Change all titles on that page to: Cover Crop Goals, Cover Crop Additional Filters, Cover Crop Planting Season, Cover Crop Lifecycle, and Will the Cover Crop Be Irrigated? (yes/no)

  • User Testing:

    • Finalize timeline: now through May 23

    • Three respondents have completed testing; four have signed up to be testers.

    • Cora sent Elizabeth her spreadsheet of possible respondents, mostly in Oregon.

    • Recruitment efforts: Cora sent the request for respondents to all the chairs of subregion committees of the WCCC and some people Cora chose. Sarah: would be good to send to the whole listserv and ask facilitators to recruit. Cora: the downside of the listserv is it’s not regionwide. Marguerite: tell facils we need this general diversity, and let them start testing. We’ll push to fill the gaps as we start getting responses. Sarah: will re-email some regional chairs.

    • Adam found a potential respondent who runs an urban garden in the San Diego area; he will send her Elizabeth’s email address

    • May 20 is a meeting of the WCCC e-board, at which we could try to fill in any gaps in respondents

  • Nick S will be leaving sometime this month; Laura Star will take over his responsibilities for western states. She is willing to liaise with us but won’t have as much time.

  • Sarah, Nick A, Doug Collins, Nate Stacey will be the WCCC e-board people to contact over the summer.

 

03-25-2025

Attending: Adam Smith, Mikah Pinegar, Sarah Light, Nick Sirovatka, Nick Andrews, Cora Rose Bobo-Shisler, Juliet Norton, Victoria Ackroyd, Anna Morrow, Elizabeth Seyler (note taker)

WCCC Selector Testing

  • Answers to WCCC team Qs – see the doc

    • Criteria vs demographics/characteristics

  • Adam walked through the rest of the tool and showed some development changes

    • Which goals are mapped to which termination methods. Juliet will add to the data dictionary and GitHub and let Adam know.

    • There are some problems with the AirTable data--units of measurement are missing. This is a Trevor question. Everything on the info sheet is fed directly from AirTable. Juliet is looking into the seeding rate info; baselines were evaluated for every state except Hawaii. There’s no info from VegSpec here; the data are separate.

    • Notes should be corrected or removed (at bottom of info sheets)

    • Add spaces after commas on the info sheets.

    • Make it clearer that the seeding rate differs for different varieties. Mikah: tool tips, titles, categories, values--all of that should be changed in AirTable and we’ll load it back into the tool.

    • Juliet: first step for the board is to let her know of any needed changes.

    • In user testing, we should continue to use the Feedback button, and Adam and Mikah will make sure Juliet can see the GitHub tickets.

    • The 100s are a bug in AirTable--an ID in the database? Mikah and Adam will fix it.

    • Juliet: what are the key traits to include under the Crop List view? Adam: we know key traits are an issue.

    • Comparison view: planting time, flowering time, irrigation type will be taken out bc they’ll appear earlier in the tool.

    • C to N Ratio: Nick A would like it to read N rate (or something like that) for clarity; it’s the N that changes.

Next steps:

  • Sarah L will recruit more facilitators and clarify the respondent (tester) recruitment process at two meetings tomorrow. (Will the board find respondents? Will the facilitators do so?)

  • Eliz will send Sarah an invitation for Sri and Clint as mock testers. She will send it to Nate and ask him to invite them to be mock testers.

  • Eliz will send Sarah the notes w/ GT from late Feb regarding all the options/factors we need to cover when recruiting respondents and for the respondent survey (demographics).

  • When the facilitator list is complete, Eliz will invite them to two diff meeting times for training; we’ll record them for anyone who can’t attend.

  • Nick Andrews and Cora will do a mock testing session, then Cora will do so with someone else, too. Victoria will, as well. Eliz will invite Juliet to them all so she can observe.

  • Juliet will make multiple changes in AirTable and let Adam know when complete.

03-19-2025

Attending: Elizabeth Seyler, Adam Smith, Rick H

WCCC selector testing

Topic: creating a walkthrough video of WCCC selector pages for facilitators to use during training and testing

  • Rick demos what he created so far. He walks Adam through his coding process.

  • Adam will copy/use the same commands or create his own. Rick available to help.

  • The walkthrough is on one of Rick’s branches in GitHub for the selector.

  • After Rick created it, he sent it to Sarah to put on YouTube.

  • Adam plans to complete the walkthrough asap, probably this week.

 

03-17-2025

Attending: Elizabeth Seyler, Adam Smith, Mikah Pinegar, Victoria Ackroyd

WCCC Selector Testing

  • How are development changes going? Adam: most are done; we just need data updates from Trevor. One thing won’t be ready for testing: map updates. It’s a separate component that we’re in the process of revamping. We’re going to use the same map design everywhere. He’ll show off the changes at next week’s meeting.

  • Status of user-testing materials: Eliz created the Facilitator Brief, Facilitator Form, and Respondent Survey and integrated feedback from GT. Awaiting answers from WCCC before she can complete the materials.

  • Eliz gave Marg feedback on her respondent-tracking Confluence sheet

  • Next steps:

    • Show off development changes at next week’s meeting w/ WCCC

    • complete testing materials when receive answers from WCCC on key questions

    • schedule mock testing, conduct mock testing, fine-tune testing materials

    • schedule facilitator training, support facilitators in recruiting respondents

A/B Testing

  • Eliz gave Shannon feedback on her draft A/B testing survey

 

03-10-2025

Attending: Elizabeth Seyler, Adam Smith, Mikah Pinegar, Sarah Light, Nick Sirovatka, Nick Andrews, Cora Rose Bobo-Shisler, Juliet Norton

WCCC Selector Testing

  • Juliet: WCCC needs to see how the data are presented in the tool, e.g., the filters and key attributes and traits. An opportunity to walk through step by step would be helpful for WCCC decisions.

  • Nick S: Add language saying it’s the last polygon you selected that the tool uses. WCCC would like a hover tool over the polygon to say that.

  • Mikah: say “do only one polygon at a time” and that the pointer indicates where the data is pulled from

  • Adam: you can create multiple polygons and select them when you log in to pick the one you want to use.

  • Sarah L: take out “more states coming soon” when all the states are complete.

  • Sarah L: can they delete old fields? Yes, said Adam and Mikah.

  • Mikah: Drainage class is from SSURGO data from just the red point. Juliet and Nick: Could there be multiple classes? We could add it back in for the West. Nick: OK to leave this way. Adam: the drainage class will impact the cover crop recommendations.

  • Sarah: Login box should tell you more: this will save your history so you can use it in the future. Adam agrees; needs some design updating.

  • Nick: noticed Ontario. We have collaborators in British Columbia, and they’re interested in the tool.

  • Juliet: if they can provide data and verification, we’d be able to add them.

  • Juliet: can help with providing Hawaii data. She sent a link to Trevor.

  • Sarah: maybe for another update we might be able to add the British Columbia data; not before testing. Once we’ve done testing for WCCC, we can show them the tool and try to add their data.

  • Nick: I think you should only be able to draw one polygon at a time, then save it and use it and have the option to create another polygon. Mikah and Adam: someone probably asked for being able to draw multiple fields. Nick: confusing if someone has a field in OR and in CA, then might forget which field the data is about. Would prefer that you’re only able to create one polygon at a time. Adam: we can limit the number of states you see.

  • Sarah: what if you select a city location or a mountain location that’s not farmable? Answer: the soil type doesn’t show up because there’s no SSURGO data.

  • Mikah: display a message that says “you’ve created a field in a state other than the one you selected” and tell them their choices. The data pulled is completely state-determined. It could also span different CC councils.

  • Sarah: would be good to tell user that they’ve selected an urban area.

  • Sarah: Is there a maximum field size that can be selected? Get back to her on this.

  • Goals are not required to move forward on the tool.

  • Sarah: in Calif there are winter and summer cover crops, mostly they don’t overlap. Sarah: most users will wonder which species to grow in the winter. Juliet: the way data was collected won’t make it easy to do this; I could create a parameter to make your request possible. It’s implicit in the planting dates. Could we add a filtering mechanism? Adam: yes, we can select spring, summer, fall and create a filter. Nick: that sounds like a good solution. Can we search by goals and planting window?

  • Adam: termination window: it’s now on the info sheet for each cover crop. Sarah: I thought it would be a visual window. Juliet and Adam: we didn’t have enough info to create a visual. Juliet: on a site conditions or other earlier window, people should choose among rainfed, irrigated, and non-irrigated; planted in spring, fall, winter, or summer; perennial vs annual; and type of irrigation--on the Goals and Cash Crop Growing Window page. Nick likes buttons, but we can also use drop-downs.

  • Juliet: use drop-downs when user can only select one thing.

  • Juliet: will give the exec committee what the data will look like; then development will get it for inputting.

  • One title, then include all the pieces of info--qualitative and nonqualitative

  • Sarah: these are known methods, so it’s important to say “Acceptable Termination Methods.” The word “qualitative” is confusing; take that out. Adam: that’s fine. Nick:

  • Sarah Light: where did these data come from? Juliet: from the data verification process.

  • Nick: do termination methods include no-till? e.g., roller crimping? Other Nick: may not be there bc no guarantee that it will kill the cover crop.

  • Juliet: mowing and vegetation were taken off the list by first verification committee: Allowed termination methods: Tillage at Vegetative, Tillage at Flowering, Chemical at Flowering, Mow at Flowering, Roller Crimp at Flowering, Chemical at Vegetative, Intense Grazing, Winterkill

  • Nick: in chat: Winter Camelina- Tillage at Veg

  • Nick: in chat: For specific crop species information with multiple photos, can you disable the auto shuffle and turn it to shuffle on click or advance arrow?

  • Mikah: we need key traits from the WCCC. Each council has to select about three key traits for all the crops--whatever values you want for comparison view. Adam: this should make crops a choice or not a choice pretty quickly.

  • Sarah: doesn’t like the planting window; Jan to Dec would be better. Mikah: add a legend for Jan on left and Dec on right.
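
The planting-season filter discussed above (letting users filter species by spring, summer, or fall planting, as Adam proposed) could work roughly as sketched below; the crop records and field names are hypothetical illustrations, not the tool’s actual data model:

```python
# A minimal sketch of the proposed season filter. Crop entries and the
# "planting_seasons" field are hypothetical placeholders for illustration.
crops = [
    {"name": "Winter Camelina", "planting_seasons": ["fall"]},
    {"name": "Cowpea", "planting_seasons": ["spring", "summer"]},
    {"name": "Cereal Rye", "planting_seasons": ["fall", "winter"]},
]

def filter_by_season(crops, season):
    """Return names of crops whose planting window includes the season."""
    return [c["name"] for c in crops if season in c["planting_seasons"]]

print(filter_by_season(crops, "fall"))
```

Because the season is implicit in the planting dates Juliet mentioned, a real implementation would derive each crop’s seasons from its planting-date ranges rather than store them as a separate field.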

Next step: we’ll meet again at the next WCCC meeting on Tuesday to continue discussion. We did not get through everything.

 

02-25-2025

Attending: Marguerite Dibble, Elizabeth Seyler, Mikah Pinegar, Victoria Ackroyd, Adam Smith, Sarah Light, Nick Sirovatka, Nick Andrews, Trevor Puckett, Anna Morrow, Cora Rose Bobo-Shisler, Juliet Norton, Nathan Stacey, Doug Collins

WCCC Selector Testing

  • Marguerite presented the draft testing plan, here.

  • Sarah asks whether these are iterations for the first version of the selector. Mikah confirms yes.

  • Nick A: How do we get people from all demographics? Marg: we’ll ask facilitators to find people who meet our requirements.

  • Juliet: Irrigated vs non-irrigated; precipitation range; annual vs perennial

  • Nick: within perennial include grazing and perennial horticulture

  • Sarah: will we have annual vs perennial crops? Marg: should we pull that out separately? Or include it in bioregions or in the types of growing conditions? Nick: we’ll have to focus on irrigated vs nonirrigated, as well as annual vs perennial. I think we could do that within the regions, as long as respondents have to answer those questions. Sarah: four key regions, then irrig vs nonirrig, then perennial vs annual.

  • Marg: we can do branching in the survey--different questions according to how they answer the region question. The goal is to get what we need from testers.

  • Nick: can we populate some geodata based on their field location? Marg: yes, that would be cool.

  • Nick: in another testing session of a different tool, we got a lot of consistent feedback from a few respondents. Maybe we should parse out by acreage of field locations. Sarah: would species selection be influenced by acreage? Not sure.

  • Sarah: I might be too regional to give a larger view of the west; people who did the data verification. Nick S could provide that…when he’s back on the call.

  • Marg: could your team mull over these demographics?

  • Juliet: which testers were involved in data verification? We may need to know this. They will have more knowledge than those who weren’t part of it. Victoria: I think naive users might be best.

  • Nick: another tester demographic: farmers vs NRCS agents vs consultants; English vs non-English speakers; age? Sarah: farmers not happy to provide demographic data. Juliet: how long have they been using their cropping systems? Marg: we’ll figure out how to ask these things while keeping the testing process as easy and light as possible for the respondents.

  • Nick A: We have Cora as a facilitator, and she can dedicate full-time to it. If we know how many traits we have, then we can figure out how to get a group of respondents who represent those traits. Marg: more responses is better than fewer bc everyone’s experience is unique. Victoria: Anna and I have experience with user testing and could assist if you like. Sarah: one board member volunteered; what’s the training like? Marg describes it.

  • Victoria: could some NRCS folks who did VegSpec testing help with this?

  • Sarah: could we have a short overview of what facilitating involves so we could send to the board to encourage them to get involved. If no one volunteers, we could target some people. Nick enjoyed it with the other tool, so he could help sell what it’s like to be a facilitator. Sarah: if we show the value-added aspect, they may be more inclined. Victoria: flow-chart could be very helpful. Marg: expectations and value of the process in one short paragraph. Data diversification is a value from having more facilitators vs fewer. ASAP to Sarah. 3pm tomorrow is their board meeting.

  • Nick: do facilitators prompt people to use something on the tool? Marg: if we want them to test particular things, we can include questions on those into the Facilitator Form. Nick: if features aren’t obvious, we need to know that.

  • Juliet: My instinct is that you may need a different use case for perennial grazers versus horticulture systems, etc.

  • Sarah: what termination looks like should be included

  • Mikah asks some data questions

    • We discuss “Will you irrigate your cover crop?” Mikah asks them to decide what they want.

    • Sarah: email is best for sending us development questions. We’ll probably talk at our 4p meeting tomorrow. Adam will be avail at that time.

Eliz questions:

  • when do mock testing? with whom from wccc?

  • need a walkthrough video?

  • three or four weeks for testing?

02-18-2025

Attending: Jinam Shah, Adam Smith, Shannon Mitchell, Marguerite Dibble, Elizabeth Seyler, Mikah Pinegar, Rob Austin, Victoria Ackroyd, Heather Darby

VegSpec

  • Respondent Survey: 59 complete. A good blend of agents with diff levels of experience w/ conservation. Good range of comfort with technology. Regions look evenly represented; no data from 16 states.

  • Facilitator Form: 64 complete.

  • Eliz will send GT the results: export as Excel; and send emailed responses

  • At the VegSpec meeting today we’ll discuss these numbers, how it went, and whether any more are coming in, plus the timeline for results.

WCCC Selector Testing

  • GT will reach out to WCCC to find a meeting time to look at the draft on 24th or 25th, then they could show the board meeting on 26th if they like. Include Victoria and others in this call, but not everyone will come.

    • Victoria: 25th 3pm--Eliz will ask Sarah Light if we could do it then. Is that meeting agenda full?

  • Shannon: Anything we should keep in mind as we develop the testing plan? Victoria: don’t make any assumptions based on SCCC, NECCC, WCCC. They can be prickly. Very big region.

  • Marguerite: any key testing goals? Things to avoid? Mikah: irrigation is a big one. Victoria: termination is another sticky one for them. Adam: updates to the calendar view would come after testing, in version 1.5. We’re building A/B testing materials and can include that.

  • Shannon: we can sort out what should be user-tested and what we can decide internally.

  • Mikah: we haven’t talked about Hawaii, Puerto Rico, Alaska, Virgin Islands. No way to access them bc not on the map. He and Adam will discuss it.

  • Marguerite: we should think about how to get testers from far-flung places in the WCCC.

  • Shannon: what are the archetypes of climates/regions we want to test with the tool? Mikah: agreed, because there are many permutations we could consider. GT: that would be very helpful so we can set targets around them, then we can also get testers outside those archetypes. Victoria: they have subregions. See the WCCC website for more info.

  • Shannon: number of facilitators and testers? Mikah: what’s the minimum for meaningful results? He expects it will be smaller than VegSpec and other tools. Marg: we’ll balance numbers of large and niche subregions. Adam: could we appeal to their bigness and diversity and ask them to find the number of testers we feel would give best results? Victoria: perhaps. Cautionary Note: focus on how we’ll use the data, not data verification--which has already been done. They’re prickly about this.

  • Shannon: what is the ideal window for developers receiving responses? Mikah: two months from now is good.

02-10-2025

Attending: Adam Smith, Shannon Mitchell, Marguerite Dibble, Elizabeth Seyler, Mikah Pinegar, Victoria Ackroyd, Nick Sirovatka (NRCS in the west), Nick Andrews (WCCC, OSU Extension), Nathan Stacey (WCCC, OSU Extension), Sarah Light (WCCC president, Extension Sacramento Valley), Cora Rose Bobo-Shisler (WCCC), Doug Collins (WCCC?)

WCCC Selector Testing

  • Agenda: Introductions all around, talk through timeline, what is a facil and a respondent in user testing

  • Testing process: Nick Andrews: Western is a huge region; better to get feedback from types of users rather than from one behemoth region. Shannon: yes, we like to gather data from people who are representative of the types of users in a region. Also, how tech savvy are they? How much subject matter expertise do they have? Victoria: the west has some really unique concerns that can influence the tool.

  • Sarah L: irrigated and not. Diverse ecoregions and crop systems--perennial vs annual. Calif: state regulations on how farmers make cover crop decisions. Tech savviness: an aging demographic that is very active but less tech savvy; let’s make the selector accessible for them bc they won’t use other tech.

  • Nick A: scale--gardeners to farms of thousands of acres. Some use cover crops in grazing, too. Climates range from tropical to arctic or subarctic and to really arid; irrigated to nonirrigated.

  • Nick S: Add growing season length

  • Sarah L: termination windows? Different ways to show termination. Very critical for species selection. It’s a uniquely west challenge. Mikah: we’ll make some mock-ups before user testing to address that. Adam locates a draft visual for it and shows us.

  • Nick A: is this testing on tools for just desktops or also for mobile? Mikah: do you want it as mobile friendly as possible? Sarah L: yes, they’ll be making decisions in the field. We’d like to test a mobile version. She shows a mockup that GT and Mikah will review.

  • Sarah L: in Calif, Small farm is under 100 acres, which might be a lot bigger than in other parts of the country.

  • Sarah L: recommendation output is important. It should be very useful to people. They like the “dials.”

  • Nick: making it intuitive--where to start and where to go next. Give exercises or assignments to people to use the tool. Would like a mockup of how the whole tool works.

  • Sarah L: will users' input remain after they leave? Mikah: No (unless they log in).

  • Nick S: would like to be able to get to a final report really quickly, esp when out in a field with a farmer. Doesn’t want to have to enter a lot of data.

  • Shannon: 1. efficient process to get to the report, 2. data is useful to them. Steven: this feedback is consistent with what people in other regions have said.

  • Sarah L: can growers print out what they planted and in what field? Mikah: not right now; you can add species to your list, but there’s no button for what you’re actually planting. But if that would be helpful, we could add it. Sarah L: that would be useful. Mikah: we’d need to add this feature. The list is saved, but there’s no assumption that a farmer planted the species in the list.

  • Mikah: before we define the goals, let’s meet to walk through the tool and demo it after all the WCCC data is live in the tool (March 1), then you can use it and decide what we want to test for.

  • Nick A: concern in the southern areas about running into the growing season as testing starts. Could we look at something earlier?

  • Sarah L: Step 6 is too tight. We need 4 weeks there. A longer window will allow us to get good feedback from the south and other parts of the region. Growers get a little window in June after they’ve planted but don’t have big pest problems yet.

  • Nick A: how many facilitators can we recruit? We could move from south to north.

  • Mikah: pushing it back to June wouldn’t give much time for cover crop seed orders, which usually happen in August. Sarah: big outreach push will be in the fall, but we’ll promote it throughout the year. Extending the length of step 6 should work.

  • Nick A: could we show the tool to facilitators earlier?

  • Mikah: could we develop the materials earlier to meet this timing goal?

  • Shannon: we’ll see if we can revise the timeline.

  • Sarah L: Cora Rose will be doing the scheduling for facilitators.

  • Nick S: what are the modifiers you have for the western region--what’s the focus, so we can get facilitators and testers lined up? Let us know the variables/modifiers we need to cover in selecting people. Add growing season length to the modifiers.

  • Sarah L: pls send us that list of variables/modifiers and the kinds of questions you’ve asked testers in the past. Feb 26 is next WCCC board meeting--a good time to engage the board.

  • Mikah: we’ll be nearly ready; data for the NE and South are in

  • Adam: the west and midwest will be working, but there will be bugs. We could pick out screenshots for the board meeting. Sarah L: we actually already have these and have shown the board. Adam: I’ve sent you screenshots. Other things will be functioning by then. Sarah L: someone from your group could come to the meeting. Adam put it on his calendar.

  • Marg: GT will write up the testing plan and schedule asap and send to WCCC for feedback.

02-03-2025

Attending: Shannon Mitchell, Elizabeth Seyler, Mikah Pinegar, Victoria Ackroyd, Amanda Hulse-Kemp, Adam Smith, Chris Reberg-Horton, Akshat Savla

VegSpec

  • 30 facilitator forms and 32 respondent surveys are complete.

  • Elizabeth gave GT access to the results so far. She’ll share with Karl and GT the current breakdown of survey respondents' experience levels.

DSTs

  • There will soon be an intro-to-testing meeting for the WCCC

Drone software

  • Developers now have the results of testing and are making changes

01-14-2025

Attending: Marguerite Dibble, Elizabeth Seyler, Rob Austin, Mikah Pinegar, Anna Morrow, Jinam Shah

VegSpec

  • Elizabeth: Two facilitator training meetings yesterday were good. Karl invited a volunteer tester to each so he or a facilitator could walk through the Facilitator Form with them. Outcomes very consistent with what we found in mock testing in December. Both took 1 hour 40 min because there was a lot of discussion, and tester/respondents got into the weeds on the data. I reminded them that the data isn’t complete, so they should focus more on function. I also reminded facilitators not to teach people how to use VegSpec but rather note how user-friendly it is and how it could be improved.

  • Eliz and Marg made sure data was entering correctly in the Survey Monkey Facil Form. It’s not really clear why the second session yesterday doesn’t show up in Survey Monkey responses. We surmise that we can’t make text changes to the form itself while a facilitator is entering data, but we’re not sure.

Drone

  • Mikah: everyone on the drone team got the slide deck on the drone testing results

  • Rob: can see pretty quickly which items would be easy to change and which less so. Jinam: has ideas on how to accomplish various things, esp regarding the grids.

  • Next step: The group will meet to prioritize changes to the software, Wed, 1/22, 4-5pm. Mikah sending invitation.

 

01-08-2025

Attending: Marguerite Dibble and Elizabeth Seyler

Agenda: plan for full testing kickoff meeting later today.

John

  • welcomes everyone and thanks facilitators for participating

  • gives a brief overview of VegSpec and the goals of this testing round in collaboration with Game Theory and PSA

Marguerite

  • high-level look at testing and its goals

Elizabeth

  • describes PSA and its role in helping to test and improve VegSpec

  • tours people through the Facilitator Brief sections: Overview, Your Job, Testing Session Steps, including briefly showing the Facilitator Form and Respondent Survey

Marguerite

  • Facilitator Brief sections: Tips and Tricks, Troubleshooting

  • GT avail during these office hours: Thu, Jan 30, 1-1:30 and Tue Feb 11, 1-1:30. Eliz will send invitations to GT, Karl, all facilitators.

  • Karl will lead meetings in next two weeks for deeper dive into VegSpec

  • Any questions?

 

12-17-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Elizabeth Seyler, Heather Darby, Mikah Pinegar

Drone Testing

  • We’ll wait to go over the results of testing until Chris has put his info into the new format.

  • We’ll meet later this week to review the results and prioritize items for developers

VegSpec

  • We discussed the two mock testing sessions. Decided we like the version Eliz and Lindsey did better; lighter lift for facilitator and allowed session to stay within an hour. No demo of tool first: respondent jumped right into using it on their own and answering questions page by page.

  • Eliz will tweak the materials to reflect the latter process; update the facilitator form; adjust the respondent form. NRCS agents don’t have to pick the experience levels; they just get paired up. Add a list of what to do: Attend kick-off, attend session with Karl, pick your respondents, schedule sessions, conduct testing.

  • Tutorial? Karl would be a good person to do a walk-through bc he knows the lingo; or Rick could. We should provide some constraints, like just two minutes. Eliz will ask Karl about making a demo. Could use a portion of a recording from ad hoc testing.

  • Use the Feedback form if we find bugs.

  • Is there anything we should not ask about? Mikah: just that the data isn’t complete yet. Goal is not to check the accuracy of the data, just how it works.

  • Book the Jan 8 session.

11-12-24

Drone testing

  • Dec 9 from 9-12 in person, show app, live focus group.

  • Elizabeth will schedule a follow-up session for drone folks to discuss the testing plan. Will invite others from past drone meetings.

VegSpec

  • GT completed three ad hoc sessions. No huge hang-ups; people understood the tool and could use it. Now GT is nailing down the testing plan and the materials.

  • Does Karl want us to be focused on any other specific questions before the full testing? Shannon will ask.

  • Shannon: target responses 60; 35 facilitators; each tests with 2-3 respondents: 1 with high experience in conservation or a field agent, 1 with limited experience, 1 partner from BLM, Park Services, or field staff. Testing might go more quickly, or responses might trickle in. Heather: if they want to test with more people, they should. But she thinks people won’t feel as stressed about it with the 2-3 tester expectation. Wonders if the optional people should be more defined by NRCS.

  • Format of 1-hour session: Intro, demo of the tool and share the link, discussion questions, tester completes a survey. Discussion: mainly about features. Survey: test assumptions about the tool, gather demographic data and more. This combo tends to give good results.

Our overall goals have been: Are features missing? Are features not working well? So, go page by page in logical order through the tool. Heather: sometimes when I’m watching on Zoom, it’s easy to lose focus. Keeping people engaged is really important. Your approach sounds good. Tell people to join on their computers.

Next steps: Shannon will share the working test plan with us for feedback.

 

10-29-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler, Heather Darby, Jinam Shah, Emily Unglesbee

Update:

  • GT is working on a design review for the drone app, planning for full-testing materials creation, and booking sessions for ad hoc.

Drone software:

  • Quality-of-life design: log things in Confluence--a list of design elements. Chris: Akshat can probably help with these things; it’s mostly front-end stuff. On Dec 9 we might incorporate anything that’s not complete into the testing. Marguerite: yes, we can certainly do that. We’ll create an agenda when we get closer.

  • Rob agrees that we can do it in 2 hours; is the group small enough? Chris: 20 breeders, 3-4 supers, 3-4 admins. We want the others to know how breeders will use it. Supers are used to this type of software.

  • Drone testing: planning will happen as we get a little closer to Dec 9

VegSpec:

  • Marg: We have clear goals for the testing

  • Shannon: Ad hoc testing planning – ends Nov 22; we need to know whom to bring in and how to contact them. John IDed three things: goals tab, species tab, seed mix tab. We have time to execute these. The sessions will be short with a few people at a time, and we’ll use a form to get initial feedback.

  • Marg: We’d also like feedback on some design items for VegSpec – over next couple of weeks.

  • Shannon: VegSpec wants to focus on features during this window; but design can be fit in as needed as we have time.

  • Steven: scope or scale changes in VegSpec testing? Shannon: flexible on the scope, it’s scalable. Marg: we’ll need to review the plan to be sure it covers all the bases, all the goals.

  • For VegSpec, we’ll create fewer materials and make the testing process more streamlined.

Testing Materials (Elizabeth)

  • VegSpec Internal Ad Hoc Feature Testing Feedback Form: Shannon says we’re most likely to ask for feedback on individual features at a time, as this document does. “Feature” will usually mean “testing focus” or “screen.” Will probably need tweaks to the text, depending on what the “feature” is.

    • Replace the third and fourth questions with: “what is the feature/goal being tested” and sub in the script from pre-written templates that Game Theory has created.

  • VegSpec Internal Ad Hoc Design Testing Feedback Form: Could make same consolidation changes to the top of this form as discussed above.

    • Marguerite wants more questions on what this empowers you to do; what could you see yourself being able to do; is this getting them excited to do something; is it communicating resources/valuable information that you need for a particular goal/job?

    • Eliz asks whether these are features questions. Marg clarifies that it’s helpful to ask these things in the design testing, too, because users should respond to what they’re seeing on a screen.

    • Heather: need a question about was anything missing? Anything you expected to see and didn’t? Did the user enter with any preconceptions that weren’t met?

  • Mikah and team need to book a meeting with Game Theory to get up to speed on all the design progress that has been made and see if everyone is on board with what has been done. Jinam, Elizabeth and Victoria would like to be on that meeting. Booked for Wednesday, October 30 at 4 p.m.

10-15-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler, Heather Darby, Jinam Shah, Amanda Hulse-Kemp

Agenda

  1. Update on defining the testing goals and then defining the testing approach for VegSpec.

  2. Discuss drone testing schedule, attendees, goals

Drone testing

  • Chris: maybe a two- or three-hour session with breeders. Best for them: 9-12 on Monday, Dec 9, in Raleigh. That’s good for GT and Rob. Breeders are having a meeting that afternoon, starting at 1p. Jinam could join from India. Chris: developers aren’t always in the room when people do user testing. Jinam doesn’t have to be there, but he could come if he’s curious.

  • Rapid iteration on version 2 before then?

  • GT can take a look at design for the drone testing. Jinam putting it in the shared Confluence library. Jinam does mostly back-end work, which is very complex. The front-end changes should be fairly simple. GT will provide design feedback by Nov 15 to Jinam. GT already checked out the UI previously, and they don’t expect crazy changes.

  • Dec 9 session: three user groups, 30 people max:

    1. field researchers who need access to the tool but are not involved in flying it. It flies weekly over the whole station, and it’s there as a resource. This group can download the data and use it as they wish. Also plant breeders at NC State and field researchers from other disciplines. About 20 people.

    2. superintendents at the research stations; many work for the Dept of Ag, and we imagine their staff will be the long-term operators of the drone

    3. people who are already flying drones at many stations, and they’ll continue to do so on their own. Our drone program won’t hit all the stations for a long time, not until we have the money. This group will interact with application 1, as well.

  • Chris’s sense of Dec 9 agenda: Walk through version 1 to get feedback for version 2, and then break into groups by level of experience.

  • What we want to learn from this testing group. Chris: watch them interact with it and see how it goes. Major question: should the tool give a vegetation index/number or an image of all the plots? Ask each group how many prefer each output.

  • Amanda: we should gather everything we need from the groups at our Dec 9 meeting. They won’t have time to do anything afterward.

  • Rob: there are strong personalities. Good idea to break up the 20 people into smaller groups. Chris is working on reducing the number. It has gotten politically complex.

  • Next step: Chris will send an email re hold the date. He, Rob, and Amanda are discussing whom to invite.

VegSpec

  • NRCS feels the goals are on point; we’ll check in later today in the meeting.

  • Next: what will testing look like? What needs to be created?

Steven: our selector and seeding rate calc will live within VegSpec. Shannon: the design work we’ve already done on DSTs will likely be helpful for VegSpec design. We should meet so you can see how they’re looking.

GT: We’ll review to be sure our questions are addressing collaboration and flow between the tools.

Materials for VegSpec testing: GT and Eliz considering using just a survey instead of a survey and spreadsheet for gathering feedback. Smaller group of testers this time, and we want to reduce the number of materials and any redundancies compared to summer seed calc testing.

10-08-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler, Heather Darby

VegSpec Schedule Check In

Marg and Shannon entered GitHub epics for VegSpec full testing and ad hoc testing. See GitHub for details.

  • Sep 30-Oct 25: Full testing: goals definition and process outline

  • Sep 30-Oct 18: Ad hoc: plan and materials

  • Oct 21-Nov 22: conduct ad hoc testing

  • Oct 28-Dec 13: full testing materials creation

  • Dec 2-13: Reporting on ad hoc testing

  • Dec 10-31: VegSpec accessibility concerns

  • Dec 30-Jan 20: VegSpec to use shared component library

  • Jan 6-17: train on full testing, provide materials

  • Jan 20-Feb 14: conduct full testing

  • Feb 17-28: full testing results and data review

AB testing: Maybe as part of ad hoc testing, but not as formal as for seed calc tool. NRCS feeling good about the features that exist; more concerned about gaps. Will ask some design questions. NRCS wants to show off a few features and discuss whether they fit and what people think of them. Does the flow make sense? Am I getting what I need as a conservation agent?

Hard for people to give feedback on design out of context, so talking with people 1-1 will be more helpful.

Drone Testing

Chris: we have three user groups: plant breeders need intensive work, most are faculty so don’t have lots of time. Maybe something live where they are trying it and bouncing ideas off each other.

Rob: When do we anticipate having some data to work with? From there, we can put dates on the calendar.

Jinam: we have some data, and I’m working on some failures. Don’t have design yet.

Rob: how do we normally kick off testing?

Marg: GT does prelim testing first and passes that on to developers to fix before full testing. Also clarifying the purpose, audience, goals of testing. What do we want to measure? Can we distill to 5 or 6 measurable goals? Create a testing plan that is informed by the state of the tool.

Rob: next step is internal review to fix little things?

Marg: Yes. Chris you agree?

Chris: yes, people obsess over stuff that is broken. Could you come down to do a live session in one day or a half day with all three groups? Then break them out into separate rooms in their own groups?

Marg: Sure! If that feels like a good fit with our testing plan, we’d love to.

Rob: You want to get the most out of them at that point, so we’d want to be sure tool was really ready to go. Use a more iterative process to get it there.

Chris: we have a stable thing up and running. Now we need to improve it. In general, development changes that occur quickly based on feedback can be great for political support.

Marg: Also could come up with a handful of user scenarios, and that gives developers clear goals for the tool. Work backward from a date we set for in-person group testing in order to guide development.

Chris: VegSpec is huge priority, so let’s work the timing of drone testing around it.

We discuss timing options:

  • Rob: Dec works well bc school finishes--three weeks before holidays. Chris: early Dec a nice time for growers. Marg: Yes, looks like Dec is a good time to do this.

  • Rob: spring break works well. March 10-14. Chris: that could be a good time for faculty. Marg: March 17-21 GT is at a conference.

Chris: if Dec, it’s week of the 9th. If Jan, then just after the holidays. If March break, could work bc most of these folks are not Extension people.

Action steps:

  • Chris and Rob will check in about timing at the next drone meeting.

10-01-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler

Ad hoc testing for VegSpec

Marguerite lays out goals 1 and 2, and we discuss 1 in depth:

  1. Meet to talk about the purpose of the tool, the audience groups, and specific measurable goals for reporting purposes.

    1. Chris: tell a specific group of NRCS agents what our understanding of VegSpec’s purpose, audience, and goals is. See what they think. Come to the weekly Tue meeting. Cover crops are one standard within a sea of standards in VegSpec. Conservation planning is a huge part of the NRCS budget. NRCS has relatively untrained field staff bc it has expanded quickly, so it needs VegSpec to help them create conservation plans. Best to have farmers enter their data once, then get recommendations. His big design question: does it bug users that they need to go to other sites? They love VegSpec and will commit to testing.

    2. Chris: We want user testing done in 2025, and they’ve provided ample funding to build it.

    3. Shannon: Who is not a fit for this tool? Chris: Field agents are first audience, farmers are secondary. No other audiences.

    4. Shannon: how much time can they commit? Chris: I think they’re up for intense focus-group testing. I think they’ll prefer that over testing with farmers.

    5. Shannon: you feel good about the features. Is the data complete? Any big gaps? Chris: Which plants are best for which uses: there are many exceptions state by state. Updating the federal database can be very slow, so they’ve been circulating their own forms with the exceptions. They don’t want that to delay the launch of VegSpec. We don’t have to do data quality checking in our testing.

    6. Shannon: so we’re doing validation that features are correct and getting input to guide redesign of the UI, correct? Chris: Correct.

    7. Chris: Rick is able to sit with a client group and understand what they’re saying. He creates prototypes very quickly. He’ll make the first pass, then we’ll reconstruct development behind him.

    8. Chris: we’re trying to get shared components across the tools, and some of that is already happening. Marg: AB testing will help us with this

    9. Chris: user history will be the tricky part so farmers can enter data just once and it’s used across the tool

    10. Shannon: tell me more about the NRCS group? Chris: it’s time to come to a meeting and take over a session. They’re fun and engaging and even know about databases. They’re high in the organization.

  2. GT will draft goals for testing to send to us for review.

Drone testing

  • Marguerite: Same questions: high-level purpose, audience, necessary reporting outcomes. Clear goals.

  • Chris: Rob can speak to needs of plant breeders, which is very important in drone testing.

  • Rob: we could also engage extension agents

  • Marg: what’s the experience level among them? Rob: older breeders are very hands-on, younger ones are more into AI and computer work. Marg: both groups can have their own unique needs.

  • Chris: gathering features right now. Rob: putting bounds on what we do will be important.

  • Marg: we’ll need some baseline, casual testing to create the flow of the software. Pair that with elements people really want to see.

  • Chris: ARS has hired us to create version 2. Would like to have this done in a year. Scale is an issue on the back end. We spent a lot of time on the back end of version 1.

  • Chris: people will use it on their computers on the federal VPN for access; people without that access won’t be able to use it

  • Shannon: how do we hope people will use this? Testing could be more focused on the need and how diff people are experiencing the problem, which can give us more info on design and what’s included.

  • Chris: Rob and I have done a lot of work with plant breeders. Rob: they don’t know what they don’t know yet. Workflow, data, pipeline will be 80% the same across all of them.

  • Shannon: OK let’s just start with the features you know they will need. Rob: Yes, it won’t be hard to add elements as needed. I don’t think it will change the interface.

  • Shannon: the UI is very usable now.

  • Rob: other commercial groups have already created something like this. Do you draw on that? Look at them?

  • Marg: we looked at a few and use them as references, but we really base what we do on what our users want. Shannon: we’ll ask users what they’re already using.

  • Rob: I know of only two.

  • Time frame: VegSpec is our focus for next few weeks. Rob probably doesn’t need to attend all these Tue meetings. We meet weekly leading up to testing, then less frequently. We’ll let you know when we’re going to focus on drone testing.

09-24-24

Attending: Mikah Pinegar, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Emily Unglesbee, Steven Mirsky, Victoria Ackroyd, Elizabeth Seyler, Amanda Hulse-Kemp

Agenda Items

GT’s Timeline for the Upcoming Testing: See Confluence/GitHub page they’ll put in Slack user testing channel. Discussion:

  • Chris: VegSpec might take priority in next testing. No rush for drone testing.

  • Shannon: design testing should be done at a different time from user testing

  • Chris: the current VegSpec is pretty stable. Mikah: unifying the theme/design could take time, but we can ask VegSpec people tonight about what their priority is for testing. AB testing for the data or tool? John, Karl, and Lori will probably be at the meeting.

  • Mikah likes idea of finishing our work this fall, then launching the testing in the new year. Victoria agrees.

  • Marguerite: agrees that we can ask them for light engagement this fall, then launch in the spring.

  • Marg: do we want prelim baseline design feedback on VegSpec before we finalize it and begin user testing? We can do AB testing on a few things in the fall and fix glaring things.

  • Mikah: there’s a lot we can do without finalizing the design; we can do a first design round, then make final changes. It doesn’t matter to him if it’s totally finalized, but he’d like to start on that design work soon.

  • Shannon: key design decisions. 16 of them, and each will need some focused attention over the next few months. We had good engagement on AB testing in the past, so we could probably get feedback fairly quickly. The development timeline matters a lot.

  • Mikah: we can make design changes on a rolling basis as AB info comes in.

  • Marg: get feedback on the 16 design elements via AB testing, then do user testing.

  • Mikah: likes that approach bc VegSpec is the umbrella tool. A landing page and then redesign the flow. Then we can apply that to all the other tools.

  • The 16 design needs to address include:

    1. goal selection

    2. progress bar

    3. iconography--our style, accessibility

    4. rationale--critique and analysis of the flow w/ a subject matter expert

    5. cards vs tables

    6. summary and expert flow at the end

    7. browse vs recommend mode

    8. calendar and phasing visualization

    9. citations--where did the data come from, why these numbers, transparency

    10. site header

    11. comparison view feature

    12. charts and visual guidelines--what’s included, axes

    13. filtering components

    14. equation displays

  • Some specific feedback from user testing is in the Design page of Projects/PSA Planning in GitHub.

09-17-24

Attending: Marguerite Dibble, Elizabeth Seyler, Mikah Pinegar, Steven Mirsky, Victoria Ackroyd, Rick Hitchcock

Big Picture Testing Needs

  • Drone software set of apps nearing completion.

  • VegSpec: ready to be tested. Mikah: NRCS ready and excited to get some testing done. Just testing VegSpec itself as an umbrella tool. We want to test the core functionality. Enter interests, goals, pick species, adjust for what you want to do for conservation planning. Not cover crop tool testing.

    • Marguerite: Step 1: what are our goals for testing? Who’s going to be impacted, what impacts do we hope for? GT will create timeline around that.

    • Mikah: we should meet with NRCS folks to ask about their goals.

    • Marguerite: Question: is VegSpec ready to test? Quality of life improvements? Mikah will ask Chris and Steven on Tue next week about that. He’ll see if Marg or Shannon should come to that meeting.

    • Mikah: We should get VegSpec accessibility concerns fixed before testing. And anything else NRCS wants done before.

  • OK to test both at the same time? Marguerite: Yes, because they involve two distinct groups. Once we’ve put together the materials, we can run them concurrently.

  • Marguerite will put together a schedule for both testing processes

  • Mikah: not sure who’s supposed to be overseeing the drone testing. Chris and Brian have been working on it, plus Amanda? at NRCS. He’ll find out.

DST User Testing Process Feedback

  • Elizabeth: six responses so far. Deadline is this Fri, Sep 20. She’ll remind people again on Thursday and will share results with our team on Friday. Feedback so far:

    • After initial training, they felt prepared to conduct user testing, but they would like more thorough training on how to use the tool (online via Teams or a recorded webinar) and reference materials for troubleshooting.

    • User-testing packet and flowchart were easy to use. So were the 1-1 testing materials, but some felt the async materials were impacted by the bugs in the tool.

    • They were split on whether 1-1 or async testing was a better way to gather feedback on the tool.

    • It was frustrating and embarrassing to run into bugs in the tool while doing 1-1 testing.

    • There are too many questions in the async testing feedback sheet--people may gloss over or skip some.

  • Victoria: We need to provide a list of the absolute necessary things people must enter in order to use the tool properly and have it actually work well.

  • Marguerite: Set clear expectations for the facilitators, also for the facilitators to set for the testers. Victoria: yes, even though she told people the southern data weren’t all available, people still complained.

  • Marguerite: We can do our own testing and put together some troubleshooting tips.

  • Mikah: We should make a video of someone walking through the tool. NRCS could create it, recording them working on it themselves.

  • Eliz: Karl ran about 10 meetings for groups of NRCS agents in last round of testing--in part to be sure they were going to comply with request to run testing. Does he prefer to train people again? If so, a recording of one could be sufficient.

Next steps:

  • GT will create timelines for the upcoming testing.

  • Eliz will send everyone a summary of facilitators' feedback this Friday.

  • Mikah will take notes next week. Eliz at conference Sep 23-27.

09-10-24

Attending: Shannon Mitchell, Chris Reberg-Horton, Elizabeth Seyler, Mikah Pinegar, Emily Unglesbee, Steven Mirsky, Victoria Ackroyd, Rick Hitchcock

User Testing Results

  • Responses spreadsheet:

  • GT Data Summary:

  • 62 respondents

  • Good regional coverage; least from the northeast, great response from the south and west. Also had responses from other regions, such as the Pacific Islands

  • 90% agents or consultants from a range of backgrounds, 4.9% growers, 4.9% both