User Testing Notes

10-29-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler, Heather Darby, Jinam Shah, Emily Unglesbee

Update:

  • GT is working on a design review for the drone app, planning for full-testing materials creation, and booking sessions for ad hoc testing.

Drone software:

  • Quality-of-life design: log things in Confluence--a list of design elements. Chris: Akshat can probably help with these things; it’s mostly front-end stuff. On Dec 9 we might incorporate anything that’s not complete into the testing. Marguerite: yes, we can certainly do that. We’ll create an agenda when we get closer.

  • Rob agrees that we can do it in 2 hours; is the group small enough? Chris: 20 breeders, 3-4 supers, 3-4 admins. We want the others to know how breeders will use it. Supers are used to this type of software.

  • Drone testing: planning will happen as we get a little closer to Dec 9

VegSpec:

  • Marg: We have clear goals for the testing

  • Shannon: Ad hoc testing planning – ends Nov 22; we need to know whom to bring in and how to contact them. John IDed three things: goals tab, species tab, seed mix tab. We have time to execute these. The sessions will be short with a few people at a time, and we’ll use a form to get initial feedback.

  • Marg: We’d also like feedback on some design items for VegSpec – over next couple of weeks.

  • Shannon: VegSpec wants to focus on features during this window; but design can be fit in as needed as we have time.

  • Steven: scope or scale changes in VegSpec testing? Shannon: flexible on the scope, it’s scalable. Marg: we’ll need to review the plan to be sure it covers all the bases, all the goals.

  • For VegSpec, we’ll create fewer materials and make the testing process more streamlined.

Testing Materials (Elizabeth)

  • VegSpec Internal Ad Hoc Feature Testing Feedback Form: Shannon says we’re most likely to ask for feedback on one feature at a time, as this document does. “Feature” will usually mean “testing focus” or “screen.” The text will probably need tweaks, depending on what the “feature” is.

    • Replace the third and fourth questions with “What is the feature/goal being tested?” and sub in the script from pre-written templates that Game Theory has created.

  • VegSpec Internal Ad Hoc Design Testing Feedback Form: Could make same consolidation changes to the top of this form as discussed above.

    • Marguerite wants more questions on what this empowers you to do: What could you see yourself being able to do? Is this getting them excited to do something? Is it communicating resources/valuable information that you need for a particular goal/job?

    • Eliz asks whether these are features questions. Marg clarifies that it’s helpful to ask these things in the design testing, too, because users should respond to what they’re seeing on a screen.

    • Heather: we need a question about whether anything was missing. Anything you expected to see and didn’t? Did the user enter with any preconceptions that weren’t met?

  • Mikah and team need to book a meeting with Game Theory to get up to speed on all the design progress that has been made and see if everyone is on board with what has been done. Jinam, Elizabeth and Victoria would like to be on that meeting. Booked for Wednesday, October 30 at 4 p.m.

10-15-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler, Heather Darby, Jinam Shah, Amanda Hulse-Kemp

Agenda

  1. Update on defining the testing goals and then defining the testing approach for VegSpec.

  2. Discuss drone testing schedule, attendees, goals

Drone testing

  • Chris: maybe a two- or three-hour session with breeders. Best for them: 9-12 on Monday, Dec 9, in Raleigh. That’s good for GT and Rob. Breeders are having a meeting that afternoon, starting at 1 p.m. Jinam could join from India. Chris: developers aren’t always in the room when people do user testing. Jinam doesn’t have to be there, but he could come if he’s curious.

  • Rapid iteration on version 2 before then?

  • GT can take a look at design for the drone testing. Jinam is putting it in the shared Confluence library. Jinam does mostly back-end work, which is very complex. The front-end changes should be fairly simple. GT will provide design feedback to Jinam by Nov 15. GT checked out the UI previously and doesn’t expect major changes.

  • Dec 9 session: three user groups, 30 people max:

    1. field researchers who need access to the tool but are not involved in flying it. It flies weekly over the whole station, and it’s there as a resource. This group can download the data and use it as they wish. Also plant breeders at NC State and field researchers from other disciplines. About 20 people.

    2. superintendents at the research stations; many work for the Dept of Ag, and we imagine their staff will be the long-term operators of the drone

    3. people who are already flying drones at many stations, and they’ll continue to do so on their own. Our drone program won’t hit all the stations for a long time, not until we have the money. This group will interact with application 1, as well.

  • Chris’s sense of Dec 9 agenda: Walk through version 1 to get feedback for version 2, and then break into groups by level of experience.

  • What we want to learn from this testing group. Chris: watch them interact with it and see how it goes. Major question: should the tool give a vegetation index/number or an image of all the plots? Ask each group how many of them prefer each output.

  • Amanda: we should gather everything we need from the groups at our Dec 9 meeting. They won’t have time to do anything afterward.

  • Rob: there are strong personalities. Good idea to break up the 20 people into smaller groups. Chris is working on reducing the number. It has gotten politically complex.

  • Next step: Chris will send a hold-the-date email. He, Rob, and Amanda are discussing whom to invite.

VegSpec

  • NRCS feels the goals are on point; we’ll check in later today in the meeting.

  • Next: what will testing look like? What needs to be created?

Steven: our selector and seeding rate calc will live within VegSpec. Shannon: the design work we’ve already done on DSTs will likely be helpful for VegSpec design. We should meet so you can see how they’re looking.

GT: We’ll review to be sure our questions are addressing collaboration and flow between the tools.

Materials for VegSpec testing: GT and Eliz considering using just a survey instead of a survey and spreadsheet for gathering feedback. Smaller group of testers this time, and we want to reduce the number of materials and any redundancies compared to summer seed calc testing.

10-08-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler, Heather Darby

VegSpec Schedule Check In

Marg and Shannon entered GitHub epics for VegSpec full testing and ad hoc testing. See GitHub for details.

  • Sep 30-Oct 25: Full testing: goals definition and process outline

  • Sep 30-Oct 18: Ad hoc: plan and materials

  • Oct 21-Nov 22: conduct ad hoc testing

  • Oct 28-Dec 13: full testing materials creation

  • Dec 2-13: Reporting on ad hoc testing

  • Dec 10-31: VegSpec accessibility concerns

  • Dec 30-Jan 20: VegSpec to use shared component library

  • Jan 6-17: train on full testing, provide materials

  • Jan 20-Feb 14: conduct full testing

  • Feb 17-28: full testing results and data review

AB testing: Maybe as part of ad hoc testing, but not as formal as for seed calc tool. NRCS feeling good about the features that exist; more concerned about gaps. Will ask some design questions. NRCS wants to show off a few features and discuss whether they fit and what people think of them. Does the flow make sense? Am I getting what I need as a conservation agent?

Hard for people to give feedback on design out of context, so talking with people 1-1 will be more helpful.

Drone Testing

Chris: we have three user groups. Plant breeders need intensive work; most are faculty, so they don’t have lots of time. Maybe something live where they are trying it and bouncing ideas off each other.

Rob: When do we anticipate having some data to work with? From there, we can put dates on the calendar.

Jinam: we have some data, and I’m working on some failures. Don’t have design yet.

Rob: how do we normally kick off testing?

Marg: GT does prelim testing first and passes that on to developers to fix before full testing. Also clarifying the purpose, audience, goals of testing. What do we want to measure? Can we distill to 5 or 6 measurable goals? Create a testing plan that is informed by the state of the tool.

Rob: next step is internal review to fix little things?

Marg: Yes. Chris you agree?

Chris: yes, people obsess over stuff that is broken. Could you come down to do a live session in one day or a half day with all three groups? Then break them out into separate rooms in their own groups?

Marg: Sure! If that feels like a good fit with our testing plan, we’d love to.

Rob: You want to get the most out of them at that point, so we’d want to be sure the tool was really ready to go. Use a more iterative process to get it there.

Chris: we have a stable thing up and running. Now we need to improve it. In general, development changes that occur quickly based on feedback can be great for political support.

Marg: Also could come up with a handful of user scenarios, and that gives developers clear goals for the tool. Work backward from a date we set for in-person group testing in order to guide development.

Chris: VegSpec is huge priority, so let’s work the timing of drone testing around it.

We discuss timing options:

  • Rob: Dec works well because school finishes--three weeks before the holidays. Chris: early Dec is a nice time for growers. Marg: Yes, looks like Dec is a good time to do this.

  • Rob: spring break works well. March 10-14. Chris: that could be a good time for faculty. Marg: March 17-21 GT is at a conference.

Chris: if Dec, it’s the week of the 9th. If Jan, then just after the holidays. March break could work because most of these folks are not Extension people.

Action steps:

  • Chris and Rob will check in about timing at the next drone meeting.

10-01-24

Attending: Rob Austin, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Victoria Ackroyd, Elizabeth Seyler

Ad hoc testing for VegSpec

Marguerite lays out goals 1 and 2, and we discuss 1 in depth:

  1. Meet to talk about the purpose of the tool, the audience groups, and specific measurable goals for reporting purposes.

    1. Chris: tell a specific group of NRCS agents what our understanding of VegSpec’s purpose, audience, and goals is, and see what they think. Come to the weekly Tue meeting. Cover crops are one standard within a sea of standards in VegSpec. Conservation planning is a huge part of the NRCS budget. They have relatively untrained field staff because they have expanded quickly, so they need VegSpec to help them create conservation plans. Best to have farmers enter their data once, then get recommendations. His big design question: does it bug users that they need to go to other sites? They love VegSpec and will commit to testing.

    2. Chris: We want user testing done in 2025, and they’ve provided ample funding to build it.

    3. Shannon: Who is not a fit for this tool? Chris: Field agents are first audience, farmers are secondary. No other audiences.

    4. Shannon: how much time can they commit? Chris: I think they’re up for intense focus-group testing. I think they’ll prefer that over testing with farmers.

    5. Shannon: you feel good about the features. Is the data complete? Any big gaps? Chris: Which plants are best for which uses: there are many exceptions state by state. Updating the federal database can be very slow, so they’ve been circulating their own forms with the exceptions. They don’t want that to delay the launch of VegSpec. We don’t have to do data quality checking in our testing.

    6. Shannon: so we’re doing validation that features are correct and getting input to guide redesign of the UI, correct? Chris: Correct.

    7. Chris: Rick is able to sit with a client group and understand what they’re saying. He creates prototypes very quickly. He’ll make the first pass, then we’ll reconstruct development behind him.

    8. Chris: we’re trying to get shared components across the tools, and some of that is already happening. Marg: AB testing will help us with this

    9. Chris: user history will be the tricky part so farmers can enter data just once and it’s used across the tool

    10. Shannon: tell me more about the NRCS group? Chris: it’s time to come to a meeting and take over a session. They’re fun and engaging and even know about databases. They’re high in the organization.

  2. GT will draft goals for testing to send to us for review.

Drone testing

  • Marguerite: Same questions: high-level purpose, audience, necessary reporting outcomes. Clear goals.

  • Chris: Rob can speak to needs of plant breeders, which is very important in drone testing.

  • Rob: we could also engage extension agents

  • Marg: what’s the experience level among them? Rob: older breeders are very hands-on, younger ones are more into AI and computer work. Marg: both groups can have their own unique needs.

  • Chris: gathering features right now. Rob: putting bounds on what we do will be important.

  • Marg: we’ll need some baseline, casual testing to create the flow of the software. Pair that with elements people really want to see.

  • Chris: ARS has hired us to create version 2. Would like to have this done in a year. Scale is an issue on the back end. We spent a lot of time on the back end of version 1.

  • Chris: people will use it on their computers on the federal VPN for access; people without that access won’t be able to use it

  • Shannon: how do we hope people will use this? Testing could be more focused on the need and how different people are experiencing the problem, which can give us more info on design and what’s included.

  • Chris: Rob and I have done a lot of work with plant breeders. Rob: they don’t know what they don’t know yet. Workflow, data, pipeline will be 80% the same across all of them.

  • Shannon: OK let’s just start with the features you know they will need. Rob: Yes, it won’t be hard to add elements as needed. I don’t think it will change the interface.

  • Shannon: the UI is very usable now.

  • Rob: other commercial groups have already created something like this. Do you draw on that? Look at them?

  • Marg: we looked at a few, and we use them as references, but we really base what we do on what our users want. Shannon: we’ll ask users what they’re already using.

  • Rob: I know of only two.

  • Time frame: VegSpec is our focus for next few weeks. Rob probably doesn’t need to attend all these Tue meetings. We meet weekly leading up to testing, then less frequently. We’ll let you know when we’re going to focus on drone testing.

09-24-24

Attending: Mikah Pinegar, Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Emily Unglesbee, Steven Mirsky, Victoria Ackroyd, Elizabeth Seyler, Amanda Hulse-Kemp

Agenda Items

GT’s Timeline for the Upcoming Testing: See Confluence/GitHub page they’ll put in Slack user testing channel. Discussion:

  • Chris: VegSpec might take priority in next testing. No rush for drone testing.

  • Shannon: design testing should be done at a different time from user testing

  • Chris: the current VegSpec is pretty stable. Mikah: unifying the theme/design could take time, but we can ask VegSpec people tonight about what their priority is for testing. AB testing for the data or tool? John, Karl, and Lori will probably be at the meeting.

  • Mikah likes idea of finishing our work this fall, then launching the testing in the new year. Victoria agrees.

  • Marguerite: agrees that we can ask them for light engagement this fall, then launch in the spring.

  • Marg: do we want prelim baseline design feedback on VegSpec before we finalize it and begin user testing? We can do AB testing on a few things in the fall and fix glaring things.

  • Mikah: there’s a lot we can do without finalizing the design; we can do a first design round, then make final changes. It doesn’t matter to him if it’s totally finalized, but he’d like to start on that design work soon.

  • Shannon: key design decisions. 16 of them, and each will need some focused attention over the next few months. We had good engagement on AB testing in the past, so we could probably get feedback fairly quickly. The development timeline matters a lot.

  • Mikah: we can make design changes on a rolling basis as AB info comes in.

  • Marg: get feedback on the 16 design elements via AB testing, then do user testing.

  • Mikah: likes that approach because VegSpec is the umbrella tool. Design a landing page, then redesign the flow. Then we can apply that to all the other tools.

  • The 16 design needs include:

    1. goal selection

    2. progress bar

    3. iconography--our style, accessibility

    4. rationale--critique and analysis of the flow w/ a subject matter expert

    5. cards vs tables

    6. summary and expert flow at the end

    7. browse vs recommend mode

    8. calendar and phasing visualization

    9. citations--where did the data come from, why these numbers, transparency

    10. site header

    11. comparison view feature

    12. charts and visual guidelines--what’s included, axes

    13. filtering components

    14. equation displays

  • Some specific feedback from user testing is in the Design page of Projects/PSA Planning in GitHub.

09-17-24

Attending: Marguerite Dibble, Elizabeth Seyler, Mikah Pinegar, Steven Mirsky, Victoria Ackroyd, Rick Hitchcock

Big Picture Testing Needs

  • Drone software set of apps nearing completion.

  • VegSpec: ready to be tested. Mikah: NRCS ready and excited to get some testing done. Just testing VegSpec itself as an umbrella tool. We want to test the core functionality. Enter interests, goals, pick species, adjust for what you want to do for conservation planning. Not cover crop tool testing.

    • Marguerite: Step 1: what are our goals for testing? Who’s going to be impacted, what impacts do we hope for? GT will create timeline around that.

    • Mikah: we should meet with NRCS folks to ask about their goals.

    • Marguerite: Question: is VegSpec ready to test? Quality of life improvements? Mikah will ask Chris and Steven on Tue next week about that. He’ll see if Marg or Shannon should come to that meeting.

    • Mikah: We should get VegSpec accessibility concerns fixed before testing. And anything else NRCS wants done before.

  • OK to test both at the same time? Marguerite: Yes, because two distinct groups. Once we’ve put together the materials, we can run them concurrently.

  • Marguerite will put together a schedule for both testing processes

  • Mikah: not sure who’s supposed to be overseeing the drone testing. Chris and Brian have been working on it, plus Amanda? at NRCS. He’ll find out.

DST User Testing Process Feedback

  • Elizabeth: six responses so far. Deadline is this Fri, Sep 20. She’ll remind people again on Thursday and will share results with our team on Friday. Feedback so far:

    • After initial training, they felt prepared to conduct user testing, but they would like more thorough training on how to use the tool (online via Teams or a recorded webinar) and reference materials for troubleshooting.

    • User-testing packet and flowchart were easy to use. So were the 1-1 testing materials, but some felt the async materials were impacted by the bugs in the tool.

    • They were split on whether 1-1 or async testing was a better way to gather feedback on the tool.

    • It was frustrating and embarrassing to run into bugs in the tool while doing 1-1 testing.

    • There are too many questions in the async testing feedback sheet--people may gloss over or skip some.

  • Victoria: We need to provide a list of the absolutely necessary things people must enter in order to use the tool properly and have it actually work well.

  • Marguerite: Set clear expectations for the facilitators, also for the facilitators to set for the testers. Victoria: yes, even though she told people the southern data weren’t all available, people still complained.

  • Marguerite: We can do our own testing and put together some troubleshooting tips.

  • Mikah: We should make a video of someone walking through the tool. NRCS could create it, recording themselves working on it.

  • Eliz: Karl ran about 10 meetings for groups of NRCS agents in last round of testing--in part to be sure they were going to comply with request to run testing. Does he prefer to train people again? If so, a recording of one could be sufficient.

Next steps:

  • GT will create timelines for the upcoming testing.

  • Eliz will send everyone a summary of facilitators' feedback this Friday.

  • Mikah will take notes next week. Eliz at conference Sep 23-27.

09-10-24

Attending: Shannon Mitchell, Chris Reberg-Horton, Elizabeth Seyler, Mikah Pinegar, Emily Unglesbee, Steven Mirsky, Victoria Ackroyd, Rick Hitchcock

User Testing Results

  • Responses spreadsheet:

  • GT Data Summary:

  • 62 respondents

  • Good regional coverage, least from the Northeast. Great response from the South and West. Also had responses from other regions, such as the Pacific Islands.

  • 90% agents or consultants from a range of backgrounds, 4.9% growers, 4.9% both

  • Experience level: just a few had low levels of experience with mixtures; few had low experience with technology.

  • Overview of results: if the tool had many species for your region, then you really liked it. That skewed some of the results. Questions about accuracy of data for the location. Testers consistently felt they could make their way through the tool, understood the results, could modify mixtures, and found it helpful given their resources.

  • Growers: Had the inverse experience of the majority of testers--hard to use the tool, terms less familiar. Do more targeted testing of farmers and have conversations with one or two growers.

  • Midwest: Worked well for them, but they wanted more species and felt the rates were too high for them. Dig into the other tools they use and compare/contrast with ours for more info

  • Northeast: very happy with the tool, accurate for the region, but they still need more species. Some bugs threw people off.

  • South: People really enjoyed it. But we need to improve accuracy for their location, the range of species, and ease of adjusting the seeding rate.

  • West: Mostly positive; wanted more species, had difficulty understanding units/visuals. Need definitions of terms.

  • Does it motivate you to recommend mixtures? Common response: I already have my mind made up on whether I’m suggesting mixtures.

  • What people enjoyed:

    • love that it supported economic decisions, very strong compared to other tools

    • right amount of information, could understand the results, felt clear

    • could make their way through the tool; they made it to the end--that’s significant for usability

    • liked detailed calculations and amount of explanation, they liked digging into the calculations and would bring them up

    • liked having visuals: charts and pictures

  • What users would like to see:

    • A PDF export. They misunderstood the export feature. Give them more explanation? Add PDF option?

    • more species! How would rates change in different parts of the year? Could they input their own species?

    • a clear, exciting finale page. They were looking for this. Could have fun with design

    • more definitions for terms and info on the sources we used for calculation and where the numbers come from

    • More of the variables exposed and editable. They wanted to be able to edit things. Make the whats and whys very transparent

    • More clarification on “not recommended” and why. The desire for more species could be related to this. Some education could help here.

    • Seeding method should be early in the flow of the tool. It impacts all the other decisions they make for the tool.

    • more flexibility when moving between sections

    • a goals-based approach: they wanted to start with a goal, then move through the tool from there

    • some users were confused about the difference between the seeding rate calc and the species selector. How do we make it clear that they can use both tools?

  • See spreadsheet for breakdown of results by number of testers: single, multiple (1-4), or many (5 or more) people.

  • More discussion:

    • indicate the cash crop earlier in the process

    • Some people wanted to know how seeding method impacted their choices

    • Mikah: be clearer on old rate, modifier, new rate. Make the rates' impacts clearer, or use percentages instead of pounds per acre; either could work. Provide more clarity on what the calculation is doing before the user gets to the next step. People weren’t sure what the slider was telling them.

    • Victoria: lots of white space. We could show people what’s happening when they move the slider, or put that in the next step.

    • Seed tag info was problematic

    • planting date and window should be easier to use; it’s hard to pick exact dates. Better to have a planting window.

    • They wanted seeding mix percentages to be clearer. Victoria: this is disagreement about the guts of the tool. E.g., oats, rye, mustard. One kind of logic: take the total recommended seeding rate for each species and divide by three. Other logic: oats and rye are both grasses and similar, so they should be treated as one functional group; instead you divide by two, with oats/rye in half the mix and mustard in the other half. Mikah: maybe we should explain this and the different CCCs' choices/logic. Victoria: the logic differed by state, not just by region.

    • Wanting more species: many of the ones people want will be added, said Mikah. He’s curious about people adding their own species. We’ll have to explore that, but it could be a can of worms.

    • Shannon: Not having data for all regions played a role in people’s experiences and feedback

    • If people picked many species, the pie chart was difficult to read. Mikah wonders if a bar chart would be better to show mixes of many species. Mikah: some NRCS folks want to use 10-20 species, but more for other practices than cover cropping. Rick shows an example of a successful way to render pie charts with many species.

    • Why did it show the pie chart for 50 acres even if the user chose a different acreage?

    • The way numbers had to be typed in slowed some people down.

    • They wanted to go back and forth between pages without starting over.

    • wanted soil fertility info explained--they didn’t know what it was

    • Seed rates seemed high for West

    • People wanted refresh or restart button without losing data

  • Mikah: most of these seem very actionable. He’ll create tickets for the easy ones and consider the others.
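The two mix-allocation logics Victoria describes above (divide the mix evenly per species vs. divide by functional group first) can be sketched as follows. This is a hypothetical illustration only: the function names and the grass/brassica grouping are ours, not the seed calc’s actual implementation.

```python
def even_split(species):
    """Logic A: give every species an equal share of the mix."""
    share = 1.0 / len(species)
    return {s: share for s in species}


def functional_group_split(groups):
    """Logic B: split the mix evenly across functional groups,
    then evenly across the species within each group.

    `groups` maps species name -> functional group name."""
    by_group = {}
    for species, group in groups.items():
        by_group.setdefault(group, []).append(species)
    group_share = 1.0 / len(by_group)
    return {
        s: group_share / len(members)
        for members in by_group.values()
        for s in members
    }


# Example from the notes: oats and rye are both grasses; mustard is not.
mix = {"oats": "grass", "rye": "grass", "mustard": "brassica"}
even_split(list(mix))         # each species gets 1/3 of the mix
functional_group_split(mix)   # oats and rye get 1/4 each, mustard 1/2
```

Explaining which logic a given state’s CCC uses (as Mikah suggests) could be as simple as labeling the output with the allocation rule applied.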

What is Next in the User Testing Pipeline?

  • Chris: timing issue for some testing. Drone software set of apps nearing completion. Up for a new location next year. We don’t want to do NCalc yet, so we have a gap of time. With new funding, we could do some user testing.

  • Mikah: that and VegSpec. They’d be different user groups.

Tool Design

  • Mark VanGessel is red/green color blind, so we could ask him to make sure the color differences are visible to him.

 

09-3-24

Attending: Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Elizabeth Seyler, Mikah Pinegar, Emily Unglesbee, Soumya Batra, Sarah Seehaver, Steven Mirsky, Victoria Ackroyd

Tool Design:

  • After meeting last week with Mikah, Eliz, Soumya, GT slacked PSA folks for feedback on style/color.

  • Results: the rounded design is the favorite (16 to 3). For color, the preference wasn’t clear. Sarah likes the progress bar and wants the text to pop for ease of reading. Mikah, Steven, and I like the left one, but with a more muted background. Sarah likes the far right one for readability, and Emily likes the far left but also the right for continuity with PSA brand colors.

  • Steven: what about color scheme for other tools? Marguerite: One approach could be keeping color scheme same across all tools but with slight differences among them.

  • Victoria: green could clash with some of the CCC logos. Marguerite: Yes, we’re keeping the bar across the top white so each CCC logo stands out and doesn’t clash with colors.

  • Soumya: if the colors are too bright, they can fight for attention and pull it away from the content.

  • Mikah: would you show the accordions?

  • Marguerite alters the colors while we watch to see if we can agree on a version.

  • Sarah: have we investigated using the PSA colors? Are we going in that direction? Emily sends the palette to Marguerite. We compare the PSA colors, and they are very close to the current designs.

  • Marguerite inserts the PSA green into a design we’re liking. Shannon recalls feedback that people didn’t like dark colors. Middling darkness in the background allows for white or black text.

  • Marguerite recalls people liked a serious tone, and the darker banner across top and bottom achieves that.

  • Sarah: Purple seems out of place to her. Marguerite shows some options that work for people with color blindness and don’t introduce yellow (too light) or red (problematic).

  • GT wants to lock in the color scheme asap. People like the one Marguerite mocked up during the meeting. Victoria: consider a somber magenta? GT will mock up how it would look with CCC logos.

User Testing

  • GT has gone through all the results and found themes in the different regions. They’ll share results with us at next week’s meeting.

08-20-24

Attending: Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Elizabeth Seyler, Victoria Ackroyd, Steven Mirsky

User Testing:

  • Results: 61 respondents; one shy of our low-tech goal. More keep trickling in. Good coverage in all the regions, cover crop experience levels, 6 farmers (approx 10%).

  • GT has gone through 12 responses, mostly from the South and no farmers. Consistent positive feedback: ease of use, understanding it. Negative feedback: species missing, want more data coverage across the country. Overall impressions are positive. Many people are figuring out mixes by hand, not using another tool.

  • We’ll need to cut off responses at some point so GT can finish analyzing results.

  • They’ll link the results to the goals we set in June.

  • Fewer async responses than we wanted. We have 12 now, so keeping it open to hopefully get more.

  • Eliz asks about Anna’s offer to do more testing with an MCCC group Sep 9 at a cover crop training. Answer is yes, but just for async.

  • Need to do outreach to facilitators: thank you, we hit our goals, you did a great job facilitating. We’ll leave the async testing open for now.

  • We'll have a post-mortem on the process for PSA and GT. Then we’ll decide how to include NRCS and CCC managers in that process.

Next steps:

  • Eliz encouraging Anna to invite MCCC cover crop training attendees to do async testing.

  • Eliz thanking all NRCS facilitators by email.

  • No meeting 8/27/24 while GT analyzes results. Next meeting 9/3/24.

08-13-24

Attending: Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Elizabeth Seyler

User Testing:

  • Very close to hitting our participation goals: 38 responses including 1-1 tests and async. Plenty of feedback to review in the coming weeks.

  • Almost all experience levels and regions, but there are a few gaps we’d like to fill with additional testing, and inviting people to do async testing may be the best way to achieve this. We need:

    • Feedback from growers (most of our results are from agents/consultants)

    • Feedback from folks in the Northeast

    • Feedback from folks with low tech experience and/or low experience with seeding mixes

  • Game Theory will provide an update later this week; some gaps may fill in as final results come in

  • Eliz will check in with Trevor and Anna on Wed to see whether he’ll be able to get the Midwest data into the Seeding Rate Calc so Anna can test it by 8/15. He wasn’t sure he’d have time. Shannon said it’s fine to give Anna until Fri, 8/23, to test Midwest data. GT has plenty to do with feedback already received.


Style Survey

  • Feedback from 46 respondents points the way to clear design changes for all of the DSTs. Results are summarized here by Game Theory.

Next Steps

  • August: final feedback coming in; GT summarizing and analyzing results in light of the user-testing goals we set in June.

  • September: developers improving the tool per feedback. [Eliz wonders: comms and outreach to announce finished tool, e.g., press release?]

  • Then: GT will lead reflection to evaluate how this testing process went for PSA, GT, and facilitators. What can be improved for the next round of testing?

  • Chris: design review of VegSpec will begin soon, and Rick will be guiding preliminary review on the development end. Chris expects GT will help test VegSpec over the next 12 months, with user testing of EconCalc and NCalc interspersed.

08-06-24

  1. Login problems with Teams

  2. Extending user testing deadline thoughts

    1. Revisit on the final day with Karl and decide whether to extend or not. Don’t extend preemptively.

07-30-2024

Attending: Shannon Mitchell, Marguerite Dibble, Mikah Pinegar, Elizabeth Seyler, Emily Unglesbee, Victoria Ackroyd

AB Testing Survey

  • 43 people have completed the DST Style Survey; CCC managers have done the most, then PSA, then NRCS and others

  • It’s looking like the feedback will be helpful--similarity among responses

User Testing

  • Four NRCS people have completed or started testing

  • Victoria is the only CCC manager to complete so far

  • Should we decrease the numbers we want or extend the deadline? Some amount of testing is better than none.

    • Eliz will ask Karl what he thinks would be better

  • Shannon will send a reminder to NRCS facilitators re office hours and to remind them of the Aug 31 deadline.

  • Testing took Victoria longer than one hour; she never got to Scenario 2. Important to have buffer time on either end: getting people up to speed beforehand and discussion after they use the tool.

  • In future rounds of testing, we can make adjustments to reduce length of testing

07-23-2024

Attending: Shannon Mitchell, Marguerite Dibble, Mikah Pinegar, Elizabeth Seyler, Chris Reberg-Horton

AB Testing Survey:

  • 32 people had completed it as of yesterday afternoon.

  • Fewest responses from NRCS and growers. Most responses from CCC members. A few more NRCS or growers would be good.

  • Today Eliz will remind those groups and PSA whole_team of the July 31 deadline

User Testing

  • Three people have sent Shannon feedback; each spreadsheet included a few testers' responses.

  • Async survey: 3 responses so far

  • Eliz will send a reminder to the CCC managers and NRCS this week. Remind facilitators that they could complete the async themselves. Remind that they may send feedback as they receive it.

  • Think about next steps if we don’t get enough responses. Could be to simplify the testing process.

  • No one signed up for the office hours last week, so they were canceled. Will keep offering them (Thursdays at 12:30).

  • At VegSpec meeting today, we’ll ask Karl what he’s heard from people. Offer him some options for how to keep it rolling and ask what we can do to support him.

07-16-2024

Attending: Shannon Mitchell, Marguerite Dibble, Mikah Pinegar, Elizabeth Seyler, Steven Mirsky, Chris Reberg-Horton, Heather Darby (for part of it)

AB Testing Survey:

  • Ready today after Eliz and Marguerite make final tweaks.

  • Will go to NRCS facilitators, PSA whole-group, and CCC managers for their members. Game Theory and Eliz will send.

  • GT drafting message text for invitation to complete survey.

  • Allowing two weeks for responses. We’ll send a reminder in a week, then again a few days before the deadline. Can extend if needed.

  • Fun, engaging way for people to learn what we’re doing and buy in to our work.

  • Will give valuable info on how various groups respond to this kind of survey vs other testing methods we’re using.

  • If successful, we could send a similar style of survey every few months to get feedback on other things and buy-in from various groups.

1-1 and Async Testing:

  • Lisa, an NRCS facilitator, could only see the top lines of text on the spreadsheet when she downloaded it. Shannon tried to troubleshoot with no luck using Excel and CSV versions. Eliz will ask Karl whether he can download and use it and what might be holding Lisa up. Turn the computer off and on again? Delimiter in the CSV? If Lisa is still having trouble, she could go to GT office hours this Thurs.

  • Elizabeth informed the group of the change that she, Victoria, and Shannon made to the hypothetical scenarios: added a second option to the first scenario so there’s a Midwest one. Anna Morrow felt this was important. Now people select Scenario 1A (Indiana) or 1B (Maryland), then Scenario 2 (Penn), Scenario 3 (of their choosing).

07-09-2024

Attending: Shannon Mitchell, Marguerite Dibble, Mikah Pinegar, Victoria Ackroyd, Elizabeth Seyler, Steven Mirsky, Chris Reberg-Horton, Emily Unglesbee

NRCS Facilitator Meetings:

  • Elizabeth has had two meetings so far and has trained about 20 people. There were questions about the timeframe.

  • In the West, harvest is about to start, and people are worried about not meeting the August 15 deadline. Karl feels enough people are tapped to meet that timeline goal, since each facilitator only needs 3-5 people, out of lists of 120 who have been “volun-told” to do it.

  • NRCS can't easily use Google documents, so Elizabeth made PDFs of all materials. Karl prefers to stick with PDFs moving forward, since everyone knows how to use that format.

    • Chris is worried about the lack of a dynamic document option, especially for Excel documents that need to be “live,” as Shannon notes. Elizabeth says the users/facilitators have been able to get into Excel and the Google form, just not a Google folder to upload screenshots and other Google docs.

    • Steven says the same live-editing features exist in Word docs, which might be more accessible to NRCS.

    • Google docs can be published as websites so that anyone with the link can access them. Confluence has the same option. Then we can make tweaks and adjustments as needed. The only concern is that it's technically a less secure location. Chris says using MS Word in SharePoint has been successful. Conclusion: since folks are downloading and then returning the Excel file, SharePoint isn't necessary.

    • So yes, we’re okay with making public web links via Google Drive to allow them to access things. File > Share > Publish to web. “Automatically republish when updates are made.” You still edit within the document, and when you’re done, the changes publish automatically unless you hit “Stop publishing.”

    • Marguerite recommends having a folder of static, downloadable documents, with links to the live, dynamic docs/website.

    • In later phone call with Marguerite, she and Eliz agreed that NRCS folks should use the current PDFs for training and to get started. Eliz will create a Google folder with a doc listing all the user-testing materials and their URLs. In future testing (and for CCC managers/facilitators), we’ll use the URLs.

  • Steven might need to get Elizabeth guest contractor status with USDA down the line. For now, Elizabeth just needs access to the Teams group with NRCS. Mikah says it's not as long a process as it sounds: fill out a form, schedule an appointment for fingerprinting, and go through security clearances. That will at least get eAuth working for her. Reach out to Chris Hidalgo on Slack.

  • A-B Testing

    • Want to create an add-on survey with side-by-side shots of different designs and ask users yes or no for each. Only 10 questions.

    • Mikah: can we just send it to the Teams channel and ask people to fill it out on their own? That way we don't burden the facilitators with it. If that doesn’t generate enough feedback, then take it to the facilitators. Can also send it to our own Slack channels, DST dev, etc. Yes, Game Theory likes that idea.

    • Will add a quick framing question to help them understand how to respond.

    • Cover Crop Council members can email it out to their subscriber base and add a question about what kind of user they are.

    • Game Theory will start working on that and send it out for everyone’s review sometime next week.

07-02-2024

Attending: Marguerite Dibble, Mikah Pinegar, Victoria Ackroyd, Elizabeth Seyler, Heather Darby

User Testing Materials

Eliz sent Karl all the user testing materials yesterday. She met with him and two NRCS agents yesterday afternoon to review the materials. Karl requested six more sessions: one for each region. Marguerite suggested just two more meetings for NRCS agents, with GT attending. GT is available to answer questions during Thursday office hours after that.

Eliz showed a flowchart she made of the user testing materials. Can we delete any materials? There are so many of them. Group: no. Flowchart suggestions: create three tiers of materials to simplify it:

  • facilitators' materials

  • testers' materials (put script and mini-script in same box)

  • feedback collection materials

Heather suggested a short refresher session for the four CCC managers--Victoria and Heather in Northeast; Victoria in South; Anna in Midwest; and Clare in West.

Heather: NRCS wants their own meetings bc they use Teams, etc. She thinks it’s best to have a separate meeting for CCC managers.

Heather: If every NRCS regional group is 4-5 people, and each is getting 3-5 people to give feedback, that’s a lot of feedback. If CCCs got two or three farmers from each region, that would be best. No need to also solicit people’s feedback at conferences.

Eliz will send the materials and flowchart ahead of time and ask that they review them all. In sessions, go through the objectives, timeline, flowchart, overview of each doc, and ask if they have questions. Expectation that people come with questions.

GT avail next week except Wed 3-4:30 and all day Friday.

Time Line

Mikah created a Gantt chart in GitHub to track tasks and timing. He added the kickoff meetings: two more with NRCS and one with CCC managers.

Next Steps

  • Eliz scheduling two more NRCS sessions for next week.

  • Eliz revising flowchart and resending to Karl

  • Victoria scheduling CCC managers' session

06-25-2024

Attending: Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Mikah Pinegar, Victoria Ackroyd, Elizabeth Seyler, Heather Darby, Emily Unglesbee, Steven Mirsky

User Testing Materials

Elizabeth showed the CC Seeding Rate Calculator Testing Plan. Feedback:

  • GT is creating the spreadsheet for facilitators to use to record one-on-one and async feedback from observations and conversations.

  • Keep mention of AB testing in the plan (and update per below bullets), but delete it elsewhere.

    • Instead of including it in this round of testing, we’ll roll out AB testing every few months. It will go directly to facilitators for them to rate and for them to send to some async testers.

    • We can invite them to do AB testing via Teams, which has a forms function we could use for collecting feedback. Chris has used it; easy like Google Forms.

  • Clarifications:

    • Supporting Questions are not for one-on-one testing, just for async, and only if the feedback form isn’t too long already (see below).

    • The Testing Plan is an internal document, not one that facilitators will see--in part because they’ll be testing the tool. So keep the Facils' User Testing Packet separate, though there is a lot of overlap between the two documents.

  • Eliz will be the primary email contact for serious problems with the tool. M-F, 9-5 is fine; be clear with them on when to expect a reply.

Elizabeth showed the CC Seeding Rate Calculator Survey. Feedback:

  • Yes, do as planned: delete all headings and make numbers consecutive throughout

  • At the start, let people know they can upload screenshots or URLs for stored images to explain any trouble they had. Eliz to figure out how to do that.

  • For Async testers only, add Supporting Questions if the form doesn’t feel too long. At the very least, add these at the end:

    • How do you feel about the tool?

    • How would you rate the overall value of the tool?

    • What about the tool do you really like and why?

    • What about the tool do you really dislike and why?

Next Steps:

  • Elizabeth will make changes discussed today.

  • Victoria is creating the scenarios today. She will also do a close read of the materials today and possibly Wed.

  • Eliz will send the materials to Heather late Wed for review.

  • Heather will review materials on Thursday, and Eliz will make any needed changes.

  • If materials are ready, Eliz will send to Karl Anderson and the CCCouncil contacts on Friday. If materials not ready, Shannon recommends waiting until after the July 4 holiday week. Eliz to get Heather’s thoughts on this.

06-18-2024

Attending: Shannon Mitchell, Marguerite Dibble, Chris Reberg-Horton, Mikah Pinegar, Victoria Ackroyd, Elizabeth Seyler, Heather Darby

User Testing Questions

  • We discussed questions that directly address our user testing goals.

  • Some might fit better in async or one-on-one, depending on the audience.

  • Questions will split off depending on the person’s role--farmer, NRCS agent, etc.

  • GT will create a Slack channel for finalizing the user testing materials.

AB Testing

Shannon showed drafts with visuals.

How to Stay Connected to Facilitators

  • GT will have office hours for answering questions.

  • Eliz will monitor Teams and convey tech questions to GT via Slack. Let GT know if it becomes cumbersome and they can be on Teams, too.

  • Heather, Mikah, and Eliz will be in Teams.

  • Eliz and Emily tried using Karl’s directions for eAuth access without success. Karl couldn’t figure it out, either. Eliz is calling NRCS today to get eAuth and access to Teams.

 

06-11-2024

Attending: Shannon Mitchell, Chris Reberg-Horton, Mikah Pinegar, Victoria Ackroyd, Karl Anderson, Elizabeth Seyler, Emily Unglesbee

Plan for creating testing materials.

  • Elizabeth has materials from Game Theory and feels adjusting them for this testing will be manageable.

    • Will shift gears and start tackling this, pivoting away from website as some pages go out for reviews.

  • Workshop testing materials this week (week of June 10th) and finalize them next week (week of June 17th)

  • Game Theory working on AB designs for the actual testing

  • Thinking we will be able to start testing in July

Recruitment

  • NRCS recruitment mostly done (120 people lined up)

  • Need to start Council recruitment

    • Victoria and Heather can brainstorm on this

    • Need 2 to 3 facilitators from each council, and then each finds 3 to 4 farmers.

    • They have facilitator prospects lined up in the Northeast and Western region. Need to loop in other regions and alert Anna Morrow (Midwest).

Async v. 1-on-1 Testing

  • Async materials take more time to create; good process for people with less time to contribute

  • People can self-ID which they can do

VegSpec facilitators meeting, Mon 6/17 noon.

  • all NRCS facils plus Karl; intro to facilitating, describe the two testing types, frame the expectations of the testing phases. Prep ahead of the meeting? Eliz will draft an agenda.

    • Need to explain the tool and our goals for testing and facil role

    • materials we’ve used in the past, watch some of the videos

    • include the WCCC in future recruitment and meetings, so they can see what the process is like

  • these are the facilitators, but we don’t know yet which types of facil they’ll do; 120 more people will be testers

  • Agenda:

    • Why are we here? Heather

    • What are the tools? Mikah and Victoria

    • What is facilitation (vs. being a respondent)? What does testing look like? Shannon

    • Timing: when, materials review, what we need from them; they can watch videos while we pull together testing materials. Elizabeth


06-06-2024

Attending: Shannon Mitchell, Marguerite Dibble, Steven Mirsky, Chris Reberg-Horton, Mikah Pinegar, Victoria Ackroyd, Karl Anderson, Elizabeth Seyler

NRCS: They have all their user testers and most of their facilitators. We have a short survey to put people in groups.

User Testing Goals Discussion

  • They will send us their Testing Goals: Seeding Rate Calculator/Mix Maker doc when it’s finalized.

  • High-level goals: The tool is empowering, easy to use, and provides accurate information.

  • Users: Farmers, Extension agents, NRCS staff, Crop Consultants, Conservation District staff

    • Shannon: how do you expect different groups to use the tool?

    • Karl: NRCS commonly works with a producer and/or planner to include a cover crop in a plan. We may have mixes we recommend for meeting specific goals. We may talk with the seed vendor about supply. A farmer and crop consultant might put together their own mix and check with NRCS to evaluate it. We have to certify the practice, so we have to make sure there’s enough seed and it’s on our list, and then we can pay them. May work with farmers, planners, supply vendors, and others.

  • Testing goals at a high level: use cases that impact goals. These are things we can measure with testing that hold value for PSA. Hypotheses that will prove right or wrong during testing.

    • Empowerment:

      • users feel empowered in their ability to make a seed mixture. Karl agrees that it works.

      • users feel empowered to calculate a seeding rate based on the mixture they’re interested in. GT: is this nuance important? Karl: yes. Suggests an open-ended box for people to enter their own text.

      • Users can easily modify an existing seed mixture to better meet their goals. Karl: yes

      • Users feel equipped to take next steps. Karl: yes

      • Ext or NRCS staff can evaluate mixtures created by someone else. Karl: If he created a mix, could a file or folder be easily shared with seed vendors? GT: yes, we can include something like that. Mikah: we have something like that, a CSV. Would that work? Not for the user, just internal. Karl: yes, that’s useful, but if there are other ways to share the info, that could be useful. GT: good feedback for us to consider. Chris: RUSLE2 is a good package (Google: it estimates soil loss caused by rainfall and its associated overland flow). Are there others we should consider? Karl: others for engineering, like CP Nozzel. I can help suss out. Chris: across all the tools, we’d like them to be interchangeable with NRCS, so this is a good time to plan for that. Karl: APIs can pull data from one database and put it into another. Still in development. Steven: a separate meeting for a walk-through of these products would be good. Karl: happy to do that.

      • Ext or NRCS staff feel equipped to support growers re seed mix options

      • tool increases motivation for Ext and NRCS agents to recommend cover crop mixtures. Karl: both sound great.

      • tool helps increase motivation for farmers to use cover crops

      • The above is true no matter the region or conditions where the farmer is located

      • the above is true no matter what level of experience someone has with seeding mixes and rates. GT: Does this feel like a valuable data point to measure? Karl: there may be species people should not recommend, or should use only in specific sites based on conditions; people may not have enough plant knowledge to determine that. Mikah: all the goals may not be true for people who have no cover crop experience, but it could push them in the right direction. GT: we can ask whether people have cover crop experience. Karl: some seed sellers have a lot of experience with producers and agronomists, and maybe we can ask what seed calculators they already use. GT: we have a question like that in the next section of our goals.

    • Provides accurate information to support target audience

      • info accurate to user’s location

      • provides a seeding rate that fits their needs and resources.

      • also provides a seeding mix that works for them. Karl: what are the high and low rates of seeding based on methods of seeding, or inter-seeding or wildlife seeding? Would be interesting to know.

      • Ext and NRCS staff feel the tool provides a seeding rate that’s a good fit for the needs and resources of those they support

      • Ext and NRCS staff feel the tool provides a seeding mix that’s a good fit. Karl: add a blank where testers can write in.

      • Tool gives an accurate expectation of what a seed mixture will look like in the field. Karl: sometimes we want vigor (e.g., fast germination), sometimes other characteristics. Could we have photos so people know what to expect? Mikah: capturing images of single cover crops is taking a long time, and mixes are infinite. Descriptions manually written based on region would be very time-consuming and difficult. Karl: we have images of what various residues look like, which can help. But I understand the development limitations. Mikah: currently the tool does not say how the mix will perform in your location. Victoria: I agree that photos of live mixes are not feasible. Individual species are possible. Maybe drawings that show what species will look like? DT: crowd-source? Mikah: possible. Karl: user expectations are always mixed. GT: sounds like it’s not something we should measure formally, so we’ll move it to Supporting Questions. Shannon: there may be an accuracy goal here? Karl: some vendors offer low-density cover crops, and we may eventually agree with them but aren't there yet. Comparing the numbers they have to what our tool suggests would be good to measure.

    • Ease of Use

      • users understand how to use the tool from start to finish

      • users understand the results they get from the tool

      • it’s easy for Ext and NRCS staff to evaluate a mix created by someone else (seed vendor, farmer, consultant, etc.)

      • it is easier to use this tool than other seeding rate calculators out there. Steven: “easier” may not be the measure because accuracy and trustworthiness are more important. People may use a clunkier tool but trust the response.

      • the above is true no matter what level of technology experience someone has. Karl: sounds good.

  • Karl: as we start looking into this further, we may have more thoughts, questions, and changes. But as a start, this sounds good to me.

  • DT: education piece. Are we comfortable putting that in supplementary questions but not letting it drive us as a testing measure? Karl: we have links already to good info. Maybe gather some of that together as we do deeper dives and other types of seeding; ask what kinds of databases would be interesting to users.

  • Karl: facilitators and their roles, I have thoughts and questions. There will be various staff in NRCS; what are the various roles? GT: facilitators can go through the testing process themselves to learn about the tool, then use the testing materials to work with respondents. Two levels: asynchronous testing for people to use on their own; meeting and asking people about the tools, a richer result but more time consuming. All our regional specialists will be facilitators, but determining the willing vs. nonwilling in other groups may be tough. Karl can tag and communicate with people separately depending on

  • DT: putting the final goals on Confluence. Chris: tell people it’s like running a small focus group; this helps them understand their roles. Some agents like working with three farmers together: first teach, then watch them work with the tools. Some agents preferred one-to-one with their farmers. We asked our Ext agents to do at least 10 farmers each; we don’t know the correct number for NRCS. Karl: some people are more responsive than others. Making sure they all have roughly the same experience with their testers could be tough. GT: not essential that they all have the same experience; it’s OK to have a spectrum of responders and detail. We have good resources to share. Facils can buddy up to get familiar with the process and be accountable to each other. Karl: that sounds good and could work well.

Action steps:

  • A separate meeting with Chris, Steven, and Karl for a walk-through of data packages such as RUSLE2 and CP Nozel (sp?) for sharing data between programs. Chris and others are discussing it internally first.

  • GT will finalize goals and share with us, then we’ll get started on user testing materials

 

06-04-2024

Attending: Shannon Mitchell, Emily Unglesbee, Elizabeth Seyler, Victoria Ackroyd, Marguerite Dibble

User Testing Goals

  • We reviewed the Values of user testing: 1. empowerment to use cover crops, 2. ease of use, 3. relevance to farmers, 4. education (bias-free information). Within education:

    • users need to plant enough to make it worth the expense

    • how much seed you need is based on diff seeding methods

    • helps improve ROI to get creative with how they’re planting, what’s in the mixes, when they’re planting

    • alternatives: will users explore how to handle shortages of seeds and alternatives?

  • 5. farmers likely to pick lowest rate of cover cropping for ROI; NRCS will be focused on agency goals, reducing erosion, etc. Might not pick lowest rate. May use tool to help farmers with choices

  • NRCS agents have specific standards they’re trying to meet; the agents are quite focused and will use the tools to meet specific goals. Extension agents have broader goals in helping farmers. Extension agents' knowledge of cover crops will vary by region. Wild card: NRCS has a lot of funding and is hiring many people (some may not have much education related to cover cropping); ask Karl about this carefully.

  • Certified Crop Advisors: similar to Extension agents, but CC knowledge is variable

  • Steven has a collaboration with Willard to teach farmers and advisors about the tools.

Developing user testing materials: Eliz will draft

  • who will review? Game Theory is a resource here, as are Emily & Victoria & Heather (esp Heather); the async testing materials will be the more complex ones.

  • Time frame: test in June? talk about this on Thur

  • Testing Materials Templates:

  • Session 6 (DST Recordings) might be good to watch before creating materials

  • Async will be new: need a reference sheet for how to access materials and how to use them. Start with the interview materials bc similar to last year.

    • Test Plan Template – we’ll start fleshing out as we know; Eliz will draft; M and S will review and support

    • Testing Brief – one geared to facilitators; might be good for NRCS agents; also good for async testing; Eliz will start fleshing this out

    • Mini-Script – tips for how to explain things to testers.

    • Surveys (Google form)

    • a place where facilitators can log their feedback

Current status of each DST and VegSpec; update the timeline? Victoria: Myrtle Beach July conference; Mikah will have an update on timelines on Thu, we think.

05-28-2024

Attendance: Shannon Mitchell, Elizabeth Seyler, Victoria Ackroyd, Emily Unglesbee, Heather Darby

Re question from last meeting on whether NRCS people’s availability to do user testing varies by region. Short answer from Karl: availability should not vary. Long answer from Karl: “The only reason some may not be available is if data for their state is not yet available in VegSpec. However, I think that would be a good opportunity for us to assign those folks the task of pretending to be a planner that moved to a new state (one that has all applicable data). One LARGE reason for developing VegSpec is to make it easy for an experienced planner to move across states and plan in a new location. Right now every state is so different, but VegSpec will be an equalizer. Particularly as it gets incorporated into our planning software in the future.”

Meeting Notes: two people have completed the survey on defining testing goals; we’ll extend the deadline. Heather, Steven, and Chris plan to complete it soon.

Shannon: next week let’s get leadership in this meeting to nail down goals and go over survey results. In meantime she’ll review responses and prep for that meeting. The Product Owner role is the most important role to be represented at the meeting. Doesn't have to be all of them, but need at least one representative from that group.

Emily: we’ll message leadership that they should be at this meeting.

Shannon: how do we handle testing if we have to push the schedule later? The testing window is June. For GT, that’s fine, but wondering about PSA.

Victoria: funding is not the big issue; it's more that we need to keep the project moving so we can use the tools in trainings.

Heather: What was the original logic for our testing deadline of early fall? Consensus: the need to deliver the tool to NRCS on time, secure future funding and move on to trainings for this tool that are already scheduled, as well as user testing for other tools.

Shannon: sending reminders, esp re testing in June. Testing material creation should be easier. Should we revise the schedule?

Victoria: leadership is always hard to nail down. It never changes. Unless Heather feels we can pause, we should just push them to give input.

Shannon: one-on-one with Chris and Steven?

Victoria: maybe after you’ve compiled the results from this survey and have a feeling for what the rest of the group thinks are the goals.

Shannon: new deadline: this Friday. Then use next week’s meeting for discussion of the goals/results. Survey is a way to get input from people outside the inner circle. Leadership must be at that meeting.

Heather and Victoria: the survey was a bit hard to follow in places because we’re using different terms. But it will work; then we’ll need to discuss it.

 

05-21-2024

In attendance: Marguerite Dibble, Shannon Mitchell, Victoria Ackroyd, Elizabeth Seyler

M: How are we going to run the collaborative goals design? Approach 1: schedule a time when everyone is available, and M and H run a meeting. Approach 2: async process: the group takes a survey remotely, then M and H compile results and the group responds to them.

Victoria: Approach 2 more feasible given the schedules of Steven and Chris. Eliz agrees.

M and H: this is how it will work:

  1. Game Theory (GT) drafts a survey regarding proposed goals and methods based on past convos with PSA

  2. Goals Group takes the survey, which includes spots for adding alternative goals and methods

  3. GT collects survey results and iterates on the goals and methods

  4. GT shares with Goals Group for finalization.

M: the catch: people have to take the time to complete the survey. Survey deadline: Mon mid-morning, 5/27. GT will remind people, then have compiled goals and methods by our User Testing Meeting on 5/28.

M: Next steps after this goals/methods ID: creating testing materials. GT is booked to do management and offer support, not scoped to build out the materials. They could do it, but the training materials GT created last spring could be adapted as needed for this round.

S: once goals and methods are clear, creating materials not hard.

Eliz: I can create the materials based on last year’s. GT will provide support. Eliz will share with PSA core people and GT for feedback, then finalize the materials.

Victoria: June is when the user testing should happen. PSA funds run out in July. NRCS funds run out in Aug. NRCS people’s availability to do user testing varies by region. Eliz will ask Karl for details; will help us schedule effectively.

Action items:

  • GT will create and send survey to Goals Group asap

  • Eliz will ask Karl about NRCS people’s availability for doing user testing

 

05-07-2024

In attendance: Mikah Pinegar, Emily Unglesbee, Marguerite Dibble, Heather Darby, Elizabeth Seyler, Shannon Mitchell

Next Steps for User Testing. Reviewing this document today: https://precision-sustainable-ag.atlassian.net/wiki/x/BoAuGw

Week of May 6:

  • Mikah and team will meet tomorrow to talk about low-hanging fruit on design and usability.

  • Will set up roles for coordination and meet with those folks. Goals will inform the approach, the people, goal setting, etc.

  • Team will ID the resources people have in summer.

Week of May 13: Combine resources with goals to clarify testing plan

Week of May 20: review AB testing options with team; compare resources and goals to decide on our plan for how and when to test. Finalize testing goals and approach. We decide who'll create materials and testing kickoffs: us or GT.

Week of May 27: make AB testing materials

Week of June 3 and beyond: build materials; testing plan for 1-on-1 testing and async testing begins and runs through July

Emily: How are trainers getting trained?

Marg: 1. give all training materials to trainers to review on their own, OR 2. hold a half-day in-person workshop to kick things off. If people aren't paid for the work, it's harder to get them to review materials on their own time, so #2 is better.

Heather: NRCS is so dispersed that in-person isn't going to work. Send them materials and set up a meeting to answer questions, then let them go ahead. Have periodic check-ins with NRCS folks.

Marg: that’s what we’re scoped for, check-ins and to be on call if people have trouble. GT to provide that support.

Heather: communicating with NRCS about the timeline would be good.

Eliz: I’ll share this with them.

ROLES for SRCalc

Multiple people can hold these roles. Depends on their perspectives.

Next step: We should decide who can speak to each need within the roles. GT would review and propose roles for people, then show us.

Heather: Anna Morrow, Dean Bass, a farmer, others have enough knowledge about the tool to be Respondent Reps. She’s thinking of people who could play roles and have experience with it.

Marg: we could email specific Qs to other folks about the tool. We have flexibility in how we inform people as we consider goals and roles.

Shannon: for this step, we’d just be asking people to come to meetings, not much else in terms of work.

Heather: how many people are we thinking for each role?

Shannon: we just want reps from each group, e.g., developers. Same for respondent rep. We should feel empowered to opt in or opt out.

Heather: we finalize the list of names, send it to GT, and then ask the people to read the role descriptions and invite them to meetings.

Marg: 1. we ID people for roles, 2. GT sends role descriptions, 3. during meeting we’d set up goals and ask for input from the proposed people.

Heather: our next step: within the week, we ID the people.

Marg: yes. Goals meeting by middle of month would be ideal.

Heather: Mikah and Victoria can take a first pass on IDing people.

Victoria: how do we leave comments in Confluence?

Mikah: fine in Confluence or Slack; comments on a doc that are connected to a specific line are best.

 

04-30-2024

In attendance: Mikah Pinegar, Emily Unglesbee, Marguerite Dibble, Heather Darby, Elizabeth Seyler, Chris Reberg-Horton, Victoria Ackroyd, Steven Mirsky

  • WCCC meeting is about setting down a timeline for development, not initiating testing.

Heather: we want to be clear on questions you need answered so we can move forward.

Marg: May:

  • design and usability improvements for SRCalc. Simple stuff to make testing more successful.

  • Also ID clear goals for SRCalc remote user testing so it provides appropriate data for this stage of development. What PSA wants from the process. Heather: e.g., we want people to understand their outcomes. Marg: yes, we want to look at specific elements of the calc. E.g., does the calc give relevant info?

  • Think through AB testing materials to build for our user testing. Example A vs Example B: ask users which they prefer. A fun, easy way to get feedback from people. Heather: good.

  • Nail down testing resources: people. Who's doing what? Making sure we have national reach, understanding availability and for what roles: facilitator? tester? Making lists.

June: get beta test ready:

  • create AB testing designs for user testing,

  • build out materials for testing,

  • start conducting some 1-on-1 AB testing of components based on tester/facilitator availability

July/summer:

  • testing management for the beta testing process (1-on-1 feedback for AB tests and general testing; remote beta testing available for a larger group)

  • Ongoing collection and correlation of feedback from the beta testing

Steven: we need consistent language usage so we understand each other. The meaning of “beta testing” is different inside PSA from how Marg uses it. We have “facilitators” and “user testers.” What’s being called “beta testing” here actually means “user testing.” For PSA: beta testing covers the ergonomics and function of a system, e.g., what does it take for the user to walk into a field and take images? User testing: a version is finished and is ready for people to test. After user testing, it gets released.

Marg: I agree. Will use the term “user testing” from now on, not “beta testing.”

Marg: Remote user testing description: an instructional kit provides info on how to access the application, what the browser requirements are, and a feedback sheet for testers to use. How to access the tool and how to test it. A Google form they fill out re: what using it was like. Very specific questions. Time limit, dates when they should do it. Then talk it through.

Heather: was expecting we’d be on a remote meeting with one or a group of people. But that’s not what this is.

Marg: Correct. We could do that, but if we can use a lighter touch and achieve our goals, let’s do that.

Victoria: should we include Anna Morrow in this meeting or soon? We decided earlier that Heather, Anna, and I would handle facilitation.

Heather: seems that falls under IDing our people. Once we know what we want from testing for SRCalc, we may be able to scale down our testing or revise it somehow. Victoria agrees.

Heather: remote testing will probably work well for many people. We may have higher participation with CCAs and NRCS people. I like it, but we have to be clear on our goals and whether remote testing will achieve them.

Chris: AB testing: we’ve intentionally allowed some things to diverge that we might want to homogenize later. Developers chose different paths to advance across the tool. We could use AB testing to decide among these options.

Marg: agreed!

Emily: we need to supply NRCS with a short doc on tasks and timing for facilitators. Do you have anything already written?

Marg: we have write-ups from last time; we could send those if this approach to remote user testing sounds good.

Chris: We need to distinguish two groups: 1. NRCS facilitators for farmers, 2. NRCS facilitators for fellow NRCS users, who will be major users, perhaps more often than farmers. Different groups. I bet the majority envision being #2. Facilitators for farmer users will be like the CCC facilitators. Facilitators for NRCS will be different.

Marg: goals will help clarify what the facilitator roles will look like.

Heather: a job description for NRCS can be simple. Facilitator tasks: work with # people, make sure they answer a survey, turn results into a Google doc.

Marg: we can write up one for facilitators and one for testers themselves.

Heather: we spent a lot of time training facilitators last year. Will we want that this time, so facilitators are clear on their role and how to go about it?

Marg: GT could provide materials and answer questions. We’re not contracted to train the facilitators. We could provide training sessions for facilitators, or we could provide materials in a concise package so people can self-train. The latter wouldn’t add to our contract/scope.

Heather: our team needs to decide which we want; then we’ll know what GT should do.

Marg: if this sounds good to all, we’ll flesh out the May focus. Break out next steps and who’s needed to flesh out each piece, on Confluence.

Heather: sounds good.

Action steps:

  • Marg will flesh out items in Confluence for May and beyond so GT can decide on who’s needed for each task

  • Marg will supply short descriptions of facilitator and user tasks/time for us to send to NRCS