Testing Goals & Approach

With user testing we want to be able to answer the question: Does [TOOL] provide [VALUE] to [AUDIENCE]?

For this project, we define those variables as:

  • [TOOL]: Seeding Rate Calculator, Mix Maker

  • [VALUE]: The tool is empowering, easy to use, and provides accurate information.

  • [AUDIENCE]: The following groups, referred to collectively as “users” throughout this document:

    • Farmers

    • Extension Agents

    • NRCS Staff

    • Crop Consultants / Conservation District Agents

GOALS:

OUR GOAL is that through testing, we will be able to assess if the Seeding Rate Calculator empowers the user to calculate seeding mixtures and generate seeding rates. We can apply the following user stories to gather data on how well the tool achieves this objective:

  1. Users feel empowered in their ability to make a seed mixture.

  2. Users feel empowered in their ability to calculate a seeding rate based on the mixture they are interested in.

  3. Users can easily modify an existing seed mixture to better meet their goals.

  4. Users’ calculations equip them to take next steps towards their goals.

  5. Extension Agents or NRCS staff feel empowered to evaluate mixtures created by someone else (e.g. a seed company, farmer, consultant, etc.)

  6. Extension Agents or NRCS staff feel equipped to support growers in exploring and confirming seeding mix options.

  7. The tool increases motivation for Extension Agents or NRCS staff to recommend cover crop mixtures.

  8. The tool increases motivation for farmers to use cover crop mixtures.

  9. The above is true no matter which region someone is in or what conditions exist where they are located.

 

OUR GOAL is that through testing, we will be able to assess if the Seeding Rate Calculator feels easy to use for the target audience as they calculate seeding mixtures and seeding rates. We can apply the following user stories to gather data on how well the tool achieves this objective:

  1. Users understand what to do to make their way through the tool.

  2. Users understand the results they get from the tool.

  3. It is easy for Extension Agents or NRCS staff to evaluate a mixture created by someone else (seed company, farmer, consultant, etc.)

  4. Users find this tool easier to use than existing seeding rate and mix tools.

  5. The above is true no matter what level of technology experience someone has.

 

OUR GOAL is that through testing, we will be able to assess if the Seeding Rate Calculator provides accurate information to support the work of the target audience. We can apply the following user stories to gather data on how well the tool achieves this objective:

  1. The tool provides information that is accurate to a user’s location.

  2. Farmers feel the tool provides a seeding rate that fits their needs and resources.

    • Is the suggested rate on the high or low end of what you’re typically using?

  3. Farmers feel the tool provides seeding mix options that fit their needs.

    • Are there species missing? Does each species have enough info?

  4. Extension Agents or NRCS staff feel the tool provides a seeding rate that is a good fit for the needs and resources of the growers they support.

  5. Extension Agents or NRCS staff feel the tool provides seeding mix options that are a good fit for the growers they support.

    • Are there species missing? Does each species have enough info?

  6. Users can check the accuracy of a recommended mix or seeding rate.

  7. Users trust the information they get from this tool.


Testing Approach:

How would the testing be conducted in order to achieve our goals?

Async Testing Approach

  1. Respondents are provided with a Testing Materials packet.

    1. The packet will include information on where and how to access the tool.

    2. The packet will contain the post-testing survey that respondents will complete.

    3. The packet will provide instructions on how respondents can report a bug and what to do if they encounter one during testing.

    4. The packet will include contact points for who to reach out to with questions or if a respondent is having issues.

  2. Respondents try out the tool based on guidance in the materials including example scenarios to explore.

  3. Respondents complete the post-testing survey.

    1. The survey will ask what region the respondent is from, their level of experience with seed mixtures, and their level of comfort with technology.

    2. The survey will only include questions relevant to that respondent (e.g. agents won’t be asked questions about being a farmer).

    3. The survey will ask respondents to score relevant user stories listed above on their level of success. Some of the above user stories will be broken into multiple questions to ensure clarity and relevance of answers (for example, the survey should include questions that allow respondents to rate the ease of use for many different aspects of the tool).

    4. The survey will include several AB (or ABC) options for look and feel / user-experience adjustments to the tool. Respondents will be asked to choose which option they prefer and provide additional information / context on why that was their choice.

    5. The survey will ask if the respondent is available for a short interview on their experience if the dev team determines such a conversation would be helpful.

  4. Optional: The Development Team may reach back out to a respondent who has completed the survey and opted in to interviews in order to discuss their experience and ask clarifying questions.

  5. Testing feedback is collected, scored, and organized by the Testing Managers (GameTheory).

  6. Testing Managers (GameTheory) provide a report that scores each user-story’s level of success and notes the factors that led to that score based on the testing feedback.

Async Testing Participant Needs:

  • A minimum of 5 respondents from each relevant geographical area.

  • A minimum of 10 farmer respondents and 10 agent / consultant respondents.

  • At least 5 respondents at each of the low, mid, and high levels of technical experience.

  • At least 5 respondents at each of the low, mid, and high levels of seed mixture experience.

  • Note: an individual may fit multiple categories (e.g. an extension agent from the Midwest region who has high technical experience and mid experience with seed mixes can count as one of the respondents in each of these categories).

1-on-1 Testing

  1. Facilitators are provided with a packet with the information and materials they need to run a testing session.

    1. The packet will include information on where and how to access the tool.

    2. The packet will include AB test options and visual references.

    3. The packet will contain the post-testing survey that respondents can optionally complete.

    4. The packet will contain a form / database for recording feedback and observations.

  2. Facilitators schedule an hour-long meeting with the respondent.

  3. (15 mins) The facilitator will meet with the respondent and let the respondent try out the tool. Facilitators should encourage respondents to explore on their own and try out options without extra help, observing and noting where they struggle or what comes easily to them.

  4. (15 mins) The facilitator will ask the respondent a series of questions to assess whether the user stories are true or not for the respondent. The facilitator will record the respondent’s answers.

  5. (30 mins) The facilitator will show the respondent two to three approaches to a particular aspect of the tool, such as the user experience or look and feel, and use the prepared questions packaged with the visual references to prompt the respondent to compare and contrast the options and select their favorite. Facilitators will record the feedback.

  6. Optional: Respondents fill out the survey [Async Survey].

  7. Testing feedback is collected, scored, and organized. Testing Managers (GameTheory) will provide a report that scores each user-story’s success and notes factors that led to that score, as derived from the notes and observations. GameTheory may follow up to clarify feedback or to debrief with the Facilitator on their findings.

1-on-1 Testing Participant Needs:

  • 2-3 respondents from each relevant geographical area.

  • 1-2 farmer respondents and 1-2 agent / consultant respondents.

  • 1-2 respondents at each of the low, mid, and high levels of technical experience.

  • Note: an individual may fit multiple categories (e.g. an extension agent from the Midwest region who has high technical experience and mid experience with seed mixes can count as one of the respondents in each of these categories).


Potential Supporting Questions for Testing

  • Does the tool increase your knowledge of the function, benefits and use of cover crop mixtures?

    • Does the tool make users feel empowered to find additional information / expand their knowledge?

  • Does the tool help users see an ROI by getting creative with how they’re planting, what’s in their mixes, and when they’re planting?

  • What types of characteristics will users be looking for in plant materials (example: soil erosion & N fixation)?

  • How do users currently get their seeding mixes? For example, do they take soil tests in the planning phase?

  • How easy is it to adjust seeding rates?

  • Is it easy to understand the seeding rates based on how they are presented in the tool?

  • What parts of the tool seemed most useful?

  • How else can the tool give users an accurate expectation of what a seed mixture will look like in the field?

  • What level of experience do you have with seeding mixes and seeding rates?