Why should we do usability testing?
The short answer is that we (the designers and implementers) are not the user. We might empathize with them, understand their workflows, and think we know how to simplify the app to make it usable, but ultimately, we aren’t the ones who will be using it day to day.
Therefore we need to perform some simple check-ins with actual users, to ensure the app meets their needs and suits their level of knowledge.
Sometimes, during the scoping phase where we gather all the requirements, we miss some aspects. These will be picked up during user acceptance testing with version 0 of the application – an app that is built and working, but not yet deployed. For example, in a project in Malawi, the community health workers follow paper-based protocols to screen sick children for malaria. During scoping, we understood their needs and based the app on the paper workflow. Everything seemed to work perfectly.
But during user testing, a community health worker noticed that the app didn’t let him pause the screening for one child while he waited for the malaria test result. In real life, when they have to wait 30 minutes for the test to confirm, they carry on with screening the next child to save time. But the app didn’t let the health worker exit at that point; instead, they were forced to wait the full 30 minutes without moving on to anyone else. This important issue was flagged, and in the next version of the app design, we built in a pause functionality that enabled them to submit a partially completed form, carry on with another child, and fill in the test results later.
Quick tips for usability testing:
Usability testing does not have to be hard, but it does require some preparation. Some easy tips to start with:
- Any user testing is better than no user testing
- An ideal number of people to test with is five
- Consider having two people to run the user test – someone to facilitate and someone to take notes
Here are some more in-depth tips that you should consider while performing usability testing on your own mobile data collection app:
1. Consent is important.
Always give people the option to opt out, and if they do want to participate, get them to sign a simple form that covers the following:
- The reason for the research
- Permission for you (the NGO/company) to use the research to improve the app
- Whether you can use voice recordings and pictures, and for what purposes
2. Consider reimbursing participants for their time or travel costs.
You value the research and feedback you are getting, so ensure you adequately compensate participants for their time, or at least cover their travel costs. If cash is not appropriate, you could provide an airtime voucher, a meal, or something else of value to the participant.
3. Focus on the app, not the person’s skills.
Explain you are focusing on what can be changed with the app, and not evaluating their skills. Give an example such as:
There is a bridge, and every time too many people walk on it, another brick falls into the water. Whose fault is that? The people walking on the bridge? Or the bridge engineer? (Answer: the bridge engineer – but we needed people to walk on the bridge to test it, and now we can fix it.)
Remove yourself from the process – claim you are helping to audit another team’s mobile data collection app, or that this tool was given to you broken and you need to fix it – this helps people feel less shy about giving honest feedback to the “person who built the app.” Remember to state that honest feedback is what you are after, and changes can be easily made.
4. Don’t tell the user what to do. Observe while they follow a storyline.
Create a script that enables you to test certain functionality piece by piece. This helps the user know what to do, while you can see how they approach the problem naturally without intervention:
- A child walks into your clinic, very sick. Please register the child
- Now screen the child for illnesses
- Say they have a fever for 8 days and no other danger signs
- You do have the equipment to perform the malaria test
What you don’t want is step-by-step instructions that prompt the user too much and don’t let them interact naturally with the app. That only evaluates whether they can follow instructions, not how intuitive your app is to use. For example, avoid:
- Unlock the phone
- Press CommCare
- Now press “Register New Client”
- Now press next, then select “age 5”, it’s over there by the green button
You want to evaluate how the user reacts naturally, and where they might get stuck. If they need more detail, give story details, not technical steps.
5. Note down any and all observations.
It helps to get people to “think aloud” while they are following the steps. This helps the facilitator reduce assumptions about the participant’s behavior, as it gives them a chance to explain their own reasoning for their actions. For example:
Interviewer: Please register the child, her name is Abigail. While you do this, speak about what you are doing step by step.
User: “Okay, I am looking for CommCare, now I am clicking it, but it’s opening so slowly. Okay, I’m inside, now I’m trying to see. Where should I go? Hmm.. that didn’t work. Okay, found the form, I’m trying to write ‘Abigail’ but there is no capital letter. Ah, there I found it.”
Any exclamations of frustration, side comments, or facial expressions can be signs that something is not working. Note down when and where that happens.
If you come across any bugs, note them down too, but remember to focus on workflows, ease of use, and any assumptions the users are making, not just the quality of the app.
6. Explore solutions.
If you notice something during user testing (for example, the community health worker who said, “Okay, while I wait for my malaria test I want to carry on with another child but I can’t.”), start exploring solutions together.
Ask more questions:
- Why do you need to do this? (“The clinic gets busy.”)
- How do you do this on paper currently? (“We just change to another paper and come back later to fill in the test results; the child sits and waits.”)
- How do you think the app could help you? (“I want the app to freeze on this child and open another one without losing information.”)
If you have an idea for the tech workflow, sketch it out on different screens. Show them to the user one by one and see if it helps them understand your proposed solution and if it needs changing.
This approach is especially useful when user testing reports – sketch out the table of the data they want to see, the headings and columns, and where the data comes from.
Checklist for Usability Testing
- Facilitator and notetaker (they should either understand local languages or be able to effectively communicate within the cultural context)
- Consent forms printed
- Participants identified – try to find some who are in different situations, e.g. urban versus rural users, experienced versus new clinicians, older versus younger people (to see how the technology is used)
- Scripts for workflows printed
- Notebook and pens
- Voice recorder (if necessary)
- App set up with a demo account so that the user can make mistakes, register test clients, etc.
- If you are user testing in remote areas, make sure the app is synced and ready beforehand
- “Screens” templates printed that you can tweak to add more details – or just more blank paper to draw out workflows
Once you have completed your user acceptance testing, analyze the results to find patterns of behavior, acknowledging both what worked well and what should be improved. What did users like, and find easy to navigate and understand? Which features confused them, and could be improved in future iterations? It can be helpful to tag and group common themes across users, to help determine the prevalence of certain behavior and which areas need the most improvement first.
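If your notes are digitized, tagging and tallying themes can be done with a very small script. Here is a minimal sketch in Python; the observation notes, user labels, and tag names are invented for illustration:

```python
from collections import Counter

# Each entry is one observation from the notetaker, tagged with one or
# more themes. Notes and tags below are hypothetical examples.
observations = [
    {"user": "CHW 1", "note": "Could not find a way to pause the form", "tags": ["workflow", "navigation"]},
    {"user": "CHW 2", "note": "Struggled to type a capital letter", "tags": ["text-entry"]},
    {"user": "CHW 3", "note": "Got stuck waiting for the malaria result", "tags": ["workflow"]},
    {"user": "CHW 4", "note": "Found the registration form easily", "tags": ["navigation"]},
]

# Count how many observations mention each theme.
theme_counts = Counter(tag for obs in observations for tag in obs["tags"])

# Print themes with the most common problem areas first.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} observation(s)")
```

Sorting by frequency gives a quick, rough prioritization of which areas to fix first; the qualitative notes behind each tag still matter more than the raw counts.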
Sometimes you will have an outlier – a user whose behavior does not match the others. Try to understand from the qualitative interviews why that could be, and either incorporate the feedback to improve everyone’s experience or use your judgment if the feedback is less applicable for other reasons.
Now that you know how simple user testing can be, it’s time to get out into the field – whether that is a rural area, a hospital, an agricultural co-op, or a supply chain warehouse – and test with the people who will use your mobile data collection app on a daily basis. With careful listening and observation, you will find many clues to areas of your app that need improvement – the icons, the wording, and most importantly, the workflow.
Read up on more ideas for improving your app here, and if you want to consider a pilot as part of your testing process, read up on why piloting is still important.