Receiving consistent feedback throughout the app building process, in both the design and build phases, is vital to building an app that truly solves the problems relevant to your users and beneficiaries. It also helps you avoid surprises when you test the final product.
This feedback can come from your end users, helping you understand which workflows make sense, or from your tech team, explaining why certain aspects of your app just aren’t working. Pilots remain a great way to collect feedback on the entire process of developing and deploying your program before launching more widely.
1001 Fontaines’ team in Cambodia runs through the workflows of their new mobile data collection app
Here’s our advice on three ways to collect all the feedback you will need to make sure your mobile data collection program launches smoothly and collects the clean data you expect:
The quality assurance (QA) process can often be the most headache-inducing phase of testing, but at the end of the day, you need to make sure the app you have in your head actually works in real life.
Here is our advice on organizing this frustrating but necessary stage of testing:
Build a dedicated QA period into your timeline – and hold yourself to it.
When you lay out your app building timeline, make sure to dedicate time for testing and bug-fixing (usually just before user testing or deployment). You may be able to get a simpler app to a high level of reliability in a week or two, but a complex one can take months to explore fully. If you are doing QA right, you should always feel like there is more to do.
App documentation is life!
Writing test plans is hard. And time-consuming. And sometimes even boring. But it’s vital to the success of your app. Before end users start relying on it, you need a systematic way of assuring that your app does what it’s supposed to – you simply will not find all of the bugs with exploratory or ad-hoc testing.
The simplest way to make sure you do not forget important tests when conducting QA is to invest time up front in designing clean technical specifications for your app, like workflow diagrams and models of your case structure. When it comes time for the dedicated QA period you promised yourself, you can hit the ground running because you already know what to test. This documentation is especially helpful when you need to revisit this process later – for instance, when you introduce a new feature to your app.
You can work in design programs like draw.io, drop a bunch of shapes into a PowerPoint slide, or use traditional pen and paper. Just make sure your docs are readable, shareable, and get saved somewhere for reference down the line.
Work through the most used workflows of your app and keep track of what works and what doesn’t
Make your test plan write itself
Once you start building your app, learn to rely heavily on the available tools, such as Case List reports. You will quickly realize how useful it is to visualize case relationships so you can see everywhere a change will be felt.
If your documentation is exhaustive and up-to-date, you will find that QA tests start to generate naturally. Each one of the use cases you outlined in the design phase is a solid test to run, custom-written for a test user. These tests will read along the lines of: “Enroll a female client, age 28, who is pregnant. Search for her case on the case list report and verify that date of birth, gender, and pregnancy status are set correctly.”
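If your use cases live in structured documentation, you can even generate test-plan entries mechanically. Here is a minimal sketch of that idea in Python; the use-case fields (`gender`, `age`, `pregnant`) are hypothetical examples, not a real schema:

```python
# Sketch: turn documented use cases into QA test-plan entries.
# The use-case structure below is a hypothetical example of design-phase docs.
use_cases = [
    {"gender": "female", "age": 28, "pregnant": True},
    {"gender": "male", "age": 45, "pregnant": False},
]

def to_test(case):
    """Render one documented use case as a written QA test instruction."""
    status = "is pregnant" if case["pregnant"] else "is not pregnant"
    return (
        f"Enroll a {case['gender']} client, age {case['age']}, who {status}. "
        "Search for the case on the case list report and verify that "
        "date of birth, gender, and pregnancy status are set correctly."
    )

test_plan = [to_test(c) for c in use_cases]
for entry in test_plan:
    print(entry)
```

Even if you never automate this, the exercise shows the point: each documented use case maps one-to-one onto a concrete test a tester can run.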
QA is never over
Though QA ‘cycles’ may end, QA is really a state of mind: Hold your applications to a higher standard by developing strong habits around testing as you build, even as early as the design phase. Your app is doing an important job, helping a frontline worker do hers. When you find bugs, it means she doesn’t have to!
Put your app in the hands of your end users to get some insightful first impressions
Usability (or ‘user acceptance’) testing is the process of putting a functioning app in front of your end users and seeing what they think. The concept is quite simple, but the execution is a bit more nuanced.
From selection bias to asking leading questions, there are many ways that your approach to collecting feedback can lead your users to give you the answers you want. As tempting as this can be, you need to make sure that the feedback you collect is as unbiased as possible. Here are six quick tips on how to make sure you keep it clean:
- Identify the right users. Make sure you have a representative sample of your users to ensure that everyone who ends up using the app is accounted for:
- High- & low-performing
- High & low digital literacy
- Rural and urban
- Ask open-ended questions rather than leading or yes/no questions. Just like the questions you ask in the app, the goal of user testing is to uncover clear, unbiased feedback on how you might improve your platform.
- Observe, don’t demo. You want to see the users interact with the system in as realistic an environment as possible. See what flows they get caught up on and which go smoothly. This will be representative of what happens when users you might never meet get their hands on your app.
- Consolidate observations and feedback. Organizing your feedback in a structured way helps to get a clear picture of the trends. It also ensures that you can easily compare and contrast the feedback with that on future iterations of the app.
- Sort through the noise. Develop a clear idea of your priorities and how the feedback you receive aligns with them. You cannot incorporate every change request, so have a decision-making process that all stakeholders agree on, and stick to it.
- Factor enough time in your project plan to incorporate feedback, and QA your app thoroughly again before deploying.
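Consolidating observations in a structured way can be as simple as tagging each note with a theme and tallying the themes, so recurring issues stand out from one-off requests. A minimal sketch, with entirely hypothetical themes and notes:

```python
from collections import Counter

# Sketch: consolidate raw usability-testing notes into a tally by theme.
# The (theme, note) pairs below are hypothetical examples.
feedback = [
    ("navigation", "Got lost returning to the case list"),
    ("data entry", "Date picker confusing for date of birth"),
    ("navigation", "Back button skipped a form"),
    ("wording", "'Beneficiary ID' label unclear"),
    ("navigation", "Menu icons not obvious"),
]

# Count how often each theme was reported, most frequent first.
counts = Counter(theme for theme, _ in feedback)
for theme, n in counts.most_common():
    print(f"{theme}: {n} report(s)")
```

The same tagged list makes it easy to compare feedback across iterations: re-run the tally after each round of changes and watch whether the top themes shrink.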
Along with testing the effectiveness of the system you built in the eyes of your users, usability testing helps determine how the technology fits into your overall project objectives. This phase is a great opportunity to review everything from your data requirements to your user stories to ensure that your new mobile data collection tool delivers on everything you expected from it.
For six more tips, check out this guide to usability testing from our global services team.
There’s no substitute for getting your mobile data collection app out in the field
Pilots are the time and place where we get our technology right and ensure we are developing something useful for the user. They are when we establish and test the processes required for the long-term success of a program. The tension is that pilots are meant to be both a special, focused program effort that may differ from the final program, and a process to get us ready to scale and succeed in that same final program.
Here are three key considerations for this phase:
1. Pilots should be understood as special
Pilots, by their role as places for testing and iteration, require extra attention and support. A pilot is the time to stress-test our theory of why this program will be effective, and we want to give it the best chance to succeed. This means the pilot users or sites might be unique in terms of geography, user types, connectivity, or other factors.
For this and other reasons, pilots may not be fully representative of the experience of a diverse set of users at scale. However, we can be intentional about articulating what makes those pilot users special and strive to get good representation where feasible.
2. Pilots are not just for testing apps
Just as pilots are not only for proving a program’s viability with more users, pilots are not just for apps. They can test so much more, like training methods, supervision performance, and reporting processes. All aspects of a program can benefit from a bit of focused iteration and testing before going big. For example, in many projects, when we roll out a new customized reporting tool, we may again start with a small group of pilot users within the larger pilot cohort to get feedback and iterate before rolling it out more broadly.
3. Pilots are not just for the start of projects
Even if your program has been successful for some time, you can always work on improving it. But at the same time, you don’t want to just introduce wholesale changes. In our spirit of continuous learning, pilot users or sites can serve as the “beta testers” for new app content, integrations, or processes that are introduced throughout the course of the project.
The CRS ReMiND project in India uses this approach to pilots quite well. The project started with 10 ASHAs in its initial cohort of CommCare users, demonstrating the productivity and efficiency gains users would experience at scale. So, when the app was scaled to over 250 users, those ten initial ASHAs served as the pilot for various iterations and phases of the project in the months that followed, and acted as key advisors on the program’s direction.
Don’t shy away from the insights you can glean from getting your app in the hands of your end users
Testing is a key way to ensure that after selecting a tool, designing the structure of your app, and building a prototype, you have still placed the end users and beneficiaries at the center of the process.
When your end users and beneficiaries are both the focus of your process and active participants in it, you ensure a higher-quality application with workflows that address the reality on the ground. Most importantly, with more engaged users on well-designed workflows, your application will see higher usage and ultimately higher impact.