Lessons from Terre des hommes in Burkina Faso
Terre des hommes’ “Registre Électronique des Consultations” (REC) project uses the WHO Integrated Management of Childhood Illness (IMCI) guidelines to support over 4,000 healthcare workers treating young children in Burkina Faso. Today, the REC guides these workers in treating around 180,000 children every month in more than 700 clinics across the country. As part of the Integrated e-Diagnostic Approach (IeDA), these workers receive coaching and individualized feedback, while program administrators have access to dashboards of almost real-time epidemiological data.
But as you might expect, this project did not start at 4,000 workers. And 700 clinics did not sign up on day one. At first, the app was designed in a way that made it hard to scale to these new regions and stakeholders. A recent update not only improved overall performance and added content such as meningitis detection and malnutrition evaluation; through a new app architecture, it also made the REC more adaptable for future scale-up. This is all good news, as we look to expand the project to additional West African countries in the coming months and years.
What did we update?
The issues underlying the previous versions of the REC were clearly visible in the performance of the application. It could take 15 to 20 seconds just to move from one form to another. The way we had coded the application also made data management a challenge, requiring significant manual work. Furthermore, the app’s limited support for complex clinical management meant more complicated cases would receive the same basic treatment as simple cases.
The response to these issues was a collaboration between a team of doctors, developers, and data managers to find solutions and modify the structure of the app to prepare for the scale-up phase. The core code needed to be as modular and flexible as possible to expand both content offerings and country and clinic support.
How we redesigned the app logic for more complex workflows
One of the key issues in the old version of the app was that the diagnoses of children’s illnesses (“classifications”) were linked to treatments through intermediary variables that were too generic. Many different and unrelated diagnostic classifications were grouped into the same drug-related severity umbrella, or “cluster”. For example, pneumonia, measles, and acute ear infections were all grouped under the same cluster (‘moderate_X’) since they all require Amoxicillin. Once that cluster was established, information about the specific classification that triggered this cluster was partially discarded.
REC 2.5 architecture diagram
This did help the app avoid double prescription of the same drug, but it posed certain limitations for the management of children presenting with comorbidities (two or more medical conditions at the same time). Continuing with the same example, if Patient 1 had pneumonia and Patient 2 measles, they would both need the same treatment (Amoxicillin). However, Patient 1 would need an additional remedy to ease their cough, while Patient 2 would require additional Vitamin A. These specific treatments were more complicated to implement in the former version.
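The limitation can be sketched in a few lines of Python. This is a minimal, hypothetical model of the old cluster-based mapping; the variable and cluster names are illustrative and not the actual REC/CommCare identifiers:

```python
# Hypothetical model of the old cluster-based mapping (illustrative names only).
CLUSTER_OF = {
    "pneumonia": "moderate_amoxicillin",
    "measles": "moderate_amoxicillin",
    "acute_ear_infection": "moderate_amoxicillin",
}

DRUG_OF_CLUSTER = {"moderate_amoxicillin": "Amoxicillin"}

def old_treatment(classification):
    """Map a diagnosis to a drug via its severity cluster.

    Once the cluster is known, the specific classification is discarded,
    so classification-specific add-ons (cough remedy, Vitamin A) are lost.
    """
    cluster = CLUSTER_OF[classification]
    return DRUG_OF_CLUSTER[cluster]

print(old_treatment("pneumonia"))  # Amoxicillin
print(old_treatment("measles"))    # Amoxicillin -- same output, detail lost
```

Both patients correctly receive Amoxicillin, but once the cluster is computed, nothing downstream can tell which child needs the cough remedy and which needs Vitamin A.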
The solution was to separate each diagnostic classification into its own treatment group, giving the app the ability to consider all classifications (not just all drug-related clusters) when recommending a treatment.
The new architecture is much more “linear,” meaning every diagnostic classification is linked directly with a specific treatment. In CommCare terms, this means a treatment is a combination of question groups, each with a single medical classification as its display condition.
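The linear approach can be sketched the same way. Again, this is a hypothetical illustration with invented names, not the actual CommCare form logic: each treatment group carries a display condition on exactly one classification, so no detail is lost in an intermediate cluster:

```python
# Hypothetical model of the new "linear" architecture (illustrative names only).
# Each treatment group has a display condition on a single classification,
# so every diagnosis maps directly to its full, specific treatment.
TREATMENT_GROUPS = [
    {"show_if": "pneumonia", "treatments": ["Amoxicillin", "cough remedy"]},
    {"show_if": "measles", "treatments": ["Amoxicillin", "Vitamin A"]},
]

def recommended(classifications):
    """Collect treatments from every group whose display condition matches."""
    treatments = []
    for group in TREATMENT_GROUPS:
        if group["show_if"] in classifications:
            treatments.extend(group["treatments"])
    return treatments

print(recommended(["measles"]))  # ['Amoxicillin', 'Vitamin A']
```

A child with comorbidities simply matches several groups at once, which is exactly the case the old cluster model could not express cleanly.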
REC 2.7 architecture diagram
To address the remaining challenge of avoiding double or triple prescription of the same drug or drug category, we developed a hierarchy between the different drugs and classifications. For example, a child in need of both Ciprofloxacin for dysentery and Amoxicillin for pneumonia would receive only Ciprofloxacin. Once again, simple logic in the display condition of the different treatment recommendations (in this case Amoxicillin would only be displayed if the classification for dysentery is “no_dysentery”) formed our technical solution.
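The hierarchy reduces to display conditions that reference other classifications. The sketch below models that idea in Python with hypothetical variable names (the real REC expresses this as CommCare display conditions, not Python):

```python
# Hypothetical model of the drug hierarchy (illustrative names only).
# A lower-priority antibiotic is shown only when no higher-priority
# antibiotic was triggered, preventing double prescription.
def show_ciprofloxacin(case):
    return case.get("dysentery") == "dysentery"

def show_amoxicillin(case):
    # Amoxicillin's display condition also checks that the dysentery
    # classification came back "no_dysentery".
    return (case.get("pneumonia") == "pneumonia"
            and case.get("dysentery") == "no_dysentery")

case = {"dysentery": "dysentery", "pneumonia": "pneumonia"}
print(show_ciprofloxacin(case), show_amoxicillin(case))  # True False
```

The child with both classifications is offered only Ciprofloxacin, matching the example above.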
We also added logic that calculated contraindications (instances where a specific treatment might be harmful to the patient) and used this to display only treatments that were appropriate to the patient and their particular set of diagnoses.
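A contraindication check of this kind amounts to a filter over the candidate treatments. The following is a minimal sketch under assumed names; the conditions and drug list are invented for illustration:

```python
# Hypothetical contraindication filter (illustrative names and conditions).
# A treatment is displayed only if none of its contraindications apply.
CONTRAINDICATIONS = {
    "Amoxicillin": {"penicillin_allergy"},
    "Vitamin A": set(),
}

def safe_treatments(candidates, patient_conditions):
    """Keep only treatments with no contraindication matching the patient."""
    conditions = set(patient_conditions)
    return [t for t in candidates
            if not CONTRAINDICATIONS.get(t, set()) & conditions]

print(safe_treatments(["Amoxicillin", "Vitamin A"], ["penicillin_allergy"]))
# ['Vitamin A']
```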
What content did we add?
Significant efforts were also made to improve the referral indications and the management of children who could not be transferred to the referral hospital. Hidden values were added to better differentiate these scenarios, and new entries were created in the medication list to adapt prescriptions (mainly the dosage) for this category of patients.
How did we improve performance?
To improve app performance, we restricted the number of “candidate drugs” transferred from the consultation to the treatment form. In past versions, the computing power required to iterate over these candidate drugs prior to prescription meant the transition between the two forms sometimes took 15 to 20 seconds. We resolved this issue by identifying only the relevant drugs through a series of variables when the patient was assigned the linked diagnostic classification. By limiting the amount of data transferred between certain forms, we were able to speed up form-loading significantly. With the REC 2.7, it only takes a few seconds to pass from one form to another.
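In effect, the consultation form now pre-filters the drug list before handing it to the treatment form. This sketch illustrates the idea with hypothetical names and an invented drug list; the real filtering happens through CommCare variables, not Python:

```python
# Hypothetical sketch of limiting data passed between forms (illustrative names).
# Rather than transferring the full drug list, only the drugs flagged by the
# patient's classifications are handed to the treatment form.
ALL_DRUGS = ["Amoxicillin", "Ciprofloxacin", "Vitamin A", "ORS", "Zinc", "Paracetamol"]
DRUGS_FOR = {
    "pneumonia": ["Amoxicillin"],
    "diarrhoea": ["ORS", "Zinc"],
}

def candidate_drugs(classifications):
    """Pre-filter in the consultation form so the treatment form loads fast."""
    selected = []
    for classification in classifications:
        for drug in DRUGS_FOR.get(classification, []):
            if drug not in selected:
                selected.append(drug)
    return selected

print(candidate_drugs(["diarrhoea"]))  # ['ORS', 'Zinc'] instead of all six drugs
```

Iterating over two candidate drugs instead of the full formulary is what cut the form transition from 15 to 20 seconds down to a few seconds.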
Introducing a new collaborative design process
Since we intended a total redesign of the REC architecture, we made it the team’s priority for six months. We adopted a new design approach by graphically representing the IMCI algorithm, using the free software draw.io. This new approach helped to highlight the logical inconsistencies in the code and greatly facilitated the communication between the medical and the IT teams later on.
Part of IMCI Decision Tree Designed with Draw.io
We analyzed data from previous REC versions to identify all the limits of our app and integrated feedback from data managers to optimize the application structure for future data analyses. This also forced us to review and update our data dictionary.
To ensure that this substantial update was sustainable and endorsed by all stakeholders, particular emphasis was placed on communication and feedback between the different TdH teams at both national and local levels. Changes had to be medically correct, technically possible, and most importantly, realistic in the field. We tested the application over six weeks, both internally and with Healthcare Workers (HCWs) in one district, with our field officer channeling the feedback to us. The application was released in the field through a one-day training session used to brief HCWs on the app changes, and results were monitored by Field Officers and the TdH team.
Initial feedback and results
The initial feedback was highly positive, as HCWs felt the application was faster and a better translation of the IMCI protocol. The application’s usage rate stayed as high as before (average REC usage rate of over 90% for consultations with children under five in all districts in the first half of 2018), data management has improved, and we are now able to more accurately measure HCWs’ actions regarding diagnosis and treatment.
In the end, we found a handful of key lessons to pass on to other projects hoping to develop a mobile data collection application for scale:
- Application content that has grown organically over time can often be redesigned to be more modular, increasing flexibility and scalability for future iterations.
- You don’t need to transmit all your data between every set of forms. Limiting unnecessary data loaded into forms can greatly speed up the next form’s load time.
- Existing application data can reveal data and workflow inconsistencies, which can then be addressed to make an application better.
- Developing a shared set of visual representation tools can be as beneficial during an application’s initial design phase as during later additions. It can highlight friction points and structural limitations, improve communication inside a team, and serve as a practical marketing tool when reaching out to new partners.
Redesigning a mature app can require a significant investment. For us, it meant six months of work, consultation with many stakeholders, weeks of field testing, and more. But our trust in this investment meant we ultimately saw meaningful improvements in application performance and data analysis. And in the coming year, the newly redesigned REC will help thousands of clinicians better treat millions of children across Burkina Faso. How might this process help your program improve its own outcomes?