Utilisation-focused evaluation (UFE) is an approach that emphasises engaging with stakeholders throughout the evaluation process to ensure that findings are relevant, useful, and actionable. It focuses on providing feedback and insights that lead to more effective decision-making, ultimately improving the success of the project, programme, or policy being evaluated. This approach is aligned with our mission to analyse the performance of institutions to help them amplify, improve, and sustain societal benefits.
With that approach in mind, Plan Eval, in partnership with Action Against Hunger, conducted the evaluation of UNICEF Brazil’s Country Programme 2017–2021. The evaluation’s objectives were to provide accountability for the work performed in the period under analysis and to serve as a learning tool informing the upcoming programme cycle. To achieve these objectives, the evaluators applied a utilisation-focused and participatory approach: engaging with stakeholders from the inception stage through to reporting, and going over each evaluation question to make sure that it served a practical purpose.
The team held weekly check-in meetings during the research phase to report on the progress of data collection, and likewise during the analytic phase to discuss preliminary findings. At the reporting stage, the conclusions were presented in a series of online discussions involving UNICEF Brazil’s programme officers, whose criticism was essential for the evaluation team to home in on the most relevant findings. The final round of validation consisted of participatory SWOT seminars, where everyone involved in the management response to the evaluation had the opportunity to rank recommendations by implementation priority and likely impact.
UNICEF presence in Brazil
The Country Programme Evaluation (CPE) is a mandatory assessment conducted by UNICEF Country Offices every two programme cycles (i.e., every ten years) and is among the most complex, as it looks at all programmatic areas and operations. Evaluating a broad range of activities and outcomes across different sectors can be challenging to manage as it involves multiple stakeholders like government partners, civil society organisations, and other UN agencies. Managing the input and feedback from these different groups required integrating data into a dynamic evidence matrix organised by evaluation question and intended purpose.
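To give a sense of what such a matrix can look like in practice, here is a minimal sketch in Python; the question IDs, purposes, sources, and findings are all hypothetical placeholders, not actual content from the evaluation.

```python
# Minimal sketch of a dynamic evidence matrix (all values are hypothetical).
# Each entry links a piece of evidence to the evaluation question it informs
# and to its intended purpose (accountability vs. learning).
evidence_matrix = [
    {"question": "EQ1", "purpose": "accountability",
     "source": "government partner interview",
     "finding": "placeholder finding A"},
    {"question": "EQ1", "purpose": "learning",
     "source": "civil society focus group",
     "finding": "placeholder finding B"},
    {"question": "EQ2", "purpose": "learning",
     "source": "UN agency key informant",
     "finding": "placeholder finding C"},
]

def evidence_for(question: str) -> list:
    """Return every evidence entry that speaks to one evaluation question."""
    return [e for e in evidence_matrix if e["question"] == question]

for entry in evidence_for("EQ1"):
    print(entry["purpose"], "-", entry["source"])
```

Organising the evidence this way makes it easy to check, question by question, whether each finding actually serves its intended purpose.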
Boris Diechtiareff, Monitoring and Evaluation Specialist at the Brazil Country Office (BCO), highlighted the usability and influence of the evaluation findings and recommendations. According to him, “the findings not only focused on the mandatory aspects but also saw the necessity and the benefits of doing the exercise to help design the new country program”. The evaluation was “shared and used widely by different teams and stakeholders, including the Country Management Team, the UNICEF Latin America and Caribbean Regional Office, the Brazilian Cooperation Agency and the Brazilian Government”.
The evaluation report’s findings and recommendations, in addition to informing the Country Programme Document (CPD), also served as a learning tool to improve the response to the Venezuelan migrant emergency in Brazil.
For the last couple of months, Plan Eval has been working on the evaluation of a social protection programme using the QuIP methodology. In this blog post, Pauline Mauclet, Evaluator at Plan Eval, explains what the methodology is all about and reflects on some of the challenges and lessons learned from this evaluation.
The Qualitative Impact Assessment Protocol, commonly referred to as QuIP, is a qualitative evaluation method used to assess the contribution of an intervention without the use of a counterfactual. In other words, it belongs to a wider family of approaches offering an alternative to quantitative impact assessments, which tend to be time-consuming and costly.
The method was developed by Bath SDR, a non-profit organization founded by a small team of researchers from the Centre for Development Studies (CDS) at the University of Bath.
In practice, the method assesses the contribution of an intervention by relying on the perceptions of beneficiaries and stakeholders. It consists of asking beneficiaries about the changes, both positive and negative, that they observed in their lives over a certain period of time, and then inquiring about the factors that, in their opinion, caused those changes.
In the following paragraphs, I will discuss some of the key features of the QuIP methodology, which help bring robustness and credibility to the research findings. The interesting thing is that most of these features can easily be replicated with other methodologies.
A common issue when asking beneficiaries about a benefit they received is that their responses may be biased, meaning they might not reflect the respondent’s true experience. Some respondents might, for example, be inclined to speak very positively about an intervention just to please the interviewer, or because they are afraid of losing the benefit if they say anything negative about it. This type of bias is referred to as response bias. To avoid this issue, the QuIP method uses a technique called (Double) Blindfolding. Blindfolding consists of asking the respondent questions without directly mentioning the programme or intervention being evaluated. With Double Blindfolding, both the respondent and the interviewer are unaware of the intervention being evaluated.
In practice, the interview therefore starts with general questions about the changes observed in the respondent’s environment over a certain period of time, and then continues with probing questions about the factors that might have caused these changes. The idea is that respondents will mention the intervention on their own, without any pressure or expectations.
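To make the contrast concrete, here is a purely illustrative example; the question wording is invented for this post and is not taken from the QuIP guidance.

```python
# Illustrative contrast between a direct question and a blindfolded
# opening sequence (wording invented for illustration).
direct_question = (
    "How has the cash transfer programme changed your household's situation?"
)  # Names the intervention up front, inviting response bias.

blindfolded_sequence = [
    # Domain-level opener: no programme or intervention is mentioned.
    "What changes, good or bad, have you seen in your household's "
    "situation over the last two years?",
    # Attribution probe, asked only after the respondent mentions a change.
    "What do you think caused that change?",
]

for question in blindfolded_sequence:
    print(question)
```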
But what if the respondent doesn’t mention the intervention? In that case, it might mean that the intervention wasn’t that noteworthy or impactful for the respondent, which is an interesting result in itself.
The key advantage of the QuIP method is that by asking general questions which are not focused on the intervention, we open up the possibility for respondents to surprise us. For example, respondents might mention a change which was not anticipated in the intervention’s theory of change. They might also explain how the intervention impacted them, but not in the way that was originally expected. Respondents could also mention other interventions or external factors that brought significant changes in their lives. In other words, the QuIP methodology puts the intervention’s Theory of Change to the test and can be used to refine it.
Now, asking beneficiaries about their perceptions sounds nice, but which beneficiaries should we interview? It is impossible to interview everyone, so how do we make sure that our results are representative and not just a reflection of the opinion of a small portion of the population?
This is a common issue with qualitative research. Quantitative quasi-experiments work around this problem by collecting data from a representative, randomly selected sample of the target population. However, while quantitative studies are appropriate for collecting “factual” data, they may not be ideal for asking respondents about their experiences and opinions. In those cases, qualitative studies are much more appropriate. So how do we select cases in a way that supports robust and credible generalisation of the results?
To rebuff criticisms of “cherry picking”, the QuIP method favours a transparent and reasoned approach to case selection. Depending on whether a list of beneficiaries exists, whether a theory of change has already been defined, and whether data on outcomes exists and can be used for case selection, different strategies can be adopted, as shown in the diagram below (source: Bath SDR).
Case Selection Strategies (Source: Bath SDR)
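The same decision logic can be sketched as a small function. The first and last strategies below are the ones mentioned in this post (opportunistic and confirmatory selection); the intermediate branches are simplified assumptions for illustration, not a faithful reproduction of the Bath SDR diagram.

```python
# Sketch of the case selection logic, simplified for illustration.
def case_selection_strategy(has_beneficiary_list: bool,
                            has_theory_of_change: bool,
                            has_outcome_data: bool) -> str:
    if not has_beneficiary_list:
        # No sampling frame: cases must be found in the field.
        return "opportunistic selection, by location and beneficiary profile"
    if has_theory_of_change and has_outcome_data:
        # Outcome data lets us deliberately pick contrasting profiles.
        return "confirmatory selection, stratified by context and outcomes"
    if has_theory_of_change:
        # Assumed intermediate branch: explore the theory of change by context.
        return "exploratory selection, stratified by context"
    # Assumed fallback: a transparent rule applied to the list itself.
    return "transparent rule-based selection from the beneficiary list"

print(case_selection_strategy(True, True, True))
```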
Finally, a third feature of the QuIP methodology (though not exclusive to it) is the use of codification to bring transparency and credibility to the analysis process. What is specific to QuIP is that the codification focuses exclusively on identifying Influence factors and Change factors.
Influence and Change factors (Source: Bath SDR)
By identifying the different influence factors and change factors, we aim to build causal claims. Note that one change factor can also lead to another change, as shown in the diagram below.
Building Causal Claims (Source: Bath SDR)
The objective of the codification process is to surface stories of change. Codification lets us present those stories visually, while also facilitating internal and external peer review and audit.
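As a rough sketch of what this coding can look like, assuming hypothetical field names and invented interview excerpts:

```python
# Minimal sketch of QuIP-style coding into causal claims.
# Field names and excerpts are invented for illustration.
from dataclasses import dataclass

@dataclass
class CausalClaim:
    influence_factor: str  # the driver the respondent cites
    change_factor: str     # the change they attribute to it
    excerpt: str           # verbatim evidence supporting the claim

claims = [
    CausalClaim("cash transfer received", "more food purchased",
                "With that money we could buy maize every week."),
    # A change factor can itself drive a further change, forming a chain.
    CausalClaim("more food purchased", "children attend school regularly",
                "The children are not hungry, so they go to school."),
]

# Chains like these can be assembled into visual stories of change and
# counted across respondents to gauge how common each pathway is.
for claim in claims:
    print(f"{claim.influence_factor} -> {claim.change_factor}")
```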
Now that we have presented the QuIP methodology, I would like to reflect on some of the challenges and lessons learned from implementing the method for the evaluation of a social protection program in Mozambique.
The evaluation was commissioned by one of Plan Eval’s clients, and the research methodology was defined based on the Terms of Reference provided by the client. The evaluation questions covered the changes brought about by the programme, but also the programme’s implementation. As a result, our team designed a methodology that combined QuIP with a more classical evaluative approach using the OECD DAC criteria of relevance, effectiveness, efficiency, and coherence. The intervention consisted of cash transfers delivered in two instalments to a group of beneficiaries, with a Communication for Development (C4D) component.
In terms of case selection, our initial research design considered the possibility of using beneficiary data to select interviewees for the semi-structured interviews. The programme had an existing Theory of Change, and there was even data available on certain outcomes, thanks to a short survey administered by the client to a sample of beneficiaries after receipt of each instalment of the cash transfer. Under this scenario, we planned to conduct a Confirmatory analysis stratified by context and outcomes. In practice, this meant using the existing outcome data to select different profiles of beneficiaries to be interviewed in the field. By doing so, we were sure to cover a variety of profiles, while also opening up the possibility of triangulating the qualitative data with the existing quantitative data at the analysis stage.
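As a rough illustration of what that stratified selection could have looked like in code (column names, strata, and values are invented; as explained below, the real data never arrived in time):

```python
# Sketch of confirmatory case selection stratified by context and outcome.
# The beneficiary data below is entirely hypothetical.
import pandas as pd

beneficiaries = pd.DataFrame({
    "id": range(1, 9),
    "district": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome_reported": ["improved", "improved", "no change", "no change",
                         "improved", "improved", "no change", "no change"],
})

# Draw one interviewee per (context, outcome) stratum so that contrasting
# beneficiary profiles are all represented in the qualitative sample.
sample = (beneficiaries
          .groupby(["district", "outcome_reported"])
          .sample(n=1, random_state=42))
print(sample)
```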
Unfortunately, we did not receive access to the beneficiary data before the start of the data collection activities. As a result, we had to adapt our case selection approach at the last minute and opted for an Opportunistic selection, by location and by beneficiary profile. Beneficiaries were identified and mobilised in the field with the support of the local authorities.
In terms of data collection, we opted to blindfold the beneficiaries only, without blindfolding the researchers, mainly for practical reasons.
Data Collection activities using the QuIP methodology (Source: Plan Eval)
In addition to the last-minute change in case selection, another difficulty was ensuring the blindfolding of beneficiaries, because we conducted both QuIP and non-QuIP interviews in each location. In accordance with the evaluation objectives, the QuIP interviews focused on the contributions and changes brought about by the intervention, while the non-QuIP interviews focused on the programme’s implementation. Because both types of interviews took place in the same location, and because beneficiaries were mobilised with the support of local authorities, we had to take special care to clearly explain the difference between the two types of interviews to the local authorities, and to make sure that respondents to the QuIP interviews weren’t “contaminated” (in other words, informed before the start of the interview that the study aimed to evaluate the social protection programme).
Finally, we observed that it was sometimes difficult to get people to talk during the interviews. People responded to the questions, but without providing much detail. This can be problematic for the QuIP methodology, because it limits our understanding of the real stories of change. As a result, we experimented with the format of the interviews and conducted some QuIP interviews as focus group discussions to see whether this helped stimulate the conversation. We also observed the importance of using open-ended questions and of being patient with respondents, giving them the time to feel sufficiently at ease to open up.
Another important aspect is to make sure that respondents focus on their own experience, rather than speaking about the experience of the community or their neighbours. It is therefore important to remind them from time to time to talk about their own experience and to focus on the changes they observed themselves.
Overall, in terms of lessons learned, I would highlight the following:
If possible, conduct the QuIP and non-QuIP interviews in different locations to avoid the risk of “contamination”.
Use open-ended questions to stimulate conversation.
Be patient and let respondents speak freely, reminding them when necessary to focus on observed changes.
Encourage respondents to focus on their own experience, rather than the experience of the community, neighbours, etc.
Be well acquainted with the questionnaire BEFORE starting data collection activities.
The study is currently at the analysis and reporting phase. Once it has been finalised, I will report on any challenges and lessons learned from that stage of the evaluation process.
In the meantime, if you are interested in the results of this evaluation or if you have any questions on the use of the QuIP method, please feel free to contact us by email: