Meet the team #3 – João Paulo Cavalcante

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

Our third guest is João Paulo Cavalcante. João holds a degree in International Relations from FACAMP in Brazil and a master's degree in International Development from the Université Grenoble-Alpes in France. He worked for five years at the World Food Programme (WFP), gaining experience in South-South and trilateral cooperation, policy advocacy, and capacity building to support governments and local actors in adopting initiatives to fight poverty and malnutrition and to improve governance in Latin America, Africa, and Asia.

He joined Plan Eval in 2022 as a Project Manager and, in this brief interview, shares insights into his experience at the company.

  1. How did you start working at Plan Eval?

Coming to work at Plan was a natural progression in my career. My background is in international relations and development cooperation and, since completing my master's degree in 2012, I have been managing a variety of development cooperation projects. I came to Plan when I saw an opening for the project manager position, applied, and was selected. Development project management interests me a lot because it lets me combine project management, a discipline relevant to projects in both the public and private sectors, with a focus on international development.

João supporting a school feeding project in Guinea-Bissau

At Plan in particular, we have a very specific focus on monitoring and evaluation, mainly evaluation, which allows us to look at projects after they have been implemented, and that is very interesting. Being able to manage projects on different themes (environment with UNEP, technological security with the US Department of State, COVID-19 in Mozambique) for different clients and regions is also very rewarding.

  2. What is it like to be a Project Manager at Plan Eval?

The big difference I see at Plan is that we have a niche, a specific focus. Other consultancies have a wider scope of work and offer different types of services, whereas we are specialised in monitoring and evaluation.

On a day-to-day basis, the interaction between Plan employees is also very important and positive. We all know each other and communicate openly and directly, so problem solving, troubleshooting and support become much more efficient. In a way, this even facilitates project management: if I need something, I can contact the person directly. This improves the relationship between the internal and external teams, as problems are resolved more quickly and efficiently.

  3. What was your first project at Plan Eval? What project do you consider to be the most meaningful to you?

The first projects I managed at Plan were already under way, but I also took on some projects from the beginning. The nutrition project in Madagascar, for example, had already started when I began managing it. That project was very challenging, not because of the project itself, but because of the country's context and geographical size, and because it involved both local and international consultants. Another very interesting project was the one in partnership with Dev Tech on technological security for the US State Department, in which we participated as part of a consortium and did not have access to the end client.

Playing football by the beach in Senegal

Among the projects I was involved in from their early phases, the Red Cross project was remarkable, as it was linked to developing activities and strengthening the autonomy of certain communities to deal with disease-related challenges, an extremely interesting topic. There was a lot of flexibility and openness on the part of the client in defining the deliverables, which brought both challenges and opportunities for the team.

Another outstanding project was the UNICEF Uruguay evaluation on digital education. The topic was very interesting, and the evaluation team, as well as the UNICEF and Ceibal teams (implementation partners), were proactive, engaged and open to suggestions, which greatly facilitated the management of the project. It generated good results and the client was very satisfied in the end.

Finally, the COVID-19 project in Mozambique was also remarkable. It was a cash-transfer programme implemented by WFP and UNICEF that had a positive impact on the well-being of society through the income transferred. Evaluating such a project was extremely interesting, both for its theme (cash transfers) and for the moment in which it took place, in the middle of a pandemic.

  4. What do you consider to be the best part of working at Plan Eval?

In addition to the collaborative environment, the quality management system and the company's own organisation are, without a doubt, among its strengths. All documents are organised, and we have standards for information and project management. This makes our job a lot easier.

  5. Finally, what are your expectations of the future?

Having an office in Brazil is a great starting point for other projects in Latin America, and the fact that we are a small organisation based in Brussels that has already carried out large projects makes me optimistic that we will expand our work to more and more countries in Europe, Africa and Asia. With the support of our internal quality system and our experience, I am sure that we will continue to grow.

Creating Impactful Evaluation Reports: Strategies for Meaningful Communication

“An evaluation is only as good as the changes it helps promote.”

In evaluation, the value of empirical insights lies not in the information itself but in the narrative that frames it. A persuasive evaluation report serves as the vessel through which evidence is distilled into actionable recommendations.

In this guide, we will explore the key components of creating evaluation reports that leave a lasting impact.

  1. Establishing the Foundation: Objectives and Contextual Anchoring

Begin the process by delineating the overarching objectives that the evaluation report intends to achieve[1]. Precisely define the scope, objectives, and intended audience for the report. This groundwork not only imparts coherence but also serves as a compass, ensuring that the report remains focused and pertinent in its content (EvalCommunity, 2023).

A structured approach is paramount to ensuring that the narrative remains comprehensible. Start the report with a succinct executive summary that covers the main findings, conclusions, and recommendations. Follow with a brief section on approach and methodology, and then present your findings by evaluation question, followed by conclusions and recommendations (Canadian International Development Agency, 2002).

  2. Data Visualization and Analysis: The Art of Substantive Simplification

In the context of an evaluation report, the role of data visualization is paramount. Data, in its raw form, can often be dense and complex, challenging the reader’s ability to discern patterns and glean insights efficiently. This is where data visualization steps in as a powerful tool.

Incorporating graphical representations is instrumental in holding the reader’s attention. Visual representations, such as graphs, charts, and diagrams, distil intricate data sets into clear, intuitive visuals. These visuals not only enhance the report’s accessibility but also enable stakeholders to grasp key trends and correlations at a glance.
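For teams that build their charts in code, the brief sketch below illustrates the point with Python's matplotlib library and entirely hypothetical survey figures; it is a minimal example of turning one finding into a clearly labelled visual, not a prescribed template.

```python
# Minimal sketch with hypothetical data: baseline vs. endline values
# for three illustrative evaluation indicators.
import matplotlib.pyplot as plt
import numpy as np

indicators = ["School attendance", "Dietary diversity", "Service coverage"]
baseline = [62, 48, 55]  # hypothetical percentages at baseline
endline = [78, 61, 70]   # hypothetical percentages at endline

x = np.arange(len(indicators))
width = 0.35

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(x - width / 2, baseline, width, label="Baseline")
ax.bar(x + width / 2, endline, width, label="Endline")

# Clear labelling is what makes the visual readable at a glance.
ax.set_ylabel("Share of respondents (%)")
ax.set_title("Hypothetical change in key indicators")
ax.set_xticks(x)
ax.set_xticklabels(indicators)
ax.legend()

fig.tight_layout()
fig.savefig("key_indicators.png", dpi=200)
```

Whatever tool is used, the same principles apply: one message per chart, labelled axes and units, and a title that states the finding rather than just naming the variables.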

By harnessing the power of visualisation, evaluation reports transcend mere text. The nuances of data and the complexity of the analysis become more accessible, fostering a deeper connection with the insights presented (Caswell & Goodson, 2020).

  3. Directing Action: The Essence of Pragmatic Recommendations

The main aim of an evaluation report is to inspire action. The report should clearly lay out suggestions for improvement based on what the evaluation has uncovered. These recommendations need to be practical and feasible, allowing decision-makers to take clear steps forward.

Crafting effective recommendations involves connecting the dots between the findings and real-world solutions. Each recommendation should come directly from the findings and conclusions of the evaluation. This connection ensures that the suggestions are relevant and can genuinely address the identified areas needing improvement. Moreover, recommendations should consider the broader context in which the project or initiative operates and what is and is not within the stakeholders' control.

To make recommendations actionable, it’s crucial to break them down into clear steps. This means explaining how each suggestion can be put into practice. Details like who will be responsible for what, the timeline for implementation, and any resources needed should be included. Additionally, highlighting any potential challenges and offering strategies to overcome them adds a practical dimension to the recommendations. Prioritisation is also key. Not all recommendations are equal, and it’s important to figure out which ones will have the most significant impact and tackle them first. This strategic approach ensures that efforts are focused where they matter most and that resources are used wisely (Wingate, 2014).
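One simple way to keep these elements explicit, shown purely as an illustration with hypothetical fields and values rather than a required format, is to record each recommendation in a small structured form, for example:

```python
# Illustrative only: a lightweight structure for tracking actionable recommendations.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    text: str        # the recommendation itself
    finding: str     # the finding or conclusion it derives from
    owner: str       # who is responsible for implementation
    timeline: str    # when it should be implemented
    resources: str   # what is needed to implement it
    priority: int    # 1 = highest expected impact, tackle first
    risks: list[str] = field(default_factory=list)  # challenges and mitigation notes

recommendations = [
    Recommendation(
        text="Introduce quarterly data-quality reviews",
        finding="Monitoring data were incomplete in several districts",
        owner="M&E unit",
        timeline="Within six months",
        resources="One additional data officer",
        priority=1,
        risks=["Staff turnover; mitigate with documented procedures"],
    ),
]

# Prioritisation: address the highest-impact recommendations first.
for rec in sorted(recommendations, key=lambda r: r.priority):
    print(f"[P{rec.priority}] {rec.text} (owner: {rec.owner}, by: {rec.timeline})")
```

Whether this takes the form of code, a spreadsheet or a simple table in the report, the point is the same: no recommendation should leave the reader guessing who acts, by when, and with what resources.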

  4. Stakeholder Alignment: Engaging with Different Audience Needs

Consider the needs and expectations of your stakeholders. Tailor the report's content to address their concerns and questions. As highlighted in our short guide on fostering utilisation-focused evaluation, when stakeholders see their concerns being addressed, they are more likely to engage with the report and act on its recommendations. In a similar fashion, avoid jargon and technical terms that may alienate non-expert readers – clear and concise language is preferable for communicating complex concepts.

Lastly, before finalising the report, seek (rounds of) feedback from the key stakeholders and the evaluation management group, and record their suggestions in a comments matrix. Consider having your report peer-reviewed and/or reviewed by a panel of experts to ensure the content is accurate, coherent, and aligned with the objectives. Continuous refinement ensures the report is polished and effective (Canadian International Development Agency, 2002; EvalCommunity, 2023).

In conclusion, an impactful evaluation report is more than a compilation of data; it is a strategic communication tool. By setting clear objectives, presenting data visually, weaving a compelling narrative, and aligning with stakeholders’ needs, you can create reports that not only inform but also inspire action. Remember, the true mark of success lies in how well your report triggers positive change based on your evaluation insights.

[1] For a guide on utilisation focused evaluation, please visit: https://www.plan-eval.com/blog/?p=1172

Works Consulted

Canadian International Development Agency, 2002. How to perform evaluations – evaluation reports. [Online]
Available at: https://www.oecd.org/derec/canada/35138852.pdf
[Accessed 28 August 2023].

Caswell, L. & Goodson, B., 2020. Data Visualization for Evaluation Findings. [Online]
Available at: https://oese.ed.gov/files/2021/02/DataVisualization_508.pdf
[Accessed 28 August 2023].

EvalCommunity, 2023. How to Write Evaluation Reports: Purpose, Structure, Content, Challenges, Tips, and Examples. [Online]
Available at: https://www.evalcommunity.com/career-center/structure-of-the-evaluation-report
[Accessed 28 August 2023].

Wingate, L., 2014. Recommendations in Evaluation. [Online]
Available at: https://www.betterevaluation.org/tools-resources/recommendations-evaluation
[Accessed 28 August 2023].


Meet the team #2 – Natália Martins Valdiones

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

Our second guest is Natália Valdiones, Business Manager, a specialist in Strategic Business Management and Business Intelligence Management. She has more than 15 years of experience in the implementation of ERP systems, the coordination of administrative, financial, and Business Process Management (BPM) areas, the building of Quality Management Systems (QMS), and internal auditing against ISO 9001:2015.

In this brief interview, she shares insights into her experience at Plan Eval, highlighting her achievements and her expectations for the future.

  1. You once mentioned that studying business management was not a decision made by someone who didn’t know what they wanted to do, but by a person who knew exactly what they wanted. Tell me more about it.

Looking at it today, I really think that 17 years old is too early for someone to choose what they want to do for the rest of their lives. But I also believe that some people are born with a gift. In my case, I always knew exactly what I wanted to do, and I really love what I do.

It is true that the business management course is very generalist, and that a lot of people who don't know what to do end up in it; I myself came across several people like this during my studies. Once you graduate as a Business Manager, you become a jack of all trades and a master of none. You understand accounting, but you are not an accountant. You understand law, but you are not a lawyer. You understand technology, but you are not an IT professional. You have all the skills required to manage a business; however, I believe the most important thing for a Business Manager is to specialise, because with this generalist approach you end up getting a little “lost”, which ends up harming you in the job market.

Therefore, I chose to pursue a specialisation. I did an MBA in Strategic Business Management and then another specialisation, this time in Business Intelligence Management. And I am very proud to say that I am a Business Manager and that I do exactly what I love.

2. You’ve worked your whole life in retail. How did you end up in the evaluation world?

I worked in retail from the age of 14 until I was 20, when I left to do an internship in information technology (IT). To this day I have a “crush” on IT, so much so that I focused both of my dissertations on it: one on ERP system implementation and the other on the returns and advantages of implementing such systems[1]. The internship lasted a year, and, during that period, the company was in the middle of implementing a similar system. I had the opportunity to follow the whole process closely and found that I really liked BPM.

After that year, I was invited to become a managing partner in the family business and to take responsibility for automating its processes to meet federal requirements, which gave me the opportunity to put into practice everything I had learned in school, including the IT systems part.

The entire process of restructuring and implementing processes, as well as employee training, lasted about two years. After that, I was managing partner for another eight years. The only problem was that I didn’t like my job! We then decided to sell the family business.

After that, I chose to dedicate myself to motherhood for just over a year and when I decided to return to the job market, I wanted to go back to work with what I loved! I fell into the world of evaluation by accident, through a selection process for administrative manager at Plan Eval.

Plan Eval’s team in January 2023

It was a big change for me, because I left the retail sector for the services sector, but it was very positive: in addition to working with what I like, we have an environment without a fixed routine, where every day we are dealing with different and challenging projects in every corner of the world. When we see the results of the evaluation and of the project itself, it is very satisfying. In the end, I do what I love, in an environment that provides me with daily learning and challenges and also gives me a healthy balance between my personal and professional life.

3. One thing that is very visible in your trajectory, both academic and professional, is your passion for implementing processes from scratch. Is that why you started to work on QMS?

I've always loved working with BPM. And during my undergraduate studies, it was like that too. I remember an occasion when we were asked to build a business plan. I took the lead on the project, started talking to the people in the group and developing the idea. In the end, our business plan was the standout among all the groups. We won several accolades and recognitions for having done a very good job. I remember that I led the whole process, because I was always very organised, especially in these activities.

When I arrived at Plan Eval, obtaining the ISO 9001:2015 certification was already a goal. At the time, I noticed that I could contribute precisely with this experience in BPM, as there was a certain difficulty in scaling the work needed to map all the processes and fulfill all the necessary steps, because it is not as simple as it seems. As I have a natural affinity with processes, I started to help the team and, naturally, I became responsible for the entire certification process. We had the support of an external auditor, who was essential to bring the perspective of those who audit, and thus align all processes in our QMS. In the end, this whole structuring process lasted around seven months, and we received the certification. This happened before I completed a year working with Plan Eval. I remember that, at the time, we were the only evaluation company in Brazil with this certification.

Natália and Plan Eval’s ISO 9001:2015 certification

After that, I received training on the ISO 9001:2015 standard and also participated in an ISO 9001:2015 auditor training course. Today, I have the ISO 9001:2015 knowledge that I did not have at the time. Once the entire certification process was finalised, it was agreed that I would also be responsible for monitoring and maintaining Plan Eval’s QMS.

4. How does having the ISO 9001:2015 certification increase the quality of our evaluations and services? What sets Plan Eval apart by having this certification?

I believe that the main impact is on the structure of the company. Having the assurance that we can meet all the customer's requirements, a defined procedure and process by which things need to happen, and a guarantee of continuity makes all the difference. Before the QMS, Plan Eval did not have all this structure and documentation.

This brings quality, as nowadays project managers have greater control over the activities that are happening in each project. We also have access to more information, both from the client and from consultants and internally. Before, we did not carry out standardized surveys with all interested parties, and today we have this resource that provides us with inputs to promote the continuous improvement of our processes and our deliveries to our customers.

Nowadays, we value the fact that we can understand whether clients were satisfied and, if not, in which areas we can improve. We assess the consultants, the consultants assess us, and the client assesses the project management and the consultants, almost like a 360-degree assessment. In this way, we bring this information back into the company so that we can deal with these issues on a recurring and comprehensive basis. We also now have a new process in which we seek to understand, together with our client, the impact caused by our work. It has changed and improved things. Ultimately, I believe this keeps the PDCA (Plan-Do-Check-Act) cycle turning and gives us the opportunity to improve daily.

PDCA cycle

What I consider most important is having everything documented, knowing what is happening in real time in each project, understanding what everyone is doing and recording all this information. If it is necessary to make any changes in the team, e.g., bringing in a new project manager, they would have an exact record of what happened, as well as all relevant information about that project. Now we have much more quality in our final product, greater control, and a level of organisation we did not have before. In addition, project budget control has become more precise and meticulous, which allows for safer and smoother execution.

5. What is the best part of working at Plan Eval?

Without a doubt, it's being able to work with what I love. So, no matter what happens, I'm always very happy with my work. I also really like the daily learning, because we are always in contact with different situations and different realities and, whether we like it or not, we learn many things about the realities of other countries and cultures. Finally, the fact that we are a small company makes us very close, and we end up all working together and helping each other.

6. What are your expectations for Plan Eval’s future?

That the new structures in place will result in more and more visibility for Plan Eval, and that we will also be able to increase the quality of our services and the satisfaction of our customers, reaching more and more markets and taking on more and more projects!


[1] All these articles are available at https://www.webartigos.com/autores/nataliavaldiones

How can evaluators maximise the usability of evaluations? Practical suggestions

By: Yasmin Winther Almeida

Evaluation is a powerful tool that enables organisations to measure, assess, and improve their programmes, policies, initiatives, and processes. It can provide valuable insights, identify areas of strength and weakness, and guide decision-making for optimal outcomes.

However, the true value of evaluation lies not only in conducting it, but also in utilising its findings effectively. In this article, we will explore the significance of evaluation utilisation and discuss strategies to maximise its impact.

Understanding Evaluation Utilisation

Evaluation utilisation refers to the process of incorporating evaluation findings into decision-making and subsequent actions. According to Patton’s Utilisation-Focused Evaluation (UFE) approach, which emphasises the intended use and influence of evaluation findings, evaluations should be conducted with the primary purpose of promoting effective decision-making and utilisation of evaluation findings. Patton emphasises the importance of including diverse perspectives and engaging stakeholders in a collaborative and participatory process throughout the evaluation; this ensures that the evaluation is specifically tailored to address their unique information needs and concerns.

Patton’s 17-step Utilization-Focused Evaluation (U-FE) Checklist (2013)

This approach also requires that the evaluation team and the key stakeholders work closely to define the purpose, scope, and the intended use of the evaluation to ensure that the evaluation is relevant and aligned with stakeholders’ goals. Another important aspect of UFE is the emphasis on utilisation-focused reporting. The evaluation findings are communicated in a way that is accessible and meaningful to the intended users, providing them with actionable recommendations and insights that can inform decision-making and programme improvement.

Throughout the UFE process, Patton emphasises the importance of fostering a culture of learning and using evaluation as a tool for social change. This involves cultivating a respectful and collaborative relationship between evaluators and stakeholders, creating an environment where evaluation findings are seen as opportunities for growth and improvement.

How can evaluators maximise evaluation utilisation?

Engage Stakeholders

To engage stakeholders effectively, evaluators typically identify and involve a diverse range of individuals or groups who have a vested interest in the programme or initiative being evaluated. These stakeholders may include programme staff, donors, policymakers, community members, implementing partners and other relevant parties.

The engagement process begins by establishing clear communication channels and building relationships with stakeholders. It is essential to communicate the purpose and benefits of the evaluation, address any concerns or misconceptions, and establish trust and rapport. This helps create a supportive and collaborative environment where stakeholders feel valued and encouraged to contribute their insights and perspectives.

During the evaluation inception phase, evaluators actively seek input from stakeholders to identify their information needs, expectations, and desired outcomes. This collaborative approach ensures that the evaluation is focused on the issues and questions that matter most to stakeholders, increasing the relevance and usefulness of the findings.

Engaged stakeholders ensure that the evaluation is relevant

Engaging stakeholders also means involving them in data collection activities. This may include conducting interviews, focus groups, surveys, or observation sessions with stakeholders who have first-hand experience or knowledge related to the programme being evaluated. Involving stakeholders in data collection not only enhances the quality and richness of the data but also promotes their active engagement and investment in the evaluation process.

Throughout the evaluation, regular communication and feedback loops are maintained with stakeholders to provide updates on progress, share emerging findings, and seek their input and validation. This ongoing engagement allows stakeholders to contribute their expertise, provide contextual insights, and offer alternative perspectives, thereby enriching the evaluation process and enhancing the credibility of the findings.

Finally, engaging stakeholders also means involving them in the interpretation and sense-making of the evaluation findings. This collaborative process allows stakeholders to interpret the data in light of their experiences, knowledge, and priorities. By actively involving stakeholders in this process, evaluators facilitate their ownership of the findings and increase the likelihood that the evaluation results will be used for decision-making and programme improvement.

Tailor Communication

When presenting evaluation findings, it is important to consider the diverse backgrounds, knowledge levels, and roles of the intended audiences. This may include programme managers, policymakers, frontline staff, community members, funders, or other relevant parties. By understanding their perspectives and information needs, evaluators can customise the communication to resonate with each audience.

To tailor the communication effectively, evaluators should focus on presenting the key messages and insights derived from the evaluation. This involves distilling complex data and analysis into succinct and easily understandable points. By highlighting the most relevant and significant findings, stakeholders can quickly grasp the main takeaways without getting overwhelmed by excessive detail.

In addition to presenting the findings, UFE emphasises the importance of providing actionable recommendations. Evaluators should translate the evaluation results into practical suggestions and strategies that stakeholders can implement to improve programmes or make informed decisions. These recommendations should be specific, feasible, and supported by evidence from the evaluation. By offering clear guidance, evaluators empower stakeholders to take meaningful actions based on the evaluation findings.

Tailoring communication also involves selecting appropriate formats and channels for sharing the evaluation findings. This may include written reports, presentations, infographics, dashboards, or interactive workshops. The chosen formats should align with the preferences and communication styles of the target audiences. For instance, policymakers may prefer concise executive summaries, while programme staff might benefit from detailed reports with practical implementation guidelines.

Tailored communication ensures that each stakeholder receives the most relevant information in the most useful format.

Furthermore, it is crucial to consider the potential impacts of the evaluation findings on different stakeholders. Evaluators should explicitly communicate the relevance and implications of the findings for each audience, addressing their specific interests and concerns. By highlighting the potential benefits or consequences of acting upon the evaluation findings, stakeholders are more likely to recognize the value and urgency of utilising the evaluation results.

Throughout the communication process, evaluators should also encourage feedback and dialogue with stakeholders. This open and interactive approach allows stakeholders to ask questions, seek clarification, and engage in discussions around the evaluation findings. By promoting an ongoing conversation, evaluators can deepen stakeholders’ understanding, address any misconceptions, and build a shared understanding of the implications of the evaluation.

Visually engaging reports

Effective communication of evaluation findings is vital to ensure the utilisation and impact of evaluation reports. One key aspect of this communication is presenting the findings and recommendations in a visually engaging manner. This can be achieved in several ways, such as:

  1. Using infographics and data visualisations: Incorporate infographics, charts, graphs, and diagrams to illustrate key findings, trends, and relationships in the data. Ensure that the visualisations are clear, labelled properly, and easy to interpret.
  2. Employing dashboards (e.g., PowerBI): Consider creating interactive data dashboards that allow users to explore the evaluation findings dynamically. Dashboards can enable users to filter data, view different visualisations, and customise their analysis based on their specific interests. Interactive dashboards offer a more engaging and exploratory experience for users.
  3. Emphasising key messages and quotes: Highlight key findings, recommendations, or quotes by using visually distinct formatting, such as bold text, larger fonts, or coloured text boxes. This draws attention to important information and makes it easier for readers to grasp the main points at a glance.
  4. Using a visually consistent layout: Pay attention to the overall design and layout of the report to create a cohesive and visually consistent look. Use consistent fonts, font sizes, and formatting throughout the report. Ensure that headings, subheadings, and sections are clearly defined. Leave enough white space to improve readability and avoid clutter.
Visually-engaging reports, courtesy of cross-content (https://www.crosscontent.com.br/)

By incorporating these visual elements, evaluators can support the understanding, retention, and utilisation of evaluation findings, ultimately increasing the impact of evaluations and informing decision-making processes.
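Picking up on the dashboard suggestion in the list above, the sketch below shows one lightweight way to produce an interactive chart without a dedicated dashboard tool, using Python's plotly library; the data and file name are hypothetical and the example is illustrative rather than a recommended toolset.

```python
# Hypothetical example: an interactive grouped bar chart exported as a
# standalone HTML file that stakeholders can open in any browser.
import pandas as pd
import plotly.express as px

# Entirely hypothetical findings in "long" format, ready for plotting.
data = pd.DataFrame({
    "indicator": ["School attendance", "Dietary diversity", "Service coverage"] * 2,
    "round": ["Baseline"] * 3 + ["Endline"] * 3,
    "value": [62, 48, 55, 78, 61, 70],
})

fig = px.bar(
    data,
    x="indicator",
    y="value",
    color="round",
    barmode="group",
    labels={"value": "Share of respondents (%)", "indicator": ""},
    title="Hypothetical change in key indicators",
)

# Readers can hover for exact values and toggle series via the legend.
fig.write_html("findings.html")
```

A file like this can be shared alongside the report for readers who want to explore the data, while static versions remain in the report body.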

Final considerations

Evaluations are valuable tools for programme and process improvement. However, their true value is realised when the results are effectively utilised. By implementing the strategies discussed in this blog post, you can maximise the usefulness of your evaluations and drive the success of your initiatives.

Meet the team #1 – Laís Bertholino Faleiros

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

Our first guest is Laís Bertholino Faleiros, our Project Manager. She has a master's degree in Management of Public Organizations and Systems from the Federal University of São Carlos and a degree in Public Administration from São Paulo State University. She is a specialist in Cost-Benefit Analysis and Social Return on Investment (SROI).

Laís joined Plan Eval in 2017, initially as an administrative-financial manager. Later, she transitioned into the role of researcher and currently serves as a project manager. In this brief interview, she shares insights into her experience at Plan, highlighting key moments and her expectations for the future.

  1. How did you start working at Plan Eval?

“I started my career at Plan in 2017 as an administrative-financial manager. I held this role for approximately a year and a half, handling administrative responsibilities. Then, an opportunity in the research area arose and I became a researcher. During this period, I worked on projects such as C&A and Caixa. These were the main experiences as a researcher.

Later, in 2019, I left Plan and acted as a consultant on the Coca-Cola Institute project, where I conducted a qualitative assessment of their Youth Collective Programme. However, in 2021, I returned to Plan Eval as a project manager. Along this path, then, I went through different roles, starting as an administrative-financial manager, then becoming a researcher and, finally, assuming the position of project manager.

I noticed a significant transformation at Plan over time, to the point where I felt, when I returned in 2021, that I was joining a completely different company. This perception is due, in part, to the changes that occurred during the pandemic period. In addition, I was able to observe a significant increase in the company’s operations in Belgium.”

2. What is it like to be a Project Manager at Plan Eval? What do you like the most?

“The most important point, I believe, is that it [my work] motivates me. Working with public policy and being involved with many relevant organisations around the world that seek to create change in society is something that I am passionate about and that drives me. The work of the United Nations and governments, together with public policies, provides the opportunity to act in projects and programmes that bring systemic changes to society. This possibility of comprehensive learning on different topics, without the need to become an expert in just one of them, is one of the fascinating characteristics of the evaluation area, which I chose to dedicate myself to.

Another important factor is being a project facilitator. In addition to management, I believe that my role involves integrating different interests, which come from donors, funders, executors and evaluators. Dealing with reconciling these interests is an exciting challenge, even if it is not always easy. People management is also part of this universe, adding even more complexity to everything we do.

However, being a project manager goes beyond that, also involving understanding the needs of people, both in the evaluated organisations and the evaluators, and I feel rewarded when I successfully complete this mission. Being able to lead and manage people, understanding their demands, is rewarding and gives my work an even greater purpose.”

3. What was your first project at Plan Eval?

“My first project at Plan was carried out in partnership with Instituto C&A, involving data collection for the evaluation of cotton farming projects supported by Instituto C&A and Porticus in Northeast Brazil. This project covered three different states: Piauí, Ceará and Pernambuco. As the research coordinator, I had the opportunity to travel to train the researchers. It was an extremely enriching experience, as in addition to being responsible for coordinating the research, I also actively participated in training the team and conducting the evaluation.”

Laís facilitating the training of enumerators in Ceará

4. What project do you consider to be the most meaningful to you?

“The PPCAAM project was an incredible experience for me, as we went through several evaluation stages. We had the opportunity to carry out comprehensive qualitative research in addition to quantitative research. We also worked on developing indicators and reconstructing the theory of change. We carried out knowledge-dissemination activities and training, and even held a seminar. Now, we are about to publish a book about the project. I believe this project is remarkable, as it covered a wide range of activities that we master. It is gratifying to be able to accomplish so many things in a single project.

Another standout project for me is “Children on the Move”. It is a new project, which is just starting and involves multiple countries in different regions. This is a very relevant and exciting project, and I look forward to providing meaningful evidence to our clients.”

Laís and the PPCAAM team in Brasília

5. What do you consider to be the best part of working at Plan Eval?

“At Plan, one of the things I appreciate most is the opportunity to engage in meaningful conversations with people, including consultants, co-workers and clients. In addition, being a global company gives us great freedom to work with different people and clients. I also believe that building relationships and networks is a fundamental aspect of our work. That ability to build relationships is something I really value from my experience at Plan.”

6. Finally, what are your expectations of the future?

“I hope that Plan Eval can increasingly spread its operations around the world and gain experience in increasingly diverse contexts. I believe this will enrich our evaluative capacity, not only in different contexts, but also when working with more diverse organisations. In this way, we will be able to build more educational assessments, with greater quality, equity and diversity.”

Incorporating the gender perspective into evaluations of public policies and social and humanitarian programmes

Over the past three years, working as a gender studies specialist, I have been invited by Plan Eval to take part in several studies and evaluations that considered the gender perspective in the analysis and assessment of social and humanitarian interventions. I have also taken part in many other evaluations that did not incorporate this perspective, even though it became evident during the analysis that it could have been incorporated, at least partially. The fact is that many social and/or humanitarian interventions may make no explicit reference to gender, but that does not mean they do not have a gender-differentiated impact on certain groups or locations, mainly because of the structural inequality between men and women that still persists in most societies. Allowing for regional and/or cultural specificities, it is women and girls who continue to suffer the greatest impacts in situations of vulnerability, such as severe economic crisis, violence and/or armed conflict. In this sense, according to Medina (2021):

“incorporating the gender perspective means systematically considering the differences between women and men in the various spheres of policies or programmes, with the intention of identifying these inequalities and the factors that generate them, making them visible, and designing and applying strategies to reduce them and thus move towards their eradication. Likewise, it means approaching the study of social phenomena without assuming the universality of male experiences, and also questioning the sex-gender system and its implications”.[1]

© UNHCR/Patrick Brown

For the reflection I propose on this topic, I share some of the evaluations I carried out last year, in 2022, which could have emphasised the gender issue from their conception but did not do so explicitly. The two evaluation projects I highlight were commissioned by the International Committee of the Red Cross (ICRC) in Brazil.

The first, conducted by a team assembled by Plan Eval, was an outcome evaluation of the Safer Access Programme (Programa Acesso Mais Seguro, AMS), which aims to reduce and mitigate the consequences of armed violence on essential public services such as education, health and social assistance. The second, carried out independently, concerned the mapping of the communication needs of migrants and refugees, especially the Venezuelan population, assisted under the federal government's Operação Acolhida in the state of Roraima. The two evaluations were completely different from one another in scope, structure and target audiences. In each one, however, there was evidence that the interventions could have adopted a gender perspective, since it was women who were being affected most negatively and unequally by the problems the organisation sought to mitigate.

© Photo: UNFPA Brasil/Pedro Sibahi

At the time of the evaluation, the AMS Programme worked in direct partnership with the Education, Health and Social Assistance departments[2] of four municipalities: Rio de Janeiro, Duque de Caxias, Fortaleza and Porto Alegre. The Programme, in turn, was carried out by professionals on the front line of these three services, such as school principals, teachers, doctors, dentists, nurses, community health agents and social workers. Research shows that women make up the majority of the public service workforce in general, especially in roles that deal directly with service users, yet they are a minority in senior leadership and decision-making positions[3]. In addition to this "vertical" gender inequality, there is also "horizontal" inequality, when the gender distribution differs across specific careers. These are the so-called "care positions", such as those in social assistance, education and health, which are historically the worst paid, representing a wage gap between the genders[4].

This intervention, which aims, among other objectives, to keep essential public services operating safely in highly vulnerable areas, came to affect the routine of hundreds of professionals, especially those who identify as women, and also has the potential to affect the lives of many women beneficiaries/users who depend, for example, almost exclusively on schools being open in order to be able to work. This statement is based on the accounts given by mothers and other caregivers during the focus groups held with the communities, in which 90% of participants were women and presented themselves as heads of household or as the main caregivers of their children[5].

In the case of the assessment of the communication needs of the migrant and refugee population in Roraima, there was from the outset a concern to map the needs of specific subgroups, such as indigenous people, people with disabilities, LGBTQIAP+ people, older people, and young women and/or single mothers. Even though the humanitarian organisations working in partnership with the federal government in Operação Acolhida recognise that most migrants and refugees entering through this border are men, over the course of the assessment it became evident that it was the women, within each of the specific groups mentioned, who had the greatest difficulty communicating and obtaining information about humanitarian and social assistance in the country, remaining dependent on partners, other family members or even exclusively on external agents (as in the case of single women with small children).

In both evaluations, what could be presented as a result was a contextualisation of gender inequalities in each setting as part of the diagnosis, together with some specific recommendations and suggestions for making the gender perspective more present and explicit in the interventions proposed by the organisation. Nevertheless, there were other steps that could have been taken in advance, which I strongly suggest to the evaluators reading this whenever they come across evaluations that present irrefutable evidence of gender inequality in the context of the intervention in question.

Practical guides on evaluation that incorporates the gender perspective, such as the one published by the Catalan Institute of Public Policy Evaluation[6] or those produced by UN Women[7], can be of great value in supporting this task. Below, I briefly point out some of the guidance from these guides to highlight what can be followed so that an evaluation takes the gender perspective into account, at a minimum.

Initial questions

It is useful to raise some initial questions, whether for an evaluability assessment of a programme or policy, or for the preliminary analysis carried out in the first part of an evaluation in any field of study. Such questions can help to understand and classify the gender focus of an intervention based on the available information. They can be along these lines:

  • Does the programme collect baseline data so that changes in values can be analysed? Are these data disaggregated by sex?
  • Do they include information on other social markers, such as ethnicity/race, economic resources, age, or level of education?
  • Is information available on how women and men respond to and value the intervention?
  • Does the programme have specific gender inequality indicators, and are they monitored?
  • During implementation, were the profiles of the programme's beneficiaries monitored?

Evaluation questions

Gender-focused evaluation questions are equally important and should be part of the methodological planning phase of the evaluation. Medina (2021) considers it useful to select concrete evaluation questions designed to understand gender differences, going beyond simply considering the differences between women and men, for example:

• Are there gender norms, practices or stereotypes related to the factors the intervention is trying to change? What are they? How do they affect women and men?

• Does the intervention aim to, or succeed in, changing these norms, practices or stereotypes? In what way?

• Are there different profiles of women and men among the programme's users? Does the effect of the intervention differ across these profiles?

• Is any participant profile (especially women and LGBTQIAP+ people) under-represented among the beneficiary population? Which one? Why?

• Is any participant profile (especially women and LGBTQIAP+ people) under-represented in any of the policy's or programme's activities? Which one? Why?

• Depending on their gender, do the policy's beneficiaries experience their participation in the programme differently? Why?

Implementation and analysis

Throughout the evaluation process, including data collection and analysis, the gender perspective can be incorporated in very practical ways, through actions such as:

• Seeking samples that are representative of both the male and female genders;

• Incorporating the voices of women and of feminist or identity-affirmation organisations whenever possible;

• Disaggregating data and carrying out differential analyses, including other social markers whenever possible;

• Analysing the policy's or programme's implications in terms of gender, taking into account intersectionality with other markers whenever possible;

• Defining gender-specific recommendations for the intervention;

• Ensuring inclusive and neutral language when writing the reports.

Finally, the evaluation of social and humanitarian policies and programmes, as a scientific exercise in gathering evidence, carries with it a major learning component that can (and should!) generate new knowledge and practices, and should therefore also be seen as an important support for promoting broader and deeper change in organisations and institutions, as well as in society at large. Incorporating the gender perspective into evaluations, along with other social markers, is part of this learning process for everyone involved. It requires patience, it is true, but it cannot be postponed any longer.


[1] MEDINA, Júlia de Quintana. Guía práctica 18: La perspectiva de género en la evaluación de políticas públicas. Instituto Catalán de Evaluación de Políticas Públicas (Ivàlua), 2021, p. 21 (free translation).

[2] Only in Fortaleza did the ICRC have a partnership with all three departments at the time of the evaluation. In Porto Alegre and Duque de Caxias the partnerships were with the Education and Health departments, and in Rio de Janeiro with Education only.

[3] According to the 2022 Continuous National Household Sample Survey (PNAD Contínua) of the Brazilian Institute of Geography and Statistics (IBGE), women account for 57% of public sector professionals, while men account for 43%. Among directors and managers, however, 39% are women and 61% are men.

[4] HIRATA, Helena; KERGOAT, Danièle. Novas configurações da divisão sexual do trabalho. Cadernos de Pesquisa, 37, 595-609, 2007.

[5] According to 2022 data from Dieese, of the 12.7 million single-parent families with children in Brazil, 87% are headed by women and 13% by men. In other family arrangements the difference is not as large: 51% of families are headed by women. Of the 11 million single mothers who are heads of household, 62% are Black. Within this subgroup, 25% work in domestic services; 17% work in education, human health and social services; and 15% in commerce. Among non-Black women the proportion is almost the reverse: 22% work in education, human health and social services, 17% in commerce and 16% in domestic services (8 March Special Bulletin – Dieese, based on IBGE data – PNAD Contínua, 2022).

[6] MEDINA (2021).

[7] The UN Women guides can be found on the agency's official website: Guía de evaluación de programas y proyectos con perspectiva de género, derechos humanos e interculturalidad (2014); Manual de evaluación de ONU Mujeres: Cómo gestionar evaluaciones con enfoque de género (2015).

Use and influence of evaluations: UNICEF Brazil Country Programme

Utilisation-focused evaluation (UFE) is an approach that emphasises the importance of engaging with stakeholders throughout the evaluation process to ensure that findings are relevant, useful, and actionable. It focuses on providing feedback and insights that lead to more effective decision-making and ultimately improving the success of the project, programme, or policy being evaluated. This approach is aligned with our mission to analyse the performance of institutions to help them amplify, improve, and sustain societal benefits.

With that approach in mind, Plan Eval, in partnership with Action Against Hunger, conducted the evaluation of UNICEF Brazil's Country Programme 2017–2021. The evaluation's objectives were to provide accountability for the work performed in the period under analysis and to serve as a learning tool informing the upcoming programme cycle. To achieve these objectives, the evaluators applied a utilisation-focused and participatory approach, engaging with stakeholders from the inception to the reporting stages and going over each evaluation question to make sure that it served a practical purpose.

The team held weekly check-in meetings during the research phase to report on the progress of the information collected, and likewise during the analytic phase to discuss preliminary findings. At the reporting stage, the conclusions were presented in a series of online discussions involving UNICEF Brazil's programme officers, whose criticism was essential for the evaluation team to home in on the relevance of the findings. The final round of validation consisted of participatory SWOT seminars, where everyone involved in the management response to the evaluation had the opportunity to rank recommendations in terms of their priority for implementation and likely impact.

UNICEF presence in Brazil

The Country Programme Evaluation (CPE) is a mandatory assessment conducted by UNICEF Country Offices every two programme cycles (i.e., every ten years) and is among the most complex, as it looks at all programmatic areas and operations. Evaluating a broad range of activities and outcomes across different sectors can be challenging to manage as it involves multiple stakeholders like government partners, civil society organisations, and other UN agencies. Managing the input and feedback from these different groups required integrating data into a dynamic evidence matrix organised by evaluation question and intended purpose.

© UNICEF/BRZ/Inaê Brandão

Boris Diechtiareff, Monitoring and Evaluation Specialist at the Brazil Country Office (BCO) highlighted the usability and influence of the evaluation findings and recommendations. According to him, “the findings not only focused on the mandatory aspects but also saw the necessity and the benefits of doing the exercise to help design the new country program”. The evaluation was “shared and used widely by different teams and stakeholders, including the Country Management Team, the UNICEF Latin America and Caribbean Regional Office, the Brazilian Cooperation Agency and the Brazilian Government”.

The evaluation report's findings and recommendations, in addition to informing the Country Programme Document (CPD), also served as a learning tool to improve the response to the Venezuelan migrant emergency in Brazil.

Results from the Municipal Diagnostics of the Situation of Children and Adolescents

Plan Eval brings an evaluation perspective to all the work it carries out. This perspective consists of first understanding what the policy priorities are and then seeking data on social reality, either to support and guide those priorities or to reformulate them in light of the reality observed. The company also applies this dialogical approach to projects that are initially considered "diagnostics" or "surveys" but that in practice take the form of evaluations, as is the case of the Municipal Diagnostics of the Situation of Children and Adolescents (Diagnósticos Municipais da Situação da Infância e da Adolescência, DMSIA).

These diagnostics stem from a national directive of the National Council for the Rights of Children and Adolescents (CONANDA) requiring municipalities to produce a single document with information on the situation of children and adolescents in their territory, on the basis of which public authorities should develop their actions. To understand how a Diagnostic is built, we spoke with Liora Mandelbaum Gandelman, a Plan Eval consultant who leads the projects in this area. Liora is a social scientist and has been working with DMSIAs since 2016.

Liora Mandelbaum Gandelman

According to Liora, the initial idea behind the DMSIAs was that they would be a starting point for identifying the "challenges and strengths" of children and adolescents in the municipality, and that a Municipal Plan for Children and Adolescents would then be developed from them, qualifying the municipality to apply for additional resources to implement policies in the area. At the national level, a study was carried out that served as the basis for what was expected of the municipal diagnostics. That national diagnostic responded to the Statute of the Child and Adolescent (Estatuto da Criança e do Adolescente, ECA) and to the United Nations Millennium Development Goals.

In practice, there are very diverse motivations for carrying out diagnostics. Beyond qualifying to apply for federal and state resources, there has also been a spontaneous desire on the part of municipal governments to make spending more effective and to work with more objective criteria for transferring funds to partner organisations. Another common motivation for carrying out DMSIAs is the need to respond to requests from oversight bodies such as the Public Prosecutor's Office to correct situations of rights violations, such as child labour and precarious shelters.

Our way of working

With this diversity of purposes in mind, Plan Eval developed an approach that guides the production of information by the criterion of usefulness. According to the consultant, this approach begins with initial conversations with municipal department heads and directors of bodies related to children and adolescents, in order to understand their practical needs and thereby direct the research towards concrete problems. The team also interviews technical staff working both in the departments and in front-line services, including those run by social organisations, to understand how the front line of services can be improved[1]. This first-hand information is then contrasted with secondary sources such as administrative and population data, as well as discussions with families.

“We spend a good amount of time in a municipality, conduct all these interviews and also collect the data the municipality itself has on children and adolescents. For example, data from the Social Assistance Service Centre, from the Health Department, the Education Department and so on, and in parallel we hold these interviews with technical managers. With this, we are able to identify the municipality's strengths and challenges in various areas, for example health, education, culture, access to leisure, sport and housing, always keeping in mind the intersection of these policies with the needs of children and young people.”

It is important to stress, says Liora, that Plan Eval always works in a participatory way with the Municipal Councils for the Rights of Children and Adolescents (CMDCA), jointly building the research instruments and sharing findings throughout the process. The Council is a deliberative body, responsible for approving the plan and for overseeing municipal action in this area. What sets Plan Eval apart, according to Liora, is its effort to involve the Council at every stage of the DMSIA (and not only in the final approval) in order to guarantee the relevance and legitimacy of the diagnostic.

This way of working has borne much fruit in practice. The Municipality of Jundiaí (SP), for example, asked Plan Eval to help build the Plano Municipal Decenal dos Direitos Humanos da Criança e Adolescente de Jundiaí (the municipality's ten-year plan for the human rights of children and adolescents) following the diagnostic. The plan was also developed in a participatory manner, with workshops and the setting of short- and medium-term targets for all policies related to children and adolescents.

An exemplary experience

Liora recounts that Jundiaí, a populous municipality that is very well structured in terms of public facilities, brought new challenges to the preparation of the DMSIA. One example is the facilities where socio-educational measures are served. The municipality hosts a semi-liberty unit of the Fundação Centro de Atendimento Socioeducativo ao Adolescente (Fundação CASA), which serves not only Jundiaí but also the surrounding cities. The Plan Eval team spent three days inside the Fundação in order to design relevant quality and service indicators. Another challenge was the sheer number of facilities: the municipality has six Social Assistance Reference Centres (CRAS) and two Specialized Social Assistance Reference Centres (CREAS). Several rounds of discussion groups were needed to hear practitioners from all of these institutions.

According to the consultant, Jundiaí is a municipality that strives for excellence in policies for children and adolescents, meeting all the requirements established by law. The Municipal Council also gave strong support to the Diagnostic, acting as an effective go-between with the other municipal secretariats. She concludes by saying that the municipal administration had a strong interest in seeing the Diagnostic carried out. The presence of these elements (the pursuit of excellence, the Council's involvement, broad listening) favoured a diagnostic that was both accurate and immediately applicable, as indeed proved to be the case.

The results

Jundiaí is today a reference municipality for early childhood. The conclusions of the Diagnostic helped shape more effective public policies and investments in leisure and cultural facilities, as well as the creation of a website ("Cidade das Crianças") to monitor the development of children and young people.

The Diagnostic also supported the "creation of goals and objectives to guide the application of public resources so as to effectively guarantee the rights of children and adolescents"[2]. One example is the construction of one of the largest public parks in the State of São Paulo, the Mundo das Crianças[3]: the park is free to enter and is designed to integrate play, learning and contact with nature. The park is also inclusive, guaranteeing universal access to all of its structures so that children of all ability levels can enjoy them.

To learn more, visit https://cidadedascriancas.jundiai.sp.gov.br/


[1] For more information, see this post on the Plan Eval blog: https://www.plan-eval.com/blog/?p=962

[2] https://cmdca.jundiai.sp.gov.br/2017/12/hhh/

[3] https://jundiai.sp.gov.br/noticias/2020/12/14/com-conceito-inedito-mundo-das-criancas-e-apresentado-no-aniversario-da-cidade/

Impact evaluation: conditions and challenges in times of pandemic

I once read a LinkedIn post by an evaluation specialist arguing that impact measurement had become a veritable "fetish" among funders and managers of social programs and public policies. While I fully agreed with the arguments presented, given my own professional experience as an evaluator, I kept trying to understand the source of this obsession with this type of evaluation. I can understand a certain anxiety on the part of funders to justify their investments, just as I understand the pressure on managers to present the results achieved; as an evaluator, however, what I perceive is a lack of knowledge about this type of evaluation, since not every program or policy justifies an impact estimation. Here it falls to us evaluators to share, whenever possible, the conditions that allow an impact evaluation to be conducted successfully. Writing this text was my way of contributing to that task, while also offering, very briefly, some reflections from my experience over the past two years.

Broadly speaking, impact evaluations, by providing credible evidence on performance and on the achievement of intended results, are central to building knowledge about the effectiveness of social and development programs, clarifying what does and does not work in promoting the well-being of a population or community. In short:

"An impact evaluation assesses the changes in the well-being of individuals that can be attributed to a particular project, program or policy. This focus on the attribution of results is the hallmark of impact evaluations. Accordingly, the central challenge in carrying out impact evaluations is to identify the causal relationship between the project, program or policy and the outcomes of interest"[1].

The distinctive feature of impact measurement is precisely this focus on causality and the attribution of results; in this sense, the key question an impact evaluation asks is: "what is the impact (or causal effect) of a program on an outcome of interest?" That is, an impact evaluation is concerned with the effect that a program generates solely by virtue of its existence.

Presentation at the VII Encontro da Rede do Acesso Mais Seguro 2022, in Brasília (DF).

To estimate the causal effect or impact of a program on outcomes, any chosen method must estimate the so-called counterfactual, that is, what the outcome would have been for program participants had they not taken part in the program. In practice, impact measurement requires the evaluator to compare a treatment group that received the program with a group that did not, in order to estimate the program's effectiveness. Ideally, these two groups are identified at the planning stage of the program or policy; they can still be identified at a later stage, but in that case the estimates produced are more likely to be less reliable.
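
To make the counterfactual logic concrete, here is a minimal sketch in Python (the outcome values are invented and the example is not taken from any real project): the simplest possible impact estimate is the difference in mean outcomes between a treatment group and a comparison group standing in for the counterfactual.

```python
# Minimal, illustrative sketch: estimating a program's impact as the
# difference in mean outcomes between a treatment group and a comparison
# group that stands in for the counterfactual. The data are hypothetical.
from statistics import mean

treated = [62.0, 58.5, 71.0, 66.5, 60.0]     # outcome of interest, participants
comparison = [55.0, 57.5, 59.0, 52.5, 56.0]  # outcome of interest, non-participants

impact_estimate = mean(treated) - mean(comparison)
print(f"Estimated impact: {impact_estimate:.1f}")

# This simple difference is only credible if the comparison group is a valid
# counterfactual (e.g. constructed through randomization at the planning
# stage); otherwise it mixes the program effect with pre-existing differences.
```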

At the planning stage of a program or policy, two things are expected to be produced: i) baseline data establishing pre-program measures of the outcomes of interest; and ii) a clear theory of change setting out the intended results. This makes it possible to obtain a valid estimate of the counterfactual scenario and thus tends to produce reliable measurements. It also prevents the results of the measurement from depending on contexts and/or external factors that may affect programs over the course of their implementation.

Over the past two years, many social and development programs have had to face the consequences of the COVID-19 pandemic, which to some extent affected the results they initially intended to achieve. These were not trivial disruptions: the pandemic, combined with other problems already familiar to managers, such as lack of resources, staff and information, created real barriers to impact evaluation, at least for development programs that had not created the necessary conditions for it during planning. Programs that spanned the pandemic period even had their implementation seriously affected and/or interrupted. This could be observed in some of the evaluation projects carried out by Plan Eval in 2021 and 2022.

Presentation at the VII Encontro da Rede do Acesso Mais Seguro 2022, in Brasília (DF).

In one of these projects, it became clear to the evaluation team that, rather than an impact evaluation, it would be more pertinent to evaluate what had been implemented up to that point and to describe the processes, conditions, organizational relationships and stakeholder perspectives, as a way of showing managers and partners what stage of development the program had reached.

We also worked with contribution analysis as an alternative. Unlike impact measurement, which is concerned with attribution, in contribution analysis causality is established by building a logical argument that takes into account how the different parties involved played their part in bringing about the observed result. In this approach, alternative explanations that, however plausible, are not supported by the evidence are discarded. The evidence, in turn, comes from program records, monitoring system data, testimonies, interviews and discussion groups, which are compared to generate the most plausible contribution hypothesis.
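
As a rough illustration of this screening logic (a sketch only, with explanations and evidence sources invented for the example), one might picture the exercise like this:

```python
# Illustrative sketch: screening candidate explanations for an observed result
# against the evidence collected (documents, monitoring data, interviews).
# The explanations and evidence sources below are invented.
candidate_explanations = {
    "the program's training improved frontline practice": [
        "monitoring data", "staff interviews", "field observation",
    ],
    "a parallel initiative by another agency caused the change": [],  # plausible, but no supporting evidence found
    "an unrelated policy change drove the result": ["one key-informant interview"],
}

# Keep only explanations backed by at least one evidence source; the retained
# explanations are then weighed to build the most plausible contribution story.
supported = {e: ev for e, ev in candidate_explanations.items() if ev}

for explanation, evidence in supported.items():
    print(f"{explanation} <- supported by: {', '.join(evidence)}")
```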

Moreover, listening to stakeholders under the conditions imposed by the pandemic also served as a process of reflection on the situation of the programs and often took on a therapeutic character. This is particularly striking considering the new realities the pandemic imposed on the mental health of the population as a whole, an unavoidable topic in that context of social isolation and collective grief. These processes both influenced the expected results and became part of the programs themselves, as mitigation measures were incorporated to ease the psychosocial difficulties people were facing.

Presentation at the VII Encontro da Rede do Acesso Mais Seguro 2022, in Brasília (DF).

It is understandable that, among managers, quantitative evidence of impact is easy to communicate, given the synthetic power of a single number that captures the entire difference attributed to the program. Many forget, however, that an impact evaluation also draws on qualitative evidence, and that this evidence may better explain how a program actually made a difference in people's lives.

Thus, where impact measurement could not be carried out as some organizations expected, the past few years have proved quite fruitful in demonstrating the relevance of alternative evaluation strategies, something already visible in the growing demand for qualitative approaches in development cooperation after a period in which the experimental approach was in the ascendant.


[1] Gertler, Paul J.; Martinez, Sebastian; Premand, Patrick; Rawlings, Laura B.; Vermeersch, Christel M. J. Avaliação de Impacto na Prática [Impact Evaluation in Practice]. World Bank, Washington, D.C., 2015.

Lessons learned from using the Qualitative Impact Assessment Protocol (QuIP) to assess the contribution of a social protection program in Mozambique

For the last couple of months, Plan Eval has been working on the evaluation of a social protection program using the QuIP methodology. In this blogpost, Pauline Mauclet, Evaluator at Plan Eval, explains what this methodology is all about and reflects on some of the challenges and lessons learned from this evaluation.

The Qualitative Impact Assessment Protocol, commonly referred to as QuIP, is a qualitative evaluation method used to assess the contribution of an intervention without the use of a counterfactual. In other words, it belongs to a wider family of approaches for assessing the impact of an intervention that offer an alternative to quantitative impact assessments, which tend to be quite time-consuming and costly.

The method was developed by Bath SDR, a non-profit organization founded by a small team of researchers from the Centre for Development Studies (CDS) at the University of Bath.

In practice, the method assesses the contribution of an intervention by relying on the perceptions of beneficiaries and stakeholders. The method therefore consists in asking beneficiaries about the changes, both positive and negative, that they have observed in their lives over a certain period of time, and then inquiring about the factors that, in their opinion, caused those changes.

In the following paragraphs, I will discuss some of the key features of the QuIP methodology, which help bring robustness and credibility to the research findings. The interesting thing is that most of these features can easily be replicated with other methodologies.

A common issue when asking beneficiaries about a certain benefit they received is that their responses might be biased, meaning that they might not be speaking the truth. Some respondents might for example be inclined to speak very positively about an intervention just to please the interviewer or because they are afraid to lose the benefit if they say anything negative about it. This type of bias is referred to as a response bias. In order to avoid this issue, the QuIP method uses a technique called (Double) Blindfolding. Blindfolding consists in asking the respondent questions without directly mentioning the program or intervention that is being evaluated. With Double Blindfolding, both the respondent and the interviewer are unaware of the intervention that is being evaluated.

In practice, the interview therefore starts with general questions about the changes observed in one’s environment over a certain period of time and then continues with probing questions about the factors that might have caused these changes. The idea is that the respondent would then mention the intervention by him- or herself, without any pressure or expectations.

But what if the respondent doesn’t mention the intervention? In that case, it might mean that the intervention wasn’t that noteworthy or impactful for the respondent, which is an interesting result in itself.

The key advantage of the QuIP method is that by asking general questions which are not focused on the intervention, we open up the possibility for respondents to surprise us. For example, respondents might mention a change which was not anticipated in the intervention’s theory of change. They might also explain how the intervention impacted them, but not in the way that was originally expected. Respondents could also mention other interventions or external factors that brought significant changes in their lives. In other words, the QuIP methodology puts the intervention’s Theory of Change to the test and can be used to refine it.

Now, asking beneficiaries about their perceptions sounds nice, but which beneficiaries should we interview? It is impossible to interview everyone, so how do we make sure that our results are representative and not just a reflection of the opinions of a small portion of the population?

This is a common issue with qualitative research. Quantitative (quasi-)experiments work around this problem by collecting data from a representative, randomly selected sample of the target population. However, while quantitative studies are appropriate for collecting “factual” data, they may not be ideal for asking respondents about their experiences and opinions. In those cases, qualitative studies are much more appropriate. So, then, how do we select cases in a way that supports robust and credible generalisation of the results?

In order to rebuff criticisms of “cherry picking”, the QuIP method favours a transparent and reasoned approach to case selection. Depending on whether a list of beneficiaries exists, whether a theory of change has already been defined, and whether outcome data exist and can be used for case selection, different case selection approaches can be adopted, as shown in the diagram below (source: Bath SDR).

Case Selection Strategies
(Source: Bath SDR)

Finally, a third feature of the QuIP methodology (though not exclusive to it) is the use of codification to bring transparency and credibility to the analysis process. What is specific to the QuIP methodology is that the codification focuses exclusively on identifying Influence factors and Change factors.

Influence and Change factors
(Source: Bath SDR)

By identifying the different influence factors and change factors, we aim to build causal claims. Note that one change factor can also lead to another change, as shown in the diagram below.

Building Causal Claims
(Source: Bath SDR)

The objective of the codification process is to find stories of change. Through the use of codification, we can present those stories of change visually, while also facilitating internal and external peer review and audit.
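
To make the coding step more concrete, here is a minimal sketch in Python (not part of Bath SDR's tooling; the factor names are invented): each coded statement becomes a directed link from a driver to a reported change, and chaining the links yields candidate stories of change.

```python
# Illustrative sketch: representing coded statements as directed links
# (influence factor -> reported change) and chaining them into candidate
# "stories of change". Factor names are invented for the example.
from collections import defaultdict

# Each tuple is (driver, change) extracted from an interview excerpt.
coded_statements = [
    ("cash transfer", "more food purchased"),
    ("more food purchased", "children eat three meals a day"),
    ("drought", "crop failure"),
    ("crop failure", "less income from farming"),
]

graph = defaultdict(list)
for driver, change in coded_statements:
    graph[driver].append(change)

def stories(node, path=None):
    """Follow chains of coded links to enumerate candidate stories of change."""
    path = (path or []) + [node]
    if node not in graph:  # no further coded consequences
        yield path
        return
    for nxt in graph[node]:
        yield from stories(nxt, path)

for start in ("cash transfer", "drought"):
    for story in stories(start):
        print(" -> ".join(story))
```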

Now that we have presented the QuIP methodology, I would like to reflect on some of the challenges and lessons learned from implementing the method for the evaluation of a social protection program in Mozambique.

The evaluation was commissioned by one of Plan Eval’s clients and the research methodology was defined based on the Terms of Reference provided by the client. The evaluation questions included questions about the changes brought about by the program, but also questions about the program’s implementation. As a result, our team set up a methodology that combined the QuIP approach with a more classical evaluative approach using the OECD DAC criteria of relevance, effectiveness, efficiency and coherence. The intervention consisted of cash transfers delivered in two instalments to a group of beneficiaries, with a Communication for Development (C4D) component.

In terms of case selection, our initial research design considered the possibility of using beneficiary data to select beneficiaries for the semi-structured interviews. The program had an existing Theory of Change and there was even data available on certain outcomes, thanks to a short survey the client had conducted with a sample of beneficiaries after each instalment of the cash transfer was received. Under this scenario, we planned to conduct a Confirmatory analysis stratified by context and outcomes. In practice, this meant that we would use the existing outcome data to select different profiles of beneficiaries to be interviewed in the field. By doing so, we were sure to cover a variety of profiles, while also opening up the possibility of triangulating the qualitative data with the existing quantitative data at the analysis stage.
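
Purely as an illustration of what such a stratified selection could look like (the beneficiary records, field names and strata below are made up, not the actual program data):

```python
# Illustrative sketch: selecting interview cases stratified by context
# (district) and by a simple outcome indicator, from hypothetical data.
import random

random.seed(42)  # reproducible illustration

beneficiaries = [
    {"id": i,
     "district": random.choice(["District A", "District B", "District C"]),
     "reported_improvement": random.choice([True, False])}
    for i in range(200)
]

cases_per_stratum = 2
strata = {(b["district"], b["reported_improvement"]) for b in beneficiaries}

selected = []
for district, outcome in sorted(strata):
    pool = [b for b in beneficiaries
            if b["district"] == district and b["reported_improvement"] == outcome]
    selected += random.sample(pool, min(cases_per_stratum, len(pool)))

for case in selected:
    print(case)
```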

Unfortunately, we ended up not receiving access to the beneficiary data before the start of the data collection activities. As a result, we had to adapt our case selection approach at the last minute and ended up going for an Opportunistic selection, by location and by beneficiary profile. The beneficiaries were identified and mobilized in the field, with support of the local authorities.

In terms of data collection, we ended up going for the Blindfolding of beneficiaries, without blindfolding the researchers, mainly for practical reasons.

Data Collection activities using the QuIP methodology
(Source: Plan Eval)

In addition to the last-minute change in the case selection approach, another difficulty was ensuring the blindfolding of beneficiaries, since we conducted both QuIP and non-QuIP interviews in each location. In accordance with the evaluation objectives, the QuIP interviews focused on the contributions and changes brought about by the intervention, while the non-QuIP interviews focused on the program’s implementation. Because both types of interviews were conducted in the same location, and because beneficiaries were mobilized with the support of local authorities, we had to take special care to clearly explain to the local authorities the difference between the two types of interviews and to make sure that the respondents to the QuIP interviews weren’t “contaminated”, in other words, that they had not been told before the start of the interview that the study aimed to evaluate the social protection program.

Finally, we observed that it was sometimes difficult to get people to talk during the interviews. People responded to the interview questions, but without providing much detail. This can be problematic for the QuIP methodology, because it may limit our understanding of the real stories of change. As a result, we experimented with the format of the interviews and conducted some QuIP interviews in a Focus Group Discussion format to see if it helped stimulate the conversation. We also observed the importance of using open-ended questions to stimulate the conversation and of being patient with respondents, giving them the time to feel at ease enough to open up.

Another important aspect is to make sure that respondents focus on their own experience, rather than speaking about the experience of the community and their neighbours. It is therefore important to remind them from time to time to talk about their own experience and to focus on the changes they themselves observed.

Overall, in terms of lessons learned, I would identify the following elements:

  1. If possible, conduct the QuIP and non-QuIP interviews in different locations in order to avoid the risk of “contamination”
  2. Use open-ended questions to stimulate the conversation
  3. Be patient and let respondents speak freely, reminding them when necessary to talk about their own experience and to focus on observed changes
  4. Encourage respondents to focus on their own experience, rather than the experience of the community, neighbours, etc.
  5. Become well acquainted with the questionnaire BEFORE starting data collection activities

The study is currently in the analysis and reporting phase. Once the study has been finalized, I will report on any challenges and lessons learned from that stage of the evaluation process.

In the meantime, if you are interested in the results of this evaluation or if you have any questions on the use of the QuIP method, please feel free to contact us by email:

Pauline Mauclet – Evaluator (Plan Eval)
pauline@plan-eval.com

Magdalena Isaurralde – Team Leader (Plan Eval)
magdalena@plan-eval.com