Ethical Considerations and Methodological Innovations related to Gathering Information from Children

Natalia Arce[*]

Yasmin Winther[**]

Keywords: ethical considerations; gathering information from children; playful methodologies; do-no-harm approach; best-interest-of-the-child standards; children’s perspective in evaluation

Mural in “El Desierto” Shelter, Mexicali, Mexico / Photo credit: Natalia Arce

Introduction

This paper offers an examination of the ethical considerations and methodological innovations related to gathering information from children. It draws upon the empirical foundation of the authors’ and Plan Eval’s[1] evaluation experience with children on the move in Latin America and Africa. Rooted in the distinct philosophical and ethical perspectives of collaborative thinking, this paper aims to provide practical solutions that pertain to consulting children, while adhering to international human rights standards, including the Convention on the Rights of the Child, the do-no-harm approach, and the human rights-based approach.

This paper is based on the authors’ first-hand experiences conducting research with children as key informants. The arguments presented here are grounded in both the successful practices and the challenges encountered. Considering the collaborative thinking approach, this paper underscores the importance of interdisciplinary collaboration and ethical reflexivity in evaluation processes involving children. By sharing concrete examples of good practices and innovative methods of engaging with children, this research contributes to advancing the dialogue on how to ethically engage with children, with the ultimate goal of bringing about an inclusive and participatory approach to evaluations involving children in a manner coherent with the values of collaborative thinking.

Children’s perspective in evaluations

When evaluating a project, program, or policy that involves children, ethical considerations must be a cornerstone of whatever research methodology is used. In general terms, it is about “balancing the goals and benefits of evaluation with the rights and interests of those being evaluated” (Barnett & Camfield, 2016). To this end, the United Nations Evaluation Group (UNEG) has identified a series of principles that should guide the performance of the evaluation team and govern the evaluation (UNEG, 2020): independence, impartiality, honesty and integrity, competence, accountability, respect and protection of human rights, confidentiality, prevention of harm, and transparency, among others.

Based on the above, different organizations have developed ethical guidelines to steer evaluations and the actions of those who conduct the research (Better Evaluation, 2024). The guidelines are diverse, spanning various levels of quality (Better Evaluation, n.d.). For example, the most demanding ethical guidelines require the implementation of specific practices for each stage of the evaluation process and/or directed towards each stakeholder involved[2]. Some guidelines stipulate the verification of individual researchers’ specific competencies, requiring specialized training in addition to the required evaluation experience[3].

Gradually, human rights-based approaches have become integrated into an evaluation’s required ethical considerations. As a result, considerations of gender, culture, and intersectionality within a project or programme have become recognised as prerequisites for evaluations that seek to 1) research systemic inequalities and inequitable power relations and 2) generate recommendations that effectively address the differentiated needs of the most vulnerable groups. Importantly, evaluations must ensure the broad participation of the most vulnerable groups (such as women, the elderly, indigenous people, and people living with disabilities) as consulted stakeholders and, in best practices, as part of the evaluation team itself.

However, ensuring such participation entails its own challenges, especially when there is no shared worldview between stakeholder groups, such as occurs between adults, children and adolescents. In these cases, an evaluation’s approaches, language, resources, tools, and methods must be adapted to each stakeholder group. As adults, researchers are compelled to question how they are to achieve this. They must ask themselves:
“How can we step away from our adult-centric worldview?”

International standards regarding the participation of children and adolescents provide general guidelines as to how to do this. For instance, the United Nations Committee on the Rights of the Child states that safe spaces for the participation of children and adolescents must be provided and that research methods should be transparent and informative, voluntary, respectful, relevant, adapted, inclusive, supported by training, safe and attentive to risk, and accountable (Committee on the Rights of the Child, 2009). This implies 1) recognizing and respecting non-verbal forms of communication; 2) ensuring that children are properly informed about the matter to be addressed and the use that will be made of the information collected; 3) providing them with safe and friendly conditions and spaces that allow them to express their opinions comfortably, without pressure or manipulation; and 4) enhancing actions to include those who, for any reason, might have difficulty expressing their views (Committee on the Rights of the Child, 2009)[4]. Importantly, all stakeholders must be informed that they are free to choose whether they want to participate or not.

The challenge for evaluation specialists who wish to conduct research involving children is to supplement traditional evaluation frameworks with innovative strategies that effectively capture the perspectives of children while ensuring and protecting their rights. Fortunately, UN agencies such as UNICEF (UNICEF, 2015) and NGOs such as Save the Children (Save the Children, 2004), among others (Graham, Powell, Taylor, Anderson, & Fitzgerald, 2013), have developed methodological toolkits that facilitate this task. While this article will not reproduce these guidelines in full, it will highlight what the authors consider to be best practices and necessary conditions for consulting children and adolescents, based on their experience.

Design of methodology and instruments

First, we recommend integrating a specialized Ethics Committee as a component of the evaluation architecture during the inception phase, before data collection begins. In fact, using an external ethics review committee is standard practice for some United Nations agencies, such as UNICEF[5]. The inclusion of an external, expert perspective verifies that methodological adaptations are appropriate, that they ensure participation while avoiding harm, and that they consider not only age but also the specific vulnerabilities of all stakeholder groups involved in the research (e.g., disability, old age, migration status), just as the human rights-based approach requires. The Ethics Committee, independent of the evaluation team, would conduct a preliminary review of the methodologies, protocols, and tools for gathering and systematizing information obtained from consultations with children and adolescents. If necessary, it could suggest changes or adaptations that the evaluation team must integrate before piloting and applying the data collection tools. Only after the Ethics Committee has reviewed and approved the proposed data collection tools can data collection commence. It is good practice for this Committee to have an interdisciplinary composition (e.g., psychology, pedagogy, and child and adolescent rights specialists).

Additionally, our experience leads us to advocate for a second, and even a third, verification of an evaluation’s data collection tools and methodologies: the second to be carried out by the agency or organisation whose project, policy, or program is being evaluated, and, if applicable, the third by the implementation partner that has direct contact with children and adolescents during project implementation, when it is not the same entity that commissioned the evaluation. Implementation partners have firsthand knowledge of the vulnerabilities children and adolescents face and of their differentiated needs. They can also be a great ally in providing child-friendly spaces for consultations.

Child-friendly spaces deserve a special mention. Typical evaluations stipulate that consultations with children and adolescents can be conducted if there is a comfortable place to sit and take notes. However, it is important for evaluators to reflect on what is meant by the term “comfortable.” Evaluators must recognise that comfort has different meanings for different age groups. For example, for a child or adolescent, comfort might mean that the space itself does not make them feel that they are being examined, judged, or detained. This involves having furniture that matches the child’s size, ensuring the furniture does not create a barrier between the evaluator and the child, and including decorations that convey warmth, among other factors. Of course, there is no standard design for child-friendly spaces, as they also depend on the age and background of the child being consulted. For example, a seven-year-old child is not the same as a fifteen-year-old adolescent. The decor suitable for a seven-year-old may make a fifteen-year-old feel infantilized. Consulting a child who has always grown up in a family environment is not the same as consulting one in a migration situation who has been through migration stations. The former may not be much affected by being consulted by an adult behind a desk and computer, while the latter may feel distrust or even re-victimized.

Recognizing aspects such as those noted above allows the evaluation team, from the initial phase of the evaluation, to identify the profiles and needs of the children and adolescents to be consulted and, in coordination with the entity whose project or program is being evaluated and the implementation partner (if applicable), to provide the necessary spaces. If appropriate child-friendly spaces are not already available, they will need to be established for the purposes of the evaluation.

Prior, free, and informed consent

Once the methods, instruments, and consultation spaces have been approved by the Ethics Committee, it is necessary to obtain free, prior, and informed consent from the children and adolescents and/or their parents or caregivers. Typically, parents or caregivers sign a consent form on which the evaluation objectives, the purpose of the consultation, and all considerations of independence and confidentiality that will be applied are outlined. Meanwhile, children and adolescents might be asked to sign their own form, containing the same terms but in age-appropriate language. In our experience, this process generally proceeds smoothly. The challenge arises when the children and adolescents to be consulted do not have parents or caregivers and/or are institutionalized. In these contexts, their increased vulnerability and consequent mistrust have compelled us as researchers to be flexible in obtaining their consent. For example, in the evaluation of a children on the move initiative, some of the adolescents were unaccompanied migrant minors in shelters awaiting family reunification or assisted return. In some cases, despite their willingness to participate in the consultation, there were evident fears about signing documents. Therefore, in cases where written consent might be intimidating or impractical, alternative methods such as informed verbal consent recorded with a witness (e.g., staff from the implementing organization or shelter) can be utilized, especially in high-risk or sensitive environments.

Only after informed consent has been obtained can researchers proceed with the consultation, bearing in mind that, at the child’s or adolescent’s discretion, the consent and willingness to participate can be withdrawn at any time.

Gathering information from children and adolescents

The evaluation team must recognize that the initial contact with children and adolescents is as crucial as the rest of the consultation. A friendly approach, or simply one without excessive formality, can make the difference between a fruitful consultation and an unproductive one. The organization whose project or program is being evaluated, along with its implementation partner (if applicable), can again be a valuable ally in this regard. Based on the authors’ experience, it is worth allocating time in the fieldwork schedule to involve the evaluation team in recreational or sports activities with the children and adolescents, alongside the implementing organizations, before proceeding with the consultations. For instance, during an evaluation involving children on the move, it proved beneficial for the evaluator to participate in psychosocial support activities with the adolescents before consulting them. These activities included playing soccer with the adolescents at the unaccompanied migrant children’s shelter, led by the implementing organization. The evaluator joined a team, participated in the teamwork, and shared the jokes and laughter typical of such activities. The adolescents saw the evaluator as an equal, and the brief camaraderie established during the games was crucial for fostering trust during the subsequent consultation. In cases where such an initiative did not take place, gaining the adolescents’ trust was a slower process, making it more challenging to encourage them to participate and to share their experiences and assessments in the short time allotted to the consultation. Of course, implementing this good practice requires prior coordination with the implementing organizations and other logistical considerations when defining the schedule.

This might not always be possible; nevertheless, researchers should ensure that, regardless of the circumstances, an icebreaker or informal pre-consultation activity be implemented as a standard procedure prior to conducting consultations with children. These activities aim to use the presentation moment or the initial part of the consultation session to improve group interaction and help create a comfortable and trusting environment. There is a wide variety of icebreaker activities (e.g., introducing the neighbour, ball of names, find someone who, two truths and a lie, etc.), with the most important aspect being adapting the exercise to the context and characteristics of the group.

Not all consultations with children and adolescents will necessarily take place in focus groups. Therefore, researchers must develop icebreakers relevant to individual interviews. In these cases, a less formal introduction, such as sharing aspects of our personality, can be more fruitful than merely presenting our role as an evaluator. Examples might include sharing our name, where we are from, something we like and something we don’t, our interests, favourite foods, etc., with the child or adolescent doing the same. This often reveals commonalities and creates an initial empathetic connection that fosters a more trusting atmosphere. It is also crucial to understand the background of the child being consulted, as questions that might seem natural in an ordinary context could generate fear in this one. For instance, in an evaluation involving children on the move, dealing with adolescents in irregular migration situations who had been detained in migration stations and feared deportation, it was more appropriate to ask, “What name would you like us to call you?” instead of “What is your name?” This formulation helped them overcome their fear of being “identified,” and made them feel they could share their opinions without providing what they perceived as sensitive information (their name and country of origin), which would typically be natural to share in other contexts.

Exploring new methodologies tailored to each age group

The authors’ experience has shown the necessity of exploring new methods tailored to each age group. Generally, playful methodologies are the best option for consulting children and adolescents. In fact, “evidence from psychology has stipulated that people can feel ‘playful’ at any age, but the way that playfulness is expressed is different according to life stage” (Johnston, Wildy, & Shand, 2023). For instance, during the previously referenced evaluation involving children on the move, adolescents were consulted about their satisfaction with the playful methodologies used in psychosocial activities. They highlighted the positive contribution, but also mentioned that “they did not like being treated like little kids”[6]. This confirmed that it is not about omitting playful methodologies from consultations but understanding that, as researchers, we must use games and language appropriate to each age group. For example, with a group of adolescents, a “role-playing game” was effective: the adolescents acted as evaluators and, after a visit to the unaccompanied migrant children’s shelter (the place where the girls were housed), highlighted what they liked, what they didn’t like, and their recommendations for improvement.

Moreover, the remote digital technologies that gained prominence during the COVID-19 pandemic have also allowed us to identify other consultation alternatives that may be more attractive to adolescents, even outside the pandemic context. For example, replacing traditional survey formats (such as Google Forms or SurveyMonkey) with more visually appealing applications, including those that offer the possibility of creating an avatar, is a good practice.

Safeguarding of sensitive data

Ensuring the safeguarding of sensitive data is paramount when conducting evaluations involving children. Given the heightened vulnerabilities and the sensitive nature of the information gathered, specific strategies must be implemented to protect all data gathered throughout the entire evaluation process.

First and foremost, detailed information about how the data will be collected, stored, used, and shared should be clearly communicated to both the children and their guardians, so that their consent is fully grounded. Talking to children and adolescents about privacy in terms they understand is an effective way to explain how researchers are going to use their data (Park, 2021)[7]. Guidelines, toolkits, and diverse resources have been developed that allow a creative demonstration of privacy concerns and of the measures taken[8] to ensure the confidentiality of all data gathered during the course of an evaluation. In addition, consent forms should explicitly address data protection measures and provide assurances that the data will be handled with the utmost care.

It is fundamental to control and limit access to sensitive data to only those members of the evaluation team who need it to perform their roles. It is also important that all members of the evaluation team undergo comprehensive training on data protection and privacy issues. This training should cover best practices for handling sensitive data, recognizing potential security threats, and understanding the legal and ethical implications of data breaches. Capacity-building initiatives should also extend to implementing partners and any third-party service providers (if needed) to ensure a unified approach to data safeguarding across all entities involved in the evaluation.

To further protect the identities of the children involved, anonymization and de-identification techniques should be employed. This involves removing or masking any personally identifiable information (PII) from all data sets before analysis and reporting. Techniques such as data aggregation, pseudonymization, or assigning a Unique Identifier Code (UIC) to each stakeholder, used in place of their name, can ensure that individual responses cannot be traced back to specific participants.
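To make the UIC approach concrete, here is a minimal Python sketch of pseudonymization. The record fields, UIC format, and file layout are illustrative assumptions rather than a prescribed standard, and the example records are invented:

```python
import csv
import secrets

# Illustrative (invented) records; real evaluations will have different fields.
records = [
    {"name": "Ana", "age": 14, "response": "I felt safe at the shelter."},
    {"name": "Luis", "age": 16, "response": "I would like more activities."},
]

def assign_uic(existing: set[str]) -> str:
    """Generate a random Unique Identifier Code (UIC) not yet in use."""
    while True:
        uic = f"P-{secrets.token_hex(4).upper()}"  # e.g. "P-3F9A1C02"
        if uic not in existing:
            existing.add(uic)
            return uic

used: set[str] = set()
key_table = {}   # UIC -> name; stored separately, under restricted access
pseudonymized = []
for rec in records:
    uic = assign_uic(used)
    key_table[uic] = rec["name"]
    # Keep only non-identifying fields alongside the UIC.
    pseudonymized.append({"uic": uic, "age": rec["age"], "response": rec["response"]})

# Analysts receive only the pseudonymized file; the key table stays with
# the data controller and is destroyed once the evaluation closes.
with open("responses_pseudonymized.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["uic", "age", "response"])
    writer.writeheader()
    writer.writerows(pseudonymized)
```

Keeping the key table separate from the analysis file, and destroying it once the evaluation closes, is what preserves the separation between identities and responses.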

Desirable skills in the evaluation team

Ideally, an evaluation should be conducted by a multidisciplinary team. However, this is not always possible, and single-person external evaluations of projects, programs, and policies are becoming increasingly common. In such scenarios, it is especially important for the evaluator to possess the range of skills necessary to ensure that their interactions with children and adolescents within the evaluation framework do not cause harm.

For example, a good practice is to have the ability to provide psychological first aid. This allows the evaluator to assist a child or adolescent in overcoming episodes of anxiety or similar distress during the consultation process. The evaluator can help them find calmness and immediately refer them to an entity or person who can provide specialized care. Psychological first aid[9] tools are diverse, but having the ability to apply techniques such as diaphragmatic breathing or coping strategies can make a significant difference. In experiences such as the evaluation involving children on the move, the authors did not have to implement all these techniques. However, it was important that they be familiar with them as the adolescents being consulted were in a highly vulnerable situation due to their irregular migration status, which generally meant they had survived serious risks and violence.

Psychological first aid is just one example. In practice, the evaluator must possess skills relevant to the context and risks faced by the population being consulted. In some cases, this is not solely the responsibility of the researchers, but a requirement of the organisation or agency that commissioned the evaluation. For instance, in the case of an evaluation involving children on the move, the organisations that commissioned the evaluation required the evaluation team to complete a virtual, self-directed training on the prevention of sexual abuse and exploitation. This training aimed not only to prevent such practices, but also to enable the team to identify instances of abuse and exploitation that had occurred, and to act according to established protocols.

Dissemination of evaluation findings with a childhood perspective

Disseminating the findings and recommendations of the evaluation is a task that also involves applying a childhood perspective. A traditional executive summary may not be the best way to communicate the results to children and adolescents, as it is likely to use technical language and may not be the most user-friendly format.

Even though little is known about disseminating research results to children (Egli et al., 2019), there is an increasing demand from organizations (often captured in the evaluation’s terms of reference) that, in addition to the evaluation report, dissemination instruments or activities be designed to communicate the results of the evaluation in an appropriate manner to the children or adolescents who were involved in the process[10]. In the case of the ‘children on the move’ evaluation conducted by Plan Eval, this issue was particularly challenging, as it involved children and adolescents in situations of human mobility, making it nearly impossible to re-establish contact with them once the evaluation process was completed. Therefore, on that occasion, the dissemination of results was limited to generating briefing notes to be shared with the implementation partners.

Nevertheless, this presented an opportunity to emphasize that feedback of results should also be tailored to the specific needs and characteristics of the children and adolescents consulted. For example, infographics, video capsules/reels, or other social media content, illustrations, animations, among others, can be created. Of course, adapting the language to the corresponding age group will also be crucial.

A significant advantage today is artificial intelligence, which has become a valuable tool to optimize evaluation work. Just as in the educational field, AI can be used to review texts, break down tasks into simpler steps, or adapt proposals to different learning stages (Paulista, 2024). In the evaluation field, AI can be used to rephrase texts to make them more user-friendly for diverse audiences, including children and adolescents. Naturally, the Ethics Committee would play a key role in reviewing and validating these products. Finally, it is essential to budget for the development of adapted dissemination resources for the evaluation.
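Returning to the AI-assisted rephrasing mentioned above, the sketch below shows one way it might be scripted. It assumes the OpenAI Python SDK purely for illustration (other providers work similarly), and the model name, prompt wording, and helper function are our own assumptions; any output would still pass through the Ethics Committee’s review before reaching children.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; other providers work similarly

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def rephrase_for_children(finding: str, age_group: str = "10-12") -> str:
    """Ask the model to restate an evaluation finding in age-appropriate language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You rewrite evaluation findings for children aged "
                         f"{age_group}. Use short sentences, simple words, and a "
                         "warm, respectful tone. Do not add new facts.")},
            {"role": "user", "content": finding},
        ],
    )
    return response.choices[0].message.content

finding = ("The programme increased school re-enrolment among participating "
           "adolescents by strengthening referral pathways.")
print(rephrase_for_children(finding))
```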

Conclusions

The practices shared in this paper are not the only ones available for integrating a childhood perspective into project, program, and policy evaluations. Other entities have developed extensive and rigorously crafted guidelines, frameworks, and toolkits that are freely accessible. However, the evaluation of the children on the move intervention provided a valuable opportunity for reflection, and, when considered alongside other available resources, the reflections, lessons, and best practices presented here will positively contribute to the ethical consultation of children in future evaluations.

Firstly, it has allowed us, as evaluators, to become aware of the need to set aside our adult-centric viewpoint from the early stages of evaluation design. With concrete examples, we have demonstrated that the first condition to ensure harm-free action was identifying the differentiated needs of children and adolescents, even when they were not apparent at first glance.

Furthermore, we have shared best practices showing that not all children and adolescents can be treated equally. Instead, it is necessary to recognize, through an intersectional approach, their diverse needs and various situations of risk and exclusion. For instance, the specific contexts of children and adolescents in situations of human mobility compelled us to make additional adaptations to address their dual vulnerability.

Moreover, reviewing all phases of the evaluation—from design of the evaluation, to the dissemination of findings and recommendations—allowed us to demonstrate that a rights-based approach and the best interest of the child are cross-cutting issues. Furthermore, we have shown that these approaches are not ethereal or marginal reference issues: on the contrary, they translate into concrete practices to be implemented by the evaluation team.

Finally, with this brief contribution we hope not only to share best practices, but also to catalyse reflection within our profession, allowing us to advance the design and implementation of evaluations that are protective and that do not replicate the inequitable positions of power and the practices of discrimination and exclusion that children commonly face.



[*] Natalia holds an MSc in Planning, Management and Evaluation of Cooperation and Development Interventions from UNED, and a bachelor’s degree in Law from ITESO, Mexico. With her work experience in Latin America, she is deeply involved in project planning, management, monitoring, and evaluation under a human rights-based approach, intersectionality, and a gender perspective.

[**] Yasmin holds an MSc in Finance with a focus on Economic Policy from SOAS, University of London, and a bachelor’s degree in Economics from PUC-SP, Brazil. With work experience spanning Africa, Asia, and Latin America, she is skilled in quantitative and mixed methods, including tools like R, EViews, and Stata. She contributes to evidence-based decision-making and policy development through evaluation, research, and capacity building.

[1] Founded in 2007, Plan Eval is a consultancy firm dedicated to monitoring and evaluating public-interest programmes. It is the oldest M&E practice currently active in Brazil. Its European subsidiary, Plan Eval SRL, has operated in Belgium since 2016. Plan Eval is an institutional member of the European Evaluation Society and the Rede Brasileira de Monitoramento e Avaliação. Plan Eval offers solutions to measure and evaluate the impact of interventions on any scale, from local development projects to multi-country programmes. Find more at Plan Eval’s website: https://plan-eval.com/english/

[2] For instance, the Australasian Evaluation Society Guidelines for the Ethical Conduct of Evaluations include practices for three stages: commissioning and preparing the evaluation, conducting the evaluation, and reporting (AES, 2013). On the other hand, the United Nations Evaluation Group includes guidelines directed at the leadership entity, those who organize evaluations, and those who conduct evaluations (UNEG, 2020).

[3] For example, a Plan Eval client requested that all evaluation consultants involved in the process complete a Prevention of Sexual Exploitation and Abuse (PSEA) training before starting the field missions.

[4] See para. 134.

[5] Consult the HML IRB website as an example: https://www.healthmedialabirb.com/unicef

[6] Adolescent in a focus group held in Mexico during the evaluation of the children on the move initiative (2023).

[7] The Future of Privacy Forum and Common Sense assembled a panel of youth privacy experts for a webinar presentation, “Talking to Kids about Privacy,” exploring both the importance of and approaches to talking to kids about privacy. Among other topics, the panel addressed a model for engaging children in informing data protection policies. Watch the full webinar here: https://www.youtube.com/watch?v=bAcLoThw6I4

[8] Some resources for talking to children and adolescents about privacy can be consulted here: Future of Privacy Forum (https://fpf.org/blog/talking-to-kids-about-privacy/); Data Protection Commission (https://www.dataprotection.ie/en/dpc-guidance/childrens-data-protection-rights); and The London School of Economics and Political Science (https://www.lse.ac.uk/my-privacy-uk).

[9] According to the International Federation of Red Cross and Red Crescent Societies, psychological first aid “is a direct response and set of actions to help someone in distress”. “It is a form of helping that involves paying attention to the person’s reactions, active listening and if relevant, practical assistance to help address immediate problems and basic needs”. (International Federation of Red Cross and Red Crescent Societies, 2018)

[10] For example, UNICEF’s Adolescent Development and Participation Section (ADAP) in Programme Division and the Evaluation Office (EO) at UNICEF Headquarters in New York have developed guidelines to, among other things, disseminate the results and support/lead closing the feedback loop with adolescents through the production of adolescent-friendly versions of the findings (UNICEF, 2018).

References

AES. (2013). Guidelines for the Ethical Conduct of Evaluations. Retrieved from: https://www.aes.asn.au/images/AES_Guidelines_web_v2.pdf Access date: September 13, 2024

Barnett, C., & Camfield, L. (2016). Ethics in evaluation. Journal of Development Effectiveness, 8(4), 528–534. Retrieved from: https://www.tandfonline.com/doi/full/10.1080/19439342.2016.1244554#d1e107 Access date: September 13, 2024

Better Evaluation. (2024). Ethical guidelines. Retrieved from: https://www.betterevaluation.org/methods-approaches/methods/ethical-guidelines Access date: September 13, 2024

Better Evaluation. (n.d.). Determine what constitutes high quality evaluation. Retrieved from: https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/determine-what-constitutes-high-quality-evaluation Access date: September 13, 2024

Committee on the Rights of the Child. (2009). General Comment No. 12: The right of the child to be heard. Retrieved from: https://commission.europa.eu/system/files/2022-12/the_right_of_the_child_to_be_heardcrc-c-gc-12.pdf Access date: September 13, 2024

Egli, V., Carroll, P., Donnellan, N., Mackay, L., Anderson, B., & Smith, M. (2019). Disseminating research results to kids: practical tips from the Neighbourhoods for Active Kids study. Kōtuitui: New Zealand Journal of Social Sciences Online, 14(2), 257–275. Retrieved from: https://www.tandfonline.com/doi/epdf/10.1080/1177083X.2019.1621909?needAccess=true Access date: September 13, 2024

Graham, A., Powell, M., Taylor, N., Anderson, D., & Fitzgerald, R. (2013). Ethical Research Involving Children. Retrieved from: https://childethics.com/wp-content/uploads/2013/10/ERIC-compendium-approved-digital-web.pdf Access date: September 13, 2024

International Federation of Red Cross and Red Crescent Societies. (2018). A guide to Psychological First Aid. Retrieved from: https://pscentre.org/wp-content/uploads/2019/05/PFA-Guide-low-res.pdf Access date: September 13, 2024

Johnston, O., Wildy, H., & Shand, J. (2023). Teenagers learn through play too: communicating high expectations through a playful learning approach. The Australian Educational Researcher, 50, 921–940. Retrieved from: https://doi.org/10.1007/s13384-022-00534-3 Access date: September 13, 2024

OECD. (2023). Applying a human rights and gender equality lens to the OECD evaluation criteria. Retrieved from: https://www.oecd.org/en/publications/applying-a-human-rights-and-gender-equality-lens-to-the-oecd-evaluation-criteria_9aaf2f98-en.html Access date: September 13, 2024

Park, J. (2021). Talking to kids about privacy: advice from a panel of international experts. Retrieved from: https://fpf.org/blog/talking-to-kids-about-privacy/ Access date: September 13, 2024

Paulista, A. C. (2024). The ethical use of AI in early childhood education: a must-read for educators. Retrieved from: https://early-beginnings.com/blog/the-ethical-use-of-ai-in-early-childhood-education/ Access date: September 13, 2024

Save the Children. (2004). So You Want to Involve Children in Research? Retrieved from: https://www.savethechildren.org.uk/content/dam/global/reports/education-and-child-protection/so-you-want-to-involve-children-in-research.pdf Access date: September 13, 2024

UNEG. (2020). UNEG Ethical Guidelines for Evaluation. Retrieved from: https://procurement-notices.undp.org/view_file.cfm?doc_id=302194 Access date: September 13, 2024

UNEG. (2021). Integrating Human Rights and Gender Equality in Evaluation – Towards UNEG Guidance. Retrieved from: https://www.uneval.org/document/detail/980 Access date: September 13, 2024

UNICEF. (2015). UNICEF procedure for ethical standards in research, evaluation, data collection and analysis. Retrieved from: https://www.unicef.org/media/54796/file Access date: September 13, 2024

UNICEF. (2018). UNICEF Guidance Note: Adolescent participation in UNICEF monitoring and evaluation. Retrieved from: https://www.unicef.org/evaluation/media/2746/file/UNICEF%20ADAP%20guidance%20note-final.pdf Access date: September 13, 2024

Meet the team #4 – Ikram Koudoussi

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

In the dynamic landscape of international affairs, expertise in both public policy and management is indispensable. Our fourth guest, Ikram Koudoussi, brings her dual proficiency in International Public Policy and Management as well as International Relations to her role as Proposal Coordinator. Her specialised knowledge extends to the adept management of international projects, project financing techniques, and the art of crafting compelling business proposals.

She is fluent in French, Spanish, English and Berber and, since joining us in late 2023, has been leading the development of proposals and expressions of interest alike in different parts of the globe. In this brief interview, she shares her insights into her experience at Plan Eval.

Can you tell us a bit more about yourself?

“I’m French-Moroccan, and I have a double degree in public policy management from Audencia Business School in Nantes and International Relations from the Institute of Political Studies in Aix-en-Provence. I’ve had experience as a consultant in the French public sector and I also lived for two years in Madrid, where I worked in a French firm. My assignments concerned European funds and, more specifically, recovery plans.”

Ikram (right) on an office day at the Belgium office, with our Senior Associate Fabrizio and Project Managers João Paulo and Laís.

How did you start working with the development sector and, more specifically, with evaluations?

“After gaining some experience in the private sector, I wanted to get closer to what I had studied: International Relations. I wanted to work in a more international environment and at the same time I really wanted to work in a sector that could contribute to the common good. I find that the work done in the evaluation sector and the studies carried out are fascinating, and have a direct, significant impact. I hope to evolve in the field of evaluation and to continue working on projects with an impact and on themes that are increasingly important, such as gender equality, intersectionality and providing assistance to the most vulnerable.”

Ikram in Audencia Business School, Nantes, France

What is your favorite part of working at Plan Eval?

“What I love about my work at Plan is that no day is the same. I have the opportunity to work on very different and exciting proposals, and I get to learn more about different topics, such as nutrition, public health, risk management, and governance.

I get to meet consultants that have very interesting backgrounds, and it’s very exciting to coordinate a team of individuals that sometimes don’t know each other but start to work on a completely new project.

Every day, I get to coordinate projects in French, English, and Spanish with consultants who come from all over the world. It’s a stressful yet captivating day-to-day life.”

Finally, what are your expectations of the future?

“I hope that Plan will continue to take on meaningful and significant projects in areas such as social impact, youth, and children’s rights. Coming from the MENA region, I hope we can expand our projects in North Africa and the Middle East, and strengthen our partnerships in South America.”

Meet the team #3 – João Paulo Cavalcante

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

Our third guest is João Paulo Cavalcante. João holds a degree in International Relations from FACAMP in Brazil and a master’s degree in International Development from the Université Grenoble-Alpes in France. He worked for five years at the World Food Programme (WFP), gaining experience in south-south and trilateral cooperation, policy advocacy, and capacity building to support governments and local actors in adopting initiatives to fight poverty and malnutrition and improve governance in Latin America, Africa, and Asia.

He joined Plan Eval in 2022 as a Project Manager and shares his insights into his experience at Plan Eval in this brief interview.

  1. How did you start working at Plan Eval?

Coming to work at Plan was a natural progression in my career. My background is in international relations and development cooperation and, since completing my master’s degree in 2012, I have been working in the management of different projects related to development cooperation. I came to work at Plan when I saw an opening for the project manager position, which I applied for and was selected. Working in development project management is something that interests me a lot because I can combine project management, which is a perennial practice in both the public and private sectors, with my focus on international development.

João supporting a school feeding project in Guinea-Bissau

Especially at Plan, we have a very specific focus on Monitoring and Evaluation, mainly evaluation, which allows us to look at projects after they have been implemented, which is very interesting. Also being able to manage projects of different themes (environment with UNEP, technological security with the US Department of State, COVID in Mozambique) for different clients and regions is also very interesting.

  2. What is it like to be a Project Manager at Plan Eval?

The big difference I see at Plan is that we have a niche, a specific focus. Other consultancies have a wider scope of work and have different types of services, whereas we are specialised in monitoring and evaluation.

On a day-to-day basis, interaction between Plan employees is also very important and positive. We all know each other and have a very direct openness, so problem solving, troubleshooting  and support becomes much more efficient. In a way, this even facilitates project management: if I need something, I can contact the person directly. This improves the relationship between the internal and external team, as problems are resolved more quickly and efficiently.

  3. What was your first project at Plan Eval? What project do you consider to be the most meaningful to you?

My first projects at Plan had already started when I joined, but I also took on some projects from the beginning. The nutrition project in Madagascar, for example, had already started when I began to manage it. This project was very challenging, not because of the project itself, but because of the country’s context and geographical extent, in addition to involving both local and international consultants. Another very interesting project was the one in partnership with Dev Tech on technological security for the US State Department, in which we participated in a consortium and did not have access to the end client.

Playing football by the beach in Senegal

Amongst the projects I participated in from their early phases, the Red Cross project was remarkable, as it was linked to the development of activities and the autonomy of certain communities to deal with challenges related to diseases, an extremely interesting topic. There was a lot of flexibility and openness on the part of the client to define the products, which also brought challenges and opportunities for the team.

Another outstanding project was UNICEF Uruguay on digital education. The topic was very interesting and the evaluation team, as well as the UNICEF and Ceibal team (implementation partners), were proactive, engaged and open to suggestions, which greatly facilitated the management of the project itself. It was a project that generated good results and the client was very satisfied in the end.

Finally, the COVID-19 project in Mozambique was very remarkable. It was a cash-transfer program implemented by WFP and UNICEF that had a positive impact on the well-being of society due to the transfer of income. Evaluating such a project was extremely interesting both for its theme (cash transfers) and also for the moment in which the project took place, in the middle of a pandemic.

  4. What do you consider to be the best part of working at Plan Eval?

In addition to the collaborative environment, the quality management system and the company’s own organisation are, without a doubt, among its strengths. All documents are organised, and we have a standard for information and project management. This makes our job a lot easier.

  5. Finally, what are your expectations of the future?

Having an office in Brazil is a great starting point for other projects in Latin America, and the fact that we are a small organisation based in Brussels that has already carried out large projects makes me optimistic that we will expand our work to more and more countries in Europe, Africa, and Asia. With the support of our internal quality system and our experiences, I am sure that we will continue to expand.

Creating Impactful Evaluation Reports: Strategies for Meaningful Communication

“An evaluation is only as good as the changes it helps promote”.

In the realm of evaluation, the value of empirical insights lies not in the raw information itself but in the narrative that frames it. A persuasive evaluation report serves as a vessel through which evidence is distilled into actionable recommendations.

In this guide, we will explore the key components of creating evaluation reports that leave a lasting impact.

  1. Establishing the Foundation: Objectives and Contextual Anchoring

Begin the process by delineating the overarching objectives that the evaluation report intends to achieve[1]. Precisely define the scope, objectives, and intended audience for the report. This groundwork not only imparts coherence but also serves as a compass, ensuring that the report remains focused and pertinent in its content (EvalCommunity, 2023).

A structured approach is paramount to ensuring that the narrative remains comprehensible. Start the report with a succinct executive summary that covers the main findings, conclusions, and recommendations. Follow with a brief section on approach and methodology, and then present your findings per evaluation question, followed by conclusions and recommendations (Canadian International Development Agency, 2002).

  2. Data Visualization and Analysis: The Art of Substantive Simplification

In the context of an evaluation report, the role of data visualization is paramount. Data, in its raw form, can often be dense and complex, challenging the reader’s ability to discern patterns and glean insights efficiently. This is where data visualization steps in as a powerful tool.

Incorporating graphical representations is instrumental in holding the reader’s attention. Visual representations, such as graphs, charts, and diagrams, distil intricate data sets into clear, intuitive visuals. These visuals not only enhance the report’s accessibility but also enable stakeholders to grasp key trends and correlations at a glance.

By harnessing the power of visualisation, evaluation reports transcend mere text. The nuances of data and the complexity of the analysis become more accessible, fostering a deeper connection with the insights presented (Caswell & Goodson, 2020).
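As a minimal illustration of this principle, the Python sketch below (using matplotlib; the criteria and scores are invented purely for demonstration) turns a handful of ratings into a chart stakeholders can grasp at a glance:

```python
import matplotlib.pyplot as plt

# Invented, purely illustrative satisfaction scores per evaluation criterion
criteria = ["Relevance", "Effectiveness", "Efficiency", "Sustainability"]
scores = [4.2, 3.6, 3.1, 3.8]  # mean ratings on a 1-5 scale

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(criteria, scores, color="#4C72B0")
ax.set_xlim(0, 5)
ax.set_xlabel("Mean stakeholder rating (1-5)")
ax.set_title("Stakeholder satisfaction by evaluation criterion")
for y, s in enumerate(scores):
    ax.text(s + 0.05, y, f"{s:.1f}", va="center")  # annotate each bar with its value
fig.tight_layout()
fig.savefig("satisfaction_by_criterion.png", dpi=200)
```

A labelled horizontal bar chart like this lets a reader compare criteria in seconds, where the same numbers buried in a paragraph would demand careful rereading.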

  3. Directing Action: The Essence of Pragmatic Recommendations

The main aim of an evaluation report is to inspire action. The report should clearly lay out suggestions for improvement based on what the evaluation has uncovered. These recommendations need to be practical and feasible, allowing decision-makers to take clear steps forward.

Crafting effective recommendations involves connecting the dots between the findings and real-world solutions. Each recommendation should come directly from the findings and conclusions of the evaluation. This connection ensures that the suggestions are relevant and can genuinely address the identified areas needing improvement. Moreover, recommendations should consider the broader context in which the project or initiative operates and what is and is not within the stakeholders’ control.

To make recommendations actionable, it’s crucial to break them down into clear steps. This means explaining how each suggestion can be put into practice. Details like who will be responsible for what, the timeline for implementation, and any resources needed should be included. Additionally, highlighting any potential challenges and offering strategies to overcome them adds a practical dimension to the recommendations. Prioritisation is also key. Not all recommendations are equal, and it’s important to figure out which ones will have the most significant impact and tackle them first. This strategic approach ensures that efforts are focused where they matter most and that resources are used wisely (Wingate, 2014).

  4. Stakeholder Alignment: Engaging with Different Audience Needs

Consider the needs and expectations of your stakeholders. Tailor the report’s content to address their concerns and questions. As highlighted in our short guidance on fostering utilisation-focused evaluation, when stakeholders see their concerns being addressed, they are more likely to engage with the report and act on its recommendations. In a similar fashion, avoid jargon and technical terms that may alienate non-expert readers; clear and concise language is preferable for communicating complex concepts.

Lastly, before finalising the report, seek (rounds of) feedback from the key stakeholders and the evaluation management group, and record their suggestions in a comments matrix. Consider having your report peer-reviewed and/or undergo a review by a panel of experts to ensure the content is accurate, coherent, and aligned with objectives. Continuous refinement ensures the report is polished and effective (Canadian International Development Agency, 2002; EvalCommunity, 2023).

In conclusion, an impactful evaluation report is more than a compilation of data; it is a strategic communication tool. By setting clear objectives, presenting data visually, weaving a compelling narrative, and aligning with stakeholders’ needs, you can create reports that not only inform but also inspire action. Remember, the true mark of success lies in how well your report triggers positive change based on your evaluation insights.

[1] For a guide on utilisation focused evaluation, please visit: https://www.plan-eval.com/blog/?p=1172

Works Consulted

Canadian International Development Agency, 2002. How to perform evaluations – evaluation reports. [Online]
Available at: https://www.oecd.org/derec/canada/35138852.pdf
[Accessed 28 August 2023].

Caswell, L. & Goodson, B., 2020. Data Visualization for Evaluation Findings. [Online]
Available at: https://oese.ed.gov/files/2021/02/DataVisualization_508.pdf
[Accessed 28 August 2023].

EvalCommunity, 2023. How to Write Evaluation Reports: Purpose, Structure, Content, Challenges, Tips, and Examples. [Online]
Available at: https://www.evalcommunity.com/career-center/structure-of-the-evaluation-report
[Accessed 28 August 2023].

Wingate, L., 2014. Recommendations in Evaluation. [Online]
Available at: https://www.betterevaluation.org/tools-resources/recommendations-evaluation
[Accessed 28 August 2023].


Meet the team #2 – Natália Martins Valdiones

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

Our second guest is Natália Valdiones, Business Manager and specialist in Strategic Business Management and Business Intelligence Management. She has more than 15 years of experience implementing ERP systems, coordinating administrative, financial, and Business Process Management (BPM) areas, building Quality Management Systems (QMS), and conducting internal audits under ISO 9001:2015.

In this brief interview, she shares insights into her experience at Plan Eval, highlighting her achievements and her expectations for the future.

  1. You once mentioned that studying business management was not a decision made by someone who didn’t know what they wanted to do, but by a person who knew exactly what they wanted. Tell me more about it.

Looking at it today, I really think that 17 years old is too early for someone to choose what they want to do for the rest of their lives. But I also believe that some people are born with a gift. In my case, I always knew exactly what I wanted to do, and I really love what I do.

It is true that the business management course is very generalist, and a lot of people who don’t know what to do end up on the course; I myself came across several people like this during my studies. Once you graduate as a business manager, you become a jack of all trades and a master of none. You understand accounting, but you are not an accountant. You understand law, but you are not a lawyer. You understand technology, but you are not an IT professional. You have all the skills required to manage a business; however, I believe the most important thing for a business manager is to specialize, because with this generalist approach you end up getting a little “lost”, which ends up harming you in the job market.

Therefore, I chose to pursue a specialisation. I did an MBA in Strategic Business Management and then another specialisation, this time in Business Intelligence Management. And I am very proud to say that I am a Business Manager and that I do exactly what I love.

2. You’ve worked your whole life in retail. How did you end up in the evaluation world?

I worked in retail from when I was 14 until I was 20, when I left to do an internship in information technology (IT). To this day I have a “crush” on IT, so much so that I focused my two dissertations on it: one discussed ERP system implementation, while the other discussed the returns and advantages of implementing such systems[1]. The internship lasted a year and, during that period, the company was in the middle of implementing a similar system. I had the opportunity to closely follow the whole process, and I found that I really liked BPM.

After that year, I was invited to be a managing partner on the family business, and to be responsible for automating the processes to meet Federal requirements, where I had the opportunity to put into practice everything I had learned in school, including the IT systems part.

The entire process of restructuring and implementing processes, as well as employee training, lasted about two years. After that, I was managing partner for another eight years. The only problem was that I didn’t like my job! We then decided to sell the family business.

After that, I chose to dedicate myself to motherhood for just over a year and when I decided to return to the job market, I wanted to go back to work with what I loved! I fell into the world of evaluation by accident, through a selection process for administrative manager at Plan Eval.

Plan Eval’s team in January 2023

For me it was a very big change, because I left the retail sector for the services sector, but it was very positive because, in addition to working with what I like, we have an environment without routine, where every day we are dealing with different and challenging projects in every corner of the world. When we see the result of the evaluation and the results of the project, it is very satisfying. In the end, I do what I love, in an environment that provides me with daily learning and challenges, and which also provides me with a balance between personal and professional life.

3. One thing that was very visible in your trajectory, both academic and professional, is your passion for implementing processes from scratch. Is that why you started to work on QMS?

I’ve always loved working with BPM, and during my undergrad it was like that too. I remember an occasion when we were asked to build a business plan. I took the lead on the project and started talking to the people in the group and developing the idea. In the end, our business plan was the standout amongst all the groups. We won several accolades and recognitions for having done a very good job. I remember that I led the whole process, because I was always very organized, especially in these activities.

When I arrived at Plan Eval, obtaining the ISO 9001:2015 certification was already a goal. At the time, I noticed that I could contribute precisely with this experience in BPM, as there was a certain difficulty in scaling the work needed to map all the processes and fulfill all the necessary steps, because it is not as simple as it seems. As I have a natural affinity with processes, I started to help the team and, naturally, I became responsible for the entire certification process. We had the support of an external auditor, who was essential to bring the perspective of those who audit, and thus align all processes in our QMS. In the end, this whole structuring process lasted around seven months, and we received the certification. This happened before I completed a year working with Plan Eval. I remember that, at the time, we were the only evaluation company in Brazil with this certification.

Natália and Plan Eval’s ISO 9001:2015 certification

After that, I received training on the ISO 9001:2015 standard and also participated in an ISO 9001:2015 auditor training course. Today, I have the ISO 9001:2015 knowledge that I did not have at the time. Once the entire certification process was finalised, it was agreed that I would also be responsible for monitoring and maintaining Plan Eval’s QMS.

4. How does having the ISO 9001:2015 certification increase the quality of our evaluations and services? What sets Plan Eval apart by having this certification?

I believe the main impact is on the structure of the company. Having the guarantee that we can meet all of the customer’s requirements, with a procedure and a process through which things must happen, and the assurance of continuity makes all the difference. Before the QMS, Plan Eval did not have all this structure and documentation.

This brings quality, as project managers now have greater control over the activities under way in each project. We also have access to more information, from the client, from consultants, and internally. Before, we did not carry out standardized surveys with all interested parties; today we have this resource, which gives us inputs to promote the continuous improvement of our processes and of our deliveries to customers.

Nowadays, we value the fact that we can understand whether clients were satisfied and, if not, in which areas we can improve. We assess the consultants, the consultants assess us, and the client assesses both the project management and the consultants, almost like a 360-degree assessment. In this way, we bring this information back into the company so that we can deal with these issues on a recurring and comprehensive basis. We also have a new process in which we seek to understand, together with our client, the impact caused by our work. It has changed and improved things. Ultimately, I believe this keeps the PDCA (Plan-Do-Check-Act) cycle turning and gives us the opportunity to improve daily.

PDCA cycle

What I consider most important is having everything documented, knowing what is happening in real time in each project, understanding what everyone is doing, and recording all this information. If it is necessary to make any changes in the team, e.g., to bring in a new project manager, they would have an exact record of what happened, as well as all relevant information about that project. Now we have much more quality in our final product, greater control, and a different organization than we had before. In addition, project budget control has become more precise and meticulous, which allows for safer and smoother execution.

5. What is the best part of working at Plan Eval?

Without a doubt, it’s working with what I love. So, no matter what happens, I’m always very happy with my work. I also really like the daily learning, because we are always in contact with different situations and realities and, whether we like it or not, we learn a great deal about other countries and cultures. Finally, the fact that we are a small company makes us very close, and we all end up working together and helping each other.

6. What are your expectations for Plan Eval’s future?

That the new structures in place will bring Plan Eval more and more visibility, and that we will keep increasing the quality of our services and the satisfaction of our customers, winning more markets and taking on more projects!


[1] All these articles are available at https://www.webartigos.com/autores/nataliavaldiones

How can evaluators maximise the usability of evaluations? Practical suggestions

By: Yasmin Winther Almeida

Evaluation is a powerful tool that enables organisations to measure, assess, and improve their programmes, policies, initiatives, and processes. It can provide valuable insights, identify areas of strength and weakness, and guide decision-making for optimal outcomes.

However, the true value of evaluation lies not only in conducting it, but also in utilising its findings effectively. In this article, we will explore the significance of evaluation utilisation and discuss strategies to maximise its impact.

Understanding Evaluation Utilisation

Evaluation utilisation refers to the process of incorporating evaluation findings into decision-making and subsequent actions. According to Patton’s Utilisation-Focused Evaluation (UFE) approach, which centres on the intended use of findings by their intended users, evaluations should be conducted with the primary purpose of promoting effective decision-making and the actual use of what is found. Patton emphasises the importance of including diverse perspectives and engaging stakeholders in a collaborative and participatory process throughout the evaluation; this ensures that the evaluation is specifically tailored to address their unique information needs and concerns.

Patton’s 17-step Utilization-Focused Evaluation (U-FE) Checklist (2013)

This approach also requires that the evaluation team and the key stakeholders work closely to define the purpose, scope, and intended use of the evaluation, ensuring that it is relevant and aligned with stakeholders’ goals. Another important aspect of UFE is its emphasis on utilisation-focused reporting: evaluation findings are communicated in a way that is accessible and meaningful to the intended users, providing them with actionable recommendations and insights that can inform decision-making and programme improvement.

Throughout the UFE process, Patton emphasises the importance of fostering a culture of learning and using evaluation as a tool for social change. This involves cultivating a respectful and collaborative relationship between evaluators and stakeholders, creating an environment where evaluation findings are seen as opportunities for growth and improvement.

How can evaluators maximise evaluation utilisation?

Engage Stakeholders

To engage stakeholders effectively, evaluators typically identify and involve a diverse range of individuals or groups who have a vested interest in the programme or initiative being evaluated. These stakeholders may include programme staff, donors, policymakers, community members, implementing partners and other relevant parties.

The engagement process begins by establishing clear communication channels and building relationships with stakeholders. It is essential to communicate the purpose and benefits of the evaluation, address any concerns or misconceptions, and establish trust and rapport. This helps create a supportive and collaborative environment where stakeholders feel valued and encouraged to contribute their insights and perspectives.

During the evaluation inception phase, evaluators actively seek input from stakeholders to identify their information needs, expectations, and desired outcomes. This collaborative approach ensures that the evaluation is focused on the issues and questions that matter most to stakeholders, increasing the relevance and usefulness of the findings.

Engaged stakeholders ensure that the evaluation is relevant

Engaging stakeholders also means involving them in data collection activities. This may include conducting interviews, focus groups, surveys, or observation sessions with stakeholders who have first-hand experience or knowledge of the programme being evaluated. Involving stakeholders in data collection not only enhances the quality and richness of the data but also promotes their active engagement and investment in the evaluation process.

Throughout the evaluation, regular communication and feedback loops are maintained with stakeholders to provide updates on progress, share emerging findings, and seek their input and validation. This ongoing engagement allows stakeholders to contribute their expertise, provide contextual insights, and offer alternative perspectives, thereby enriching the evaluation process and enhancing the credibility of the findings.

Finally, engaging stakeholders means involving them in the interpretation and sense-making of the evaluation findings. This collaborative process allows stakeholders to make sense of the data in light of their experiences, knowledge, and priorities. By actively involving stakeholders here, evaluators foster ownership of the findings and increase the likelihood that the evaluation results will be used for decision-making and programme improvement.

Tailor Communication

When presenting evaluation findings, it is important to consider the diverse backgrounds, knowledge levels, and roles of the intended audiences. This may include programme managers, policymakers, frontline staff, community members, funders, or other relevant parties. By understanding their perspectives and information needs, evaluators can customise the communication to resonate with each audience.

To tailor the communication effectively, evaluators should focus on presenting the key messages and insights derived from the evaluation. This involves distilling complex data and analysis into succinct and easily understandable points. By highlighting the most relevant and significant findings, stakeholders can quickly grasp the main takeaways without getting overwhelmed by excessive detail.

In addition to presenting the findings, UFE emphasises the importance of providing actionable recommendations. Evaluators should translate the evaluation results into practical suggestions and strategies that stakeholders can implement to improve programmes or make informed decisions. These recommendations should be specific, feasible, and supported by evidence from the evaluation. By offering clear guidance, evaluators empower stakeholders to take meaningful actions based on the evaluation findings.

Tailoring communication also involves selecting appropriate formats and channels for sharing the evaluation findings. This may include written reports, presentations, infographics, dashboards, or interactive workshops. The chosen formats should align with the preferences and communication styles of the target audiences. For instance, policymakers may prefer concise executive summaries, while programme staff might benefit from detailed reports with practical implementation guidelines.

Tailored communication ensures that each stakeholder receives the most relevant information in the most useful format.

Furthermore, it is crucial to consider the potential impacts of the evaluation findings on different stakeholders. Evaluators should explicitly communicate the relevance and implications of the findings for each audience, addressing their specific interests and concerns. By highlighting the potential benefits or consequences of acting upon the evaluation findings, stakeholders are more likely to recognise the value and urgency of utilising the evaluation results.

Throughout the communication process, evaluators should also encourage feedback and dialogue with stakeholders. This open and interactive approach allows stakeholders to ask questions, seek clarification, and engage in discussions around the evaluation findings. By promoting an ongoing conversation, evaluators can deepen stakeholders’ understanding, address any misconceptions, and build a shared understanding of the implications of the evaluation.

Visually engaging reports

Effective communication of evaluation findings is vital to ensure the utilisation and impact of evaluation reports. One key aspect of this communication is presenting the findings and recommendations in a visually engaging manner. This can be achieved in several ways, such as:

  1. Using infographics and data visualisations: Incorporate infographics, charts, graphs, and diagrams to illustrate key findings, trends, and relationships in the data. Ensure that the visualisations are clear, labelled properly, and easy to interpret.
  2. Employing dashboards (e.g., Power BI): Consider creating interactive data dashboards that allow users to explore the evaluation findings dynamically. Dashboards can enable users to filter data, view different visualisations, and customise their analysis based on their specific interests. Interactive dashboards offer a more engaging and exploratory experience for users.
  3. Emphasising key messages and quotes: Highlight key findings, recommendations, or quotes by using visually distinct formatting, such as bold text, larger fonts, or coloured text boxes. This draws attention to important information and makes it easier for readers to grasp the main points at a glance.
  4. Using a visually consistent layout: Pay attention to the overall design and layout of the report to create a cohesive and visually consistent look. Use consistent fonts, font sizes, and formatting throughout the report. Ensure that headings, subheadings, and sections are clearly defined. Leave enough white space to improve readability and avoid clutter.
Visually-engaging reports, courtesy of cross-content (https://www.crosscontent.com.br/)

By incorporating these visual elements, evaluators can support the understanding, retention, and utilisation of evaluation findings, ultimately increasing the impact of evaluations and informing decision-making processes.
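To make the first and third points above tangible, here is a minimal Python sketch using matplotlib. The component names, survey figures, and output file name are invented purely for illustration; they are not drawn from any actual evaluation:

```python
import matplotlib.pyplot as plt

# Illustrative data only: share of respondents rating each programme
# component as "useful" in a hypothetical stakeholder survey.
components = ["Training", "Mentoring", "Grants", "Networking"]
useful_share = [0.82, 0.64, 0.91, 0.47]

fig, ax = plt.subplots(figsize=(7, 4))
bars = ax.bar(components, useful_share, color="steelblue")
bars[2].set_color("darkorange")  # visually emphasise the key finding

# Label every bar so readers can interpret values at a glance.
for bar, share in zip(bars, useful_share):
    ax.text(bar.get_x() + bar.get_width() / 2, share + 0.02,
            f"{share:.0%}", ha="center")

ax.set_ylim(0, 1)
ax.set_ylabel("Respondents rating component as useful")
ax.set_title("Grants were the most valued component (illustrative data)")
fig.tight_layout()
fig.savefig("key_finding.png", dpi=150)  # image to embed in the report
```

The same logic applies regardless of the charting tool: label the values, highlight the headline finding, and state the takeaway in the title rather than leaving readers to infer it.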

Final considerations

Evaluations are valuable tools for programme and process improvement. However, their true value is realised when the results are effectively utilised. By implementing the strategies discussed in this blog post, you can maximise the utility of your evaluations and drive the success of your initiatives.

Meet the team #1 – Laís Bertholino Faleiros

In Plan Eval’s “meet the team” series, we invite you to get to know more about the incredible people who make up our dynamic team.

Our first guest is Laís Bertholino Faleiros, our Project Manager. She holds a Master’s in Management of Public Organizations and Systems from the Federal University of São Carlos and a degree in Public Administration from São Paulo State University. She is a specialist in Cost-Benefit Analysis and Social Return on Investment (SROI).

Laís joined Plan Eval in 2017, initially as an administrative-financial manager. She later transitioned into the role of researcher and currently serves as a project manager. In this brief interview, she shares insights from her time at Plan, the highlights of her journey, and her expectations for the future.

  1. How did you start working at Plan Eval?

“I started my career at Plan in 2017 as an administrative-financial manager. I held this role for approximately a year and a half, handling administrative responsibilities. Then, an opportunity arose in the research area and I became a researcher. During this period, I worked on projects for clients such as C&A and Caixa. Those were my main experiences as a researcher.

Later, in 2019, I left Plan and worked as a consultant on the Coca-Cola Institute project, where I conducted a qualitative assessment of their Youth Collective Programme. In 2021, however, I returned to Plan Eval as a project manager. So, along this path, I went through different roles: starting as an administrative-financial manager, then becoming a researcher and, finally, assuming the position of project manager.

I noticed a significant transformation at Plan over time, to the point where I felt, when I returned in 2021, that I was joining a completely different company. This perception is due, in part, to the changes that occurred during the pandemic period. In addition, I was able to observe a significant increase in the company’s operations in Belgium.”

2. What is it like to be a Project Manager at Plan Eval? What do you like the most?

“The most important point, I believe, is that it [my work] motivates me. Working with public policy and being involved with many relevant organisations around the world that seek to create change in society is something I am passionate about and that drives me. The work of the United Nations and governments, together with public policies, provides the opportunity to act in projects and programmes that bring systemic changes to society. This possibility of comprehensive learning on different topics, without the need to become an expert in just one of them, is one of the fascinating characteristics of the evaluation field, to which I chose to dedicate myself.

Another important factor is being a project facilitator. Beyond management, I believe my role involves integrating different interests, coming from donors, funders, implementers and evaluators. Reconciling these interests is an exciting challenge, even if it is not always easy. People management is also part of this universe, adding even more complexity to everything we do.

However, being a project manager goes beyond that: it also involves understanding people’s needs, both in the evaluated organisations and among the evaluators, and I feel rewarded when I successfully complete this mission. Being able to lead and manage people, understanding their demands, is rewarding and gives my work an even greater purpose.”

3. What was your first project at Plan Eval?

“My first project at Plan was carried out in partnership with Instituto C&A, involving data collection for the evaluation of cotton farming projects supported by Instituto C&A and Porticus in Northeast Brazil. This project covered three different states: Piauí, Ceará and Pernambuco. As the research coordinator, I had the opportunity to travel to train the researchers. It was an extremely enriching experience, as in addition to being responsible for coordinating the research, I also actively participated in training the team and conducting the evaluation.”

Laís facilitating the training of enumerators in Ceará

4. What project do you consider to be the most meaningful to you?

“The PPCAAM project was an incredible experience for me, as we went through several evaluation stages. We had the opportunity to carry out comprehensive qualitative research in addition to quantitative research. We also worked on developing indicators and reconstructing the theory of change. We carried out knowledge-dissemination activities, training, and even a seminar. Now, we are about to publish a book about the project. I believe this project is remarkable, as it covered a wide range of activities that we master. It is gratifying to be able to accomplish so many things in a single project.

Another standout project for me is “Children on the Move”. It is a new project, just starting, that involves multiple countries in different regions. It is a very relevant and exciting project, and I look forward to providing meaningful evidence to our clients.”

Laís and the PPCAAM team in Brasília

5. What do you consider to be the best part of working at Plan Eval?

“At Plan, one of the things I appreciate most is the opportunity to engage in meaningful conversations with people, including consultants, co-workers and clients. In addition, being a global company gives us great freedom to work with different people and clients. I also believe that building relationships and networks is a fundamental aspect of our work. That ability to build relationships is something I really value from my experience at Plan.”

6. Finally, what are your expectations of the future?

“I hope that Plan Eval can increasingly spread its operations around the world and gain experience in ever more diverse contexts. I believe this will enrich our evaluative capacity, not only across different contexts, but also in working with more diverse organisations. In this way, we will be able to build more educational assessments, with greater quality, equity and diversity.”

Incorporating the gender perspective into evaluations of public policies and social and humanitarian programmes

Over the past three years, working as a gender studies specialist, I have been invited by Plan Eval to take part in several research projects and evaluations that incorporated the gender perspective into the analysis and assessment of social and humanitarian interventions. I have also taken part in many other evaluations that did not incorporate this perspective, even though it became evident during the analysis that it could have been incorporated, at least partially. The fact is that many social and/or humanitarian interventions make no explicit reference to gender, but this does not mean they lack a gender-differentiated impact on certain groups or localities, mainly because of the structural inequality between men and women that still persists in most societies. Allowing for regional and/or cultural specificities, it is women and girls who continue to suffer the greatest impacts in situations of vulnerability, such as severe economic crisis, violence and/or armed conflict. In this vein, according to Medina (2021):

“incorporating the gender perspective implies systematically considering the differences between women and men in the various spheres of policies or programmes, with the will to identify these inequalities and the factors that generate them, to make them visible, and to design and apply strategies to reduce them, thus moving towards their eradication. Likewise, it means approaching the study of social phenomena without assuming the universality of male experiences, and also questioning the sex-gender system and its implications”.[1]

© UNHCR/Patrick Brown

For the reflection I propose on this topic, I share some of the evaluations I carried out last year, in 2022, which could have emphasised the gender dimension from their conception but did not do so explicitly. The two evaluation projects I highlight were commissioned by the International Committee of the Red Cross (ICRC) in Brazil.

The first, conducted by a team assembled by Plan Eval, was an outcome evaluation of the Safer Access Programme (Programa Acesso Mais Seguro, AMS), which seeks to reduce and mitigate the consequences of armed violence on essential public services such as education, health and social assistance. The second, carried out independently, concerned the mapping of the communication needs of migrants and refugees, especially the Venezuelan population, assisted under the federal government’s Operação Acolhida in the state of Roraima. The two evaluations were completely different from each other in scope, structure and target audiences. In each, however, there was evidence that the interventions could have adopted a gender perspective, since it was women who were being impacted most negatively and unequally by the problems the organisation sought to mitigate.

© Photo: UNFPA Brasil/Pedro Sibahi

At the time of the evaluation, the AMS Programme worked in direct partnership with the Education, Health and Social Assistance departments[2] of four municipalities: Rio de Janeiro, Duque de Caxias, Fortaleza and Porto Alegre. The Programme, in turn, was carried out by frontline professionals in those three services: school principals, teachers, physicians, dentists, nurses, community health agents and social workers. Research shows that women make up the majority of public service workers overall, especially in direct service to users, yet remain a minority in senior leadership and decision-making positions[3]. Besides this “vertical” gender inequality, there is also “horizontal” inequality, seen in the uneven gender distribution across specific careers. These are the so-called “care positions”, in areas such as social assistance, education and health, which have historically been the worst paid, representing a wage gap between the genders[4].

This intervention, which aims, among other objectives, to keep essential public services running safely in highly vulnerable areas, came to affect the routine of hundreds of professionals, especially those who identify as women, and has the potential to affect the lives of many women beneficiaries/users who depend almost exclusively on schools staying open in order to be able to work. This statement is based on the accounts given by mothers and other caregivers during the focus groups held with the communities, in which 90% of participants were women who presented themselves as heads of household or as the main caregivers of their children[5].

In the evaluation of the communication needs of the migrant and refugee population in Roraima, there was from the outset a concern to map the needs of specific subgroups: indigenous people; people with disabilities; LGBTQIAP+ people; the elderly; and young women and/or single mothers. Even though the humanitarian organisations working alongside the federal government in Operação Acolhida acknowledge that most migrants and refugees entering through this border are men, it became evident throughout the evaluation that it was the women, within each of the specific groups mentioned, who had the greatest difficulty communicating and obtaining information about humanitarian and social assistance in the country, remaining dependent on partners, other family members or, in some cases, exclusively on external agents (as with single women with small children).

In both evaluations, what could be presented as a result was a contextualisation of the gender inequalities in each setting as part of the diagnosis, along with some specific recommendations and suggestions for making the gender perspective more present and explicit in the interventions proposed by the organisation. Nevertheless, other steps could have been taken earlier, and I strongly recommend them to the evaluators reading this whenever they encounter evaluations with irrefutable evidence of gender inequality in the context of the intervention at hand.

Practical guides on evaluation with a gender perspective, such as the one published by the Catalan Institute for the Evaluation of Public Policies[6] or those produced by UN Women[7], can be of great value in supporting this task. Below, I briefly point out some of the guidance from these guides to highlight what can be followed so that an evaluation takes the gender perspective into account, at a minimum.

Initial questions

It is useful to raise some initial questions, whether for the evaluability assessment of a programme or policy, or for the preliminary analysis carried out in the first stage of an evaluation in any field of study. Such questions can help to understand and classify an intervention’s gender focus on the basis of the available information. They can be along these lines:

  • Does the programme collect baseline data against which results can be compared? Are these data disaggregated by sex?
  • Do the data include information on other social markers, such as ethnicity/race, economic resources, age and education?
  • Is information available on how women and men respond to and value the intervention?
  • Does the programme have specific gender-inequality indicators, and are they monitored?
  • During implementation, were the profiles of the programme’s beneficiaries monitored?

Evaluation questions

Gender-focused evaluation questions are equally important and should be part of the evaluation’s methodological planning phase. Medina (2021) considers it useful to select concrete evaluation questions designed to understand gender differences, going beyond merely noting differences between women and men, for example:

• Are there gender norms, practices or stereotypes related to the factors the intervention is trying to change? What are they? How do they affect women and men?

• Does the intervention aim to, or succeed in, changing these norms, practices or stereotypes? In what way?

• Are there different profiles of women and men among the programme’s users? Does the effect of the intervention differ across these profiles?

• Is any participant profile (especially women and LGBTQIAP+ people) under-represented within the beneficiary population? Which one? Why?

• Is any participant profile (especially women and LGBTQIAP+ people) under-represented in any of the policy’s or programme’s activities? Which one? Why?

• Do beneficiaries experience their participation in the programme differently depending on their gender? Why?

Execution and analysis

Throughout the evaluation process, including data collection and analysis, the gender perspective can be incorporated in very practical ways, through actions such as:

• Seeking samples that are representative of the male and female genders;

• Incorporating the voices of women and of feminist or identity-affirming organisations whenever possible;

• Disaggregating data and carrying out differential analyses, including other social markers whenever possible (see the sketch after this list);

• Analysing the implications of the policy or programme in terms of gender, taking intersectionality with other markers into account whenever possible;

• Defining gender-specific recommendations for the intervention;

• Ensuring inclusive and neutral language in the drafting of reports.
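To make the disaggregation point concrete, here is a minimal Python/pandas sketch. The file name and column names (beneficiaries.csv, sex, race, outcome_score) are hypothetical, invented for illustration; they are not drawn from either of the evaluations discussed above:

```python
import pandas as pd

# Hypothetical survey data: file and column names are illustrative only.
df = pd.read_csv("beneficiaries.csv")  # columns: sex, race, age, outcome_score

# Sex-disaggregated summary of a key outcome indicator.
by_sex = df.groupby("sex")["outcome_score"].agg(["count", "mean", "std"])
print(by_sex)

# Intersectional view: cross sex with another social marker (here, race).
by_sex_race = df.groupby(["sex", "race"])["outcome_score"].mean().unstack()
print(by_sex_race)
```

The point is less the tool than the habit: every headline indicator can be reported by sex, and, where samples allow, crossed with at least one other social marker.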

Finally, the evaluation of social and humanitarian policies and programmes, as a scientific exercise in gathering evidence, carries a strong learning component that can (and should!) generate new knowledge and practices. It should therefore also be seen as important support for promoting broader and deeper change in organisations and institutions, as well as in society at large. Incorporating the gender perspective, along with other social markers, into evaluations is part of this learning process for everyone involved; it requires patience, it is true, but it can no longer be postponed.


[1] MEDINA, Júlia de Quintana. Guía práctica 18: La perspectiva de género en la evaluación de políticas públicas. Instituto Catalán de Evaluación de Políticas Públicas (Ivàlua), 2021, p. 21 (free translation).

[2] Only in Fortaleza did the ICRC have partnerships with all three departments at the time of the evaluation. In Porto Alegre and Duque de Caxias, the partnerships were with the Education and Health departments; in Rio de Janeiro, with Education only.

[3] According to the 2022 Continuous National Household Sample Survey (PNAD Contínua) of the Brazilian Institute of Geography and Statistics (IBGE), women account for 57% of public sector professionals and men for 43%. Among directors and managers, however, women represent 39% and men 61%.

[4] HIRATA, Helena; KERGOAT, Danièle. Novas configurações da divisão sexual do trabalho. Cadernos de Pesquisa, 37, 595-609, 2007.

[5] According to 2022 data from Dieese, of Brazil’s 12.7 million single-parent families with children, 87% are headed by women and 13% by men. In other family arrangements the difference is not as large: 51% of families are headed by women. Of the 11 million single mothers who head families, 62% are Black. Within this subgroup, 25% work in domestic services, 17% in education, human health and social services, and 15% in commerce. Among non-Black women the proportion is practically the reverse: 22% work in education, human health and social services, 17% in commerce and 16% in domestic services (Special Bulletin of 8 March – Dieese, based on IBGE PnadC data, 2022).

[6] MEDINA (2021).

[7] The UN Women guides can be found on the agency’s official website: Guía de evaluación de programas y proyectos con perspectiva de género, derechos humanos e interculturalidad (2014); Manual de evaluación de ONU Mujeres: Cómo gestionar evaluaciones con enfoque de género (2015).

Use and influence of evaluations: UNICEF Brazil Country Programme

Utilisation-focused evaluation (UFE) is an approach that emphasises the importance of engaging with stakeholders throughout the evaluation process to ensure that findings are relevant, useful, and actionable. It focuses on providing feedback and insights that lead to more effective decision-making, ultimately improving the success of the project, programme, or policy being evaluated. This approach is aligned with our mission to analyse the performance of institutions to help them amplify, improve, and sustain societal benefits.

With that approach in mind, Plan Eval, in partnership with Action Against Hunger, conducted the evaluation of UNICEF Brazil’s Country Programme 2017–2021. The evaluation’s objectives were to provide accountability for the work performed in the period under analysis and to serve as a learning tool informing the upcoming programme cycle. To achieve these objectives, the evaluators applied a utilisation-focused and participatory approach, engaging with stakeholders from the inception to the reporting stages and going over each evaluation question to make sure it served a practical purpose.

The team held weekly check-in meetings during the research phase to report on the progress of data collection, and likewise during the analytic phase to discuss preliminary findings. Come the reporting stage, the conclusions were presented in a series of online discussions involving UNICEF Brazil’s programme officers, whose criticism was essential for the evaluation team to home in on the relevance of the findings. The final round of validation consisted of participatory SWOT seminars, in which everyone involved in the management response to the evaluation had the opportunity to rank the recommendations by implementation priority and likely impact.
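As a toy illustration of how such a ranking exercise can be tallied (the recommendation labels and scores below are invented, not taken from the UNICEF evaluation), seminar votes could be aggregated along these lines:

```python
import pandas as pd

# Hypothetical seminar votes: each participant rated each recommendation
# from 1 to 5 on implementation priority and likely impact.
votes = pd.DataFrame([
    {"recommendation": "R1: Simplify reporting", "priority": 5, "impact": 4},
    {"recommendation": "R1: Simplify reporting", "priority": 4, "impact": 4},
    {"recommendation": "R2: Expand partnerships", "priority": 3, "impact": 5},
    {"recommendation": "R2: Expand partnerships", "priority": 2, "impact": 4},
])

# Average the scores and sort to surface the consensus ranking.
ranking = (votes.groupby("recommendation")[["priority", "impact"]]
                .mean()
                .sort_values(["priority", "impact"], ascending=False))
print(ranking)
```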

UNICEF presence in Brazil

The Country Programme Evaluation (CPE) is a mandatory assessment conducted by UNICEF Country Offices every two programme cycles (i.e., every ten years) and is among the most complex, as it covers all programmatic areas and operations. Evaluating such a broad range of activities and outcomes across different sectors is challenging to manage, since it involves multiple stakeholders such as government partners, civil society organisations, and other UN agencies. Managing the input and feedback from these different groups required integrating the data into a dynamic evidence matrix organised by evaluation question and intended purpose.
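The article does not specify the tooling behind that evidence matrix; purely as a sketch of the general idea, evidence rows could be organised by question and intended use along these lines (all names and entries hypothetical):

```python
import pandas as pd

# Hypothetical evidence matrix: each row links a piece of evidence to the
# evaluation question it informs, its source, and its intended use.
evidence = pd.DataFrame([
    {"question": "EQ1: Relevance", "source": "Government partner interview",
     "finding": "Priorities match the state plan", "intended_use": "CPD design"},
    {"question": "EQ1: Relevance", "source": "CSO focus group",
     "finding": "Migrant needs partially covered", "intended_use": "Emergency response"},
    {"question": "EQ4: Sustainability", "source": "UN agency survey",
     "finding": "Co-funding secured for some outputs", "intended_use": "Accountability"},
])

# Pivot by question and intended use to see how much evidence backs each one.
coverage = evidence.groupby(["question", "intended_use"]).size()
print(coverage)
```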

© UNICEF/BRZ/Inaê Brandão

Boris Diechtiareff, Monitoring and Evaluation Specialist at the Brazil Country Office (BCO), highlighted the usability and influence of the evaluation’s findings and recommendations. According to him, “the findings not only focused on the mandatory aspects but also saw the necessity and the benefits of doing the exercise to help design the new country program”. The evaluation was “shared and used widely by different teams and stakeholders, including the Country Management Team, the UNICEF Latin America and Caribbean Regional Office, the Brazilian Cooperation Agency and the Brazilian Government”.

The evaluation report’s findings and recommendations, in addition to informing the Country Programme Document (CPD), also served as a learning tool to improve the response to the Venezuelan migrant emergency in Brazil.

Results from Municipal Diagnoses of the Situation of Children and Adolescents

Plan Eval brings an evaluative perspective to all the work it carries out. That perspective consists of first understanding the policy priorities and then seeking data on social reality, either to support and guide those priorities or to reformulate them in light of the reality observed. The company also applies this dialogical approach to projects initially framed as “diagnoses” or “surveys” but which in practice take the shape of evaluations, as is the case of the Municipal Diagnoses of the Situation of Children and Adolescents (Diagnósticos Municipais da Situação da Infância e da Adolescência, DMSIA).

These diagnoses stem from a national directive of the National Council for the Rights of Children and Adolescents (CONANDA) requiring municipalities to produce a single document with information on the situation of children and adolescents in their territory, on the basis of which public authorities are to develop their actions. To understand how a Diagnosis is built, we spoke with Liora Mandelbaum Gandelman, the Plan Eval consultant who leads projects in this area. Liora is a social scientist and has been working with DMSIAs since 2016.

Liora Mandelbaum Gandelman

According to Liora, the initial idea was for the DMSIAs to serve as a starting point for identifying the “challenges and strengths” of children and adolescents in the municipality, and for a Municipal Plan for Children and Adolescents to be developed from them, which would qualify the municipality to apply for additional funds to implement policies in the area. At the national level, a study was produced that served as the benchmark for what was expected of the municipal diagnoses. That national diagnosis responded to the Statute of the Child and Adolescent (ECA) and to the United Nations Millennium Development Goals.

In practice, the motivations for carrying out diagnoses are very diverse. Beyond qualifying to apply for federal and state funds, municipal governments have also shown a spontaneous desire to make the use of resources more effective and to adopt more objective criteria for transferring funds to partner organisations. Another common motivation for carrying out DMSIAs is the need to respond to requests from oversight bodies, such as the Public Prosecutor’s Office, to remedy situations in which rights are violated, such as child labour and precarious shelters.

Our way of working

With this diversity of purposes in mind, Plan Eval developed an approach that steers the production of information by the criterion of usefulness. According to the consultant, this approach starts with conversations with municipal secretaries and the directors of bodies dealing with children and adolescents, to understand their practical needs and thus orient the research towards concrete problems. The team also interviews technical staff working both in the departments and in direct-service facilities, including those run by social organisations, in order to understand how the front line of services can be improved[1]. This first-hand information is then contrasted with secondary sources, such as administrative and population data, as well as discussions with families.

“We spend a good amount of time in a municipality, conduct all these interviews, and also collect the data the municipality itself holds on children and adolescents, for example from the Social Assistance Service Centre and the Health and Education departments, while in parallel conducting the interviews with technical managers. With that, we can identify the municipality’s strengths and challenges in several areas, for example health, education, culture, and access to leisure, sport and housing, always keeping in mind the intersection of these policies with the needs of children and young people.”

It is important to stress, Liora says, that Plan Eval always works in a participatory manner with the Municipal Councils for the Rights of Children and Adolescents (CMDCA), jointly building the instruments and sharing the findings throughout the process. The Council is a deliberative body, responsible for approving the plan and for overseeing the municipal government’s performance in this area. What sets Plan Eval’s work apart, according to Liora, is the effort to involve the Council at every stage of the DMSIA’s preparation (and not only at final approval), in order to guarantee the relevance and legitimacy of the diagnosis.

This way of working has borne much fruit in practice. The Municipality of Jundiaí (SP), for example, asked Plan Eval to assist in building the Ten-Year Municipal Plan for the Human Rights of Children and Adolescents of Jundiaí following the diagnosis. The plan was also built in a participatory manner, with workshops and the setting of short- and medium-term targets for all policies related to children and adolescents.

An exemplary experience

Liora reports that Jundiaí, a populous municipality that is very well structured in terms of public facilities, posed new challenges for the preparation of the DMSIA. One example is the facilities where socio-educational measures are served. The municipality hosts a semi-liberty unit of the Fundação Centro de Atendimento Socioeducativo ao Adolescente (Fundação CASA), which functions as a service centre not only for Jundiaí but also for the surrounding cities. The Plan Eval team spent three days inside the Fundação in order to design relevant quality and service indicators. Another challenge was the number of facilities: the municipality has six Social Assistance Reference Centres (CRAS) and two Specialised Social Assistance Reference Centres (CREAS). Several rounds of discussion groups were needed to hear technical staff from all these institutions.

According to the consultant, Jundiaí is a municipality that seeks excellence in policies for children and adolescents, meeting all the requirements set out in law. The Municipal Council also gave strong support to the Diagnosis, acting as an effective go-between with the other municipal departments. She concludes by noting that the municipal administration had a very strong interest in seeing the Diagnosis carried out. The presence of these elements (the pursuit of excellence, the Council’s involvement and broad listening) favoured a precise diagnosis with immediate application, as indeed happened.

The results

Jundiaí is today a reference municipality in early childhood. The Diagnosis’s conclusions helped build more efficient public policies and investments in leisure and cultural facilities, in addition to the creation of a website (“Cidade das Crianças”) to monitor the development of children and young people.

The Diagnosis also supported the “creation of goals and objectives to guide the application of public resources that effectively guarantee the rights of children and adolescents”[2]. One example was the construction of one of the largest public parks in the State of São Paulo, the Mundo das Crianças[3]: the park is free to access and is designed to integrate play, learning and contact with nature. The Park is also inclusive, guaranteeing universal access to all its structures so that children of all ability levels can enjoy them.

To learn more, visit https://cidadedascriancas.jundiai.sp.gov.br/


[1] For more information, see this post on Plan Eval’s blog: https://www.plan-eval.com/blog/?p=962

[2] https://cmdca.jundiai.sp.gov.br/2017/12/hhh/

[3] https://jundiai.sp.gov.br/noticias/2020/12/14/com-conceito-inedito-mundo-das-criancas-e-apresentado-no-aniversario-da-cidade/