Clement Bisai from CARE Malawi talks about what he and his team are learning about how to do better remote data collection. His key takeaways: focus, listen to communities, and reflect regularly. Don't expect to outsource everything. Digital remote data collection may be the best way to work during COVID-19, and we're already learning how to do it better.
Melch Natukunda from CARE Uganda talks about trying to build the first-ever financial services app linking poor rural women to banks. The biggest lesson? "It’s not just financial services. Anything we do should be trying to lighten women’s burden and help her with the other challenges she’s dealing with.” It's also about remembering that, "at a bank, someone is looking at this project and saying, 'is this giving me profit?' That will never happen in 6 months." You need at least 5 years to build something that will really work, but once you've got it, it can work for millions of people.
Iain Dickson from Birdlife talks about his project, "Embracing Failures," which is developing a failure taxonomy that helps conservation organizations learn about the underlying causes of failure. What can we do to get better at learning from failure? "Just live it," says Iain. "We think about it as a complex overarching problem, but many of the solutions are simple." One key finding: there's an appetite to talk about failure, but it works best as a conversation, not a form-filling exercise.
Kylie Hutchinson, independent evaluator and author of three books about evaluation and program planning (Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned; A Short Primer on Innovative Evaluation Reporting; and Survive and Thrive: Three Steps to Securing Your Program’s Sustainability), talks about how to learn from common evaluation failures to improve impact and social justice. Her two tips: engage stakeholders more effectively and understand context when you're doing an evaluation. Here are three questions to ask yourself before launching an evaluation: 1) What are we trying to learn? 2) What are we going to do about the answers? 3) When do we need to know?
"You need to design for real people, not for experts." "Be ruthless with what you really need, and what's just nice to have" Isadora Quay from CARE's Gender and Emergencies work discusses CARE's Gender Marker, and all of the attempts it took to get to a tool that would actually work for the organization, not just the experts. It's about building tools that can turn everyone into a gender champion, and not tools that contain everything. The other secret? Design on a napkin!
Anne Sprinkel and Dipendra Sharma from CARE's Tipping Point project talk about the challenges of implementing randomized controlled trials (RCTs), and the risk of sacrificing communities' needs to the methodological rigor that researchers demand. "Make sure you have a good reason for doing an RCT," says Sprinkel. Sharma adds, "Start with good programming, then build research around it." They also have some great tips on managing expectations, communicating clearly, and just how long it takes to do it right (hint: a lot longer than you think).
Isadora Quay talks about the process of developing CARE's Rapid Gender Analysis, and how embracing imperfection is key to saving lives. When we want everything to be perfect, we often delay or prevent sharing any information at all, which can be catastrophic in humanitarian (and development) settings. Key lessons to take forward: make tools useful for a broader range of people, focus on practical, tangible suggestions, and present results in plain language for non-experts. "Act fast, there's a huge need for real information in real time." Isadora argues that failure is inevitable, so rather than trying to prevent it, we need to learn to manage it and learn from it.
In our first-ever Francophone episode, Fanomezantsoa Randrianarisoa from CARE Madagascar talks about what happens when you launch an experimental monitoring system before partners are ready for it. Beyond the challenges of getting the data you need, there are serious risks to sustainability. Some of the solutions: invest in people's skills, create back-up plans, and align with global systems.
Ian Lathrop from USAID’s LEARN project talks about how to practice the art of humility and learn from failures so we don’t repeat them. After-action reviews, pause-and-reflect sessions, and having leaders model the behavior are all practical actions he suggests for getting better at this. Among the resources he recommends for creating space to learn from failure: USAID Learning Lab's CLA Maturity Tool Resources, the video "Community Connector and CLA: Proving the Concept," and Learning Lab’s failure blog.