CONTRIBUTION TRACING VLOG SERIES: Designing Data Collection

Vlog 5 of 5: Designing Data Collection

This is the final Vlog in the Contribution Tracing series. Samuel Boateng explains how Contribution Tracing uses probabilities to focus on collecting data with the highest probative value, making the best use of limited resources for impact evaluation.

CONTRIBUTION TRACING VLOG SERIES: Understanding Process Tracing tests

Vlog 4 of 5: Understanding Process Tracing Tests

In the penultimate video of the series, Michael Tettey provides brief explanations of the four tests found in Process Tracing: the Hoop, Smoking Gun, Doubly Decisive, and Straw in the Wind tests.

If you have missed any of the previous vlogs in the series, you can check them out here:

  1. What is Contribution Tracing?
  2. How do you develop a testable contribution claim?
  3. Unpacking your causal mechanism

How to avoid 'toolsplaining': thinking differently about social accountability

Guest blog by Tom Aston

On the plane to Accra just over a week ago I read Rebecca Solnit’s Men Explain Things to Me (the origin of the term “mansplaining”), and it struck a chord with me. A colleague from Kenya who hadn’t heard the term before asked if there was such a thing as “white-splaining”. And, indeed, there is. But, recently, I’ve been concerned with another phenomenon: “toolsplaining”.


“Toolsplaining” is, as far as I can see, the phenomenon where we over-explain how clever a particular tool is, but forget to explain how (in reality) it interacts with context, and works together with other strategies and processes to achieve change. We often assume, usually due to lack of information (or lack of careful investigation), that whatever tool we used must explain the change – that this tool (the scorecard) or that tool (the citizens’ charter), for example, was the cause of the change.

In practice, especially for social processes, how that change happens is generally more complicated, and nuanced. There are typically multiple causal pathways even within our own strategy that work together to influence change (and of course, various complementary factors driven by other actors). And it’s often the informal micro-politics that matters, rather than your formal process.

So, we need to think differently.

How is our intervention contributing to change?

I was in Accra to support the five-year USAID-funded Ghana Strengthening Accountability Mechanisms (GSAM) project which aims to “strengthen oversight of capital development projects to improve local government transparency, accountability and performance.” In particular, what CARE wants to understand better is how our intervention is contributing to district assemblies’ responsiveness to citizens’ concerns in relation to the planning and implementation of capital investment projects.

We used contribution tracing to define a hypothesis and identified causal pathways for how change happened, rather than merely what the log frame says, or what a process map suggests ought to happen. To do this, the team looked at the process that was designed (see the graphic below), but then traced back real changes (e.g. district assemblies replacing inadequate building materials) as a causal chain.

Scorecards formally hinge on a public meeting (an interface meeting). But, on various occasions, we believed that changes had been triggered even before we’d held a public meeting (6 or 13 in the graphic above), but after we’d conducted site visits to monitor the quality of infrastructure (2). We’d established District Steering Committees composed of government actors, community leaders, and engineers (invisible between 1b. and 2b.) which were seemingly able to resolve some (but not all) problems without district town hall meetings, or even scorecard interface meetings.

Tracing the real process has therefore helped us think again about how, when, and where we really might have influenced change.

Inter-related pathways to change

Rather than a single pathway of information sharing, or knowledge transfer, it was clear we had at least four inter-related change pathways for social accountability:

  1. providing financing to civil society organisations, who prepared a district scorecard to get district assembly members to respond;
  2. getting district assembly members to release data and participate in the process;
  3. supporting citizens to monitor priority infrastructure projects and present their findings to authorities; and
  4. creating new spaces for dialogue between citizens and district assemblies about capital projects.

The team will now go out to find evidence to support their claim about how their strategies influenced change. But I just wanted to underline some of the learning:

  • Define terms (e.g. transparency, accountability, responsiveness) precisely, so you know what change you’re actually going to measure and what data is relevant to your hypothesis.
  • Interrogate your assumptions periodically. Allow different staff members to challenge your logic. Don’t just rely on proposal writers or project managers.
  • Don’t bundle everything together. Or else, how will you understand the relationship between different components of your hypothesis?
  • Make sure your hypothesis is in order. Remember, logical steps follow chronologically.
  • Don’t toolsplain. Don’t get distracted by your hypothetical process maps or steps in your tools: in other words, consider the evidence, not what you hope your tool influenced.

CONTRIBUTION TRACING VLOG SERIES: Unpacking your causal mechanism

VLOG 3 OF 5: UNPACKING YOUR CAUSAL MECHANISM

This is the third Vlog in the Contribution Tracing series. Don't worry if you missed the other Vlogs, but you might want to watch them first. Check out the first Vlog on 'What is Contribution Tracing?' and the second Vlog on 'Developing a testable contribution claim'. 

In this week's edition, Francisca Agyekum-Boateng delves into the topic of 'unpacking your causal mechanism'. Francisca explains what a causal mechanism is and how to develop one clearly, based on a specific claim.


CONTRIBUTION TRACING VLOG SERIES: How do you develop a testable contribution claim?

Vlog 2 of 5: Developing a testable contribution claim

Welcome to the second Vlog in the Contribution Tracing series. In the first Vlog, presented by Mohammed Nurudeen, we introduced you to Contribution Tracing - a new approach to impact evaluation. If you haven't already seen this Vlog you can watch it here.

An initial step in Contribution Tracing is to clearly articulate your claim. What precisely is your project, programme or campaign claiming? For example, are you claiming that your advocacy campaign has contributed to policy change?

In this video, Sharif Yunus Abu-Bakar outlines some of the key features of developing a testable contribution claim. Enjoy!

CONTRIBUTION TRACING VLOG SERIES: What is Contribution Tracing?

VLOG 1 of 5: What is Contribution Tracing?

Welcome to the first Vlog in a new series focusing on Contribution Tracing - a new approach to impact evaluation.

Over the coming weeks, we will publish 5 short videos that seek to introduce you to Contribution Tracing and some of its key aspects. All of the videos are presented by staff from CARE International in Ghana. You can read more about Pamoja's learning partnership with CARE here.

In this first video Mohammed Nurudeen Salifu answers the question: what is Contribution Tracing?

Capturing complex change: is it really all about confidence?

Guest blog by Tom Aston

For those of us that sing the praises of social accountability (citizen-driven initiatives to hold those in power to account), making a claim about “impact” (or transformative change) is a challenge we face on a daily basis. And CARE’s not alone. The title of the first session at an NGO political economy analysis working group at which I’m presenting this week (“Building the Evidence-base for Social Accountability”) speaks to the same concern.

Some argue that we need more longitudinal studies. Others advocate the use of Randomised Control Trials (RCTs). And a recent study CARE conducted on the influence of Community Score Cards on reproductive health-related outcomes in Malawi shows that RCTs have a place, and do demonstrate that social accountability makes a difference.

But, once you consider that outcomes are behavioural changes of real (complicated) people, you quickly see, as Marina Apgar recently suggested, why we need to move “beyond measurement of linear pre-defined change and intervention-effect alone and [use] mixed-methods to help us understand emergent complex social change.” Social accountability outcomes (such as mayors changing budgets to benefit poorer areas, or even procuring a new ambulance) don’t fit neatly into boxes. They rely on our capacity to influence behaviour, and this is behaviour we can’t (fully) control. So, we need to better explain HOW change happened, not merely to assert that it did. 

Recognising this has led CARE to explore various theory-based methods such as Most Significant Change and Outcome Mapping. With a particular emphasis on the change process, we are now piloting Contribution Tracing with Pamoja Evaluation Services in Ghana and Bangladesh to help us better understand CARE’s contribution to social accountability outcomes.

Contribution Tracing is all about increasing your confidence in making claims about impact. Essentially, you make a “claim” about your intervention’s role in achieving an outcome that really happened (your contribution), and then find evidence to defend your claim.

To do this, like other theory-based methods, you need a hypothesis (a proposed explanation) of how you think change happened. You then review the connections between the different steps (or components) in that process.

You identify evidence that would help support (or undermine) your proposed explanation using the four tests of Process Tracing (Straw in the Wind, Hoop, Smoking Gun, Doubly Decisive).

What matters is not how much evidence you have, but how good that evidence is at confirming that each part of your proposed explanation really exists (its “probative value”).

In Contribution Tracing, you use Bayesian (Confidence) Updating to assign a probability (how likely it is) that the various components of your contribution claim exist, and ultimately whether your claim holds true. You then update your confidence after gathering data precisely tailored to your claim (increasing or decreasing the probability using the four tests), compare this against rival explanations, and then put it up for “trial”, inviting others to peer review your claim.
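For readers who want to see the updating step in action, here is a minimal sketch in Python. The numbers are hypothetical (not from any CARE project): it applies Bayes’ rule to revise confidence in a claim after observing one item of evidence, where “sensitivity” is the chance of seeing the evidence if the claim is true and “type I error” is the chance of seeing it if the claim is false.

```python
def update_confidence(prior, sensitivity, type1_error):
    """Bayes' rule: posterior probability that the claim is true
    after observing evidence E.

    prior        = P(claim true) before seeing E
    sensitivity  = P(E | claim true)
    type1_error  = P(E | claim false)
    """
    numerator = prior * sensitivity
    denominator = numerator + (1 - prior) * type1_error
    return numerator / denominator

# Hypothetical "Smoking Gun" item: rarely observed if the claim is
# false (low type I error), so finding it sharply raises confidence,
# while not finding it only weakly lowers it.
prior = 0.5
posterior = update_confidence(prior, sensitivity=0.4, type1_error=0.05)
print(round(posterior, 2))  # 0.89
```

Running the same function over several evidence items, feeding each posterior in as the next prior, is the “updating” that Contribution Tracing repeats as data comes in.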

We’re right at the beginning of the journey, but to me, what our learning already suggests is that:

  • You can show your contribution, even when change processes are complex;
  • You can make credible impact claims, without a counterfactual;
  • You can tighten up your loose theory of change as you go along; and
  • You may not need to gather as much data as you think you do to prove it.

But don’t take my word for it; listen to some reflections from staff on the experience so far. And watch this space for more to come.

A new learning partnership for inclusive governance

CARE International recently entered into an exciting new learning partnership with Pamoja Evaluation Services, known as the Halcrow Project. The purpose of the learning partnership is to build on CARE’s substantial expertise in inclusive governance programming by better capturing its effects through a strengthened approach to monitoring and evaluation. The partnership will therefore seek to produce credible and rigorous evaluations that speak directly to CARE’s contribution to observed changes in contexts where inclusive governance work is taking place.

To help CARE realise its ambition to better capture the effects and contribution of its work in inclusive governance, Pamoja is supporting CARE to apply the cutting-edge evaluation approach of Contribution Tracing as an organising framework. Two of CARE's country offices in Ghana and Bangladesh are taking part in the project, due to the extensive inclusive governance work they deliver. This will provide us with an ideal learning opportunity for an evaluation that applies Contribution Tracing.

Debates in the evaluation practitioner and academic spaces, and our own in-depth literature review, strongly indicate that randomised approaches are not, by themselves, a solution to the challenges of measuring inclusive governance. While many positive outcomes have been measured for inclusive governance work, there is very little documentation of impact. We therefore face an evidence challenge, and with it an opportunity to test innovative approaches such as Contribution Tracing to strengthen the knowledge base and fill our evidence gaps around the impact of inclusive governance.

Contribution Tracing is based on the principles of Process Tracing and Bayesian (Confidence) Updating. The approach offers practitioners clear guidance on what data to collect. It has been designed to support the formulation and validation of a ‘contribution claim’ about the role an intervention played in realising outcomes of interest. It then measures how much particular items of evidence increase or decrease confidence in a specific contribution claim. This process will support CARE’s inclusive governance programmes to design a system that focuses on gathering the ‘right’ data, thereby using monitoring and evaluation resources more efficiently.

Contribution Tracing forces evaluators to think – in great detail – about how and why a particular change has come about as a result of a project. Through this evaluative pilot process, we will identify what works and where gaps exist in the way the team has been monitoring and documenting change.

Under this exciting new initiative, a group of staff from Ghana and Bangladesh are working collaboratively with Pamoja until the end of this year. By challenging practices and assumptions, and using new ways of thinking about evidence - offered by Contribution Tracing - we aim to demonstrate CARE's unique contribution to transformation in the inclusive governance space. We are confident this approach will help CARE to unpack the nature of social accountability in this context and better articulate the role that CARE and its partners have played in delivering impact to citizens.

Advocacy evaluation: art of the possible

by Gavin Stedman-Bryce

I was asked by Médecins Sans Frontières (MSF) to attend a staff conference in Geneva earlier this year. I had been asked to speak about the importance of monitoring and evaluation of advocacy - a real passion of mine. Alas my diary was full the day of the conference but thankfully the organisers allowed me to attend via video.

This short film, featuring Tom Aston, a Governance Advisor from CARE International UK, touches on why advocacy M&E is not only increasingly important, but possible thanks to a range of innovative methods and tools.

An evaluator's journey

I started calling myself an evaluator in 2008 when I established Pamoja; but for years after, during long flights, I would ponder how I ended up becoming an evaluator. On one such flight, pondering whether my in-flight meal was fit for human consumption, I finally found my answer. Tanzanian teachers!

As soon as I graduated, I leapt at the chance to volunteer in East Africa with VSO. When I first landed in rural Tanzania, I was only 23 years old, straight out of Glasgow’s East End. My naivety at how the world worked was at PhD levels of genius.

I recall getting really angry as I looked around at my new home, a place called Singida - a small town in the middle of Tanzania, days from anywhere. I was angry because nothing seemed to work at the school where I was volunteering as an A-Level biology teacher.

When I say nothing seemed to work, I am referring to the teachers, who were never in class. I seemed to be the only teacher who would turn up. Where were my colleagues, and why were they not taking their duties as teachers seriously? Did they not care about the students?

This was my first hard lesson from mother Africa: everything has a reason. I learnt early in my career that if you ask the ‘right’ questions of the ‘right’ people, in the ‘right’ way - and then listen carefully - an answer will usually present itself. In the case of my fellow teachers, it was that they were busy working in their fields to ensure a bumper maize crop to bring much-needed money into their households. I later learned that the Ministry of Education hadn’t paid teachers’ salaries in almost 12 months. That’s why they weren’t in school teaching their students.

I had found out it wasn’t laziness or greed; it was need. This memory reminded me of why I call myself an evaluator. I am a naturally curious person, always thinking about how things could be improved. For a time, I thought of myself as too critical. But I now believe that pessimists make great evaluators, and I am definitely in the glass-half-empty camp. At least that’s what I tell myself.

Pessimism aside, I firmly believe that as humans, we can always get better at what we do, if we take time to learn about what works (or doesn’t work), and share our knowledge.

Yes, I still get angry at the injustices in the world but as an evaluator, by inspiring others to learn and improve, I have found my unique way of turning frustration into positive action.

I am passionate about learning and improving and I want to inspire others to learn and improve too. It’s been a long time since I’ve been on the frontline in a development sense, but as an evaluator, I make my small contribution through inspiring good change. And I owe it all to some absentee biology teachers from Tanzania.   


What's your story? We'd love to hear about how you became an evaluator. Send your stories to info@pamoja.uk.com.