Trekking the world, one mountain at a time ¦ July update


This month’s prep for the Rockies trek has been less dynamic than the past two months. It has not been a month of furious extremes; rather it has been a month of consolidation, of steady effort: clocking up 46 miles of training by doing lots of shorter walks, 2-3 miles and a number of 10ks, in the city and around where I live on the west coast of Scotland. 

The landscapes traversed have therefore been familiar. If familiarity breeds contempt, I have to admit to a certain amount of contempt being evident in my reflections over the period. Perhaps contempt is too strong, perhaps annoyance is more accurate - whatever it is, it is a subtle aggravation, and not altogether as negative as either ‘contempt’ or ‘annoyance’ would imply. Post trauma, and in perpetual grief mode, one loses a sense of normality, likewise one loses the capacity to relate to things, especially to things familiar. Therefore, feeling neither contempt nor even annoyance would be possible . . . for even basic feelings struggle to pass through the grief veil. To re-discover those simple and basic feelings, those perfectly natural feelings of irritation and boredom, annoyance, contempt, is a revelation . . . like the dawn appearing, slow and sallow upon a forgotten horizon after a long night unslept.

I've rather enjoyed this month, therefore, retracing familiar steps, rediscovering familiar feelings . . . and it's not been entirely uneventful either: the landscape of Scotland never fails to offer up something of worthy and memorable occasion. The most stunning rainbow appeared one day, and it stayed, sharp and clear for some considerable time . . . I think it knew it needed to be photographed! 

It's also been a month of keeping busy, on other fronts; getting organised for the trek, checking passports and visas and getting other bits and pieces of paperwork completed. (Now, as for form filling - my contempt for that process has not diminished, not one bit!) But the necessary evils have been completed, and likewise future goods: I officially signed up for my next trek, in the Grand Canyon, in October of next year, 2018. 

And I was invited to complete a form of considerably more engaging content: Discover Adventure, the company with whom I book my treks, asked me to do an interview about my trekking experiences. The interview is available here.

Certain questions provoked reflections upon things I hadn't considered, not least: what keeps bringing me back for more trekking? There are many reasons; collectively they've effected the consequence: I think I've caught the trekking bug . . . if I ever needed any evidence, this month I had to have my walking boots re-soled; the rather shocked expression upon the face of the gentleman who took them in, when I told him I’d only had the boots for a little over a year, I’ll take as a compliment. I think, by his expletives, that he was impressed.

So: Roll on the Rockies, one month to go before departure, and then roll on the Grand Canyon, then the Lava Trail . . . 

Thank you for reading,


Evaluator Seeks Sensitive Data

Avoiding the data trap blog series

'Avoiding the Data Trap’ is a 3-part blog series developed by Pamoja to highlight a new approach to impact evaluation, called Contribution Tracing. The blog series explains key steps in Contribution Tracing that can guide evaluators, and those commissioning evaluations, to avoid common data traps by identifying and gathering only the strongest data. The blog series draws from a live case study of a Contribution Tracing pilot evaluation of Ghana’s Strengthening Accountability Mechanisms (GSAM) project. This pilot forms part of a learning partnership called the Capturing Complex Change Project, between Pamoja, CARE UK International and the CARE Ghana and Bangladesh Country Offices.

Part 2: Evaluator Seeks Sensitive Data

Welcome to the second edition in the ‘Avoiding the data trap’ blog series. If you missed the first blog, ‘Mining for data gold!’, we encourage you to read that first.

In the last edition, we introduced the common problem of the ‘data trap’ that people can often fall into when collecting data – too much effort spent on gathering relatively useless data and not enough of the ‘right’ data that makes for strong evidence! As a potential solution, we introduced the first of four key steps in a new theory-based approach called Contribution Tracing (see steps in Box 1). To recap, Step 1 helps us identify the right evidence that can help prevent us from falling into the ‘data trap’. Let’s now continue by exploring Step 2: assigning probabilities for Sensitivity and Type I Error.



Based on the example case from the GSAM project, we identified five items of evidence that we might look for during data collection (Box 2 above). In Step 2, we turn our attention to finding out which items of evidence are the most powerful. We do this by first assigning two probabilities, known as Sensitivity and Type I Error. (Check out GSAM team member Samuel, who gives a brief explanation of these two probabilities in the YouTube video below.)


The probability for Sensitivity works like this: if the component of the claim is TRUE, what is the probability of finding a specific type of evidence? Let’s remind ourselves of the necessary component we worked with from the GSAM claim in our first blog:

The GSAM project (entity) delivered training to Civil Society Organisations (activity) to increase their knowledge and skills in engaging with District Assemblies on the planning and implementation processes of capital projects.

In our example, the question we ask ourselves when assigning the probability for Sensitivity is: if the GSAM project really did deliver its training programme to Civil Society Organisations [component], what is the probability of finding a training agenda [evidence item #1]? This logic would be applied to each item of evidence identified in Box 2.

Probabilities are numbers between 0 and 1, equivalent to percentages between 0% and 100%. In Contribution Tracing, we can think of the probabilities we set for Sensitivity as follows: a probability of 0 means there is absolutely no chance of finding the evidence item (0%), whereas 1 means there is a 100% chance of finding the evidence item. Of course, in reality, we can never have such definitive certainty until we begin our search! Therefore, it is common to start this process by giving evidence with low Sensitivity a value very close to 0 (such as 0.05, 0.1 or 0.01) and evidence with high Sensitivity a value very close to 1 (such as 0.9, 0.95 or 0.99).

From our example of the GSAM project, what probability might we set for evidence item 1? To start, we know that GSAM is a well-funded, well-organised project, being implemented by a consortium of large and reputable NGOs. We also know that it is common practice to produce training agendas in this context. Therefore, we would assume that there is a high chance of finding a training agenda. Let’s say we decide on a high probability of 0.95. By setting such a probability, we are saying that we are very confident (95% in fact) of finding such evidence, should we look for it. However, we have left some room for doubt, at a level of 5%. We then follow the same process for other items of evidence - the Sensitivities for each one are shown in Box 3 below.

Remember that Sensitivity is based on our expectations of finding evidence if the component of the claim is TRUE. You will note that the Sensitivities for evidence items 1 through 4 are very similar, but evidence item 5 has a very low Sensitivity. Why? This is because it is unusual, especially in the Ghanaian context, to film such training events. So we set the Sensitivity lower, at only 10%, because we wouldn't expect to find such evidence if we were to look for it.
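As an illustration of this bookkeeping step, the Sensitivity assignments might be recorded as a simple table in code. Note that only the 0.95 (item 1) and 0.10 (item 5) values come from the text; the remaining item names and values are hypothetical placeholders standing in for Box 3:

```python
# Sensitivity: P(finding the evidence item | the component is TRUE).
# Only the 0.95 and 0.10 values appear in the blog text; the other
# names and values are hypothetical placeholders.
sensitivity = {
    "training agenda": 0.95,              # from the text
    "signed attendance record": 0.90,     # hypothetical value
    "training materials": 0.90,           # hypothetical item and value
    "participant recollections": 0.85,    # hypothetical item and value
    "video of the training event": 0.10,  # from the text: filming is unusual here
}

for item, p in sensitivity.items():
    assert 0.0 <= p <= 1.0, "a probability must lie between 0 and 1"
    print(f"{item}: {p:.0%}")
```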


Let’s turn now to the probabilities for Type I Error, which work like this: if the component of the claim is FALSE, what is the probability of finding a specific type of evidence? In our example, it would look like this: if the GSAM project DID NOT deliver its training programme to Civil Society Organisations [component], what is the probability of finding a training agenda for the event anyway [evidence item #1]?

This might sound a little crazy at first, but let’s think it through. It is plausible that the plans for the training were well advanced and hence the agenda had been developed. Then, for a number of legitimate reasons, the training never went ahead: the trainer got sick, say, or there was a tropical storm. I’m sure you could think of other reasons why the training might have been cancelled - they are numerous.

This means that the training agenda could exist even if the training event never happened. Therefore, we can assert that this item of evidence has a medium to high Type I Error. This assignment depends on the number of potential, alternative explanations that could plausibly describe its existence. In our example, let’s set Type I Error as 0.4 for this item of evidence. Here we are saying there is a 40% chance that the training agenda could exist, even if the training event never took place. Type I Errors for the other items of evidence are shown in Box 4.
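A quick way to see how these two probabilities work together is their ratio, which tells us how much more likely we are to find the evidence when the component is TRUE than when it is FALSE. A minimal sketch using the two training-agenda values given in the text (0.95 and 0.4):

```python
# Sensitivity and Type I Error for the training agenda, from the text.
sensitivity = 0.95   # P(find agenda | training really happened)
type_i_error = 0.40  # P(find agenda | training never happened)

# Values well above 1 mean finding the evidence favours the claim.
likelihood_ratio = sensitivity / type_i_error
print(f"likelihood ratio: {likelihood_ratio:.3f}")  # 2.375
```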


You’ll note that the evidence items with the lowest Type I Errors are the signed attendance record (item #2) and the video recording of the training event (item #5). Why? When assigning Type I Error, we must think about other explanations for the existence of the item of evidence, other than the explanation under investigation - in this case, the GSAM project’s training event. While it is possible that the GSAM project may have forged the signed attendance record, it is highly unlikely. Similarly, the level of deception required to stage and film a fake training event is beyond comprehension. Therefore, the best explanation for the existence of these two items is that the GSAM project really did deliver its training event.

In Contribution Tracing, we can think of the probabilities we set for Type I Error as follows: a probability of 0 means there are absolutely no other explanations for the existence of the item of evidence, other than the component of the claim, whereas a probability of 1 means that multiple alternative explanations exist, which may be more plausible in explaining the existence of the item of evidence. Again, we can never have such definitive certainty, ex ante, to set Type I Error at 0 or 1, so we choose a value close to 0 or 1.

Now, going back to the title of our blog: why would evaluators be seeking sensitive data? And similarly, why do they like evidence with low Type I Error? Remember that the higher the Sensitivity, the more likely we are to find the evidence if we look for it, while the lower the Type I Error, the less likely it is that other, potentially better, explanations exist. 
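To make that intuition concrete, here is a hedged sketch of the kind of Bayesian update involved, using the training-agenda values from the text (Sensitivity 0.95, Type I Error 0.4). The 0.5 prior and the video's 0.01 Type I Error are assumptions for illustration only:

```python
def bayes_update(prior, sensitivity, type_i_error):
    """Posterior confidence that the component is TRUE, given that
    the evidence item was found, via Bayes' theorem."""
    found_if_true = prior * sensitivity
    found_if_false = (1 - prior) * type_i_error
    return found_if_true / (found_if_true + found_if_false)

# Training agenda: Sensitivity 0.95 and Type I Error 0.4 are from the
# text; the 0.5 prior is an assumption for illustration.
print(round(bayes_update(0.5, 0.95, 0.40), 2))  # 0.7

# Video recording: Sensitivity 0.10 is from the text; the 0.01 Type I
# Error is hypothetical. A low Type I Error makes a find far more decisive.
print(round(bayes_update(0.5, 0.10, 0.01), 2))  # 0.91
```

Notice that finding the high-Sensitivity agenda only nudges confidence upward, while finding the rare, low-Type-I-Error video moves it dramatically.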

In the final edition of our blog series, we will explain how to use Bayes’ Theorem to update your confidence in the component of the claim following data collection. 

Mining for Data Gold!

Avoiding the data trap blog series

'Avoiding the Data Trap’ is a 3-part blog series developed by Pamoja to highlight a new approach to impact evaluation, called Contribution Tracing. The blog series explains key steps in Contribution Tracing that can guide evaluators, and those commissioning evaluations, to avoid common data traps by identifying and gathering only the strongest data. The blog series draws from a live case study of a Contribution Tracing pilot evaluation of Ghana’s Strengthening Accountability Mechanisms (GSAM) project. This pilot forms part of a learning partnership called the Capturing Complex Change Project, between Pamoja, CARE UK International and the CARE Ghana and Bangladesh Country Offices.

Part 1: Mining for Data Gold!

With Monitoring and Evaluation now a standard feature in development projects, NGO staff and evaluation practitioners are charged with the sometimes daunting task of gathering evidence to prove the influence of programming on complex social change. Examples of what NGOs such as CARE are trying to do to tackle poverty and address social injustice are endless. Often, we can see change happening in the communities we serve. However, the process of showing the ‘how’ often results in pages and pages of ‘data’ that yield little reliable evidence. There is a frustration that comes with having a strong belief that programming has made a difference for the better, but then failing to capture data that supports a clear cause and effect relationship. We face challenges in claiming with confidence just how our work actually contributed to positive change. How many of us have been here too many times before?

What is the data trap?

When evaluating a claim made by a project or programme, about the role it may have played in contributing to an observable change, it is crucial to gather evidence that strengthens our confidence in making such claims. All too often when substantiating ‘contribution claims’, strengthening our confidence in the claim is confused with simply collecting an abundance of data. We miss the mark by failing to focus on the relative strength (or weakness) of such data. Wasting time, energy and resources collecting data that does nothing to increase confidence in the claim, is what we like to call a data trap.

Enter Contribution Tracing: a new theory-based impact evaluation approach. It combines the principles and tests found in Process Tracing, with Bayesian Updating. Contribution Tracing helps sort the data wheat from the chaff! Most importantly, it changes the way we look at data, encouraging us to identify and seek out the best quality data with the highest probative power. Contribution Tracing gives us a clear strategy for avoiding the data trap; supporting evaluators instead to mine for data gold.

So how does it work? To illustrate, let’s draw from a live Contribution Tracing evaluation which is part of the Capturing Complex Change learning partnership. Ghana’s Strengthening Accountability Mechanisms (GSAM) is a USAID-funded, multi-year intervention led by CARE with partners IBIS and ISODEC. The ultimate aim of GSAM is to support citizens to demand accountability from their local government officials.

The GSAM evaluation team are currently testing the following claim, using Contribution Tracing:

GSAM’s facilitation of citizens’ oversight on capital projects has improved District Assemblies’ responsiveness to citizens’ concerns.

Essentially, this claim is stating that a range of activities provided or funded by GSAM has supported citizens to become more engaged in scrutinising government-funded building projects. As a result, District Assemblies (local government) have become more responsive to concerns presented by citizens relating to the quality, performance and/or specification of ongoing capital projects in their communities, such as the construction of new schools or roads.

To test this claim, we need to unpack the mechanism that provides a causal explanation for how the project’s range of facilitation activities contributes to the outcome of District Assemblies becoming more responsive to citizens’ concerns.

In Contribution Tracing, causality is thought of as being transmitted along the mechanism, with each interlocking component being a necessary part. A mechanism component comprises two essential elements: an entity (such as an individual, community or organisation) that performs an activity or behaviour, or that holds particular knowledge, attitudes or beliefs.

One of the necessary components, identified by the GSAM evaluation team is below:

The GSAM project (entity) delivered training to Civil Society Organisations (activity) to increase their knowledge and skills in engaging with District Assemblies on the planning and implementation processes of capital projects.
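The entity/activity structure of a mechanism component can be sketched as a simple data structure. This is purely illustrative - Contribution Tracing itself prescribes no code representation - using the GSAM component above as the example:

```python
from dataclasses import dataclass

@dataclass
class MechanismComponent:
    """A necessary link in a causal mechanism: an entity performing an
    activity (or holding particular knowledge, attitudes or beliefs)."""
    entity: str
    activity: str

component = MechanismComponent(
    entity="The GSAM project",
    activity=("delivered training to Civil Society Organisations to increase "
              "their knowledge and skills in engaging with District Assemblies"),
)
print(component.entity)  # The GSAM project
```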

In Contribution Tracing, the role of the evaluator is to identify evidence that tests whether each component in the mechanism for a particular claim actually exists, or not. If sufficient empirical evidence can be identified and gathered for each component in a claim’s mechanism, we can update our confidence in the claim, quantitatively.

But wait! Before running off to gather whatever data we can lay our hands on, in Contribution Tracing we take several initial steps to help design our data collection (Box 1). These steps focus our attention on only gathering specific data that supports testing the existence of each component of our claim’s mechanism. Why is this important?

  • It saves a lot of effort in gathering essentially useless data, in respect of our claim;
  • It saves limited resources e.g. staff time, finance, etc;
  • It’s more ethical because we are not asking key informants to spend their precious time providing information that we won’t use; and
  • It produces more rigorous findings.

This blog is focused on step 1, with later blogs in the series describing the other steps.


To begin the data design process in Contribution Tracing, we ask “if the component of the claim is true, what evidence would we expect to find?”. In other words, if the GSAM project really did provide Civil Society Organisations with training, what evidence should be readily available, if we look for it? Some examples of such ‘expect to find’ evidence are shown in Box 2.

The logic behind identifying ‘expect to find’ evidence is simple. If the component of the claim is true - if the project really did deliver its training programme - the evaluator should be able to easily find such evidence. Failure to find ‘expect to find’ evidence, diminishes the evaluator’s confidence in the existence of the component of the claim (and perhaps in the claim overall). ‘Expect to find’ evidence, therefore, becomes powerful only when it is not found.

In addition to ‘expect to find’ evidence, we must also try to identify ‘love to find’ evidence. This is evidence which is harder to identify and find, but which, if found, serves to greatly increase our confidence in the component of the claim (and perhaps in the claim overall). We can think of ‘love to find’ evidence as highly unique to the component of the claim. Box 3 shows an example.

While we would love to find video footage of the training event being delivered, it is not an expectation. It is not usual practice to film such events in this context, but if filming did take place, and the evaluation team could gather such evidence, it would confirm the component of the claim. So, while ‘expect to find’ evidence only becomes powerful when not found, ‘love to find’ evidence becomes powerful when it is found.
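The asymmetry just described - ‘expect to find’ evidence is powerful when absent, ‘love to find’ evidence when present - can be captured in a small sketch. The qualitative labels here are my own shorthand, not part of the method:

```python
def effect_of_search(kind, found):
    """Qualitative effect of an evidence search on our confidence in the
    component of the claim. kind is 'expect' (expect-to-find) or
    'love' (love-to-find)."""
    if kind == "expect":
        # Easy to find if the component is true, so absence is what bites.
        return "greatly weakened" if not found else "mildly strengthened"
    if kind == "love":
        # Near-unique to the component, so presence is what counts.
        return "greatly strengthened" if found else "barely changed"
    raise ValueError(f"unknown evidence kind: {kind}")

print(effect_of_search("expect", found=False))  # greatly weakened
print(effect_of_search("love", found=True))     # greatly strengthened
```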

This step in Contribution Tracing helps the evaluation team to begin the process of focusing on identifying data gold, but it is only the first step. In the next blog, we will explore how we use probabilities to be even more targeted in our search for data gold.

Part 2 of the blog series will be published on 31 July 2017. Sign up below and get parts 2 and 3 delivered directly to your inbox.


Vlog 5 of 5: Designing Data Collection

This is the final Vlog in the Contribution Tracing series. Samuel Boateng explains how Contribution Tracing uses probabilities to focus on collecting data with the highest probative value, making the best use of limited resources for impact evaluation.

CONTRIBUTION TRACING VLOG SERIES: Understanding Process Tracing tests

Vlog 4 of 5: Understanding Process Tracing Tests

In the penultimate video of the series, Michael Tettey provides brief explanations of the four tests that we find in Process Tracing: the Hoop, Smoking Gun, Doubly Decisive, and Straw in the Wind tests.

If you have missed any of the previous vlogs in the series, you can check them out here:

  1. What is Contribution Tracing?
  2. How do you develop a testable contribution claim?
  3. Unpacking your causal mechanism

How to avoid 'toolsplaining': thinking differently about social accountability

Guest blog by Tom Aston

On the plane to Accra just over a week ago I read Rebecca Solnit’s Men Explain Things to Me (the origin of the term “mansplaining”), and it struck a chord with me. A colleague from Kenya who hadn’t heard the term before asked if there was such a thing as “white-splaining”. And, indeed, there is. But, recently, I’ve been concerned with another phenomenon: “toolsplaining”.


“Toolsplaining” is, as far as I can see, the phenomenon where we over-explain how clever a particular tool is, but forget to explain how (in reality) it interacts with context, and works together with other strategies and processes to achieve change. We often assume, usually due to lack of information (or lack of careful investigation), that whatever tool we used must explain the change – that this tool (the scorecard) or that tool (the citizens’ charter), for example, was the cause of the change.

In practice, especially for social processes, how that change happens is generally more complicated, and nuanced. There are typically multiple causal pathways even within our own strategy that work together to influence change (and of course, various complementary factors driven by other actors). And it’s often the informal micro-politics that matters, rather than your formal process.

So, we need to think differently.

How is our intervention contributing to change?

I was in Accra to support the five-year USAID-funded Ghana Strengthening Accountability Mechanisms (GSAM) project which aims to “strengthen oversight of capital development projects to improve local government transparency, accountability and performance.” In particular, what CARE wants to understand better is how our intervention is contributing to district assemblies’ responsiveness to citizens’ concerns in relation to the planning and implementation of capital investment projects.

We used contribution tracing to define a hypothesis and identified causal pathways for how change happened, rather than merely what the log frame says, or what a process map suggests ought to happen. To do this, the team looked at the process that was designed (see the graphic below), but then traced back real changes (e.g. district assemblies replacing inadequate building materials) as a causal chain.

Scorecards formally hinge on a public meeting (an interface meeting). But, on various occasions, we believed that changes had been triggered even before we’d held a public meeting (6 or 13 in the graphic above), but after we’d conducted site visits to monitor the quality of infrastructure (2). We’d established District Steering Committees composed of government actors, community leaders, and engineers (invisible between 1b. and 2b.) which were seemingly able to resolve some (but not all) problems without district town hall meetings, or even scorecard interface meetings.

Tracing the real process has therefore helped us think again about how, when, and where we really might have influenced change.

Inter-related pathways to change

Rather than a single pathway of information sharing, or knowledge transfer, it was clear we had at least four inter-related change pathways for social accountability:

  1. providing financing to civil society organisations who prepared a district scorecard to get district assembly members to respond;
  2. getting district assembly members to release data and participate in the process;
  3. supporting citizens to monitor priority infrastructure projects and to present their findings to authorities; and
  4. creating new spaces for dialogue between citizens and district assemblies about capital projects.

The team are now going to go out to find evidence to support their claim about how their strategies influenced change. But, I just wanted to underline some of the learning:

  • Define terms (e.g. transparency, accountability, responsiveness) precisely, so you know what change you’re actually going to measure and what data is relevant to your hypothesis.
  • Interrogate your assumptions periodically. Allow different staff members to challenge your logic. Don’t just rely on proposal writers or project managers.
  • Don’t bundle everything together. Or else, how will you understand the relationship between different components of your hypothesis?
  • Make sure your hypothesis is in order. Remember, logical steps follow chronologically...
  • Don’t toolsplain. Don’t get distracted by your hypothetical process maps or steps in your tools: in other words, consider the evidence, not what you hope your tool influenced.

CONTRIBUTION TRACING VLOG SERIES: Unpacking your causal mechanism


This is the third Vlog in the Contribution Tracing series. Don't worry if you missed the other Vlogs, but you might want to watch them first. Check out the first Vlog on 'What is Contribution Tracing?' and the second Vlog on 'Developing a testable contribution claim'. 

In this week's edition, Francisca Agyekum-Boateng delves into the topic of 'unpacking your causal mechanism'. Francisca explains what a causal mechanism is and how to develop a mechanism clearly, based on a specific claim.



CONTRIBUTION TRACING VLOG SERIES: How do you develop a testable contribution claim?

Vlog 2 of 5: Developing a testable contribution claim

Welcome to the second Vlog in the Contribution Tracing series. In the first Vlog, presented by Mohammed Nurudeen, we introduced you to Contribution Tracing - a new approach to impact evaluation. If you haven't already seen this Vlog you can watch it here.

An initial step in Contribution Tracing is to clearly articulate your claim. What precisely is your project, programme or campaign claiming? For example, are you claiming that your advocacy campaign has contributed to policy change?

In this video, Sharif Yunus Abu-Bakar outlines some of the key features of developing a testable contribution claim. Enjoy!


VLOG 1 of 5: What is Contribution Tracing?

Welcome to the first Vlog in a new series focusing on Contribution Tracing - a new approach to impact evaluation.

Over the coming weeks, we will publish 5 short videos that seek to introduce you to Contribution Tracing and some of its key aspects. All of the videos are presented by staff from CARE International in Ghana. You can read more about Pamoja's learning partnership with CARE here.

In this first video Mohammed Nurudeen Salifu answers the question: what is Contribution Tracing?

Trekking the world, one mountain at a time ¦ June Update

As part of Pamoja's 'Business for Good' strategy, we support good causes that are close to our hearts. One such amazing cause is the Christopher Angus Fund, which you can learn more about here.  Each month, Michael Angus, co-founder of the Fund, guest blogs to tell us about his progress to trek the world, one challenge at a time.

First day: setting off on Canal Trek, Lochrin Basin, Edinburgh

First day: setting off on Canal Trek, Lochrin Basin, Edinburgh

June 2017 has been, as predicted, all about getting the training going, in earnest, for the Rockies trek in September.

The month began with some self-inflicted isolation – in order to get my mind right, basically; I headed off for three days into Scotland’s west-coast wilderness, travelling back in time, literally and metaphorically, to walk amongst the historical remnants of Scotland’s prehistoric past: along the Dalriada Way to Fort Dunadd and Kilmartin Glen.

There is something almost tangible about the past here – ancient bones emanate. It is where the original kings of Scotland were crowned, but before that, it is where settlers buried their chiefs in celebrated mounds, and practiced sun worship: the area abounds with stone circles; it is truly a magical place, a vast flat flood plain, stretching for six miles, almost perfectly aligned north to south – it must have provided the ideal landscape within which to study the skies…… Weather permitting, of course – and the weather seemed appalled by my presence, it must be said – maybe I imagined it. But I was the only figure in the landscape, so whom else did nature wish to impress with its awesome display of thunder and lightning at 11 in the morning on the second day? I felt honoured, to be honest – altogether, it was wonderfully primal……being in that place, and nature performing at its powerful best.


It has, in fact, been a month of being witness to impressive phenomena. Canals have been the predominant man-made feature of the month, and they have been pretty awesome in their own right; for these first three days, I stayed by the canal at Cairnbaan, and later in the month I completed the cross-country canal challenge that I set myself: to walk the Union Canal and the Forth + Clyde Canals, east to west, all the way from Edinburgh, through Glasgow and beyond, right back to the west coast. 65 miles in three days. Toughest thing I’ve set out to do; it’s no mean feat of human engineering and construction endeavour either.

Things did not go as well as intended. It seems my mind was not quite as right as I hoped – I took eight days to complete the trek, not three – due to an injury which was completely of my own making. I walked too far, too fast on the first day and injured my foot, which by the end of the second day was simply done. I had to let it heal before going back to complete the last day, five days later.

This has been the first challenge that I have not completed as intended – it’s provoked a lot of soul searching and reflection. I have a lot (personally) invested in this trekking campaign. But what I discovered, or rather what has been confirmed, is that I love trekking. I could never have predicted making such a statement – but the thrust of the planet, pushing back under one’s feet, even feet somewhat aching and bloody, is deeply comforting. It’s a healing thing to do, even if it harms – a contradiction, certainly, especially, as one’s mind wanders when one treks, and thoughts are not always entirely wholesome, the grief demons most certainly take advantage and invade – I suppose though, that really, they have nowhere to go – and the broad expanse of the landscape can accommodate their unhinged expulsion; whatever angers and rages I might feel, the natural world can summon breaths and downpours, and thunderous (literally) voices of its own, to both mirror and acknowledge my own dark heartache. The man-made world cannot match such ache – but one has to applaud the endeavour involved in the construction of such a thing as the Union and Forth + Clyde Canals – the will – which has created a place for water to rest. Within all the torpor of the natural world, the static and level calm of the canal infused more than anything else, by its silent balm, an unruffled ear to my unspoken aches and pains – both the physical and the mental.

In between these treks, I’ve continued to complete other shorter and regular training walks, locally and through the city; altogether, distance travelled: 90.5 miles. Plans for the following month are to continue, with regular walks through the week, and longer treks at the weekends. It’s good to have a plan……and speaking of which, the longer-term plan has been moved on: this month, I officially registered to take the 8-day trek challenge in the Grand Canyon in October 2018: the fifth of six treks that I’m setting myself to complete in six years.

It’s all investment……

Thank you for reading.