Three questions about MEL for projects dealing with complexity

Last week I wrote to the Peregrine Discussion Group on Better Evaluation with a couple of questions on the M&E or MEL of initiatives that try to address complex social and development problems. I have received many very interesting replies, which I am posting below together with my questions.

My questions

1) What do you think are the key principles that we need to have in front of us when designing MEL systems for complex and adaptive (and uncertain) projects and programmes?

2) Do we need a different MEL system and tools?

3) What do you suggest I should read on this topic and question?

The answers

Russell Gasser

1) What do you think are the key principles that we need to have in front of us when designing MEL systems for complex and adaptive (and uncertain) projects and programmes?

Complexity is not cleanly separated from the causal and chaotic domains: the boundaries are not clear lines much of the time, and trying to put clean separators between what is complex and what is not is itself a causal/linear approach to a complex issue. Trying to define a fixed set of MEL principles for the complex domain is a causal/linear approach that is doomed to failure, because (a) it assumes that it is possible to predict what will be needed, and (b) it assumes that a finite range of methods can cover every case. Both of these assumptions are false in the complex domain – indeed, the failure of these assumptions is one way to define what complexity actually is and to separate it from “complicated”, “difficult to understand” and “nothing like what I have experienced before”. Neither of the first two is complex (other than in the everyday conversational sense of “complex”), and the third may be complex but actually describes the previous experience of the person, not the complexity of the situation.

Causality is a one-way street in the complex domain – it can only be seen with hindsight. If a situation is truly complex, then measuring predicted results can only really measure the degree of luck or skill in guessing the outcome, not the value or success of the intervention.

Perhaps the most important question to answer for the real world is: how can we create the link from funders who are locked into the linear/causal model to funding and using evaluation that is based on complexity? Until complex-domain evaluation (or MEL) gets funded, and until there is the possibility that organizations change what they are doing as a result of evaluation in the complex domain, this remains academic research that lacks clear application. There is nothing wrong with academic research, but this forum isn’t likely to be the best place to address it.

2) Do we need a different MEL system and tools?

MEL is not a single process or system, and the toolkit is already huge. Some of the processes and tools are suitable for use in the complex domain; some are not. Answering this question runs into the same issues as above: (a) it assumes that it is possible to predict what will be needed, and (b) it assumes that a finite range of methods can cover every case. A better approach might be to ask: “What do we need to know and understand in order to decide which tools and methods are likely to be useful in the specific complex intervention we are looking at?” The specific nature of the complex domain means that we can only really consider one intervention at a time.

A really important feature of complexity is that solutions are NOT transferable. The MEL that works on one complex intervention is unlikely to be useful without changes for other complex interventions (that is the nature of complexity), and we can be certain that it will not be useful for all possible interventions in the complex domain.

Monitoring is about the measurement of what is happening. There may be no reason to significantly change this from previous practice, beyond being far more obsessive about knowing what is going on right now and being able to respond quickly to build on success and damp down failure. If monitoring is not providing useful information in the causal domain, that won’t change when moving to the complex domain.

Evaluation in the complex domain can’t ask “did we achieve what we expected to achieve?”, as that is largely meaningless. But evaluation can still be any one of a range of approaches, and parts of the OECD DAC criteria are still useful (e.g. “unforeseen consequences” in impacts). Evaluation in the complex domain can usefully ask what changes resulted from what aspects of the intervention, what was cost-effective and what was a waste of resources, how the intervention approach could be improved, and so on. Techniques like Outcome Harvesting and various storytelling techniques that can identify what really made a difference, what changed as a result of an intervention, and for whom, may be able to capture outcomes and impacts if scope and focus are correctly set.

Learning is still much the same: it involves many of the same core skills for people to take in new information, internalize analysis and consequences, embed the knowledge through reinforcement, and be willing to change beliefs and practices in the light of new information and experience. If a causal/linear system is not supporting learning and producing new insights and approaches within an organization, transferring it to the complex domain won’t change the lack of learning. Selecting who might learn what, and for what purpose, needs even more detailed attention as complexity increases. Politics and social interaction are examples of interactions that are mostly in the complex domain; insight into “learning” in political and social contexts may provide useful guidance.

3) What do you suggest I should read on this topic and question?

- Dave Snowden’s Cynefin Framework publications and videos, and the Cognitive Edge website (if you are interested in the theory behind the Cynefin view of complexity, then Max Boisot – a tough read…)

- Jonny Morell’s blog

- John Mayne, specifically on Attribution by Contribution

- The Outcome Harvesting website and blogs

- The Better Evaluation website on “What is evaluation”

Any web search engine will turn up plenty of other content on complexity as you search for these, and will probably lead you to other useful sources – if you find anything really useful, then maybe you could share it on Peregrine?


Silva Ferretti 

Current RBM approaches are simply not fit for purpose. They allow very little space to understand the meaning of results within context, and the processes through which those results are achieved. Unfortunately, changing approach requires rewiring our way of thinking: from an idea of linear change (if I do A, then B happens) and control (I plan, I achieve) towards an idea of complexity (many diverse forces drive change, and they can play out differently) and adaptiveness (we have a purpose, we navigate options).

It also requires us to put our principles first, and to understand and integrate different worldviews. 

As some students in a training course told me: “we really had to stretch and twist our thinking. We feel we, ourselves, changed”.

If done properly, principled M&E exploring complex and adaptive systems is not just a different way to collect or process data.

It is about asking the whole organization to think and work differently in relation to change. To decolonize our thinking. 

And this is the real challenge. 


Bruce Boyes

Hi Arnaldo,

Following on from Russell’s good advice, I would suggest exploring iterative impact-oriented monitoring, as recommended in one of the Overseas Development Institute (ODI) complexity publications; see https://realkm.com/2020/08/17/taking-responsibility-for-complexity-section-3-2-2-iterative-impact-oriented-monitoring/

Because of the uncertainty and unpredictability that occur in complex environments, continuous and iterative M&E, with the related adaptive-management readjustment of actions aimed at achieving the original impacts, is much more effective than hoping for the best and then doing MEL at the end. For an example of where I’ve successfully used this iterative MEL approach, see https://realkm.com/2016/03/03/case-study-an-agile-approach-to-program-management/

For an analysis of RBM in the context of complexity, see Box 2 in https://realkm.com/2019/11/11/managing-in-the-face-of-complexity-part-3-1-tailoring-management-approaches-to-complex-situations/


Thomas Winderl

I agree: RBM hasn’t turned out to be the revolution in adaptive management that it was supposed to be. But my feeling is: it’s not the idea that is flawed, but the implementation. In most cases, RBM turned into a fill-out-the-boxes, “do the same but give it a different name”, increasingly bureaucratic exercise.

That’s a shame. I think there IS value to the basic idea of RBM. My proposal to ‘rescue’ RBM: Follow five simple rules. These are:

1. A relentless focus on outcomes, outcomes, and outcomes

2. Keep adapting what you do based on what you learn from outcome monitoring

3. Involve stakeholders at every step to create quick feedback loops

4. Budget for outcomes and outputs, not for activities

5. And overall: keep RBM simple, but not simplistic

I wrote a longer post about this a few months ago. If you want to see the more detailed argument, check out http://results-lab.com/five-rules-for-results-based-management.


John Hoven

Bruce, have you investigated qualitative methods for iterative, evidence-based investigation of cause and effect in a one-of-a-kind situation? The dominant method, process tracing, is used almost exclusively for backward-looking causal inference. However, it is easily adapted to forward-looking design and implementation of a context-specific development/peacebuilding project. Iterative updating of the causal theory proceeds at the pace of learning (every week or two, not every 6 or 12 months). See https://www.researchgate.net/publication/324507012_Adaptive_Theories_of_Change_for_Peacebuilding.


Alix Tiernan

Hi Arnaldo

This is one of our favourite questions.

Here are some practical thoughts on your three questions:

1) Key principles

a) Design the system around processes that repeat regularly and often, and expect to review project documentation at least twice a year, allowing for new strategies, approaches, and activities to creep in, and other originally planned ones to fall out – that’s the essence of adaptation

b) Try to avoid using targets if you possibly can – or if you can’t, then try to use the targets as a comparison to where you are at but NOT as a performance management tool

c) Find creative ways of triangulating and validating your data to ensure that you are taking account of all the different and changing variables and voices (i.e. complement surveys with research studies, focus group discussions in communities with meetings with ministers, etc.)

2) Different MEL system and tools

a) you can adapt results frameworks to an adaptive approach but you need to identify some very good qualitative data collection mechanisms that allow you to take into account the effect of the complex context on your project/programme (we have been using Outcome Harvesting for that)

b) make sure you don’t use targets as a performance management tool, because not hitting your target could be the best possible indicator that you are being adaptive!

c) we like using theories of change instead of logframes or results frameworks, because they don’t have to be linear and don’t prescribe specific outcomes – whatever tool you use, make sure you have a way of picking up unexpected results of the work that relate to the expected outcomes, and that you analyse why they happened. Ideally that analysis would then lead to adaptation of further implementation, in an iterative manner.

3) What to read

I will just point you to this document: https://www.odi.org/publications/11191-learning-make-difference-christian-aid-ireland-s-adaptive-programme-management-governance-gender, which we co-authored, but there are many more from ODI exploring adaptive programming, if you google it (it’s sometimes called adaptive programming, sometimes adaptive management). 

Hope these (rather hands-on) thoughts are helpful.


Bob Williams

I’m sorry if I appear to be trolling everyone on this list. Maybe I’ve been around too long and have seen some of the histories and backstories of the ideas being discussed here. My understanding is that adaptive management was developed and adopted because of the widespread failure of RBM, rather than RBM being part of the adaptive management story.

RBM has been criticised as a really bad idea, in both theory and practice, at least since the mid-1990s. I recall a hugely negative evaluation of the use of RBM in the UN agencies written at least 15 years ago. It concluded that RBM was a very bad idea because it wasn’t sufficiently adaptive for the kinds of circumstances in which UN agencies work. The formal response from the UN was that RBM was a problem of implementation, not of the basic concept – without giving a single piece of evidence. Hence my dismay to find myself working with a very large UN agency last year that was just introducing RBM at enormous cost.

None of which necessarily undermines the advice given by Thomas.  Just that a bit of history (or at least my version of it) might help a little.


Leslie Fox

… and yet, here we are with most of the major multi- and bilateral donor agencies, not to mention a good number of international and local CSOs – those who pay our consultant fees – very much committed to an RBM approach. I’ve worked with most of these organizations, and while evaluations have required an RBM approach (achieving results vis-à-vis measuring change in indicators), none of them has been limited to a single methodology to arrive at a full picture of whether change has taken or is taking place, or why. I believe it’s called mixed methods.


Frank Page

Hi Arnaldo,

I am going to take a bit of a different tack than others who have answered this question more from the technical side. I am going to look at the institutional side. 

The simple answer I would give is to turn MEL systems into management information systems (MIS) – because, ultimately, M&E data and information is management information.

For organizations to adapt quickly and effectively in complex environments, a number of organizational practices are required. These include (the list is not complete): combining authority and responsibility in each position, as far down the chain of command and in as many of the right places in the organization as possible (in other words, and I like this term, every position has the agency to do its job and improve its performance), and then giving those positions access to the information they need to make the best decisions.

In other words, all staff – from field workers to project managers, program managers, VPs, and CEOs – should be monitoring and evaluating performance in relation to the vision and mission and making adjustments themselves, as low down the ladder as possible. The M&E function of analysis (at least) needs to be integrated into their jobs, not housed in a separate department. Doing this also moves much of the learning function into the chain of command, and the parts of the learning function that don’t go into the chain of command could very well sit under R&D.

The current practice of separating program management and MEL introduces many coordination problems, some serious, that are difficult to overcome.

Thus, when M&E becomes MIS, it focuses on collecting original data, pulling data from the organization and its stakeholders, pushing data and information to where they are needed, and allowing those in the chain of command to pull the data and information they need. Then those who are responsible for actually achieving goals in a complex environment have the information to make changes and adapt quickly.

 

Photo credit: Tim Johnson on Unsplash
