Innovation is a confusing buzzword. The Oxford Dictionary defines innovating as “making changes in something established, especially by introducing new methods, ideas, or products”.
Other definitions include ‘turning an idea into a solution that adds value from a customer’s perspective.’ (Nick Skillicorn); ‘ideas that are novel and useful and that can be applied at scale.’ (David Burkus). The list is long.
These definitions point to the fact that there is no single, agreed meaning: what counts as innovation can be highly subjective. Is the lack of a clear definition of innovation a problem when it comes to evaluating innovation? What do evaluators of innovation look for when they are called in to do their work?

I spoke with Jordi del Bas to find out what he thinks about this. Jordi is an evaluation specialist and senior researcher at the EU-Asia Global Business Research Center. Over the years, he has been involved in many evaluations focusing on social/policy innovation.
Arnaldo Pellini – Can you briefly describe your background?
Jordi del Bas – I am an economist by training. I have specialized in private sector development. Over the last 17 years, I have worked as an independent evaluator for international agencies in several countries throughout Asia, Africa and Latin America. At present, I combine work as an evaluator with research and teaching in business schools and universities.
AP – Do you have a definition of innovation?
JdB – This might seem a simple opening question but it is not. Really. Innovation is an overused and abused term that means different things depending on who you ask. Over the last decade, there has been an upsurge of articles questioning the word.
I do not have my own definition, but I feel very comfortable with the definition of the TSI Foundation (The Netherlands): new ideas that work to address pressing unmet needs. In the specific area of international development, I would define innovation as solving pervasive or new problems in new ways, where new ways means adopting approaches that differ from those applied in the past.
Whatever definition you go for, to me there are two critical aspects to innovation. The first is that innovation depends on context, because pervasive or new problems are context-specific. The second is that innovation is about valuable solutions, that is, ideas that work in practice, that solve issues we care about, things we value.
AP – What are you looking for when you evaluate innovation processes/systems?
JdB – In innovation, we usually evaluate solutions, supposedly innovative solutions. Evaluation in innovation focuses on finding robust evidence of whether innovative solutions work, how and why, so that they can go beyond a pilot and be scaled up to maximize their positive impact.
AP – Is there a difference between evaluating innovation and evaluating against the traditional criteria of efficiency, effectiveness, impact, etc.?
JdB – Yes, there is a difference. The OECD Development Assistance Committee (DAC) criteria are usually applied to traditional interventions where the focus is on achieving planned goals. Efficiency, effectiveness and impact criteria put the lens on expected effects. In traditional interventions, you do not expect significant deviations from the way you thought things would happen. It is more about gathering evidence about the outcomes achieved. Traditional interventions build on a reasonable degree of certainty, predictability and control.
In contrast, innovation is about trial and error and about iteration. In innovation, failing (not achieving intended objectives) is a source of critical insight, a source of learning. In a traditional intervention, failure tends to be a problem, as the focus is on achieving planned goals within planned resources. At times, when innovating, you also have an idea of how things could work. Still, the focus is more on testing your assumptions. There is an intentional interest in identifying unplanned and unexpected effects, as we know that valuable insights may be found in assumptions that do not hold true. You might fail over and over and then all of a sudden, find something that works. Implementing innovative solutions means a higher degree of uncertainty, lack of predictability, and lack of control.
Some of the criteria used to evaluate innovative solutions are feasibility, desirability, viability, acceptability, usability and scalability. You could still evaluate whether an innovation has been effective or efficient, but you would do it in terms of generated or acquired learning rather than in terms of whether it achieved planned objectives and goals. In my opinion, however, the focus when evaluating innovation is not on evaluation criteria. The focus is on finding reliable evidence of whether innovative solutions are having an impact, how and why. In this regard, the NESTA Standards of Evidence set out an interesting approach.
Moreover, I think evaluation in innovation is also about sensemaking. By sensemaking I mean making collective sense of the evidence obtained to inform what is next in the process of solving the pressing unmet needs of my earlier definition.
AP – From what you are saying, the evaluation principles and guidelines set out by the OECD DAC, and the evaluation methods that align with them, do not seem to fit well with innovation.
JdB – I think that there is a growing need for methods that allow us to capture the effects of interventions in volatile, uncertain, complex and ambiguous contexts (so-called VUCA). This is the case for innovation, but also for policy changes and reform processes. Evaluation designs that incorporate systems thinking and complexity theory are urgently needed. This is the case with Developmental Evaluation, an approach grounded explicitly in innovation. Michael Quinn Patton, one of its main proponents and developers, promotes developmental evaluation as an approach intended to assist social innovators in developing social change initiatives in complex or uncertain environments. Patton refers to innovations in a broad sense. To him, innovation can take the form of new projects, programs, products, organizational changes, policy reforms and system interventions.
A developmental evaluation can easily be embedded into the design processes of solutions and experiments. When this happens, developmental evaluation provides real-time or quasi-real-time evidence, informing the innovation process throughout the life of a project, an experiment, a pilot. Developmental evaluations also fit organizational development and learning processes very well.
Just a few days ago Patton wrote about the implications of the coronavirus pandemic on evaluation. One of his points was that all evaluators should now become developmental evaluators. His argument is that developmental evaluation is about evaluation under conditions of complexity, and this is now becoming the natural environment in which we must operate. We cannot model and predict the effects of interventions in complex systems, and traditional evaluation methods and criteria become almost obsolete.
AP – Are there specific skills or tools that evaluators interested in specializing in evaluating innovation need to familiarize themselves with, or know well?
JdB – In my view, three skills characterize the evaluation of innovation: facilitation skills, a versatile attitude, and good analytical skills.
Let’s start with facilitation skills. In innovation, you test, iterate, analyse and share results to collectively interpret what works (or not), how and why. Such feedback loops involve bringing people together and enabling sensemaking processes; this is why good facilitation skills are very useful. By a versatile attitude, I mean the capacity to adapt and adjust quickly to emerging findings. Evaluating innovation builds on adaptive frameworks and on iterative feedback loops, which require you to be ready to modify your course of action when needed, and to feel comfortable doing so. Good analytical skills are needed for quick turnarounds in data collection, analysis and feedback. When evaluating innovation, data collection, analysis and feedback loops are carried out continuously, and often they need to run in parallel with the testing of solutions.

AP – Tell me more about the sensemaking process.
JdB – Several scholars and practitioners have written about sensemaking from different perspectives. Karl Weick addresses sensemaking in organizations, and Brenda Dervin in information and communication systems. Klein, Moon & Hoffman define sensemaking in a way that resonates with me: “a motivated, continuous effort to understand connections (which can be among people, places, and events) to anticipate their trajectories and act effectively”.
Sensemaking is always linked to emergence, which in turn is linked to complex settings or to processes that cannot be planned or forecasted. This is usually the case in innovation, where it is very difficult to predict what will happen. We usually understand why things happen in retrospect, looking back, making sense of what has occurred.
My closest experience with sensemaking within a developmental evaluation has been with UNFPA. We analyzed emerging patterns in the data and made sense of them through several rounds of feedback loops with a wide range of business units in the organization. We used visual mapping techniques and discussed the findings in a participatory process that allowed us to answer questions such as: how does ‘what is going on’ affect what you desire to be as an organization? A question that, to me, reflects the essence of organizational development.
AP – The skills you describe are similar to those required of managers and experts involved in the design and implementation of experiments to test solutions to public policy problems. These skills are an integral part of adaptive programming, something that is slowly being accepted in international development. I have one last question, Jordi. You mentioned at the beginning of our conversation that innovation has become a buzzword. What are the risks associated with this?
JdB – There is a crucial aspect of innovation that is often omitted or downplayed: the assessment of undesired adverse effects of disruptive innovation on society.
There is a tendency to look at positive impacts for a number of stakeholders, usually the target group of an innovation. We tend to miss questions such as, What is the broader negative unexpected impact of disruptive innovations? For whom are disruptive innovations good or bad in terms of society as a whole?
We are in a world where almost every organization is compelled to innovate in one way or another. If organizations do not innovate, they become second class. Answering these questions is essential if we want innovation to be meaningful and transcend the stigma of a dangerous buzzword that soaks up massive budgets and hides worryingly negative impacts. In this context, it is vital to strengthen the use of impact evaluations in evaluation portfolios for innovation. This would help us answer questions along the lines of, For whose benefit do we innovate? It would allow us to delve deeper into the purpose of innovation.
Thank you very much, Jordi.
If you republish, please add this text: This article is republished from Knowledge Counts, a blog by Arnaldo Pellini, under a Creative Commons license. You can read the original article here
Photo credit: Dario Valenzuela on Unsplash
Re: “Some of the criteria used to evaluate innovative solutions are: feasibility, desirability, viability, acceptability, usability and scalability.” If innovation = invention + use, then what about “newness”, i.e. difference from what already exists? Perhaps surprisingly, I think this can be measured, by asking people about the _similarity_ between the products/processes of interest. See my argument here: http://mandenews.blogspot.com/2019/05/evaluating-innovation.html
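As a purely illustrative sketch of one way such similarity judgments could be turned into a “newness” score (all names and numbers below are hypothetical, and taking distance from the most similar existing item is just one possible rule, not necessarily the one argued in the linked post):

```python
# Hypothetical sketch: score the "newness" of a candidate innovation
# from respondents' similarity judgments against existing products.
from statistics import mean

# Made-up ratings: each list holds respondents' judgments of how similar
# the candidate is to one existing product (0 = totally different, 1 = identical).
similarity_ratings = {
    "existing_product_A": [0.2, 0.3, 0.25],
    "existing_product_B": [0.7, 0.6, 0.65],
    "existing_product_C": [0.1, 0.15, 0.2],
}

# Average the judgments per existing product.
avg_similarity = {name: mean(r) for name, r in similarity_ratings.items()}

# One possible rule: an idea is only as new as its nearest neighbour allows,
# so newness = 1 minus the similarity to the closest existing product.
closest = max(avg_similarity, key=avg_similarity.get)
newness = 1 - avg_similarity[closest]

print(f"Closest existing product: {closest}")
print(f"Newness score: {newness:.2f}")  # here: 1 - 0.65 = 0.35
```

On this rule, a candidate rated dissimilar to everything scores close to 1, while one that closely resembles any single existing product scores near 0, which matches the intuition that resemblance to even one incumbent undercuts a claim to newness.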
Thanks for the comment, Rick; spot-on. I have seen newness applied as a selection criterion (an ex-ante evaluation criterion) in innovation funds, but I have seen it applied in a rather intuitive way. The approach you suggest is much more structured, objective, and highly interesting. Your comment made me think a lot about the link between newness and the context-specificity of innovations, and how to connect the two. I have only questions for the moment, no answers yet. Thanks a million for prompting them!
Glad to see my comments were of interest. Keep me informed if you take these ideas anywhere…or want help in doing so…
Hi Rick, how about an interview / blog conversation on this topic and to get your viewpoint?
Happy to do so, email me rick.davies@gmail.com
Great interview with Mr. Jordi. He is a seasoned expert. It makes a lot of sense to me that in innovation, not achieving the goals or not confirming the hypothesis is not a failure, provided that there is a takeaway in knowledge. Hence, evaluation must be different and must consider this takeaway.
Thanks for the post, Arnaldo
Hi Joan Cos, thank you very much for your comment.
Thanks so much for your feedback, Joan.
Thank you very much for sharing this, Arnaldo. I completely agree with Jordi that innovation should be evaluated in terms of ‘generated and acquired learning’. Another useful definition of innovation in relation to this might be ‘the process of replacing uncertainty with (new) knowledge to solve unmet needs for many’.
Also, I wonder how normative knowledge in relation to innovation is evaluated against the scalability and replicability criteria, given that they are context-specific? And how should we approach the lack of evidence in policy innovation (as opposed to product innovation) when the challenge is not technical?
Thank you.
Thanks for your addition of uncertainty to the definition of innovation, Lula. I find the approach very interesting and very relevant in many cases. Thanks also for your reflections on scalability, replicability, and policy innovation. In my opinion, the link between scalability/replicability and context-specificity when evaluating innovations is very much related to the evaluation design. To my understanding, innovations are scaled up within the same context and replicated across contexts. When replicating in other contexts, it is crucial that evaluations incorporate methods that allow capturing the mechanisms and enabling factors (success factors, if you like) that generate the impact of the innovation; realist evaluation or qualitative comparative analysis, for example, would seem suitable in these settings. Lack of evidence in policy innovation is indeed an issue. Arnaldo has explored this area in depth and I would love to read his reaction. There are always winners and losers of public policies. My humble opinion is that policy innovation should be evaluated using a different lens, more systemic, and reflecting complexity. It is indeed not a technical challenge but rather an institutional one. Happy to discuss further, Lula.
Hi Jordi,
Thank you so much for taking the time to respond. I really appreciate it! Thank you for explicitly distinguishing between scalability and replicability and the associated methodologies relating to design and evaluation. It makes complete sense and is very helpful! I would love to read more about your work on this! Also, I would love to learn more about Arnaldo’s work on policy innovation. Thank you again for your insights!
PS: apologies for my late response, I did not receive a notification. Luckily I managed to come back and re-read your earlier points.
Hi, I have posted your reply. Thank you, Lula.
Hi Lula, thanks for getting back to me and for your kind words. You might want to have a look at the “Formative evaluation of the UNFPA innovation initiative”. I worked on it in 2016 with a colleague. Some of the challenges mentioned in the interview appear in the report. You can find it here: https://www.unfpa.org/es/node/16063. Please do not hesitate to let me know if you have any questions or you want to discuss further. You may reach me directly at jordidelbas@gmail.com. All the best. Jordi
Rick, thanks for your offer. I will definitely get in touch with you if the time comes to bring these ideas one step forward. Thanks again for your insights.
Pingback: The brave new world of evaluating innovation. In conversation with Petri Uusikylä