Development cooperation is probably one of the most intensively evaluated policy domains. The EU alone produces an impressive stream of evaluations every year. These include 10-12 “strategic” evaluations, which seek to understand how EU resources are spent at country/regional level, on core themes or by using specific aid modalities (such as budget support).
But what is done with all the knowledge contained in these often costly evaluations? Is there "uptake" of lessons learnt, and are recommendations used by policy-makers or practitioners? Do they help to improve the way the EU operates? Or are these documents poorly read and under-used?
The European Commission (EC) recently reviewed its strategic evaluation system, looking at how evaluations are used and what can be done to improve the uptake of lessons learnt. The timing for this could not be better. With the new round of programming well underway and the development community’s eyes firmly set on the EU for the European Year of Development in 2015, the Commission and the EEAS can show leadership not only as “the biggest donor”, but also in doing development better.
A recent strategic evaluation, for example, explored EU cooperation with the occupied Palestinian territory. It examined why the EU has not been able to lead a "results-based dialogue with Israel and the Palestinian people", and why its generous aid has had few lasting effects in the absence of a political solution to the conflict. Needless to say, these are very important documents.
To know more about how these documents are used, the Evaluation Unit of EuropeAid commissioned ECDPM and ODI to carry out a "study on the uptake of evaluations". The purpose was to investigate how much learning takes place in the entire process of conducting and disseminating these strategic evaluations. The study involved a wide range of stakeholders in the EC (both at headquarters and field level), and the findings were discussed in a lively meeting with senior management. The European External Action Service (EEAS) was also keen to be involved, as it has no evaluation unit so far, but badly needs knowledge to perform.
This initiative reflects a clear interest by the EU to improve the quality of its operations. It is in line with the wish of the new Development Commissioner, Neven Mimica, to improve the process of assessing the efficiency of development cooperation policies and instruments through a “very precise result framework, which would enable us to measure the impact of our development programs and projects.”
The 'uptake study' fits into the ongoing review of the EU's entire aid delivery architecture, which can be traced back to the Agenda for Change in 2011. At the time, the EU committed itself to showing greater impact and developing more strategic external assistance. This is also the driving force behind the ongoing review of EuropeAid's project cycle management approach, and its so-called 'results framework', which is meant to increase both the accountability and effectiveness of EU cooperation by putting outcomes and impact at the centre of how EU development cooperation is managed. By asking how the EU uses its evaluations, the study also probes how the EU values knowledge in its overall development cooperation.
The results of the study were somewhat mixed. While all stakeholders agree on the importance of evaluation, no clear system is in place that helps people make sense of the complex findings, and as a result many important lessons are lost instead of learnt.
Rethinking how the EU uses evaluation to inform strategic choices is an important next step in this overall reform, and things have already been set in motion. Unlike other evaluations, this study was published with a detailed "action plan to strengthen EuropeAid's evaluation function", which complements the evaluation policy "Evaluation Matters!" adopted only months ago.
The combined message of all three of these documents is very clear: using evaluations more systematically will lead to smarter and higher quality development cooperation. Achieving this requires making evaluations a key part of EU development practice, and rooting the use of rigorous analysis into the “corporate culture” and decision-making processes of the EU.
Ambitious objectives like these rule out a purely technical solution. The Commission therefore rightly opts for a systemic approach to strengthening what it calls an "evaluation culture". Simply put, this means that the Commission recognises that acting on evaluation depends on the organisational environment as much as on the evaluations themselves, and that for uptake to take place the necessary space and incentives need to be created.
This is the subject of the EU's new action plan. It stresses the need to diversify "the menu" of evaluations, to ensure a better fit with the demands of policymakers and practitioners, and to involve intended users more closely in planning and carrying out evaluations (while safeguarding the independence of the exercise). The Commission also seeks to establish closer ties with the EEAS on evaluations in order to jointly solve the puzzle of "knowledge translation": how to ensure that evaluations are not only read, but also put to good use.
The proactive approach taken by EuropeAid's evaluation department is promising. What is needed now is political leadership and a management coalition that takes up the mantle of quality. The upcoming results framework mentioned by Mimica offers ample opportunity to build a new, knowledge-driven "corporate culture", making EuropeAid into the global leader it aspires to be.
Read the full study: Assessing the Uptake of Strategic Evaluations in EU Development Cooperation.
The views expressed here are those of the author, and not necessarily those of ECDPM.