Why evaluate?

1 – General evaluation objectives and criteria

The reasons and objectives for doing evaluations can be presented along the two dimensions classically used to characterize general evaluation objectives:

  • the summative dimension, “what are the results or impacts?”: assessing and reporting results, effectiveness and efficiency of the policies;
  • the formative dimension, “what can we learn or improve?”: examining what works, what does not work, looking for improvements and questioning new ideas.

Most evaluations cover both dimensions to some extent (as observed in the evaluations analysed for the EPATEE case studies, see Broc et al. 2018). The main difference lies in the focus or priorities of the evaluation, as shown in the examples listed below.

Summative dimension:

  • accountability (e.g., to the Ministry of Finance, the Parliament or the Court of Auditors),
  • monitoring target achievement,
  • assessing cost-effectiveness of the policy measure (see the illustrative formula after this list),
  • etc.
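
As an illustration, cost-effectiveness is commonly expressed as the cost per unit of energy saved over the lifetime of the actions. A minimal sketch, with hypothetical figures (not taken from the studies cited here):

\[
\text{cost-effectiveness} = \frac{\text{programme costs}}{\text{cumulative lifetime energy savings}}
\qquad \text{e.g.} \qquad \frac{10\ \text{M€}}{500\ \text{GWh}} = 0.02\ \text{€/kWh}
\]

Whether “programme costs” covers only the public budget or also participants’ investments, and whether savings are counted gross or net (e.g., of free-rider effects), depends on the perspective chosen for the evaluation.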

Formative dimension:

  • getting feedback on satisfaction with the scheme,
  • understanding what worked (or did not work) as planned,
  • providing inputs to the redesign or improvement of the scheme,
  • etc.

Most evaluations have multiple objectives. However, evaluations rarely aim to cover all the evaluation criteria, such as those listed in the Better Regulation Toolbox of the European Commission (2017a):

  1. Effectiveness: “Effectiveness analysis considers how successful [a policy measure] has been in achieving or progressing towards its objectives.”
  2. Efficiency: “Efficiency considers the relationship between the resources used by an intervention and the changes generated by the intervention (which may be positive or negative).”
  3. Relevance: “Relevance looks at the relationship between the needs and problems in society and the objectives of the intervention and hence touches on aspects of design.”
  4. Coherence: “The evaluation of coherence involves looking at how well or not different [policy measures] work together. It may highlight areas where there are synergies which improve overall performance (…); or it may point to tensions, e.g. objectives which are potentially contradictory, or approaches which are causing inefficiencies.”

Other evaluation criteria can be used (e.g., viability, utility), as represented in the figure below linking intervention logic, objectives and evaluation criteria.

Figure 1. Intervention logic, objectives and evaluation criteria.

Evaluation criteria are usually selected according to the priorities of the evaluation commissioners (e.g., linked to policy agendas) or to regulatory or reporting requirements (e.g., linked to governance rules). This selection also often has to take into account practical constraints (e.g., time and means available for the evaluation, data limitations).

2 – From general evaluation criteria to specific evaluation questions: prioritizing

Evaluation criteria correspond to general questions that then need to be translated into questions specific to the policy measure(s) evaluated and their background. In practice, evaluation questions most often have to be prioritized.

Example: the feedback about the evaluation of the Environmental Support scheme in Austria highlighted that all the evaluation objectives initially considered would have required a budget three times higher than the one available (Thenius and Böck 2018, pp.5-7).

Evaluation priorities can also depend on the evaluation’s audience. The review of who was involved in the evaluations analysed in the EPATEE case studies confirmed the diversity in the organisation and role of evaluation, as shown in Figure 2 below.

The interviews done for the EPATEE case studies also confirmed that evaluation questions could be prioritized not only according to the needs of the evaluation customers, but also according to the perspective of the audience. For more details, see sections 3.1 and 4.1 of Volume II (background report) of Broc et al. (2018).

Example: when the evaluation is reported to the Ministry of Economy or Finance, it may focus on cost-effectiveness or related indicators. Likewise, when the Court of Auditors is involved, questions related to value for money are often on the agenda.


Figure 2. Who is involved in evaluations (and how) (source: Broc et al. 2018).

*: audience = bodies other than the evaluation customers, monitoring body and evaluators.
Note: one case can include several evaluations/evaluators and different actors in the audience. Only one case study includes two different evaluation customers (for different evaluation studies). Only one case study includes several monitoring bodies, because the policy was a portfolio of programmes.

3 – Practical examples of the added value of evaluation

As suggested by some stakeholders interviewed for EPATEE (see Bini et al. 2017), a way to understand the role of evaluation is to think about what happens when no evaluation is done: in such a case, it becomes impossible to say whether the money spent was used profitably and produced the desired effects. This highlights why evaluation is a valuable resource for policymakers, especially in times of scarce resources.

The first EPATEE experience-sharing webinars were dedicated to the added value of evaluation.

Table 1 below provides practical examples from the EPATEE case studies about the use of evaluation results, conclusions or recommendations. For more details, see section 2.1 of Volume II (background report) of Broc et al. (2018).

For other examples of the added value of evaluation (beyond the scope of energy efficiency policies), see the blog created during the International Year of Evaluation (2015) about “evaluations that make a difference”: https://evaluationstories.wordpress.com/

Table 1. Examples of use of evaluation results, conclusions or recommendations (Source: Broc et al. 2018).

Examples of outputs/outcomes from the evaluation | Case studies where these examples are mentioned

Political outputs
  Evidence/accountability for decision-making (particularly about funding) | Better Energy Homes (IE), EE Fund (DE), Environment Support Scheme (AT), Individual heat metering (CR), Voluntary energy audits (FI), White Certificates scheme (IT), WAP (US)
  Reinforcing support from policymakers and other stakeholders | Better Energy Homes (IE), Voluntary agreements (FI), Voluntary energy audits (FI), Nordsyn, WAP (US)
Improving policy management
  Optimising programme management | EE Programmes of Vienna (AT), Renovation programmes (LT), Supplier Obligation (UK)
  New components added to increase scheme participation | Voluntary agreements (FI), Renovation programmes (LT), Supplier Obligation (UK)
  Improving the application process | Primes Energie (BE), Environment Support Scheme (AT)
  Improving monitoring and conditions for future evaluations | EE Programmes of Vienna (AT), EEO scheme (DK), Agreement for freight companies (FR), “Future Investments” programme (FR), Better Energy Homes (IE), Nordsyn, WAP (US)
Adapting the scheme and its rules
  Redesign of the incentives | Energy renovation of public sector buildings (CR), Individual heat metering (CR), Environment Support Scheme (AT), Renovation programmes (LT)
  Improving data collection and verification processes | EEO scheme (UK), Environment Support Scheme (AT), Agreement for freight companies (FR), “Future Investments” programme (FR), Supplier Obligation (UK)
  Updating the list of eligible actions | Primes Energie (BE), EEO scheme (DK)
  Improved technical recommendations/requirements | Warm Front (UK), Environment Support Scheme (AT), Voluntary energy audits (FI), EE Fund (DE), Multi-year agreements (NL), WAP (US)
Better understanding of how the scheme works
  Reactivity of households to changes in the incentive design | Primes Energie (BE)
  Detecting new trends and changes | Environment Support Scheme (AT)
  Better understanding of interactions between policies | Voluntary energy audits (FI)
  Better understanding of the reasons to participate (or not) in the scheme | Agreement for freight companies (FR), Renovation programmes (LT)
  Understanding reasons for innovation successes and failures | Agreement for freight companies (FR)
  Understanding impacts and side-effects of the policy | Purchase tax on new cars (NL), Supplier Obligation (UK), Warm Front (UK), WAP (US)

The European Commission’s Better Regulation Toolbox (European Commission, 2017b) also includes a section dedicated to “Why do we evaluate?”:

“Evaluation at the Commission serves several purposes. Although the importance may differ, most evaluation results will contribute to:

  • Timely and relevant advice to decision-making and input to political priority-setting:
    Evaluation supports decision-making, contributing to strategic planning and to the design of future interventions. The Commission applies the “evaluate first” principle to make sure any policy decisions take into due account the lessons from past EU action. Thus for instance, lessons learned from evaluation should be available and feed into impact assessment work from the outset.
  • Organisational learning:
    The results of an evaluation can be used to improve the quality of an on-going intervention. Evaluations should identify not just areas for improvement but also encourage the sharing of (good and bad) practices and achievements. Evaluation also provides the opportunity to look for the unintended and/or unexpected effects of EU action.
  • Transparency and accountability:
    All stakeholders and the general public have a right to know what the EU has done and achieved.
  • Efficient resource allocation:
    Evaluation results contribute to a more efficient allocation of resources between interventions, the separate elements of a specific programme or activity, or between activities.”