Practical barriers to evaluation and its integration into the policy cycle

1 – Barriers to evaluation

The interviews and first online survey of stakeholders carried out at the beginning of the EPATEE project made it possible to identify the main barriers to evaluation, as perceived by the stakeholders (see Bini et al. 2017). The barriers qualitatively mentioned by the interviewees could be grouped into four categories (data; resources; management; awareness and perception), which were then used in the online survey to grade the importance of the various barriers identified.


Note: scale from 1 (barrier with low influence) to 5 (very important barrier)

Figure 11. Question “Please grade the importance of the barriers to evaluation” (Source: Bini et al., 2017).

The three main barriers show a mix of organizational, financial and technical issues:

  • insufficient financial resources, for example due to public budget restrictions and priority given to funding implementation;
  • lack of interest from policymakers and public managers, for example due to priority given to action or a fear that evaluation might find unexpected effects or raise unwanted issues;
  • lack of reliable data to evaluate non-energy effects, for example because the data needed are not covered by the monitoring of the policy measure.

Interviews done for the EPATEE case studies also showed that performing evaluation is not only about practical (e.g., data collection) or methodological (e.g., defining a baseline) issues. Organizational issues can be just as important, particularly when considering the planning and use of evaluation.

Another key barrier to evaluation pointed out by some interviewees is the lack of trust that stakeholders may have in evaluation results. The credibility of the evaluation results is indeed essential for policymakers and other stakeholders to take them into account. Trust may depend on how stakeholders perceive the legitimacy or credibility of the evaluators and their methodology, the quality of the evaluation itself, whether they were involved in the evaluation process, and whether the results are transparent.

The lack of interest in evaluation sometimes shown by top management, and the fear of seeing results that are worse than expected, can explain the lack of priority and resources dedicated to evaluation. This may also explain why some respondents consider the absence of an obligation to perform evaluation to be a major barrier. However, with stronger support from top management for evaluation, there would be less need to push for an obligation to evaluate. Mandatory evaluations can indeed result in reports that merely fill shelves.

2 – Barriers to the integration of evaluation into the policy cycle

The many issues (e.g. financial, technical, organisational, political) that can impede an effective evaluation or reduce its scope also affect the extent to which evaluation can be integrated into the policy cycle and thereby contribute to continuous improvement.

The interviews done for the EPATEE case studies confirm that introducing and integrating evaluation into the policy cycle can improve policy effectiveness.

The second EPATEE online survey of stakeholders investigated, among other issues, challenges for the integration of evaluation into the policy cycle (Bini et al. 2018). Answers about current practices were analysed according to the profile of the respondents (evaluation customers, Figure 12, and evaluators, Figure 13).


Note: multiple answers possible

Figure 12. What evaluation customers answered about the way evaluation is generally integrated (or not) into the policy cycle of their organization (Source: Bini et al. 2018).

Half of the 12 evaluation customers answered that the evaluation results and conclusions are usually communicated to the various levels of hierarchy within the organisation (up to the top management / top levels).

This was confirmed by another question, for which only 1 (out of 12) respondent said that evaluation results were rarely discussed in his or her organisation, whereas 2 said this was systematically the case, 5 that evaluation results were frequently discussed, and 2 that they were sufficiently discussed. So overall, when an evaluation is done, its results tend to be discussed. This result should be taken with caution, due to the small sample size and the possible bias that respondents may be front-runners in terms of evaluation practices.

As a point of comparison, the qualitative survey done by Giorgi (2017) provides a more mixed picture: “Policy stakeholders all stated that evaluation was and ought to be key to good and open policymaking. Evaluation, often indirectly and in certain circumstances, was believed to inform policy; however, interviewees stated that policy is not driven by evaluation outcomes. Often policy interviewees highlighted two facades of evaluation: an external, formal, independent assessment carried out for accountability purposes and an internal more iterative and reflective dialogue of what works.”

The other criteria included in Figure 12 show that the situation varies a lot among respondents when it comes to practices for planning and undertaking evaluations. This is a sign that very different practices are found across countries and/or institutions, from the absence of a clear evaluation framework or guidelines to the systematic use of clear rules.

This was confirmed by another question about practices related to the early planning of evaluation (i.e. planning the evaluation from the start of the policy measure): 6 respondents said that this practice is either frequent (3), systematic (1) or sufficient (1) in their organisation, 3 that it was rare, and the remaining 3 that they did not know.

The answers from evaluators also showed a diversity in the practices they encountered, ranging from purely administrative evaluations to evaluations well linked to the policy process.

Note: multiple answers possible

Figure 13. Evaluation and its links with the policy process and decision-making, from the evaluators’ point of view (Source: Bini et al. 2018).

More specifically about evaluation planning, 34% of the 29 evaluators said that the evaluations they carried out were either mostly (24%) or completely (10%) planned in advance, whereas 25% said that they were mostly (21%) or completely (4%) decided and managed at the last moment. 38% mentioned a mixed situation (partly planned, partly managed at the last moment). Evaluators’ point of view would thus reflect more “late planning” than that of evaluation customers. However, this comparison should be taken with caution due to the small size of both samples.

A large majority of the surveyed evaluators (73%) said that their evaluation results were discussed by the policymakers or officers, either systematically (17%), frequently (21%) or sufficiently (35%). However, compared to the feedback from surveyed evaluation customers, the share of surveyed evaluators saying that this was rarely the case is higher (24% vs. 8%, i.e. only 1 evaluation customer).

In the end, the survey results showed a diversity of practices: while good practices are sometimes applied, they are not used systematically.

The second EPATEE online survey also provided insights about barriers that can impede an effective integration of evaluation into the policy cycle.

Many answers about the barriers to the integration of evaluation into the policy cycle raise issues similar to the barriers reported in the first EPATEE survey about evaluation practices (see Figure 11 above), particularly regarding resources. However, some are more specific to the links between evaluation and the policy cycle.

This was an open-ended question in the survey, so the results are mostly qualitative. Nevertheless, similar answers were grouped to analyse whether some issues stand out, and to see whether the answers could be matched with the issues identified in Table 3.

4 issues (Political will; Resource allocation; Evaluation planning and preparation; Communication and mutual understanding) are clearly present in the answers to the survey. The 3 other issues (Legitimacy; Organisation; Communication about the evaluation and its results) were not explicitly reflected in the answers to the online survey. However, these issues were clearly raised in several of the interviews done for the EPATEE case studies, and more generally (i.e. about evaluation but not specifically related to energy efficiency policies) in the interviews done by Giorgi (2017).

Answers related to Political will (top-management commitment)

7 answers emphasised that policymakers’ lack of interest in evaluation and/or the priority given to launching new policies or to implementation could be one reason behind other barriers (financial and time resources, timing and planning, cultural aspects). Some of these answers pointed out that these issues can be related to the turnover of policymakers.

These answers raised another issue: the lack of interest in evaluation could stem from policymakers assuming that they already know the impacts of the policies well. 4 other answers went even further along this line, mentioning that policymakers might sometimes not be willing to see results that differ from what they expect. This feedback is moderated by another answer reporting a positive experience with public authorities that showed clear interest in the evaluation results and in using them.

Another answer to the second EPATEE survey brought a complementary view about policymakers’ interest or will to evaluate, pointing out that evaluation is not always necessary from a decision-making point of view (which echoes, to some extent, some of the answers above about cultural aspects).

Some answers indeed highlighted that decisions can be the result of political compromises that do not necessarily take into account the evidence brought by evaluation.

Answers related to Resource allocation (time, people, budget)

The financial barrier is mentioned in a straightforward way in 6 answers, which also emphasise that the resources available for evaluation can depend on the size (or budget) of the policy measure. Other answers raise cost-related issues rather than budget constraints (e.g. costs of data collection and analysis, administrative burden for participating parties).

Time as a resource is also directly mentioned in 3 answers (so less frequently than financial resources).

Giorgi (2017) mentioned that the issue of time as a resource is not only about having enough time to collect data, perform analyses, etc. It is also about having enough time to involve people in the evaluation process and to set up agreements or partnerships that facilitate the evaluation.

The lack of budget allocated to evaluation can be a prominent barrier when there is no legal requirement to do evaluations (or when the scope of such requirements is limited to a few policies), especially in times of austerity, as pointed out by Giorgi.

Answers related to Evaluation planning and preparation

2 answers raise issues related to evaluation planning (e.g. data collection not planned early enough). More answers (5) deal with timing, in terms of the difficulty of matching the timeframe for evaluation with the timeframe of decision processes.

Giorgi (2017) also found from her interviews with policy stakeholders that timing is one of the main issues for achieving evidence-based policymaking. She highlighted that “policy and evaluation have two distinct tempos”: policy implementation needs to be dynamic and reactive, whereas evaluation requires standing back and taking time for analysis. Hence the challenge of coordinating the two.

Another issue raised by Giorgi’s interviewees is the fact that policies are “not being designed from the onset as ‘evaluable’ policies taking place in an interrelated system with a myriad of intervening factors impacting a non-linear process”.

Answers related to Communication and mutual understanding

Answers related to evaluation planning also pointed out that problems with evaluation planning might be due to differences in culture or habits between the decisional level (policymakers) and the operational or technical level (policy officers and other implementers). These differences, or the usual routines in decision making or policy management, are raised in 6 other answers pointing out communication issues within or between institutions, specifically between the political and operational levels, as well as the need for knowledge transfer and capacity building for the different persons involved in the evaluation process and the use of evaluation. Capacity building (for both sides, policymakers and evaluators) was also mentioned in 6 other answers.

Connected to the cultural aspects, 4 answers raised issues related to the definition or selection of evaluation indicators or criteria. This issue was also connected to differences in viewpoints between operational agents, policymakers and evaluators, who may be interested in different evaluation objectives or metrics and have different understandings of the policy.

In a few cases, the respondents ranked the barriers they mentioned. These four rankings all differ, which would suggest that the hierarchy of barriers might depend on the context, or on respondents’ own experience.

The qualitative survey done by Giorgi (2017) provides complementary insights, including about the issues not raised in the answers to the second EPATEE survey.

About the communication and use of the evaluation results, Giorgi highlighted that “amongst respondents there was a sense of realism that, at times, circumstances and data do not allow for evaluation outcomes to influence policies.”

About the organisation of evaluation, one finding of Giorgi’s survey is that the usual steps of an evaluation² are not really linked up as theory would suggest. In practice, they often operate separately, mostly because they are managed by different persons, services or bodies. Each person might then have a limited view of the other steps (an issue related to a possible lack of time, or a lack of communication between services or organisations). For example, the planning & design step can be managed by a service dedicated to policy analysis or research, whereas the decision to commission an evaluation and the evaluation priorities are set by the directorate general. The two services might be interested in different questions, which might lead to conflicting views or inconsistencies in the evaluation specifications.

This is summarized by Giorgi: “having phases operate in isolation or in silos is not conducive to better evaluation practice”.

Two other points highlighted by Giorgi can be linked to the issue of communication (between services or organisations): “not having access to colleagues (e.g. policymakers not having access to policy analysts); and the high turnover of staff due to how career paths get forged”.

Another issue pointed out in Giorgi’s survey is that the expectations or objectives of the policy stakeholders might change between the very beginning of the evaluation process and its end. This can be due to evaluation priorities initially being based on a kind of wish list rather than on an analysis of the policy theory and the needs for decision-making. But it can also be due to a change in top management or even in government. Such an experience was, for example, mentioned in the EPATEE case study about the evaluation of the Primes Energie scheme in Wallonia.

About allocation of resources, some practitioners interviewed by Giorgi reported tensions between implementers and evaluators, because implementers saw evaluation as “taking time, resources and energy away from delivery”.

² Giorgi uses the following steps to depict the evaluation process: design & plan; commission & detailing; implement & analyse; complete & use results.