Good practices for the integration of evaluation into the policy cycle

The overall approach to integrating evaluation into the policy cycle can be summarized as follows:

“Evaluation needs to be an integrative, continuous process not a one-off exercise at the end or a series of self-contained steps, it needs to become a way of working” (Giorgi 2017)

This part presents examples of good practices collected from the EPATEE case studies, the second EPATEE online survey (Bini et al. 2018) and the survey done by Giorgi (2017). They are not meant to be exhaustive, but to reflect empirical findings from the analysis of current practices for samples of evaluations. The objective is to contribute to experience sharing and capacity building.

The good practices are presented as actions that can be carried out by the persons or units in charge of evaluation within public bodies. However, they can also be considered by evaluators, private stakeholders or, more generally, anyone interested in the development of evaluation practices.

These actions are structured according to the main issues identified in Table 3, and grouped into two categories: short-term and medium-term actions.

The first category can be considered whenever a new evaluation is launched (or, better, whenever a new policy is launched or an existing one revised). The second category includes suggestions to improve practices and facilitate the integration of evaluation into the policy cycle over time.

Some of the actions presented in the following sections can overlap as the different issues are often linked. The Annex provides two summary tables, respectively for short-term and medium-term actions.

The actions presented below should be seen as suggestions. They do not necessarily apply to all contexts and situations. Their relevance also depends on the “magnitude” or ambition of the evaluation activities, the political stakes, the size of the policy evaluated, its history, etc.

1 – Political will (top-management commitment)

1.1 – Clarify expectations, what evaluation can bring and how it can be used

Political will to do evaluations is often linked to policymakers’ interest in what they can get or learn from an evaluation. This can thus depend on their background and previous experience with policymaking and evaluation. Ensuring that all parties involved in the evaluation are on the same page about what it can (or cannot) bring can thus be critical for the decision to launch an evaluation and for its success.

Another key issue is policymakers’ willingness to accept risk and failure. In extreme cases, evaluation is rejected either because one is sure that the policy is successful (so there is no need to verify it), or because one is afraid of the results that the evaluation could show. Experience sharing is then a way to overcome such preconceived ideas, particularly by showing that understanding the reasons for failures, weaknesses or limitations is the best way to learn how to design successful policies.

SHORT-TERM ACTIONS Purpose(s)
Organise an exchange between the person in charge of the evaluation internally and the top management of the institution in charge of the policy

·        identify top-management’s expectations towards evaluation;

·        agree on realistic expectations about what evaluation can and cannot achieve (Giorgi 2017)

Ensure that the evaluation includes indicators or metrics in line with policymakers’ priorities

·        ensure that the evaluation will bring findings that policymakers will be interested in

·        speak the same language as the policymakers

MEDIUM-TERM ACTIONS Purpose(s)
Communicate about what evaluation can bring to policymakers, and share experience about how to handle “bad” results

Raise awareness of policymakers about evaluation approaches

(e.g. organising experience sharing workshops; preparing briefing notes that present testimonies from policymakers)

·        highlight findings and examples that will resonate with the priorities of the policymakers contacted;

·        overcome preconceived ideas about evaluation

·        avoid evaluation being instrumentalised (misused for other purposes)

·        get support from the top management for evaluation activities

Provide a clear view of the evaluation process (and particularly of the means involved), and relate the evaluation budget to the whole budget of the policy

(e.g. explaining that the evaluation process will be embedded in the policy cycle, and will use synergies with the policy implementation and monitoring; explaining that evaluation can help to optimize policy design and implementation, thereby generating cost savings)

·        provide hands-on examples that will resonate with policymakers;

·        demystify the evaluation process, its costs, etc.;

·        address concerns or fears that create reluctance towards evaluation

1.2 – Analyse the current policy framework and processes, how evaluation can fit in, and which stakeholders can ask for evaluations

There is an overall trend to institutionalize evaluation processes. This can take various forms, for example:

  • legal provisions to ensure that evaluations will be done for certain types of policies, at certain stages of the policy cycle, etc.;
  • creation of services or institutions dedicated to evaluation (e.g. evaluation unit or evaluation body);
  • expanding the role or missions of existing institutions (e.g. Court of Auditors, Parliament).

More generally, an evaluation takes place within a given policy framework. Taking this framework into account is important to identify the stakeholders who can be involved in the evaluation process, the timing needed for the evaluation to be in line with decision processes, etc. It is therefore useful to analyse the main policy frameworks, which mostly relates to medium-term actions.

MEDIUM-TERM ACTIONS Purpose(s)
Review how policy and evaluation processes are supposed to be organised, and identify possible gaps between framework and practice

·        anticipate how evaluations can be planned and connected to the policy cycle;

·        raise issues related to gaps in the evaluation practices

Identify which stakeholders can request or initiate an evaluation

·        identify the institutions or persons to contact when considering evaluation plans or priorities;

·        help to coordinate evaluation plans (e.g. when different institutions can initiate evaluations)

Identify what legal provisions are in place about evaluation processes or methodologies (at national or EU levels)

·        anticipate legal requirements for evaluation;

·        identify legal provisions that can help to initiate an evaluation process

Analyse for which policies there would be public or political pressure to carry out evaluations (and about which questions)

·        help prioritize evaluation plans;

·        help prioritize evaluation questions

1.3 – Mandatory provisions for evaluation

In the interviews and online surveys done for the EPATEE project, several stakeholders pointed out that how many policies are evaluated (and to what extent) sometimes depends on national laws and regulations. Systematic evaluation requirements were reported for some countries (e.g., Germany, Sweden, UK). Such requirements were sometimes mentioned to apply mostly to ex-ante evaluation or impact assessment (e.g., Austria), or to be focused on “large budget” policies (e.g., Italy). Some stakeholders also said that the lack of evaluation requirements and/or guidelines was a key reason for the lack of evaluation in their country.

This point goes beyond the previous one in cases where a general lack of evaluation is observed. It should indeed be noted that obligations to evaluate can have counterproductive effects, for example making evaluation perceived as a burden, or leading to evaluation reports that are not read or used.

MEDIUM-TERM ACTIONS Purpose(s)
Advocate (at national or EU level*) to include requirements about ex-post evaluation in the laws establishing new policies or amending existing policies.

·        Ensuring that evaluation is considered when a policy is decided or revised

Review if EU requirements about evaluation and reporting are well identified and taken into account by national (or regional) policymakers

·        Anticipating reporting needs

* several stakeholders have highlighted that EU requirements are often considered as a “must” by national bodies (which is not always the case for national requirements)

2 – Resource allocation (time, people, budget)

2.1 – Discuss evaluation means when deciding the budget for the policy measure

As pointed out by one interviewee during the interviews done for EPATEE, evaluation costs are usually very small compared to the cost of implementing the policies. Still, it was a recurring feedback that not enough budget was available for evaluation. This can be partly explained by the fact that evaluation is sometimes considered late in the policy cycle, at a time when most of the budget for the policy has already been used.

Beyond budget, this issue also encompasses other types of means that can be needed for the evaluation, for example the time needed from implementing staff to collect and report data.

One way to tackle this issue can thus be to ensure that the means needed for the evaluation are discussed when deciding about the overall means committed to the policy.

This issue is also related to evaluation planning (see 6.3.1) and organisation (see 6.5). In particular, several stakeholders pointed out that evaluation should as much as possible avoid creating new administrative processes and workloads, including new information collection systems (see 6.5.2 about identifying synergies).

Likewise, actions to show the added value of evaluation (see 6.1.1) can help to get adequate evaluation means.

The discussion about evaluation means can therefore be an opportunity to discuss the cost-efficiency of evaluation efforts: what the evaluation can bring vs. what means it would require. Experience shows that stakeholders involved in policy implementation are more inclined to contribute to evaluation (e.g. for data collection), when they have a clear view about what their contribution (e.g. data) will be used for.

SHORT-TERM ACTIONS Purpose(s)
Put evaluation means on the agenda when the budget of the policy is decided

·        Initiate the discussions about evaluation means early enough (to avoid cases where no budget is left because evaluation was considered too late)

Clarify what the evaluation means will be used for, and what the evaluation will bring

·        Get support or commitment from the stakeholders that will be involved in the evaluation

MEDIUM-TERM ACTIONS Purpose(s)

Analyse the decision processes about budget/means

·        Identify the opportunities to discuss evaluation means

2.2 – Define criteria to assess the needs in evaluation means

The previous point shows that it can be useful to have criteria or guidelines at hand to get quick insights into the means that may be needed, depending on the type of evaluation objectives. Such tools can then help evaluation means to be taken into account in budgetary discussions. They can also contribute to an early planning of evaluation (see 6.3.1).

Developing such criteria or guidelines is usually a medium-term action.

MEDIUM-TERM ACTIONS Purpose(s)
Prepare hands-on guidelines summarizing what means are usually needed depending on the type of evaluation objectives

·        Provide quick insights about needs in evaluation means related to common evaluation objectives

3 – Evaluation planning and preparation

3.1 – Make early planning of evaluation a common practice

Taking future evaluation needs into account when designing or revising a policy is a standard recommendation about evaluation practices. However, feedback from stakeholders confirms that it is not yet a common practice.

This could be due to various reasons: priority given to action/implementation, different stages of the policy cycle being managed by different services or institutions, etc.

Based on the experience from past evaluations and the feedback from stakeholders, the two tables below include examples of short-term and medium-term actions, respectively, that can help ensure that evaluation will be considered along the policy cycle, and not perceived or delivered as a one-off standalone activity or a collection of self-contained steps (risks identified by Giorgi 2017).

SHORT-TERM ACTIONS Purpose(s)
Clarify from the start of the policy cycle what the evaluation will be used for.

Discuss the evaluation objectives (and their prioritisation).

·        Clarify the evaluation objectives

·        Get an agreement on the purposes of the evaluation, and thereby an acknowledgement of its usefulness

Consult evaluation experts when setting up the policy

·        Identify what evaluation approaches could be relevant (depending on evaluation objectives)

·        Identify most common data needs (depending on the evaluation approach) from the outset

Clarify the expectations for monitoring and evaluation requirements (particularly about data collection).

Explain what the contributions (particularly the data) to the evaluation expected from practitioners/stakeholders will be used for.

·        Tackle the negative perception of evaluation taking resources away from delivery;

·        Get support or commitment from the practitioners/stakeholders to be involved in the evaluation

Make the organisation of the evaluation clear before the policy (or its new period) starts

·        Facilitate coordination / cooperation along the evaluation process

·        Ensure that stakeholders are aware of when and how they can be contacted for or take part in the evaluation

Set up a working group to organise data collection, discuss ICT issues, etc. (when relevant)

·        gather the various expertise and different viewpoints that can be needed to organise the data collection;

·        anticipate practical problems that can be encountered for data collection

·        identify possible synergies (particularly about data that can be used for several purposes)

MEDIUM-TERM ACTIONS Purpose(s)
Adopt a general framework to plan evaluation activities

(e.g. defining guidelines to facilitate early planning of evaluation)

·        Provide conditions for evaluation to be more systematically taken into account at the beginning of the policy cycle
Define a set of usual evaluation indicators, and guidelines about how they can be used in practice (a minimal example is sketched after this table)

·        Facilitate the discussion and selection of evaluation indicators
Use past experiences to prepare guidelines about planning data collection (depending on the evaluation method)

(e.g. identifying usual data sources; asking participants’ approval for further surveys when providing financial incentives; ways to handle data privacy)

·        Facilitate the discussion about early planning of data collection

·        Anticipate risks or difficulties commonly encountered for data collection
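
As an illustration of what a shared set of usual evaluation indicators could look like once written down, the sketch below describes a few indicators commonly used for energy efficiency policies in a simple, machine-readable structure. It is only a minimal sketch: the indicator names, units, data sources and calculation descriptions are assumptions chosen for illustration, not a prescribed EPATEE list.

```python
from dataclasses import dataclass

@dataclass
class EvaluationIndicator:
    """Minimal description of an evaluation indicator, to be agreed on early in the policy cycle."""
    name: str
    unit: str
    data_sources: list   # where the underlying data is expected to come from
    calculation: str     # short description of how the indicator is computed

# Hypothetical set of usual indicators for an energy efficiency policy (illustrative only).
USUAL_INDICATORS = [
    EvaluationIndicator(
        name="Annual final energy savings",
        unit="GWh/year",
        data_sources=["monitoring database", "deemed savings estimates", "metered data (samples)"],
        calculation="sum over action types of unitary savings x number of actions",
    ),
    EvaluationIndicator(
        name="Cost-effectiveness for the public budget",
        unit="EUR per kWh saved (first year)",
        data_sources=["programme budget records", "annual final energy savings"],
        calculation="public spending / annual final energy savings",
    ),
    EvaluationIndicator(
        name="Number of participants",
        unit="participants",
        data_sources=["monitoring database"],
        calculation="count of distinct participants over the evaluation period",
    ),
]

if __name__ == "__main__":
    for indicator in USUAL_INDICATORS:
        print(f"{indicator.name} [{indicator.unit}]: {indicator.calculation}")
```

Such a structure can also serve as a checklist when discussing data collection, since each indicator makes its data needs explicit.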

3.2 – Timing issue

Matching the timeframes of evaluation and decision (or consultation) processes is a recurrent challenge for evaluators, who often mention the lack of time as one of the key difficulties they face.

Giorgi (2017) found that one way to tackle this challenge is to work on a mutual understanding between policy officers and evaluators:

“With an open acknowledgement from both sides, this timing mismatch, according to interviewees, can be addressed. External evaluators, researchers and/or civil service analysts, on the one side, need to be confident in sharing emerging findings and insights live at that point in time through outlets such as learning sessions. The idea being that emerging findings are ‘good enough’ and tell the narrative of the ‘impact at this point in time’ to feed into policy development decisions. Policymakers, on the other side, need to feel comfortable with these caveats and with the risk that final results and end conclusions may differ when all the data has been collected and analysed.” (Giorgi, 2017)

SHORT-TERM ACTIONS Purpose(s)
Ensure that policy officers consult evaluation experts about realistic timelines for evaluation (depending on the evaluation objectives)

And reciprocally, ensure that evaluators are fully aware of the timeline and time constraints of the policy (and particularly of the timing of consultation and decision processes)

·        Ensure that evaluation commissioners have realistic expectations, taking into account time constraints

·        Ensure that evaluation findings will be available early enough to be taken into account in the consultation or decision processes

Consider the use of regular intermediate feedback loops

·        Ensure timely inputs for policy management

·        Enable data collection, evaluation questions, etc. to be adapted if needed

 

MEDIUM-TERM ACTIONS Purpose(s)
Consider alternatives to classical ex-post evaluations, depending on the evaluation objectives and time constraints

(e.g. consider “accompanying” or mid-term evaluations that are done while the policy measure is still running)

·        Facilitate the adaptation of evaluation planning to the needs and priorities of policy making

3.3 – Preparation

For general guidance about how to prepare an evaluation, see the sources mentioned in part 3. We deal here with the aspects of preparation that interact with the policy cycle and that were highlighted by stakeholders based on their experience with evaluations. See also section 6.5 about the aspects related to the organisation of the evaluation.

SHORT-TERM ACTIONS Purpose(s)
Make sure that the relevant information will be available for evaluators to understand how the policy was supposed to work (description of the policy theory)

Keep track of the main changes (or decisions) made over the course of the policy’s development

·        Ensure that the evaluators will know the background, key elements and changes of the policy

·        Avoid the analysis from the evaluation being disconnected from the reality of the policy

·        Avoid the risk that the memory of the policy is lost (e.g. due to staff turnover)

MEDIUM-TERM ACTIONS Purpose(s)
Prepare guidance about how to describe the policy theory and what information to keep track of when aiming at documenting a policy and its changes over time (a minimal sketch is given after this table)

·        Facilitate the information flows between policy officers and evaluators

Define simple procedures for information and experience transfer in case of staff turnover

·        Build an organisational memory of the policy
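
As a simple illustration of what such guidance could translate into in practice, the sketch below keeps a short description of the policy theory together with a log of the main changes and decisions, so that this organisational memory can be handed over to evaluators or new staff. It is only a sketch under assumed field names; the example policy and entries are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyChange:
    """One entry in the log of main changes or decisions made along the policy's development."""
    date: str        # date of the change
    change: str      # what was changed or decided
    rationale: str   # why it was changed (useful context for later evaluation)

@dataclass
class PolicyRecord:
    """Minimal documentation of a policy, to be handed over to evaluators (and new staff)."""
    name: str
    objectives: List[str]
    intervention_logic: str    # short description of the policy theory
    target_groups: List[str]
    changes: List[PolicyChange] = field(default_factory=list)

    def log_change(self, date: str, change: str, rationale: str) -> None:
        self.changes.append(PolicyChange(date, change, rationale))

# Hypothetical example of use
record = PolicyRecord(
    name="Residential retrofit grant scheme (example)",
    objectives=["reduce final energy consumption in dwellings", "alleviate energy poverty"],
    intervention_logic="grants lower the upfront cost barrier, increasing the uptake of retrofits",
    target_groups=["homeowners", "installers"],
)
record.log_change("2019-03-01", "grant rate increased for low-income households",
                  "low uptake observed in the first year among the target group")
```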

4 – Legitimacy and credibility

As mentioned in section 6.1, a successful integration of evaluation into the policy cycle requires political will or stakeholders’ interest in evaluation. Another key factor is how evaluation is perceived by stakeholders, and particularly in terms of legitimacy and credibility. Both are needed for stakeholders to have confidence in the evaluation results (see part 4 of Broc et al. 2018).

The legitimacy of the evaluation and the credibility of its results are essential for evaluation findings to be discussed fairly by the different parties, and then to be used when considering changes in the policy or other decisions.

When stakeholders have confidence in the evaluation results, this also helps strengthen the legitimacy of the policy and related decisions. This can then translate into securing or increasing the funding for the scheme (see the Irish case study) or the involvement of stakeholders such as participants in voluntary agreements (see the Finnish case study).

The level of confidence may depend on how stakeholders perceive the credibility of the evaluators and the relevance and reliability of the evaluation methodology, whether they were involved in the evaluation process, and whether the results are transparent (not only public, but also well documented).

The two tables below suggest actions to create conditions to ensure legitimacy and credibility.

SHORT-TERM ACTIONS Purpose(s)
Consider which different points of view about the policy should be taken into account, and which stakeholders should be included in the discussions about the evaluation

·        Identify key stakeholders for the discussions about the evaluation

·        Favour a pluralist approach (i.e. taking into account the differences in viewpoints)

Inform stakeholders about the evaluation process, from the decision to launch an evaluation onwards (and particularly about the rationale for doing an evaluation, the evaluation objectives and the selection of the evaluators)

·        Transparency and legitimacy of the evaluation process

Create opportunities for discussion or comments along the evaluation process

(e.g., create a group gathering stakeholders’ representatives, with meetings for the key milestones: evaluation objectives, evaluation methodology, evaluation results)

·        Involve stakeholders in the evaluation process

·        Collect the different points of view

·        Ensure a common understanding (of the evaluation objectives, methodology and results)

·        Identify if adaptations (of the objectives or methodology) should be considered

·        Avoid late comments that would question the legitimacy or credibility of the evaluation

Organise an external review of the evaluation methodology and results

(e.g., appoint a scientific committee including evaluation experts and academics)

·        Provide guarantee about the quality of the evaluation methodology and results

·        Identify possible shortcomings or limitations, and how to overcome them

Use conservative assumptions or correction factors in case of missing or incomplete data (see the illustrative sketch after this table)

·        Avoid over-estimations of the results (and particularly of energy savings)
Test the robustness of the results

(e.g. plausibility checks, comparison of different calculation methods, benchmarking)

·        Credibility of the results
Ensure a minimum level of documentation of the evaluation methodology and results

·        Transparency of the evaluation and its results
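
To make the last rows of the table above more concrete, the sketch below shows one possible way, not a prescribed EPATEE method, of applying a conservative correction factor to estimated savings when part of the supporting data is missing, and of running a simple plausibility check against an independent benchmark (e.g. a top-down or metered-sample estimate). All figures, factor values and function names are hypothetical and for illustration only.

```python
# Illustrative sketch: conservative correction of estimated savings and a plausibility check.
# All figures and factor values below are hypothetical examples, not recommended defaults.

def corrected_savings(gross_savings_gwh: float,
                      documentation_rate: float,
                      conservative_factor: float = 0.9) -> float:
    """Apply a conservative correction when part of the reported actions lack supporting data.

    gross_savings_gwh   -- savings estimated from the monitoring data (GWh/year)
    documentation_rate  -- share of reported actions with complete supporting data (0..1)
    conservative_factor -- discount applied to the undocumented share (assumption)
    """
    documented = gross_savings_gwh * documentation_rate
    undocumented = gross_savings_gwh * (1 - documentation_rate) * conservative_factor
    return documented + undocumented

def plausibility_check(estimated_gwh: float, benchmark_gwh: float, tolerance: float = 0.25) -> bool:
    """Flag estimates that deviate from an independent benchmark by more than the tolerance."""
    if benchmark_gwh == 0:
        return False
    deviation = abs(estimated_gwh - benchmark_gwh) / benchmark_gwh
    return deviation <= tolerance

if __name__ == "__main__":
    # Hypothetical inputs: 120 GWh/year of gross estimated savings, 80% of actions fully
    # documented, and an independent benchmark estimate of 100 GWh/year.
    estimate = corrected_savings(gross_savings_gwh=120.0, documentation_rate=0.8)
    print(f"Corrected estimate: {estimate:.1f} GWh/year")
    print("Plausible vs benchmark:", plausibility_check(estimate, benchmark_gwh=100.0))
```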

 

MEDIUM-TERM ACTIONS Purpose(s)
Define transparent criteria to select the evaluators

·        Transparency of the evaluation process

·        Credibility of the evaluators

Define transparent procedures for verifying the quality of the data.

And when relevant, define guidelines and organise training for stakeholders involved in data collection.

·        Transparency of the evaluation process

·        Ensure quality of the data (thereby credibility of the results)

Consider how to facilitate the use of measured or metered data for the evaluation of actual energy savings

·        Credibility of the results

Ensure that the means and time available for the evaluation are adequate vs. the evaluation objectives / ambition

·        Credibility of the evaluation process

Clarify what criteria are used to ensure the independence* of the evaluators

·        Transparency of the evaluation process

·        Legitimacy of the evaluation

·        Credibility of the evaluators

Define templates to ensure a systematic and minimum documentation** of the evaluation methodology and its results

·        Transparency of the evaluation results

* About the independence issue, feedback from the interviews and online surveys done for the EPATEE project has shown different points of view and interpretations of what an independent evaluation is, and of how to ensure the independence of the evaluators. To summarize, commissioning external evaluations is one way, but not the only one. An alternative is to have evaluations done by a service or institution distinct from the one(s) in charge of deciding, funding and implementing the policy. The possibility for an independent public body (e.g. a Court of Auditors) to carry out evaluations on its own initiative has also proved to provide good guarantees.

** About the documentation issue: in most cases, evaluators will not publish all the details of their methodology (e.g. about modelling or sampling techniques), either due to intellectual property or commercial confidentiality, or because this would lead to overly long and indigestible reports. However, a minimum level of documentation is always possible, providing a view of the main aspects of the methodology and data issues. This is essential to keep the memory of the evaluation, avoid misinterpretation of the results, and enable a review by external experts.

5 – Organisation

5.1 – Clarify the roles, responsibilities, contacts and possibilities to contribute

A clear organisation helps the evaluation process to be well identified by the different services, bodies, stakeholders involved in the policy cycle. This is thus a prerequisite to facilitate the integration of evaluation activities into the policy cycle. Aspects about planning are dealt with in section 3. This section is focused on the roles and responsibilities of the different parties who can be involved in the evaluation process.

Experience shows that clearly assigning the role to coordinate or supervise the evaluation to a central contact creates favourable conditions for a partnership between evaluators and practitioners (i.e. persons in charge of or involved in implementing the policy).

“Such a set-up ensured a clearly defined role and set of responsibilities providing a sense of ownership and a ‘stake in’ the outcomes to those running the evaluation. This mutually beneficial partnership approach seemed to work best with clear delegation, factoring in evaluation into delivery activities from the onset and, where relevant and appropriate, using specialist organisations.” (Giorgi, 2017)

Another key aspect of the organisation of the evaluation is the way the different parties (policymakers, policy officers, delivery practitioners, etc.) will be involved in the evaluation process. As Giorgi (2017) reminds us, “the policy cycle is not the sole responsibility of the policymakers, it needs to be shared and owned across all relevant parties.” This also applies to evaluation, which is not the sole responsibility of the evaluators (or evaluation commissioners).

This implies clarifying the roles and responsibilities, but also creating favourable conditions for interactions between parties, opportunities for discussion, etc.

Such an organisation leads to the evaluation being perceived not as a process separate from the policy cycle or its developments, but as a part of them. Moreover, as highlighted by Giorgi (2017), “a wider sense of purpose and belonging in an evaluation can help nurture buy-in”.

SHORT-TERM ACTIONS Purpose(s)
Nominate an evaluation coordinator or supervisor

·        Create a central contact, well identified by all parties involved in the policy cycle or in the evaluation process

·        Facilitate the coordination of the evaluation process

Identify the stakeholders to be involved, and clarify the roles and responsibilities.

Clarify when and how discussions about the evaluation will be organised.

·        Involve stakeholders in the evaluation process

·        Transparency of the evaluation process

Clarify what contributions to the evaluation are expected from each party, and how and for what they will be used

·        Facilitate stakeholders’ participation in the evaluation process

·        Transparency of the evaluation process

MEDIUM-TERM ACTIONS Purpose(s)
Consider whether it would be relevant to set up an evaluation unit, working group or the like (for example to define evaluation plans, or to prioritize the evaluations to be done)

·        Ensure a general supervision of the evaluation activities
Identify the different services, public bodies, subcontractors (when relevant) that can have interactions with the evaluation process (particularly for information sharing or data collection).

Use past experiences to define guidance for coordination / cooperation between services or bodies.

·        Facilitate information flows

·        Ensure good coordination / cooperation along the evaluation process

Establish a general mapping of the stakeholders involved in energy efficiency policies

·        Facilitate the identification of stakeholders

Define guidelines about involvement of stakeholders in the evaluation process

·        Facilitate the organisation of the evaluation

5.2 – Identify possible synergies with existing processes

Frequent feedback from stakeholders about evaluation is that it should avoid creating additional administrative burden, and that it should as much as possible take advantage of existing tools, processes, etc.

Discussions with all parties (in the short term) and a review of the different processes of the policy cycle (in the medium term) can help identify opportunities for synergies. Synergies can be particularly helpful to optimize data collection. See also section 3 about planning.

SHORT-TERM ACTIONS Purpose(s)
Discuss the evaluation process with parties that could be involved in it, to identify existing tools, processes, etc. that can be used for the evaluation

·        Facilitate synergies

·        Minimize the needs/means that would be used solely for this evaluation

Prepare data collection so that data needed for the evaluation can be collected as much as possible along policy implementation

·        Optimize data collection
MEDIUM-TERM ACTIONS Purpose(s)
Review existing tools (e.g. ICT platforms), processes (e.g. for reporting or monitoring), interactions, etc. that could be used for evaluations, and identify how this would be possible

·        Facilitate synergies

·        Minimize the needs/means that would be used solely for evaluations

Analyse whether the interactions between monitoring and evaluation could be improved*.

Analyse whether problems of interoperability need to be solved (e.g. between different data platforms); an illustrative sketch is given after this table.

·        Facilitate information flows

·        Optimize data collection
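
As an illustration of the kind of interoperability mentioned in the last row of the table above, the sketch below joins a monitoring extract (participants and deemed savings) with metered consumption data coming from a separate platform, using a common participant identifier. The column names, example values and use of pandas are assumptions for illustration; the point is simply that agreeing early on common identifiers and formats makes it possible to reuse existing data for evaluation.

```python
import pandas as pd

# Hypothetical extracts from two separate systems (illustrative column names and values).
monitoring = pd.DataFrame({
    "participant_id": [101, 102, 103],
    "deemed_savings_kwh": [3200, 1500, 4100],   # ex-ante estimates from the monitoring platform
})
metering = pd.DataFrame({
    "participant_id": [101, 102, 103],
    "consumption_before_kwh": [14000, 9000, 18000],
    "consumption_after_kwh": [11200, 8100, 14500],
})

# Join the two sources on a common identifier agreed on when the data collection was planned.
merged = monitoring.merge(metering, on="participant_id", how="left")
merged["metered_savings_kwh"] = merged["consumption_before_kwh"] - merged["consumption_after_kwh"]
merged["realisation_rate"] = merged["metered_savings_kwh"] / merged["deemed_savings_kwh"]

print(merged[["participant_id", "deemed_savings_kwh", "metered_savings_kwh", "realisation_rate"]])
```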

6 – Communication and mutual understanding

An evaluation can involve people with different backgrounds (e.g. statistics, economics, engineering, sociology) and with different experience of, and connections with, the policy evaluated. They can thus have different perceptions or views about what evaluation means and what it should be, and likewise about the objectives of the policy, its delivery scheme or its results. Ensuring good communication can therefore be challenging, yet it is essential for information sharing and for getting a complete picture of the policy. Interactions and exchanges between the different parties, and particularly between policy officers (and policymakers whenever possible) and evaluators or policy analysts, are ways to favour the integration of evaluation into the policy cycle, and to avoid evaluation being perceived as a fully separate process.

SHORT-TERM ACTIONS Purpose(s)
Make sure the right contacts* are identified for each party to be involved in the evaluation process

·        Ensure an easy communication along the evaluation process
Clarify the evaluation objectives, and organise a feedback loop (when relevant)

(e.g., put evaluation on the agenda of the steering committee of the policy; create an email address that stakeholders can use to ask questions about the evaluation)

·        Ensure a shared understanding of the evaluation objectives (and thereby realistic expectations)
Facilitate exchanges between policymakers, practitioners/implementers and analysts/evaluators

(e.g., plan meetings at the key stages of the evaluation; create a steering committee of the evaluation)

·        Maintain regular contacts between the evaluation team and evaluation recipients

·        Ensure a mutual understanding (which does not necessarily mean a consensus or agreement)

·        Favour a pluralist approach of evaluation (taking into account differences in viewpoints)

·        Foster closer collaboration between policymakers/policy officers and analysts/evaluators

* the “right” contact is not only about finding a contact person for the communication link. It is also about finding a contact who is open to the evaluation process, as explained by one of the respondents to the second EPATEE online survey:

“Finding the right people to work together as a team. One person can easily dominate discussions; you need people who are willing to learn, rather than to say that they already know the answers.” (see Bini et al. 2018)

MEDIUM-TERM ACTIONS Purpose(s)
Maintain an updated list of contacts from the different services and bodies involved in the different stages of the policy

·        Maintain regular contacts

·        Facilitate an easy communication

·        Avoid missing or outdated links in the communication loops

Facilitate capacity building and experience sharing about evaluation issues (e.g., targeted workshops or trainings; technical briefs; testimonies about past evaluations)

·        Increase awareness and knowledge about evaluation

7 – Communication about the evaluation and its results

The review of the EPATEE case studies highlighted that the communication about the evaluation and its results can be as important as doing the evaluation (see part 12 of Broc et al. 2018).

Making it clear from the start of the evaluation what the audience will be, to whom the evaluation report will be communicated, to whom the main findings will be communicated (and how), and most importantly what the evaluation findings will be used for, has proven to be an effective way to motivate the different parties (including the evaluators) to take an active part in the evaluation process.

Anticipating debates or misinterpretations of the results is also important to avoid misuse of evaluation findings. Conversely, adapting the communication format to the target audience can increase the attention given to the evaluation findings, thereby increasing their use.

SHORT-TERM ACTIONS Purpose(s)
Clarify how evaluation results will be communicated and to whom

Clarify what the evaluation findings will be used for

·        Transparency of the evaluation process

·        Foster the involvement of stakeholders

·        Avoid generating frustration

Ensure that anyone who contributes to the evaluation (e.g. interviewees, data providers) will get feedback about what their contributions have been used for

·        Foster the involvement of stakeholders

·        Avoid generating frustration

Plan opportunities to discuss results with stakeholders at the main stages of the evaluation, and above all before the results are presented to a larger audience

·        See if adaptations are needed in the way the results are presented, to ensure they are well understood (e.g. adding explanations)

·        Avoid any effect of surprise or shock (especially in case of unexpected results)

·        Ensure consistency in the communication that can be done by different channels and parties

Prepare different communication formats according to the types of audience (when relevant)

Prepare an executive summary

·        Ensure that the key messages can be easily caught
Make clear the validity and limitations of the results

·        Transparency of the evaluation results

·        Ensure that the results are well understood

Ensure that the main evaluation report includes a minimum documentation of the evaluation methodology and results

·        Transparency of the evaluation results

·        Keep the memory of the evaluation

·        Enable external reviews

·        Facilitate further experience sharing

MEDIUM-TERM ACTIONS Purpose(s)
Analyse how evaluation results can be used as part of the general communication about the policy

·        Raise awareness about the policy, its objectives, impacts, etc.

Share general findings beyond the policy silo (share findings about evaluation practices and policy making that can be useful for other policy fields, and get insights from other policy fields)

·        Foster interactions between policy fields, services, institutions, etc.

·        Stimulate experience sharing and capacity building