Public Communication Scan of France

2. Raising standards for measurement, evaluation, and learning in public communication

Abstract

This chapter reviews the methods and approaches used across French government communication offices to define and measure performance indicators for their communications. It identifies gaps and opportunities for improving the design of communications, as well as evaluation and learning. The chapter offers recommendations to raise evaluation standards and proposes approaches for mainstreaming them across institutions, building on successful international examples.
Approaches to the evaluation of communication campaigns in France
Communication departments across the 10 French ministries and public agencies reviewed for this Scan are not spared from the industry-wide challenge of evaluation, although they compare favourably overall with the international picture. Evaluation has become an established practice across French institutions, and there are visible efforts to make it increasingly sophisticated. This chapter finds ample scope for supporting these efforts, drawing on the practices of the most mature peers in OECD countries and other international actors, as well as on the standards defined in the industry and the literature.
Public institutions in France have broadly adopted methods for evidence-driven communication. The evaluation of communication campaigns has become a routine practice for French ministries and agencies, building on the systematic definition of communication campaign KPIs as part of the approval process for campaign-related procurement.
The review of a sample of 50 recent campaigns shows that all but one were evaluated. In this respect, French institutions stand out from the common picture across OECD governments, where evaluation remains infrequent. A majority of the DICOMs seem to carry out evaluations of their campaigns regularly and thoroughly. Several DICOMs use indicators from previous campaigns to compare their performance and determine KPIs for subsequent campaigns.
However, the depth and quality of evaluation are highly variable and point to important areas of improvement that can contribute to making communication more effective, efficient, and strategic. The evidence reviewed indicates that evaluation concerns primarily output metrics and metrics relating to the paid components of communication campaigns. Conversely, long-term outcomes and impact are rarely evaluated, if at all.
Many of the limitations to the depth and quality of evaluation relate to issues with the design and objectives of communication campaigns. Indeed, campaigns are predominantly conceived to generate wide reach and visibility for policy issues, programmes, and services, but are seldom intended (at least explicitly) to engender change in audiences or society. Although this trend owes to factors often outside the direct control of communication teams (see Chapter 3), there is scope to improve practices for defining campaign objectives in ways that would lead to better measurable impact.
The prevalence of evaluation across campaigns is a solid foundation on which to build more advanced approaches that can better measure and demonstrate the impact of campaigns.
This section focuses on the processes and methods used to develop and evaluate communication campaigns, as these are the predominant instrument deployed by DICOMs, agencies, and the SIG alike, and the one to which the largest financial and personnel resources are dedicated. It reviews the current practices across DICOMs for the design and evaluation of their communication, including their choices of performance indicators. It provides recommendations for the shift towards greater measurement of outcomes and impact, drawing in particular on the approval procedures for campaign-related procurement. The latter have proved a valuable lever to encourage good practices but could be reinforced to drive further improvements, as discussed below.
Building evaluation into the design of communication campaigns
The first steps towards evaluating public communication come at the conception stage of an activity. The connection between objective-setting and measurement emerged as a recognised best practice throughout the review of international standards and literature above, and across the practices of those governments that are more mature in this space (OECD, 2021[1]).
As concerns the development of campaigns, communication directorates across French institutions are consistently in line with this practice. Out of the sample of 50 campaigns analysed via the OECD survey of 10 communication directorates, nearly all (94%) involved the definition of KPIs at the conception stage (Figure 2.1). In this respect France stands out from the international landscape, where only a small minority of institutions (12 out of 63) surveyed by the OECD in 2020 reported defining the evaluation metrics or methods at the development stage of their communication strategy (OECD, 2021[1]).
Despite this positive record, qualitative analysis shows that KPIs are often defined more to comply with procurement requirements than to rigorously identify indicators specific to a given campaign objective. Notably, the compulsory form for requesting SIG approval of any creative, advertising, or social research services asks the directorates to specify how they intend to measure the objectives of a campaign, but requires indicators only for evaluating the performance of the advertising portion of the activity.
As a result, the definition of KPIs for the activities can sometimes amount to a box-ticking exercise by those communication teams that would otherwise forego this step amid the pressures of progressing to execute the campaign.
Figure 2.1. KPIs are defined at the onset of each campaign
Survey question: In the last five campaigns conducted by your ministry/agency, for how many of them were key performance indicators defined from the onset of the campaign?
Note: n=50 campaigns, each respondent could select a quantity from 0 to 5.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Interviews indicated that, in practice, communication teams (primarily those in ministries rather than agencies) tend to set KPIs for the reach of target audience groups and the visibility of the campaign. These are metrics with which they are comfortable and which are obtained by default from the advertising agency whose services they use (as detailed further below). They are important metrics for the performance and comparison of different communication channels and contents, but are seldom sufficient as key indicators for a campaign overall, given their inability to measure outcomes for the audience, as noted above.
Because organic communication activities that form part of a campaign are not included in the approval procedures, KPIs tend to be defined without taking these activities into account. The expected performance and contribution of organic communications to the overall objective of the campaign are thus not specified at the outset. Moreover, with the exception of the SIG, unpaid communications are not evaluated with any regularity.
Defining the purpose of communications: inform, change and recruit
Similarly to the requirement for setting KPIs, the approval form that communicators must submit to the SIG requires them to categorise the overall purpose of the activity under one of three options: inform (the public), change (behaviour), or recruit (public service personnel). This step ought to go hand-in-hand with the selection of KPIs to ensure these are appropriate for estimating the achievement of the objectives. The three options mirror some of those found in guidance across OECD countries to frame communication activities and favour a more purpose-oriented communication (Box 2.1). The focus on recruitment is a notable addition in France, accounting for a considerable share of public sector campaigns (Figure 2.2).
Box 2.1. Macro-level objectives for public communication in Belgium and the United Kingdom
To govern the general purpose of public communication and assist communicators across ministries and institutions in defining their communication strategies, several countries have codified macro-level objectives from which communicators can choose to frame any activity. Belgium's central communication service relies on three primary objectives that overlap in part with France's: awareness, behaviour change, and reputation management.
The UKGC defined a similar set of macro-level objectives under the acronym CORE, which combines those used in France and Belgium:
C – Changing behaviours that benefit society
O – Operational effectiveness of public services
R – Reputation of the United Kingdom and responding in times of crises
E – Explanation of government policies and programmes
Source: Presentation by Belgium’s Federal Public Service Chancellery of the Prime Minister at the meeting of the OECD Public Communication Network on 26 February 2025; GCS (2025), Guide to campaign planning: OASIS, https://gcs.civilservice.gov.uk/guidance/marketing/delivering-government-campaigns/guide-to-campaign-planning-oasis/.
From the review of the 50 government campaigns, over half set informing the public as their main purpose. This is consistent with an estimate provided in interviews with the Secretariat for the totality of campaigns, which suggested that the remaining half is made up predominantly of recruitment campaigns and, to a lesser extent, behaviour change ones (approximately 10% of the total).
The inform objective category groups together campaigns that aim to make the public aware of and receptive to a given issue, and campaigns that promote public programmes. However, a good share of the campaigns in this category also include elements that imply the audience taking some action (for example, reporting personal or family status changes to adjust applicable taxes and deductions, or using an online tool to obtain advice on reducing alcohol consumption). In the sample reviewed, more than one in five campaigns had two objectives, one being to ‘inform’ and the other being ‘change’ or ‘recruit’ in equal measure.
This degree of fluidity between the objectives signals a lack of specificity about what the campaigns seek to achieve and departs from the intent of the framework to provide clarity about their purposes. According to three interviews, campaigns that have dual purposes have been found to perform worse against each purpose than those that only have one.
Based on the accounts of the DICOMs, internal expectations for campaigns often rest on their general visibility, which teams are comfortable obtaining and measuring. Most interviews confirmed that ministries and agencies are confident they can reach their target audiences most of the time. This seems to explain the preference for limiting the goal of communication to publicising a topic or government initiative. Conversely, interviews pointed to some hesitancy towards setting behaviour change objectives, which the DICOMs feel less comfortable planning and measuring.
The underlying objectives of information campaigns are typically less strategic or precise (for ministries more than for agencies), which can limit how much real impact they can have for policy (see Chapter 3). Many of these campaigns likewise tend to target very wide audiences, which drives up the cost of the advertising without necessarily increasing their impact.
By contrast, the guidance under the UKGC Evaluation Cycle urges that only a minority of campaigns ought to aim at solely changing awareness and attitudes, because of the difficulty in obtaining concrete outcomes that contribute to policy results. The guidance specifies that these types of activities ought to still account for “how the communication objectives of increasing awareness or changing attitudes feed into overarching behavioural and policy objectives” (GCS, 2025[2]).
Figure 2.2. Informing the public is the main purpose of half of communication campaigns
What were the general objectives of the last five campaigns conducted by your ministry/agency?
Note: n=50 campaigns, multiple responses possible. 11 campaigns are double-counted due to having more than one objective. SIG responses are included under ‘Ministries’. Total ‘inform’ campaigns: 32; ‘recruit’: 15; ‘change’: 14.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Recruitment stands out as an increasingly common purpose of campaigns for most DICOMs, and the one for which objectives are most clearly defined. Recruitment campaigns are often allocated some of the largest budgets, and in some cases DICOMs are able to measure their outcomes and impact in terms of the volume of job applications, their quality (matching the profiles sought), and final recruitments.
However, recruitment also differs in that it is not communication for citizens’ own information, but rather for the institutions’ operational needs. The impact of communication is thus directly for the institution, and indirectly for the citizens insofar as recruitment contributes to the good functioning of government services. If recruitment-related communication continues to grow, especially in terms of its share of communication budgets, it may be necessary to consider ringfencing resources dedicated to citizen-focused communication, to ensure they are not diverted toward meeting internal human resource needs.
The definition of a campaign’s purpose and the alignment of KPIs with its objectives are good practices that ought to be reinforced to drive more strategic and impactful communications. To this end, the SIG, as the recipient of approval requests, could support DICOMs in revising the purposes and objectives of their proposed campaigns. This step would favour a more accurate classification of purposes and could nudge a greater balance towards campaigns that contribute to measurable, policy-relevant actions from the audience. The following sections suggest ways in which the approval procedures can be leveraged to this end.
Selection and measurement of communications’ performance indicators
French public institutions consistently measure and evaluate the performance of their communication campaigns’ content and are able to determine whether they reached their target audiences and whether their message resonated. Most of the DICOMs interviewed are comfortable setting and tracking KPIs that show how visible a campaign is, particularly in terms of the performance of advertisements.
However, they rarely assess the effect that the campaign content has beyond its performance on the communication channels. Moreover, other elements of communications, chiefly organic (i.e. unpaid) activities and ones not linked to specific campaigns, often go unmeasured.
Evaluation literature and industry best practice (see previous chapter) emphasise the importance of measuring not only the performance of the communication outputs, but also the outcomes they have on audience perceptions, actions, and choices. Ultimately, evaluating communication activities ought to include an estimated contribution towards the overarching policy or organisational objectives (impact measures). French communication directorates thus have an opportunity to expand the evaluation of their work to include indicators related to longer-term outcomes and impact, and to make this standard practice.
The review of a sample of 50 recent campaigns shows that the indicators most commonly measured were output indicators, primarily those linked to the reach of the communication contents. As Figure 2.3 shows, reach and impressions, along with specific target audience reach, were the most common indicators, measured in at least three quarters of the campaigns.
Figure 2.3. Types of indicators measured across communication campaigns
In the five most recent campaigns that your ministry/agency has completed, which of the following KPI categories were measured?
Note: n=50 campaigns, multiple responses possible. “Engagement” in the questionnaire appeared as “Engagement (reactions, interactions, click-through rates, time spent on page, watch time)”.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Moreover, the reach indicators measured in many cases relate to advertising content, which is routinely analysed by the marketing agency and shared by default. Interviews with the DICOMs confirmed that these reports can constitute the sole evaluation conducted for some campaigns. Conversely, the evaluation of organic communication content is much less frequent in the context of communication campaigns. Indicators that primarily concern organic communication content in Figure 2.3, such as media coverage, social media mentions, and sentiment analysis of discourse on the campaign subject across traditional and online channels, are less commonly measured.
Notably, short-term outcome indicators of audience engagement with the content were the second most common category of indicators, measured across 3 in 4 campaigns in the sample. These indicators include audience reactions and interactions with the content, click-through rates to relevant government webpages, time spent on a page, or watch time for video content.
Overall, reach and engagement indicators are useful output metrics for communicators to capture. They serve to show how different strands of a campaign, and different content formats or channels, are performing, so as to judge which tactics to prioritise and whether the campaign is on track towards its objectives. For this reason, they are especially useful at the early stages of a campaign, when the outputs can still be adjusted. Nonetheless, they are not sufficient indicators of results for a campaign or activity.
As such, these should not be prioritised as the key performance indicators for the purposes of the summative (i.e. final) evaluation. Rather they are main indicators during the process (i.e. during the campaign) phase of evaluation. Considerations about the use of different indicators at different stages of an activity are expanded on in the section on Mainstreaming evaluation best practices across government.
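For illustration, the short sketch below (in Python, using entirely hypothetical figures) shows how such output metrics are typically derived, and why they lend themselves to mid-campaign comparison across channels rather than to summative judgement of a campaign's results.

```python
# Illustrative sketch with hypothetical figures: deriving common output
# metrics (reach rate, click-through rate, frequency) to compare channels
# during the process phase of a campaign.

channels = {
    # channel: (impressions, target_audience_size, unique_reach, clicks)
    "display": (2_500_000, 800_000, 450_000, 7_500),
    "social": (1_200_000, 800_000, 380_000, 15_600),
}

for name, (impressions, target, reach, clicks) in channels.items():
    reach_rate = reach / target      # share of the target audience reached
    ctr = clicks / impressions       # engagement with the content
    frequency = impressions / reach  # average exposures per person reached
    print(f"{name}: reach {reach_rate:.0%}, CTR {ctr:.2%}, frequency {frequency:.1f}")
```

These figures say nothing about what the audience went on to do, which is why they belong to the process phase rather than the summative evaluation.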
Measuring outcomes and impact more systematically
The high frequency with which the volume and reach of communication outputs are defined as KPIs is consistent with the prevalence of campaigns whose primary purpose is to inform the public (or “give visibility” to an initiative or topic, in the words of the DICOMs interviewed). In this sense, even when outcomes are measured via post-tests, they relate to awareness. This is seen in the half of campaigns that measured audiences’ recollection of seeing the content and tested their retention of the message.
Behaviour or opinion change is measured in only one third of campaigns, consistent with this being the primary stated purpose of only a minority of them. Interviews indicated that behaviour change is often measured only in terms of respondents’ declared intent to act on the campaign message in post-test surveys. Several communication directors noted that, because they cannot observe the actual changes, they are sceptical about these indicators. Yet these declarations are rarely compared with exogenous data that could serve as a proxy for the behaviour change sought, according to interviews.
One key indicator that could easily be measured as a proxy for outcomes and impact is the conversion of campaign content into audience journeys across public websites, and into access to programmes, services, and recruitment. Institutional campaigns commonly guide audiences to find more information and then take a step to sign up, download, or interact with a public service or initiative. Yet these types of engagement are traced in only a minority (36%) of the campaigns analysed. This practice also lags at the international level, where only 21% of the 57 senior communicators surveyed by the OECD in 2025 reported evaluating policy or service uptake resulting from their communications.
Interviews confirmed that, increasingly and with support from the SIG, DICOMs are tracking flows of web traffic originating from specific campaign elements. This enables communicators to understand which content performs better in driving citizens towards government initiatives. Despite these data, however, they rarely measure the steps that result from this traffic, such as downloading an application form for a programme. For example, one communication director interviewed stated that their team lacks access to the number of job applications received via their recruitment campaigns.
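A minimal sketch of such funnel measurement, assuming hypothetical stages and volumes for a recruitment campaign, could look as follows; the point is that each step beyond the initial click depends on data that often sits with the programme or HR team rather than with the DICOM.

```python
# Illustrative sketch (hypothetical data): following audiences beyond the
# click, from campaign traffic through to a concrete action such as
# submitting a job application. Stage names and volumes are assumptions
# for illustration, not data from the campaigns reviewed.

funnel = [
    ("ad impressions", 3_000_000),
    ("visits from campaign links", 45_000),  # e.g. identified via tagged URLs
    ("application form opened", 6_300),      # requires programme-side data
    ("application submitted", 1_150),        # requires programme-side data
]

# Conversion rate at each stage relative to the previous one
for (stage, count), (_, previous) in zip(funnel[1:], funnel[:-1]):
    print(f"{stage}: {count:,} ({count / previous:.1%} of previous stage)")
```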
In a minority of cases, four of the ministerial DICOMs interviewed reported measuring results on online service take-up, energy consumption, and job applications received, but noted this applies only where the intended results were concrete and measurable. Conversely, public agencies commonly evaluate results, occasionally even running controlled trials and collaborating with research and social marketing experts to track long-term behaviour change and estimate economic effects (see Box 2.2).
Lack of access to baseline and ex-post data on recruitment, service use, or programme performance related to a campaign was cited as a factor limiting the measurement of these long-term outcomes. Yet this is one of the most important classes of indicators for understanding the efficacy of communications and estimating their impact. For instance, in Australia, the previous version1 of the NSW Government’s evaluation framework urged communicators to analyse government databases to set and measure targets such as revenue, donations, registrations, or other suitable proxies for the effects of the campaign (New South Wales Government, 2020[3]).
Efforts to make government communication more strategic and impactful should prioritise systematically obtaining access to baseline and ex-post data on the services, programmes, or recruitment drives that communication is intended to support.
In many instances the needed data would already be held within the public sector, but internal silos can prevent communication teams from easily obtaining them. Identifying the relevant data could be made a key step of the communication brief between the DICOMs and the Directors-General or programme teams requesting the campaign. To reinforce this practice, the SIG could amend the forms for the approval of campaign procurement to require that at least one long-term outcome KPI and one impact KPI be identified for the campaign.
Box 2.2. Evaluation of outcomes and impact of communication on public health by Santé Publique France
The public health agency, Santé Publique France (SPF), has adopted some of the most advanced methods to evaluate the effects and impact of public communication campaigns against direct and indirect policy and health objectives.
SPF routinely carries out public health prevention campaigns that seek to change citizens’ behaviours and encourage them to make healthier choices. One such campaign, “Mois Sans Tabac” (“No smoking month”), is a yearly initiative taking place in November that challenges smokers to quit during the month.
This campaign, like others by SPF, is evaluated using longitudinal surveys and statistical and econometric analysis to determine its effects and its contribution to policy and societal impact. For instance, the Mois Sans Tabac campaign has been evaluated for its efficacy in inducing behaviour change, both in the short and the long term, with follow-up surveys measuring the effects on smoking one year after the campaign.
It has also been assessed for its estimated economic benefits (reduced healthcare expenditure, improved workforce productivity) and its return on investment, derived from the number of smoking-induced diseases estimated to have been prevented compared to a scenario without the initiative.
The campaign has also been evaluated for its relative efficacy year after year to draw insights and learning on how different audiences react to the campaign and register online to participate in the initiative. Figure 2.4 shows the lower performance of the campaign in 2020 compared to previous years, which prompted considerations about the design and audience targeting for subsequent iterations.
Figure 2.4. Effects of the Mois Sans Tabac campaign on attempts to quit smoking 2016-2020
Source: SPF/Guignard R. et al. (2022), “Effectiveness of the French Mois sans tabac on quit attempts in the first year of Covid-19: a population-based study”, https://www.santepubliquefrance.fr/determinants-de-sante/tabac/documents/poster/effectiveness-of-the-french-mois-sans-tabac-on-quit-attempts-in-the-first-year-of-covid-19-a-population-based-study; Devaux M. et al. (2023), Tobacco Control, https://tobaccocontrol.bmj.com/content/tobaccocontrol/early/2024/07/31/tc-2023-058568.full.pdf; Guignard R. et al. (2019), “Efficacité de Mois sans tabac 2016 et suivi à 1 an des individus ayant fait une tentative d’arrêt, à partir du Baromètre de Santé publique France 2017”, https://www.santepubliquefrance.fr/determinants-de-sante/tabac/documents/enquetes-etudes/efficacite-de-moi-s-sans-tabac-2016-et-suivi-a-1-an-des-individus-ayant-fait-une-tentative-d-arret-a-partir-du-barometre-de-sante-publique-france.
As discussed in the following sub-section, the approval procedures are an important lever that could be reviewed to nudge better evaluation practices. Such actions could be additionally supported with detailed guidance, as is available in some of the most advanced OECD governments in this field, and with the relevant skills development (both discussed in the section on Mainstreaming evaluation best practices across government).
Reinforcing the approval procedures to nudge better measurement practices
Improving the efficacy of all types of campaigns (and measuring such efficacy) ought to start with matching specific objectives with the appropriate outcome indicators. The approval procedures in place for the procurement of communication services and advertising spend have proved to be an important tool for nudging good practices for more strategic and evidence-based communication.
However, they have so far fallen short of driving the kind of rigorous measurement and evaluation of communication’s impact sought by the recent government reforms targeting the SIG and the communication function. As discussed below, similar processes in other OECD countries are used to enforce evaluation and could serve as examples for reinforcing requirements in France.
Procurement approval procedures are mandatory and apply to the large majority of French ministries and agencies that seek to contract external services via a centralised framework contract. The framework contract, negotiated for the whole of the public sector, guarantees preferential rates for public institutions and is managed by the SIG. For this reason, the SIG signs off on all campaigns with a paid advertising component and a budget above EUR 50 000, a threshold that was lowered in September 2024 (SIG, 2024[4]). Besides communication campaigns, approval procedures apply to other steps handled through a centralised “simplified procedures” (Démarches simplifiées) platform, including the creation of social media accounts, opinion polls and studies, and market consultations.
The form asks teams to specify the purpose of the activity (inform, change, or recruit) and the KPIs for the paid media components of the activity. Examples of indicators cited in interviews confirm that these favour reach metrics: target coverage, completion rate against an audience reach target, and number of website visits. The form requires that an evaluation report be shared with the SIG after completion of the activity. For all campaigns with a budget above EUR 300 000, communication teams are additionally required to carry out a post-test2 and share its results with the SIG (SIG, 2024[4]). This is an important requirement for increasing the rate of outcome evaluation, which also lags at the international level: across 38 governments surveyed by the OECD Secretariat, less than a quarter (22%) reported using surveys in their evaluations (OECD, 2021[1]).
The approval process gives the SIG an important role in guiding better practices across government, a role it is increasingly seizing. With the clarification in 2024 of the SIG’s oversight and co-ordinating role across governmental campaigns, the approval of a campaign and the related procurement now comes with SIG involvement in the steps preceding roll-out. This primarily concerns large-budget projects, for which support increasingly includes opinion studies to inform the strategy and the brief for the creative agency, planning meetings, strategic advice and cross-government alignment, advice on KPIs, and support to track campaign traffic to government webpages.
In the scenario where approval is refused by the SIG, the activity ought to be suspended. However, in practice there are almost no precedents of refusal according to interviews. Instead, in recent years submissions have been increasingly scrutinised and revised with advice from the SIG until they comply with expectations.
For approvals to be effective at nudging better planning and evaluation practices it is important that the procedure is not just additional paperwork but is recognised as a serious, value-adding step of planning effective campaigns. According to interviews conducted for this study, this is becoming the case. Having been reinforced in recent years, these procedures are now an important step of internal accountability that faces senior-level scrutiny, whereas in the past they had been more of a formality.
However, compliance with the evaluation requirements is reportedly still inconsistent. Interviews with DICOMs indicated that teams do not always produce thorough evaluation reports, often limiting their reporting to the reach estimates provided by the contracted advertising firm. Post-tests are additionally conducted with limited budgets, if at all, which reduces the scope for testing the real efficacy of the campaign on outcomes (see below).
International practices can offer useful lessons for how the SIG can optimise the procurement approval procedures as a means to achieve the objectives underlying its recent reform drive. Indeed, similar processes are found across countries, with the purpose of streamlining procurement, ensuring the efficient allocation of public finances, reducing duplication, and ensuring transparency and accountability (see Box 2.3).
By comparison, the procurement approval procedures in the likes of Australia, Canada, and the United Kingdom are more stringent than those in effect in France. They include higher demands to justify advertisement spend and report on its efficacy through evaluation. Failure to comply with impact reporting can even block communication teams in the United Kingdom from obtaining approvals on subsequent activities.
Box 2.3. Approval procedures for communication-related procurement in Australia, Canada and the United Kingdom
Annual planning
In Canada, communication teams must comply with centralised procurement procedures before launching a paid campaign or public opinion research. Notably, this process is consolidated into an annual plan where the majority of spending is approved for the year. This measure encourages forward planning, particularly for big budget initiatives.
In the United Kingdom, any initiative with spending on Advertising, Marketing and Communications (AMC) exceeding GBP 100 000 in one financial year is equally subject to an approval process managed by Government Communications within the Cabinet Office. As in Canada, requests need to be consolidated into an annual plan through a process labelled the “strategic planning exercise”, which is vetted by the UKGC using a traffic light system to accept or reject demands. All unplanned demands must prove that the spending could not have been anticipated, and both unplanned requests and requests that are not immediately accepted require the requesting teams to submit a technical case to the UKGC justifying the request in detail.
Justification for spending (cost-benefit analysis)
In the Australian state of New South Wales, legislation requires that all government advertising demonstrate that public funds are being used efficiently and effectively, that there is a clear need for advertising, and that paid advertising is necessary beyond owned, earned, or stakeholder engagement channels. Campaigns must follow a formal approval process against a defined framework and guidelines, which becomes more demanding as the budget increases. Campaigns under AUD 250 000 require a short review; those between AUD 250 000 and AUD 1 million need an independent peer review; and campaigns over AUD 1 million must also include a cost-benefit analysis (see sources).
Performance indicators and pre/post-testing as prerequisites
Defining performance indicators is a mandatory step of approval procedures in Canada and the United Kingdom alike, and evaluation reports are a common requirement.
In Canada, pre-tests1 are required more often than post-tests: campaigns with a media buy above CAD 1 000 000 must have their creative contents tested ahead of launch. Additionally, advertising campaigns with a media buy over CAD 2 000 000 must be evaluated using the Advertising Campaign Evaluation Tool (ACET), which provides a template for testing the outcomes and impact of the campaign. Results of all public opinion polls and advertising campaigns must be published within six months of completion.
In New South Wales, approval procedures require an “advertising effectiveness report” to be submitted within 3 months of completion for campaigns exceeding AUD 50 000 in spending over 12 months.
The UK process is more stringent and makes future spending approval contingent on compliance with commitments to report the activities’ results: “All approved campaigns must submit regular KPI reporting, as well as an end-of-year evaluation report to GCS. The end-of-year report should outline the results achieved against the campaign’s objectives, identify any lessons learnt and provide recommendations for future activity.”
Transparency and accountability measures
In Canada, mandatory evaluation reports are made public as part of transparency and accountability measures. The New South Wales Government reports detailed information about all paid campaigns and encourages institutions to publish information such as advertising rationale, objectives, costs, and outcomes on their websites.
1. A pre-test is done before the launch of a communication campaign and measures comprehension of and adherence to the message, allowing communicators to confirm the likely effectiveness of the campaign and to correct any misunderstanding ahead of wider distribution.
Source: GCS (2025), Advertising, marketing and communications spending control guidance; Government of Canada (2025), “Appendix A: Mandatory Procedures for Advertising”, Directive on the Management of Communications and Federal Identity, https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=30682&section=procedure&p=A; Government of Canada (2025), “Appendix B: Mandatory Procedures for Public Opinion Research”, Directive on the Management of Communications and Federal Identity, https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=30682&section=procedure&p=B; New South Wales Government (2025), Advertising campaigns over $1 million, https://www.nsw.gov.au/nsw-government/communications/government-advertising/advertising-campaigns-over-1-million; New South Wales Government (2025), Advertising campaigns up to $250,000, https://www.nsw.gov.au/nsw-government/communications/government-advertising/advertising-campaigns-up-to-250000#toc-approval-process-for-campaigns-under-250000; New South Wales Government (2025), Advertising campaigns between $250,000 and $1 million, https://www.nsw.gov.au/nsw-government/communications/government-advertising/advertising-campaigns-between-250000-and-1-million; New South Wales Government (2018), Cost Benefit Analysis of Government Advertising: A User Guide, https://www.nsw.gov.au/nsw-government/communications/government-advertising/advertising-campaigns-over-1-million#toc-cost-benefit-analysis; NSW Government Advertising Act (2011), https://legislation.nsw.gov.au/view/whole/html/inforce/current/act-2011-035.
The SIG could propose to modify the evaluation requirements to cover the definition of KPIs across the full scope of the activity (rather than only its advertising component) and the selection of outcome indicators linked to intended impact metrics, in addition to the performance metrics already in use. This potential new requirement could be enabled by the introduction of a dedicated evaluation framework and indicators matrix, as recommended below. Reinforcing the approval procedures could also support the adoption of other recommendations in this chapter, such as those on the format and contents of evaluation reports or on the pre-allocation of budget for evaluation.
Notably, approval procedures in other OECD countries are also a means to enforce requirements to co-ordinate communication under annual planning cycles. Annual planning is one of the newly-introduced objectives under the recent reforms of communication and the SIG’s role (see Chapter 3). It could be supported by modifying the approval procedures to favour a large annual exercise covering a majority of foreseeable communication activities.
This recommendation would be compatible with the considerable number of cyclical campaigns (primarily carried out by public agencies) and would also help increase the share of interministerial campaigns, which are increasingly favoured to optimise public finances and enhance the coherence and legibility of government messages.
Mainstreaming evaluation best practices across government
The formal requirements that give the SIG oversight over the approval of campaign-related procurement have been important levers for mainstreaming the evaluation of government campaigns. However, ensuring that the design and evaluation of campaigns follows good practices will additionally require introducing the relevant methods, resources, and skills to enable communicators across government to adopt such practices.
Interviews conducted by the Secretariat confirmed that DICOMs tend to face notable challenges in designing communication activities in ways that will produce measurable results. This explains part of the difficulty in evaluating their work beyond short-term outcomes. For example, Figure 2.5 shows that public agencies, which tend to carry out larger activities planned well in advance and with more specific programme-related objectives, also tend to measure campaign outcomes more regularly than ministries.
Additionally, there are a range of barriers related to the competencies, resources, and culture of evaluation that stand in the way of elevating standards for evidence-based and strategic communication. Figure 2.5 illustrates these barriers in terms of the reasons given by the DICOMs for not evaluating their campaigns’ outcomes.
The main reasons point to a contentment with measuring only output indicators (this response is likely understated, as interviews suggest that some respondents may apply a broader interpretation of the outcomes they measure). However, most DICOMs interviewed expressed a genuine desire to go beyond current evaluation practices and measure the impact of their work, despite voicing doubts about their ability to do so. This finding aligns with a general gap in skills and competencies.
Figure 2.5. Reasons for not evaluating campaign outcomes
If the outcomes of communication activities are not evaluated (especially if the results of one of the five most recent campaigns have not been evaluated), what are the main reasons?
Note: n=10 services, multiple responses possible. SIG responses are included under ‘Ministries’. The full label for ‘We can estimate success based on output metrics’ was ‘We can estimate success based on output metrics (reach, volume of coverage/mentions)’.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Budget limitations emerged as another key barrier. Interviews confirmed that even where the total campaign budget triggers the requirement for post-tests, the budget for evaluation is not ringfenced. As a result, whatever remains is often too small for post-tests to go beyond the most basic methods, if anything is left over at all. Indeed, the bulk of the budget tends to be spent on producing high-end creative campaign contents and buying advertising space to display them, leaving little or no funds for pre-testing contents or measuring their effects.
Finally, the limited availability of benchmarking and exogenous data against which to evaluate activities is a recurring barrier to address as a priority.
The SIG can lead efforts to overcome these barriers and mainstream best practices for measurement, evaluation, and learning across government. Key actions would include introducing comprehensive guidance and resources to support communicators in adopting more advanced practices, while helping to make benchmark data and evaluation tools more widely available and used.
Moreover, the introduction and adoption of guidance ought to be accompanied by investments in the professionalisation of teams. This can serve the goals of improving both the technical skills and the organisational culture that favour more rigorous use of evidence as the basis for all communication work. As a key element of best practice, actions to professionalise teams should emphasise improving how measurements and evaluations are used for learning and applied for continuous improvement within and across teams.
Finally, the SIG can apply the logic of evaluation to its own efforts to drive change across government DICOMs by introducing relevant indicators for the adoption of good practices, such as the share of campaigns that define SMART objectives, the number of evaluation reports completed, and the number of officials trained.
This section of the Scan discusses four core approaches for raising measurement, evaluation and learning standards across French government communication departments according to leading international practices, namely:
The development of a common, comprehensive framework for MEL;
The consolidation of access to and sharing of data for benchmarking, informing choices and measurement of KPIs against policy and recruitment goals;
Steps to turn evaluations into lessons learned; and
Capacity-building and professionalisation initiatives.
The SIG is the entity best placed to spearhead these actions, given its central mandate to reform and elevate communication practices in government and the parallel initiatives it leads, including across some of the four dimensions above. Moreover, ministries and agencies interviewed by the Secretariat expressed support for the reforms the SIG has led in recent years and welcomed its role in co-ordinating and supporting cross-government communication. In particular, some DICOMs welcomed the guidance and direct support with evaluation metrics and methodologies they had received from the SIG.
While most of them have expressed their agreement with the need to improve measurement, evaluation and learning, they have also cautioned against undue burdens and demands that they would struggle to meet. Communicators, particularly within ministries, highlighted limitations with capacity and resources to comply with additional requirements. Likewise, most were wary of introducing common metrics that would benchmark the performance of their campaigns against those of better-resourced peers.
Mainstreaming evaluation best practices across government therefore ought to ensure that ample support to communication teams across ministries compensates for more stringent requirements, for example, on the approval of campaign-related procurement.
Building a common framework and tools for evaluation
In reviewing international practices for this Scan, the Secretariat observed that a majority of OECD countries do not have formal processes, metrics, or guidance for the evaluation of communications. However, most of those countries and institutions in which evaluation is more advanced tend to have dedicated guidance on how to perform evaluation.
For example, Australia, Canada, Singapore, and the United Kingdom provide leading examples of evaluation frameworks and the toolkits that accompany their application. Communication teams in some countries have also reported using the evaluation framework developed by the international standard-setting body AMEC (2025[5]), which shares the key elements found across all leading frameworks. The European Commission additionally provides a comprehensive template of performance indicators that can be used to evaluate any type of communications (European Commission, 2024[6]).
Developing a common framework or toolkit for evaluation in France would be a useful step to define expectations for what rigorous MEL looks like and guide communication teams across ministries and agencies to adopt good practices. The SIG could draw on the leading government examples presented in this section and on the MEL Manual for Public Communication developed by Professor Macnamara (2024[7]) and published by AMEC as a basis for a tailored document for French public institutions.
A common element of leading frameworks in the field is the continuous nature of measurement and evaluation throughout an activity. The Singaporean example in Box 2.4 illustrates how data and evidence are analysed to inform decisions at each stage of developing and executing a campaign. This is consistent with the emphasis on formative and process evaluation highlighted in the literature (see the section on the foundations of measurement, evaluation, and learning in the first chapter) and in other countries’ guidance. Macnamara’s MEL Manual (2024[7]), for example, suggests that the majority of evaluation should be done at the formative stage of campaigns to inform implementation and provide baseline data.
Guidance on performance and impact indicators for paid and organic campaigns
The selection of performance indicators is likewise aligned with the stage of the campaign cycle during which the evaluation is performed. Here Macnamara (2024[7]) distinguishes between the early stages, when a wider range of indicators on the performance of outputs helps determine the efficacy of the content and make necessary adjustments, and the end of the activity, when evaluation should focus on measuring only a selected few indicators from which communicators can infer the results against the campaign’s SMART objectives.
By this logic, a French framework could include guidance on the choice of performance indicators by classifying them into two tiers: a first tier, where between 3-5 key performance indicators are defined against a baseline and measured at the end of the campaign to determine its outcomes and impact; and a second tier, where the most relevant performance indicators for each communication channel are measured to test their relative efficacy. A matrix of indicators matching these two tiers could be a valuable element of a French framework to guide communicators in making the appropriate selection. Annex A presents selected templates.
The first tier of KPIs could include metrics such as observed or declared behaviour and perception change (including information retention and acceptance for purely informative activities), clickthrough rates and engagement with webpages promoted via the campaign, registrations, downloads, applications for jobs or benefits, or other relevant actions relating to the government services or initiatives being promoted.
Impact indicators, such as the rate of successful applications, voter turnout, or longer-term changes to trust, economic, or health outcomes, should be included wherever possible, consistent with the theory of change method, but with caveats that they are estimations rather than exact measurements. Evaluation reports addressed to non-communicators within the government should focus on explaining the results achieved against these focused KPIs.
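For illustration only, the sketch below shows how a two-tier KPI selection of this kind could be recorded and checked against the suggested rule of three to five key indicators; the indicator names, baselines, and targets are hypothetical, not an official SIG or OECD specification.

```python
# Illustrative sketch of the two-tier indicator selection suggested above.
# All KPI names and figures are hypothetical examples.

campaign_kpis = {
    "tier_1": [  # 3-5 key indicators, set against a baseline, measured at campaign end
        {"name": "job applications submitted", "baseline": 900, "target": 1_400},
        {"name": "declared intent to act (share of audience)", "baseline": 0.12, "target": 0.18},
        {"name": "registrations to the service promoted", "baseline": 5_200, "target": 7_000},
    ],
    "tier_2": [  # channel-level performance indicators, tracked during the campaign
        "reach rate per target audience segment",
        "click-through rate per channel",
        "volume and tone of earned coverage and mentions",
    ],
}

# The proposed rule: a focused set of 3-5 tier-1 KPIs per campaign.
assert 3 <= len(campaign_kpis["tier_1"]) <= 5, "Tier 1 should contain 3-5 KPIs"
```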
Box 2.4. Framework for evaluation of communication campaigns in Singapore
The Ministry of Digital Development and Information (MDDI), which operates as the central entity in charge of cross-government communication in Singapore, has developed comprehensive guidance for data-driven communications. The guidance illustrates how measurement and evaluation come into play from the objective-setting stage and places considerable emphasis on analysis upstream of and throughout a campaign.
The guidance includes a Campaign Evaluation Framework based on the inputs-outputs-outcomes-impact model, which guides users to measure the results of their campaigns across nine key dimensions or indicator classes: 1) Reach, 2) Engagement, 3) Awareness, 4) Understanding, 5) Satisfaction, 6) Support, 7) Attitudinal, 8) Intention, and 9) Behavioural Change. Each of these indicator classes includes a standard battery of questions to support its measurement.
The guidance requires that the evaluation compare baseline and final data collected across the nine dimensions to determine the effectiveness of the campaign against SMART objectives, which also draw on previous campaign data as benchmarks.
Figure 2.6. Singapore’s Data Driven Communications Cycle
Source: Ministry of Digital Development and Information, Singapore (2025).
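To make the baseline-versus-final logic of the framework in Box 2.4 concrete, the sketch below computes the change on each of the nine dimensions; the dimension names follow the box, while the scores are invented for illustration.

```python
# Illustrative sketch (hypothetical survey scores on a 0-1 scale) of the
# baseline-versus-final comparison across the nine dimensions of Singapore's
# Campaign Evaluation Framework. Figures are invented for illustration.

dimensions = {
    "Reach": (0.55, 0.78),
    "Engagement": (0.20, 0.31),
    "Awareness": (0.40, 0.62),
    "Understanding": (0.35, 0.50),
    "Satisfaction": (0.60, 0.63),
    "Support": (0.52, 0.58),
    "Attitudinal": (0.30, 0.38),
    "Intention": (0.22, 0.29),
    "Behavioural Change": (0.10, 0.13),
}

for dimension, (baseline, final) in dimensions.items():
    change = final - baseline  # effectiveness is judged on baseline-to-final movement
    print(f"{dimension}: {baseline:.2f} -> {final:.2f} ({change:+.2f})")
```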
Box 2.5. Guidance on attributing outcomes and impact to communications from OECD countries
Evaluating behaviour change communication: guidance from the United Kingdom and the Netherlands
The United Kingdom’s guide on behaviour change communication puts forward important considerations for the design and evaluation of such activities that account for the presence of barriers to behaviour change.
Relying on the theory of change model (see Chapter 1), the guide urges communicators to identify all the conditions that need to be in place for the behaviour change to occur. Notably, the guide focuses on barriers to Capability, Opportunity, and Motivation (COM-B model) that communication can help overcome to make behaviour change possible.
When it comes to evaluation, the guide recognises that communication may focus on, or be able to influence, only one or two of the barriers to change, while others may depend on other aspects of the policy or initiative. It therefore encourages communicators to measure all the barriers that may affect the behaviour change, to determine whether the conditions were in place for communication to effect the desired change. This forms part of contextualising the contribution of communication to policy impact.
Moreover, the guide explains how to attribute behaviour change to communication by measuring factors that show the barrier(s) addressed by communication were overcome. This is measured primarily via focus groups and quantitative surveys.
The Netherlands has developed the Communication Activation Strategy Instrument (CASI), a guide that offers a structured process for developing communication strategies based on behavioural insights. The guide specifies steps to measure behaviour and collect relevant data, and urges users to always pre-test their interventions, even minimally.
The CASI guide proposes two methods for evaluation: experiments and effect measurements. In an experiment, there are two groups: one that receives the communication intervention (intervention group) and one that does not (control group). This method is deemed ideal for establishing a causal relationship between the intervention and the outcome. Effect measurement estimates the total effect but has limited scope to attribute causality.
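As an illustration of the experimental method described in the CASI guide, the sketch below compares the share of an intervention group taking a desired action with that of a control group, using a standard two-proportion z-test; the group sizes and rates are hypothetical.

```python
# Illustrative sketch (hypothetical data) of the experimental method: an
# intervention group exposed to the communication versus a control group
# that was not, compared with a two-proportion z-test.
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 18% of the intervention group took the desired action vs 12% of the control group
z = two_proportion_z(180, 1000, 120, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a statistically significant effect at 5%
```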
Calculating return-on-investment (ROI) in communications, United Kingdom example
While evaluating an activity’s impact can demonstrate whether objectives were met, for paid-for campaigns it is also important to estimate the value generated from the financial investment made. This can be a helpful, if imperfect, metric of efficiency that accompanies other indicators of outcomes and impact. The UKGC has developed a five-step process for estimating the ROI of paid activities with relevant objectives. These steps are:
1. Defining objectives focused on quantifiable outcomes (e.g. the volume of direct foreign investments generated, or the number of public sector employees recruited);
2. Establishing an ex ante baseline for the quantifiable outcome indicators above;
3. Forecasting the trend for how these outcome indicators would change from the baseline over the period of a communication (i.e. a counterfactual scenario where no communication intervention was conducted);
4. Isolating the effect of communication from other factors that will affect these outcome indicators. For example, if a separate intervention for policy implementation (e.g. a tax, subsidy, or legislative change) was introduced to drive a change on the same quantifiable outcomes, then its effect should be quantified to exclude it from the measurement of the campaign’s effects.
5. Accounting for externalities (i.e. unintended consequences, whether positive or negative) that could result from the campaign and the changes it engenders against its stated objectives. By quantifying their costs or benefits, these can be added to or subtracted from the ROI calculation.
These steps rely on the quality of assumptions about exogenous factors related to other levers of government or outside its control. The evaluation process should involve refining these assumptions after the campaign, updating them in view of factors that were not foreseen at the planning stage.
The ROI calculation then becomes a process of quantifying the value of the intended outcomes and externalities and comparing how much value is generated for each British Pound invested in a campaign. A complete worked example is provided in the UKGC Evaluation Cycle guide to illustrate this process.
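As a purely illustrative companion to these five steps (the UKGC guide’s own worked example remains the reference), the sketch below applies the logic with hypothetical figures: an outcome uplift over the counterfactual, an assumed attribution share, a monetary value per outcome, and an externality adjustment.

```python
# Illustrative sketch (hypothetical figures) of the ROI logic described above.
# The numbers, the attribution share, and the value per outcome are
# assumptions for illustration only.

campaign_cost = 500_000    # EUR spent on the campaign
observed_outcomes = 2_400  # e.g. recruits or registrations over the period
counterfactual = 1_500     # forecast outcomes had no campaign run (step 3)
attribution_share = 0.8    # share of the uplift attributed to communication
                           # after isolating other interventions (step 4)
value_per_outcome = 900    # EUR of value assigned to each outcome
externalities = -50_000    # net cost of unintended consequences (step 5)

uplift = (observed_outcomes - counterfactual) * attribution_share
net_value = uplift * value_per_outcome + externalities
roi = net_value / campaign_cost
print(f"ROI: {roi:.2f} EUR of value per EUR invested")
```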
Source: GCS (2021), The Principles of Behaviour Change Communications, https://gcs.civilservice.gov.uk/wp-content/uploads/2021/02/The_principles_of_behaviour_change_communications.pdf; Netherlands Ministry of General Affairs (2020), CASI, https://www.communicatierijk.nl/binaries/communicatierijk/documenten/publicaties/2021/12/09/casi-in-english/CASI+English.pdf; GCS (2025), Evaluation Cycle, ROI, https://gcs.civilservice.gov.uk/publications/gcs-evaluation-cycle/#ROI.
Estimating the effect of communication on behaviour change and policy impact is one of the most challenging aspects of evaluation in the field, and one which ought to be addressed in a framework for French government communicators. This is a common concern in France as in other OECD countries, where attributing cause and effect to a specific communication campaign amid complex causalities is a source of hesitancy (Buhmann and Likely, 2018[8]).
However, some of the evaluation frameworks reviewed for this study do a good job of guiding communicators through steps for estimating the contribution of communication to outcomes and presenting such results in ways that avoid under- or overstating the role of a campaign on a given change (see Box 2.5).
The second tier of non-key indicators could include a wider range of output and short-term outcome indicators such as reach rates for target audience groups, levels and quality of engagements with the content, volume and tone of earned coverage and mentions (including via partnerships), click rates, web search, quality of web traffic, sentiment analysis and more. These indicators are mostly for consumption by professional communicators and have limited meaning to other internal stakeholders.
For paid campaigns, metrics of financial efficiency can also be captured for purposes of allocating budgets and accountability. The United Kingdom’s Evaluation Cycle guide notes output-specific indicators such as cost per awareness raised, per behaviour change, or per expression of interest as a gauge of how effective content and channels are from a financial point of view (GCS, 2025[2]). The guide also provides a methodology to calculate return on investment (ROI), although some of the literature and experts in the field caution against this as a reliable indicator of value (Buhmann and Volk, 2022[9]).
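For illustration, the cost-efficiency metrics mentioned above reduce to simple ratios of spend over results. The figures below are invented, and the outcome counts are assumed to come from post-tests or programme data.

```python
# Illustrative cost-per-result ratios for a paid campaign (invented figures).
paid_media_spend = 200_000.0  # EUR

people_made_aware = 2_000_000     # e.g. awareness uplift from a post-test
expressions_of_interest = 25_000  # e.g. sign-ups or call-backs
behaviour_changes = 8_000         # e.g. verified service take-up

print(f"Cost per awareness raised:       {paid_media_spend / people_made_aware:.3f} EUR")
print(f"Cost per expression of interest: {paid_media_spend / expressions_of_interest:.2f} EUR")
print(f"Cost per behaviour change:       {paid_media_spend / behaviour_changes:.2f} EUR")
```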
Setting the KPIs for a campaign at the outset ought to go hand-in-hand with planning the evaluation. In particular, considerations about the methods and cost of evaluating a campaign should be factored in at this stage. An often-cited reason given by DICOMs for omitting post-tests, or performing very limited ones, is insufficient budget remaining once the creative and paid components of a campaign are completed. The United Kingdom’s Evaluation Cycle guide specifies that 5-10% of the total campaign budget should be allocated to evaluation. This benchmark would be a valuable element to replicate in French evaluation guidance and, potentially, to reflect in the campaign procurement approval forms.
Evaluation of low-cost and organic communications
Evaluation is just as important for low-cost or organic communication activities. Taken together, these types of communication can absorb significant public resources in time and effort. However, without dedicated budgets, their evaluation needs to rely on freely available data, which is typically more abundant for outputs than for outcomes or impact.
In France, many institutions interviewed admitted to not evaluating their non-advertising activities, whether related to elements of a larger campaign or standalone communications. As noted, for some ministries, campaign evaluation can be practically synonymous with the reports provided by the advertising agency in the absence of a post-test. Similarly, one DICOM interviewed estimated that activities that amount to a third of the work done across the communication directorate are never evaluated.
This can be the case for events, for example, but also for elements such as press office work, partnerships, management of institutional digital channels, and crisis and internal communications. While some ministries measure the basic performance of some channels, like social media and email newsletters, they lack clear objectives for these channels that would determine their overall contribution to the ministry’s overarching communication strategy.
Public agencies tend to be more consistent than ministries in their evaluation of unpaid or low-cost activities and could be a source of good practices and benchmark data in this realm. Two noted in interviews with the Secretariat that they regularly evaluate activities like events, measuring the number and relevance of attendees and the conversions that resulted, and run qualitative surveys of participants. Another agency mentioned having requested the addition of a communication-related question to the application forms for a financial support scheme (asking applicants how they heard about it) to understand the relative efficacy of different channels and contents.
The SIG stands out from this picture, having adopted methods to measure the performance of organic activities both within the context of campaigns and in aggregate, such as in yearly reports (see Box 2.6). The methods and indicators used could provide useful references in the context of a comprehensive evaluation guidance developed for French communication departments. Nonetheless, there is scope to further develop how these activities are evaluated even within the SIG to support strategic decisions about how different levers of communication are deployed in support of government goals. Analysis and measurement across digital channels and partnerships allow communicators to gauge the relative performance of different outputs, but their aggregate contribution towards strategic objectives could be better evaluated.
Box 2.6. Evaluation of organic communications across digital and partnerships in the SIG
Although the SIG does not have a press office function within it, several of its departments carry out regular low-cost or unpaid activities. Each, according to its specialty, has developed methods and indicators to measure the performance of these activities, whether in the context of campaigns or outside them.
Web and digital communication
Within the SIG, the Digital Ecosystem Department (Département Ecosystème Numérique) has introduced a dashboard that can be used to measure web traffic (open to all) alongside a portal with guidance for communicators on how to analyse these data. The team additionally supports communicators across government with tracking audiences’ journey to government websites.
This is enabled by the audience.communication.gouv.fr tool, launched in May 2024, as the one-stop-shop “observatory” for government websites. It collects data and analytics on traffic to the vast network of French government websites and mobile applications for analysis and comparison. It is therefore a useful tool for guiding improvements in the performance of government communications and in users’ experience of online interfaces.
As noted in this chapter, this is an essential source of data on the effectiveness of communication in bringing citizens to the point of accessing services, career opportunities, and information that shapes their choices for the better. It can also be used to improve the content and experience for web users, making it more accessible, understandable, and relevant.
Besides EcoNum, the Editorial team (Rédaction) is behind all the content developed for web and social media within the SIG. The team also measures the performance of social media content across the SIG’s digital channels, which is mostly unpaid and serves to amplify cross-government campaigns, besides providing daily government updates to followers and web visitors.
The Editorial team has recently introduced a tracking system to log all the content it develops and publishes across different online formats and channels. At the time of writing, this included only a record of the content published, but there is an ambition to develop it into a tool that can track performance indicators for the same content. In the meantime, performance is analysed monthly across a range of digital analytics tools, gauging posts’ popularity and the engagement obtained. Routine monitoring and tracking of indicators informs strategic choices about which content to develop and which channels to use for different audience groups. These insights are processed in yearly benchmarking exercises that summarise the results and set new goals and new approaches for the following year.
External partnerships
The Partnerships division of the SIG contributes to cross-government campaigns by establishing collaborations with external organisations that can amplify and validate governmental messages, including before audiences that are harder for public institutions to reach. These collaborations are entirely unpaid: partners co-create and share content via their channels, and occasionally shoulder costs of paid amplification (or provide advertising in-kind). As such, they are important means of growing the scale of government campaigns beyond the allocated budgets.
The team has adopted a list of primarily output indicators that allow it to measure the volume of coverage and engagement obtained via partners’ channels including social media, emails, newsletters, apps, web, audio (including podcasts), and events. Although these indicators rely on partners’ collaboration, they are typically agreed upfront with the SIG and observed by the partners. Obtaining data from partners can require dedicated follow-up from the Partnerships team, but, according to their accounts, most partners tend to be motivated to show their value-add to the campaign.
Overall, this evaluation process allows the Partnerships team to quantify the contribution of this organic activity to the overall reach and engagement indicators for a campaign. Over time, partnership data can be used to develop benchmarks on the relative performance of a partnership for a given objective, complemented by qualitative measures of the value of a partner’s channels and brand for reaching more specific audiences in deeper ways.
Source: OECD Secretariat interviews with SIG, 2025.
For example, based on the yearly reporting for the SIG’s digital communication activities, there is no stated objective or goal for this strand of communication activity to measure its impact against, even though significant investments were made in expanding the volume of content across channels in 2024. Although the objectives for this activity may be implicitly understood, clarifying the strategic purpose of growing the SIG’s online presence would make it easier to measure results and make decisions about where to concentrate resources and efforts.
In the context of the SIG’s partnerships activities, there is scope to expand qualitative evaluation to capture a more granular view of how partnering with third party organisations helps further specific objectives or reach more relevant audiences. Assumptions about this value-add motivate the work of the SIG to bolster its campaigns with amplification by other relevant organisations. However, these assumptions should be tested where feasible, especially at the formative stages of a campaign. Since these activities occur for the most part in the context of paid campaigns, there could be scope to evaluate them in the same ways as the other components of the campaign (which is recommended practice in the United Kingdom, see Box 2.7). Pre-testing the partner content with target audiences could validate whether it is better trusted or more resonant than government content, for example.
Overall, the frameworks reviewed for this study, such as those from the European Commission and the United Kingdom, point explicitly to adopting proportional approaches to the evaluation of all activities. In the context of low-budget or organic activities, this translates to simpler metrics that are freely obtainable. The United Kingdom has produced dedicated guidance for evaluating this category of activities in the most appropriate way, based on three levels of monitoring (GCS, 2025[10]). Organic activities that are part of larger paid campaigns are meant to be evaluated according to the full-scale methods envisioned in their framework. Conversely, measurement and evaluation for all other activities is defined according to three tiers described in Box 2.7.
Reinforcing measurement and evaluation early in the process
As noted throughout this study, measurement and evaluation are an integral part of the conception of communication activities. A key component of a framework to guide French government communication directorates to adopt good practices in MEL should therefore concern formative evaluation. This is again in line with the frameworks used in most advanced OECD countries and the European Commission.
Formative evaluation encompasses all the data and analysis that communicators draw on to understand the communication problem to address, set SMART objectives, and test different approaches. Formative evaluation is likewise the stage of the process where performance indicators are selected, through the identification of available data to measure at baseline and throughout the activity.
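To make this step concrete, the sketch below records each indicator with its baseline, target, and data source at the formative stage. It is a minimal, hypothetical structure for illustration, not an existing SIG or DICOM template, and all values are invented.

```python
# Hypothetical sketch of a KPI record fixed at the formative stage.
from dataclasses import dataclass

@dataclass
class CampaignKPI:
    name: str          # what is measured
    level: str         # "output", "outtake", "outcome" or "impact"
    baseline: float    # value measured before launch
    target: float      # SMART target to reach by the campaign end date
    data_source: str   # where the measurement will come from

kpis = [
    CampaignKPI("Aided awareness of the scheme (%)", "outtake",
                baseline=18.0, target=30.0, data_source="pre/post-test survey"),
    CampaignKPI("Applications submitted online", "outcome",
                baseline=4_200, target=6_000, data_source="programme team data"),
]

for kpi in kpis:
    print(f"{kpi.name}: {kpi.baseline} -> {kpi.target} ({kpi.data_source})")
```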
Box 2.7. Guidance on evaluating low-cost and organic communication activities in the United Kingdom
The UKGC has defined three tiers of evaluation based on a communication activity’s complexity and the budget allocated.
Basic monitoring: Simple metric tracking to gauge real-time performance and flag potential issues/risks, comparing metrics to benchmarks/targets, primarily measuring outputs.
Enhanced monitoring: Tracking outputs and outtakes in dashboards/reports to understand audience reaction, presentation of the analysis in a concise report to have digestible actionable recommendations.
Comprehensive evaluation: Full measurement across outputs, outtakes, outcomes, and impact using a reporting template.
Each tier is supported in the full-length version of the guidance by a list of possible indicators to assess the performance of outputs and some short- and medium-term outcomes. The guidance places considerable emphasis on digital channels as a source of rapid and freely available data that can be used to evaluate most activities, even if only partially. A decision tree (see Figure 2.7) guides communicators to select the most appropriate evaluation approach.
Figure 2.7. Evaluation Decision Tree for low-cost and organic communication activities
Source: UK Government Communications Service, Evaluating low/no-cost communications https://gcs.civilservice.gov.uk/publications/evaluating-low-no-cost-communications/.
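The tiering logic described in Box 2.7 can be paraphrased in a few lines of rules. The sketch below is a simplified assumption of how such a decision tree might be encoded; the questions and their order are illustrative, not the UKGC’s exact criteria.

```python
# Simplified paraphrase of the Box 2.7 tiering logic; the questions and their
# order are assumptions for illustration, not the UKGC's exact decision tree.

def select_evaluation_tier(part_of_paid_campaign: bool,
                           high_complexity_or_priority: bool,
                           audience_reaction_matters: bool) -> str:
    if part_of_paid_campaign:
        # Organic elements of paid campaigns follow the full framework.
        return "Full campaign evaluation"
    if high_complexity_or_priority:
        return "Comprehensive evaluation"  # outputs, outtakes, outcomes, impact
    if audience_reaction_matters:
        return "Enhanced monitoring"       # outputs and outtakes, with reporting
    return "Basic monitoring"              # simple metric tracking vs benchmarks

print(select_evaluation_tier(False, False, True))  # -> Enhanced monitoring
```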
On this dimension, the French communication departments reviewed for this study demonstrate that they consistently draw on evidence to inform their campaigns. Figure 2.8 illustrates the sources of evidence used across the sample of 50 recent campaigns. Overall, data from existing sources tends to be more commonly used than research carried out specifically for the campaign.
Routine public opinion surveys and polling conducted by ministries and the SIG, for instance, are the most common data source for nearly all campaigns. They offer the advantage of tracking opinion on certain issues longitudinally but tend to be less specific to the core issues of a campaign. Instead, custom surveys, focus groups, media and social listening can offer more granular insights for campaign design, but are less common.
Of the campaigns reviewed, those that involved custom research and pre-testing tended to use several of the methods noted in Figure 2.8, whereas a large minority of campaigns drew only on existing surveys and prior campaigns’ evaluation reports. For example, none of the campaigns led by the SIG in 2024 involved pre-testing, according to interviews. However, a majority of campaigns tend to be evaluated while running for the purpose of adjusting the selection of channels, primarily paid ones (see Figure 2.9); interviews suggested this step often consists of data and recommendations provided by the advertising agency.
Following the international good practice of front-loading the majority of evaluation at the formative and early roll-out stages of an activity would require making pre-tests a routine step of the process. For instance, peers in the United Kingdom interviewed for this study stressed that they strive always to pre-test their campaigns before launch, even under time pressure.
Figure 2.8. Sources of evidence to inform communication campaigns’ design
For the last five campaigns conducted by your ministry/agency which of the following data were used to develop the campaign?
Note: n=50 campaigns, multiple responses possible.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Indeed, time pressures were noted by most DICOMs interviewed as the key barrier to pre-testing, due to short intervals between the campaign brief and expected launch. However, interviews also suggested that in cases where campaigns have performed less well, it was because the creative content was less suitable for some of the advertising and communication channels where target audiences would be reached. Misalignment in planning and development of a campaign, sometimes because of siloes between internal teams working on separate elements, emerged as another common reason for underperforming campaigns. Prioritising measurement and evaluation as the campaign is developed and about to be launched could therefore help correct potential problems and maximise the performance of the campaign outputs.
Benchmarks and baseline data for setting performance targets
Setting SMART communication objectives and performance targets for selected indicators requires access to relevant benchmark and baseline data. As noted above, DICOMs interviewed often reported lacking the relevant exogenous data that would allow them to estimate long-term outcomes and impact. They also expressed discomfort with setting performance goals, in part for fear that internal stakeholders’ expectations would exceed the available budgets.
Providing guidance on setting realistic and relevant performance targets is one way in which the SIG can help elevate practices for evidence-based strategic communication. Moreover, the SIG can provide additional sources of benchmark data across government campaigns that can be leveraged to make target-setting more accurate and frequent.
The SIG already benefits from vast sources of data thanks to the consolidation of public sector outsourcing of research and advertising services. For example, the centrally contracted social research firm holds over 300 previous post-tests performed for public sector campaigns. Similarly, the advertising firm that executes the paid components of all governmental campaigns offers an interactive dashboard with performance metrics for all past campaigns. At the time of writing, each DICOM could only access data relating to its own campaigns, whereas the SIG can view data for all public sector campaigns. These are highly valuable sources of data whose use can be reinforced.
The SIG could introduce centralised tools that offer all communicators access to the vast data available, accompanied by practical guidance on how to use it. The SIG already develops and provides proprietary tools tailored to the needs of French government communicators, such as those mentioned in Box 2.6 above. For example, one such tool exists to measure traffic on all government websites. Another, the Observatoire des Priorités Gouvernementales (Observatory of Government Priorities, see Box 3.4), has recently been developed by the SIG to track media and public discourse concerning public policy topics clustered under a set of core government priorities. This live dashboard provides all communicators across government with fresh monitoring data to contextualise their activities.
These initiatives illustrate valuable approaches that the SIG could take to streamline access to databases of cross-government campaign data. A majority (62%) of the 50 campaigns in the sample already rely on previous campaign results as benchmarks, notably many of those that recur yearly. An expanded pool of peer activities could offer the most relevant benchmarks by topic, audience targets, budget size, and other relevant parameters, so as to maximise the value of past evaluations for setting strategic goals for communication outputs.
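As an illustration of what such a cross-government benchmarking tool could enable, the sketch below filters a hypothetical repository of past campaign evaluations by topic, audience, and budget band to derive a reach benchmark. The column names, data, and thresholds are invented for the example and do not reflect the SIG’s actual dashboards.

```python
# Hypothetical benchmark lookup over a repository of past campaign evaluations.
import pandas as pd

past_campaigns = pd.DataFrame([
    {"topic": "health", "audience": "18-25", "budget_eur": 300_000, "reach_rate": 0.46},
    {"topic": "health", "audience": "18-25", "budget_eur": 250_000, "reach_rate": 0.41},
    {"topic": "energy", "audience": "all", "budget_eur": 900_000, "reach_rate": 0.63},
])

def benchmark_reach(df: pd.DataFrame, topic: str, audience: str,
                    budget_eur: float, band: float = 0.3) -> float:
    """Median reach of comparable past campaigns: same topic and audience,
    with a budget within +/- `band` of the planned budget."""
    peers = df[(df["topic"] == topic)
               & (df["audience"] == audience)
               & (df["budget_eur"].between(budget_eur * (1 - band),
                                           budget_eur * (1 + band)))]
    return peers["reach_rate"].median()

print(benchmark_reach(past_campaigns, "health", "18-25", 280_000))  # 0.435
```

In practice, such a tool would draw on the hundreds of post-tests and campaign dashboards already held centrally, rather than a hand-built table.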
Conversely, obtaining relevant baseline data on outcome and impact measures, such as service use, adherence to programmes, policy compliance, or recruitment rates, will require developing deeper collaboration with the policy and programme teams within each ministry and agency whose objectives communication serves. This collaboration forms part of ensuring communication adds concrete value towards organisational objectives, as discussed in the next chapter. The selection of indicators for results and impact, and the identification of the appropriate baseline data, should be part of involving the non-communicators who commission a campaign in the definition of its objectives and strategy.
Turning evaluations into lessons
Learning, the third pillar of the MEL model (Macnamara, 2023[11]), is the one that adds the greatest value to institutions from evaluating their work. The culture of evaluation in many organisations often revolves around demonstrating success (Macnamara, 2024[7]). Conversely, the Barcelona Principles 3.0 stress that evaluation is about understanding results and progress, not necessarily success, so as to learn and grow (AMEC, 2020[12]). Taking the time to reflect on the work done and the results obtained, and to extrapolate lessons that inform future decisions, is therefore a core element of international best practice. As this section notes, lessons can apply to communication approaches, but also to methods of working, levels of collaboration, and the processes in place, so as to improve management practice too.
Evaluation reports are the standard means to capture the analysis of the results of a communication activity and key takeaways and learnings. Evaluation reports are typically produced in 82% of the public institutions covered in the 2025 OECD Survey of Government Communicators, and in about half of these institutions they are purposely used to learn from experience and as a basis for future decisions.
Several OECD governments, including the United Kingdom and New South Wales in Australia, have built evaluation report templates to ensure common standards in reporting. In Australia and Canada, reports for large-budget campaigns are additionally published for transparency and accountability purposes.
Producing quality, analytical reports is therefore a good practice that should become mainstream across French institutions and make the most of the above efforts to measure and evaluate their communications. As shown in Figure 2.9, evaluation reports are already commonly available and tend to be shared internally in nearly every case. Most commonly, these are sent to the ministers, Directors-General, and policy departments whose initiatives the campaigns were intended to support, with the intent to show what was achieved.
DICOMs already use evaluation reports for learning. In most cases, these reports inform decisions on current or future campaigns and serve as benchmarks. Only about a third of these reports, however, are shared with the SIG, contrary to the requirements in the approval procedures for campaign-related procurement. Besides compliance with these requirements, sharing evaluations centrally could help build the cross-government benchmarking resources noted above and facilitate the dissemination of good practices.
There are opportunities to build on this positive record and to reinforce how reports are produced, to maximise their value for data-driven communication and for learning. As noted previously, evaluation in several DICOMs is often limited to the reports provided by the advertising or social research firms, which primarily assess the performance of the advertising elements of a campaign and their reception by audiences. These reports, especially those focusing on outputs in the absence of a post-test of outcomes, offer limited material for drawing lessons and summarising them in writing, and often stop at output metrics rather than the outcomes and potential impact of an activity.
The SIG has established a routine of compiling the measurement and evaluation of its activities into extensive reports covering all the elements of a campaign. These typically break down each element into its objectives, delivery, and results, explaining the rationale for the selected approaches and the results they yielded.
Figure 2.9. Uses of evaluation for reporting and decision-making
For the last five campaigns conducted by your ministry/agency, how was the evaluation used, if an evaluation was conducted?
Note: n=50 campaigns, multiple responses possible. For the answer choice ‘Evaluation report was sent to SIG’ only, the share is calculated without the SIG, n=45 campaigns.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Notably, the SIG reports include an analysis of the campaign’s impact on public discourse and opinion, whether through post-test results or the analysis of digital and media mentions connected to the campaign topic. In a few relevant cases, the analysis also notes results on behaviour change and impact towards policy goals. This part of the evaluation reports provides a valuable snapshot of the campaign and its effects.
These reports are a useful template that could be adapted for use by all DICOMs to standardise their campaign reporting and make it more analytical. The format used by the SIG can be further reinforced with some elements focused on learning.
For example, of the sample of SIG evaluation reports reviewed by the OECD Secretariat, only one included a final assessment and lessons section. Including such a section in every report, or even making it part of an executive summary, would highlight the key information to retain from the evaluation exercise among a wide range of details and data points.
Similarly, evaluation report templates could guide communicators to prioritise clarity and simplicity over the highly technical elements of typical evaluation reports, which are difficult for non-communicators to grasp. Many post-test and advertising agency reports can indeed be inappropriate formats for internal government stakeholders. As noted above, for these audiences it is important to convey the concrete outcome and impact KPIs in a way that they can understand.
Finally, evaluation reports by the SIG presently tend to mirror the internal team structures and reflect a relatively siloed way of working, despite notable progress on co-ordination and integration in the Service. Interviews confirmed that each specialist team primarily follows only the campaign elements it is responsible for and can lack visibility on how its inputs are used or what the other teams are contributing. Moving towards a holistic approach to reporting results and learning could thus help reinforce integration between teams and implicate them beyond their immediate remits.
Experts and government practitioners have acknowledged that each communication activity also bears important lessons on how to manage a project, collaborate across institutions and disciplines, and build fruitful team dynamics. For this reason, some countries’ communication evaluation frameworks include a dedicated step for evaluating the activity’s process and management. This is the case in Canada, where the Canada Revenue Agency’s own evaluation toolkit provides two separate steps for reviewing and learning from how the work was conducted (Box 2.8).
Box 2.8. Focus on learning in evaluation guidance by the Canadian Revenue Agency
The Canada Revenue Agency’s (CRA) Public Affairs Branch has developed a simple, practical evaluation guide for communicators along five steps:
1. Get ready to evaluate your project before it begins
2. Collect your data
3. Analyse your data
4. Showcase your results
5. Get approval and share it
The guide contains templates for requesting inputs from different teams, questionnaires, and reporting documents that each communicator can use to complete each step in the process.
The portion on collecting data calls for measuring both the project management and the processes. Project management is evaluated via a post-project survey that is submitted to all those who contributed to a communication project, within the communication branch and in other branches of the CRA. This survey aims to capture lessons for collaboration and is particularly valuable to foster more effective co-operation between the policy and communication disciplines. Survey results form part of the overall analysis presented as part of evaluation reports.
Separately, the communication processes are evaluated in a “hot wash” meeting, held between all relevant contributors to the project towards its end. A dedicated worksheet is used for each participant to prepare their reflections and feedback on what worked well and what could be improved for implementing similar projects in the future.
The guidance on analysis additionally puts forward questions to contextualise the overall activity results that can bear useful lessons. These include process-related questions such as “Could timing have impacted your results?”, “Could other teams’ communications have impacted your results?”, and “Have your products inspired your colleagues to make changes to their own products?”.
Finally, the guide urges its users to “offer to meet and formally present your evaluation report to encourage a culture of evaluation and for two-way communication” as part of sharing the report internally.
Source: Canada Revenue Agency, Public Affairs Branch, A practical guide: Evaluating communications: Best practices in evaluation (internal document).
With the growing focus on interministerial collaboration and better integration of all communication disciplines within each DICOM, embedding such an approach within the evaluation process could be a means to further these goals. Interviews suggested that certain challenges, such as the prevalence of simultaneous, competing campaigns during periods of media saturation, are familiar to many communicators. However, these lessons are not recorded or taken on board in any formal way, despite being linked to under-performing campaigns, leading to the repetition of poor practices.
Another common element found in leading examples of evaluation guidance is encouragement to share and discuss evaluation reports. In countries including the United Kingdom and Canada, cross-government repositories of evaluation reports help teams source relevant examples and benchmarks to inform their work. Moreover, both governments offer opportunities for exchange among peers to facilitate the transfer of lessons and good practices. Activities of this nature feed into efforts for the professionalisation and reinforcement of capabilities, which is another primary lever for mainstreaming evaluation best practice across French institutions.
Building capabilities and culture of evaluation
The analysis and recommendations in this section have focused on international good practices for MEL and how to mainstream them in French government communication directorates. A prerequisite for the widespread adoption of good practices is building sufficient capabilities and skills among communicators. For this reason, the above recommendations on building a comprehensive framework for MEL and the tools to support its use ought to be complemented by actions to equip communicators with the knowledge, skills, and culture to embed MEL in their work.
The skills and culture gap in data-driven communication is an industry-wide and international challenge (Zerfass, Verčič and Volk, 2017[13]; OECD, 2021[1]). Communicators across OECD countries interviewed for this study confirmed the finding of the OECD’s (2021[1]) international report that insufficient skills hinder the evaluation of their work. They also often reported difficulties in recruiting and retaining staff with strong data and analysis skills, for which job markets are highly competitive (OECD, 2023[14]). It is also common for teams to rely on external support to perform measurement and evaluation (some institutions noted doing so also out of a desire for evaluation to be independent and objective).
A similar pattern is visible across French DICOMs. As noted above (Figure 2.5), several ministries highlighted skills and staff gaps as barriers to evaluation. These factors add to cultural elements, namely scepticism among some about communication’s ability to achieve tangible impact and a preference for measuring only reach and visibility (although, by contrast, some interviewees voiced a genuine desire to measure more impact on policy).
DICOMs also rely in significant part on external providers for evaluation, although over half have an individual or team responsible for this task, as per Figure 2.10. Despite this, interviews clarified that most often these are not dedicated evaluation teams, or they sit outside the communication department, which limits how much they can support systematic efforts to evaluate campaigns and other activities.
As noted, standardised post-tests are the main method for measuring outcomes, complementing advertising performance reports and similar reporting developed by contractors implementing or evaluating campaigns. However, interviews highlighted that many communicators lack the confidence and expertise to scrutinise and challenge the approaches and results of external suppliers. Instead, several said they trust their suppliers’ expertise and do not question the performance reports received.
As a result of the consolidation of public sector communication procurement, the SIG can ensure that standards are met by these external providers, who have an important role in contributing to the quality of measurement and evaluation across French institutions. Nonetheless, it is important that communicators themselves have the necessary competencies to critically assess evidence and apply it themselves in the formative stages of activities, from the selection of SMART communication objectives to the definition and measurement of KPIs.
Efforts to build capabilities and culture could therefore prioritise specialist skills for designing and evaluating evidence-based communication strategies, focusing on moving beyond output indicators that DICOMs are commonly comfortable with. At the same time, professionalising the communication function would also require that all relevant staff have baseline competencies for critically analysing data.
Figure 2.10. Internal capabilities for measurement and evaluation
Does your ministry/agency have a person or team responsible for measurement and evaluation with the following expertise: research methodology, statistical analysis, evaluation design, and critical analysis?
Note: n=10 services, single answer. SIG responses are included under ‘Ministries’. The full version of the label ‘No, but we outsource’ was ‘No, but we outsource measurement and evaluation from providers with this expertise’.
Source: OECD survey of French government communication heads at ministry and agency level, 2025.
Separately, professionalisation efforts should additionally target the leadership level to reinforce strategic decision-making and methods to effectively advise and manage internal stakeholders. As the following chapter discusses, campaign objectives, expectations, and budgets are often imposed on communication offices by Directors-General, ministerial cabinets, and policy teams. These constraints have emerged as the main barrier to more effective and strategic communications. As part of addressing them, it would be important to empower senior communicators with the upward management skills to counsel their stakeholders effectively.
A range of existing programmes and initiatives led by the SIG offer the right avenues to pursue these objectives. Over recent years, the SIG has established four main levers to professionalise government communication and integrate it further. According to interviews, these include: a) the introduction of common standards and tools; b) the formation of discipline-specific cross-government networks;3 c) a training offer under development, comprising online learning and technical workshops; and d) a programme of events and networking to consolidate public communication as one sector at all levels of government. These include regular roundtables and events targeted specifically at addressing performance and evaluation challenges. These initiatives are additionally complemented by the centralisation of procurement to a common set of providers, which serves as a driver of harmonised standards and practices steered by the SIG.
The initiatives introduced and under development mirror the leading practices found across OECD governments to professionalise teams and facilitate adoption of good practices. One key observation from international interviews concerns the importance of providing sustained channels for informal exchange and learning between peers to support bottom-up sharing, networking and learning. To this end, some countries, such as Belgium, Canada, and the United Kingdom, have established communities of practice that have been highlighted as effective platforms for their professionalisation efforts (see Box 2.9).
Box 2.9. Networks and communities of practice in Canada and the United Kingdom
The Communications Community Office and Evaluation Community of Practice
Canadian government communicators are supported by the Communications Community Office (CCO), a central structure entirely dedicated to the professionalisation of the function and to fostering an active professional community.
The CCO supports numerous communities of practice, which are led by expert communicators within ministries and agencies on a voluntary basis. The community of practice for evaluation was formed in 2018 and has grown since then from 30 to almost 500 members.
The community holds regular virtual learning events, during which members can share significant evaluations, methods, and good practices. This is combined with making resources, articles, and case studies available via the CCO’s online portal.
For specific support and advice to members, the community’s leads also hold regular office hours, set time windows during which they can provide one-on-one advice.
Heads of Discipline in the United Kingdom
In the UKGC, communities of practice are organised according to seven communication disciplines: Data and Insight, Media, Strategic Communications, Internal Communications, External Affairs, Marketing, and Digital. These roles existed informally for some time and have recently been formalised.
Heads of Discipline come from across ministries and are responsible for leading the continuous improvement and professional development of their area of expertise; they also play a leading role in implementing the UKGC’s comprehensive reform strategy in relation to the goals that touch on their respective areas.
The Heads of Discipline for Data and Insight lead monthly meetings of data, insight, and evaluation practitioners from each government department, according to a collectively defined agenda. This network also organises specific working groups around cross-government projects, such as the development of tools and guidance.
Source: Communications Community Office (2025), https://www.canada.ca/en/government/system/government-communications/communications-community-office/about-communications-community-office.html; Government Communication Service (2025), https://gcs.civilservice.gov.uk/heads-of-disciplines/; OECD interviews with peers from Canada and the United Kingdom.
While communities of practice exist in France, none is dedicated to MEL. Indeed, MEL practice cuts across the disciplines and internal structures in place within the SIG and DICOMs. Nonetheless, establishing such a dedicated community, or tasking an existing network with integrating this discipline, would be valuable. For example, the insights and analysis discipline already benefits from an active informal network that meets quarterly and exchanges regularly on current projects and best practices. Given its proximity to data and analysis, this group could be well placed to lead a MEL community of practice.
The work of these communities tends to be additionally supported through cross-government intranets and platforms for sharing guidance, case studies, training materials, and resources related to all communication disciplines. In France, the online Kiosque developed by the SIG for public sector communicators is intended to play such a role and become the central repository of campaign information for every ministry, agency, and local government. By expanding the Kiosque, integrating the guidance and tools proposed in this chapter, and promoting its routine use by teams across government, the SIG can support the goal of mainstreaming MEL good practices.
Finally, the practice of temporary or long-term staff movement between institutions, for instance through secondments, has also been highlighted as an effective method to transfer good practices, build advanced capabilities, and support cross-government collaboration. The United Kingdom has experimented with such a programme of staff movement with the goal of fostering cohesion and coherence of skills and processes across government communication teams.
Key findings and recommendations:
Evaluation of communication is widely practised across French DICOMs and mostly exceeds what is found across a majority of OECD countries. However, its depth and strategic use remain limited. French DICOMs could elevate their MEL standards to match those of leading governments and organisations in this field. This would contribute to enhancing the evidence-based, strategic approach to communication as a way to consolidate the recent wave of reforms to the function in France.
Most evaluations focus on output metrics such as reach and impressions, while long-term outcomes and policy impact are assessed only rarely. Evaluation reports are frequently limited to advertising performance data, and post-tests are underused due to budget constraints. Organic communication activities, such as social media or partnerships, are evaluated irregularly.
Overall, the public agencies reviewed tend to evaluate their activities more thoroughly, thanks to more predictable, service-centred communication plans. Ministries, conversely, show larger gaps, often as a result of higher time, budgetary, and political pressures.
Communication objectives can often be vague or overly focused on visibility. Poorly defined objectives are the main reason communicators struggle to prove impact against policy goals.
Defining and measuring SMART objectives requires ministries and agencies to systematically identify and access baseline and ex-post data related to the services, programmes, or recruitment efforts their campaigns aim to support. By expanding access to existing data repositories and dashboards, the SIG can help communication teams set more realistic targets and compare performance across campaigns. This could also support more evidence-based decision-making.
The SIG’s campaign procurement approval process has helped institutionalise evaluation by requiring KPIs and post-tests for campaigns above certain budget thresholds. By continuing to reinforce the approval process as a planning and accountability tool, the SIG can help embed evaluation more deeply into communication workflows. Strengthening compliance – such as by linking future approvals to the completion of evaluation reports – could further enhance the credibility and utility of the process for MEL.
By leaning on this approval process, the SIG could nudge the adoption of several good practices for MEL:
A more accurate categorisation of campaign purposes, to reduce the prevalence of information or dual-objective campaigns in favour of more results-oriented ones.
Drawing on international practices, the SIG could explore ways to use the approval process to reinforce alignment with annual communication planning cycles. This could help improve coherence across ministries, reduce duplication, optimise the timing of campaigns, and ultimately drive impact.
By promoting the idea of reserving a portion of campaign budgets (indicatively 5–10%) for evaluation activities, the SIG can help ensure that teams have the resources needed for pre-testing, post-testing, and impact analysis, areas that are often underfunded.
DICOMs often lack access to relevant data and the internal capacity to conduct robust evaluations. Many communicators rely on external providers and express discomfort with interpreting data. Moreover, interviews highlighted the need to build a culture of learning. These findings point to the value of introducing comprehensive guidance for MEL, along with initiatives aimed at building the capacity and culture for MEL across DICOMs.
The chapter thus recommends that the SIG develops a common MEL framework, including indicator matrices, templates, and practical guidance in line with leading practices across the OECD. This would provide a shared reference point for all communication teams and support more consistent evaluation practices.
The framework could ensure that KPIs cover the full range of communication activities, including unpaid and organic components. Including outcome and impact indicators alongside performance metrics could help teams better align their efforts with strategic objectives.
The SIG’s own practices in tracking the performance of organic and unpaid communication activities could serve as a reference for other institutions and be enhanced via this framework.
Drawing on international best practices, the SIG could promote the idea that a majority of evaluation efforts should occur before full campaign rollout. Emphasising formative and process evaluation, such as pre-testing and early KPI tracking, can help teams refine their strategies and avoid costly misalignments.
A MEL framework should finally address elements of process and project management that could help improve collaboration within and across institutions.
Building on current initiatives for professionalisation would support the adoption of the above framework and practices for MEL. The SIG could lead the provision of dedicated MEL training and establish a MEL community of practice to develop an engaged cross-government community to support peers.
Capacity building efforts could additionally target senior communicators with training in strategic leadership and upward management. This would better position them to advise internal stakeholders, such as Directors-General and ministerial cabinets, whose influence over campaign objectives and budgets often limits the strategic scope of communication efforts.
While evaluation reports are commonly shared internally, they are rarely used to extract lessons beyond advertising. The SIG’s own reporting practices offer a model for more analytical and integrated evaluations. However, reports could be better tailored to the audiences they are addressed to: some could cover a smaller set of outcome and impact KPIs that help demonstrate the value of communication in terms that resonate with policymakers and programme leads.
Moreover, involving non-communicators (such as policy leads) in defining objectives and indicators could also improve alignment with institutional goals and understanding of communication’s value.
References
[12] AMEC (2020), Barcelona Principles 3.0, https://amecorg.com/wp-content/uploads/2020/07/BP-Presentation-3.0-AMEC-webinar-10.07.20.pdf.
[5] Association for Measurement and Evaluation of Communication (2025), Interactive Framework, https://amecorg.com/amecframework/framework/interactive-framework/.
[8] Buhmann, A. and F. Likely (2018), Evaluation and Measurement in Strategic Communication, Wiley-Blackwell, https://www.academia.edu/35851832/Evaluation_and_measurement_in_strategic_communication.
[6] European Commission (2024), Communication Monitoring Indicators - Supporting Guide, https://commission.europa.eu/document/download/b1ce0367-aec1-4018-b012-d131fa160284_en?filename=communication_network_indicators_supporting_guide.pdf.
[9] Falkheimer, J. and M. Heide (eds.) (2022), Measurement and Evaluation: Framework, Methods, and Critique, https://doi.org/10.4337/9781800379893.00039.
[10] GCS (2025), Evaluating low/no-cost communications, https://gcs.civilservice.gov.uk/publications/evaluating-low-no-cost-communications/ (accessed on 4 April 2025).
[2] GCS (2025), The GCS Evaluation Cycle, https://gcs.civilservice.gov.uk/wp-content/uploads/2024/02/2024-02-13-GCS-Evaluation-Cycle-FINAL-OFFICIAL.pdf (accessed on 2 April 2025).
[7] Macnamara, J. (2024), Jim Macnamara’s MEL Manual for Public Communication.
[11] Macnamara, J. (2023), Measurement, evaluation + learning (MEL): New approaches for insights, outcomes, impact., Routledge.
[3] New South Wales Government (2020), NSW Government evaluation framework for communication, https://www.nsw.gov.au/nsw-government/communications/evaluation (accessed on 2 April 2025).
[14] OECD (2023), Public Communication Scan of the United Kingdom: Using Public Communication to Strengthen Democracy and Public Trust, OECD Publishing, https://doi.org/10.1787/bc4a57b3-en.
[1] OECD (2021), OECD Report on Public Communication: The Global Context and the Way Forward, OECD Publishing, Paris, https://doi.org/10.1787/22f8031c-en.
[4] SIG (2024), Note d’Application - mise en œuvre de la circulaire du Premier ministre n°6453/SG.
[13] Zerfass, A., D. Verčič and S. Volk (2017), “Communication evaluation and measurement: Skills, practices and utilization in European organizations”, Corporate Communications: An International Journal, Vol. 22(1), pp. 2-18, https://doi.org/10.1108/CCIJ-08-2016-0056.
Notes
1. The NSW Government has now adopted the AMEC evaluation framework.
2. A post-test is a method, typically survey-based, used at the end of a communication or advertising campaign to assess the extent to which a targeted population was exposed to a message, their retention and recall of the message, and whether they changed their minds or plan to take any action as a result of seeing the message.
3. The networks group professionals in each of the following areas: Prefectures, Partnerships, Campaigns, Community Managers, Monitoring and Analysis, State Design System, State Branding Referents, Ad and Site-Centric Analytics, Crisis, Accessibility.