
Have we reached peak-toolkit?

I suspect that the world does not need another toolkit. There, I said it. This hunch is reinforced by the wincing but affirming smiles I am met with each time the topic comes up in conversation with public sector innovators.


We want to move beyond the innovation toolkit

Hi, my name is Angela Hanson, and I am a design lead at the OECD Observatory of Public Sector Innovation (OPSI). My work involves innovation methods and tools, a topic informed by many bruises and lessons learned as an innovation practitioner in the public sector. Many of the insights in this post reflect my personal experience, so I want to hear from you about whether they resonate.

My team is developing an “innovation toolkit,” which is part of our Horizon 2020 project. But instead of another toolkit, we believe that innovators need something like an innovation hardware store plus access to master craftsperson knowledge: an accessible way to find out what’s possible, navigate what’s available, match tools with the context and the available skills and capabilities, and get advice and support when inevitably issues arise. Several in the OPSI network have confirmed this direction, but we would like to hear your thoughts.

Our goal is not to ignore the many tools, guides and playbooks that already exist, which people have put significant effort into building and which are used on a daily basis. See my colleague’s prior blog post analysing some of the more popular ones. We are learning from patterns across tools in a growing compendium of existing toolkits. (And if you have another to add, please submit it.)

Public sector problem solving involves changing political winds, evolving expectations of governments’ roles, shifting levels of trust by citizens, a lack of discrete and stable ways to measure impact, a constant, albeit democratically necessary, reassessment of shared values and purpose, and a dizzying combination of factors that are unsuitably flexible or rigid for the challenges at hand—all while the rate of change outside of government accelerates. In other words, public sector innovation is complex, and that complexity carries over to the use of even “simple” tools.

Well, now what? 

How do we develop such an innovation resource that can work at different levels of government and across highly variable and complex contexts? Our small team is advancing the work of the OPSI, but we cannot provide comprehensive end-to-end innovation services and mentorship to all governments at once. We also do not, and cannot, know all the uses and limitations of every innovation tool in every context.

Our current hypothesis is that we can maximize value by helping build the foundations for an innovation resource – somewhere that others can add their tools for others to use, provide advice about their experience with tools, and share use cases of how different tools have been used and adapted to a particular context. We can try to provide some common language about what tools are, what they do, and a means of sequencing and navigating between all of the (large and growing) options available. We might also be able to provide “self-service” elements, to help people think through what their context suggests about what sort of tools they might best use or try.

But before we do so, we need to know if we are on the right path. Will you help guide us?

This is the first in a series of blog posts describing our evolving thinking as we develop a resource, and where we are seeking your feedback – experiences, thoughts, reflections or needs.


Where our thinking is now

First, here are some assumptions about our presumed user base for our future resource:

  • We assume our users are public sector staff in government
  • We assume our users are composed of a variety of different actors and roles inside of government. For instance, see this blog post about roles that parliamentarians can play in shaping innovation
  • We assume that oftentimes our users will have limited resources for their innovation work—at least in the initial phases

Second, we assume that toolkits have some important benefits, including:

  • They provide an easy means of identifying other methods that can be applied, and thereby help increase the range of options that people might consider
  • They help provide an introduction (and sometimes much more) to different methods, which can provide a useful entry point for people with little or no experience in them
  • They can give a confidence boost to people (and organisations) trying something different, a reassurance of what steps to take
  • They can act as a validation of a tool or approach – “see, they use it” – that can provide reassurance that this isn’t a totally out-there thing

Third, we also assume that there are a number of limitations with innovation toolkits. Despite their proliferation over the last decade, there seemingly has not been a commensurate increase in people using and embracing new tools. Toolkits have definitely been important – but they have not been sufficient.

I’d like to explore several gaps and shortcomings in a little more detail to help explain why we think toolkits have been a good start, and why we now need something more if we are going to serve a wider range of public servants.


Toolkits often misaligned with problems

The road of good intentions is littered with broken apps for solving world hunger

Because of the tangibility of toolkits and, in some cases, brand identity or zippy visual design, it is easy to deploy a toolkit that is misaligned with a problem—or even before the very reason for innovation has been established. Many toolkits assume challenges have been defined, facts have been gathered, and insights have been generated and socialised with the challenge owners and stakeholders. Some toolkits do focus on the more upstream and ambiguous phases of an innovation project or initiative, such as those involving futures, horizon scanning, systemic design, and some service design toolkits.

Deploying a downstream solution toolkit when the type of problem is unknown often results in developing solutions before the problem has been identified and framed, which can lead to many unintended consequences. In the public sector, this can result not only in broken apps but also in harmful effects on citizens’ lives. This is a particular risk when product and service design toolkits are used for complex public sector problems. Prototyping and product road-mapping are futile if the problem space has not yet been defined.


Toolkits are often agnostic to existing skills, capacities, and experience  

Toolkits should not need a team of skilled artisan unicorns to operate—but if they do, include the unicorns and feed them

If selected and sequenced with intention, toolkits can play important roles in introducing or practicing a capability or behaviour, such as observational field research or divergent ideation, or in supporting a specific problem-related task in an innovation process, such as prototyping or running experiments. These intents need not be mutually exclusive nor exhaustive, but it is important for the person or group introducing the toolkit to be explicit about the intent and decide whether and how to introduce it. Is the toolkit about building up a team’s practice, or does it require experienced practitioners to operate? Toolkits’ relationships with skills and roles need to be clear.

“The tool says we need a senior issues design ninja, a disruptive imagineering lead, and a master storyteller with 3 years’ experience as a change magician.”


This is a step beyond what some toolkits suggest in terms of the amount of time and people needed for a particular activity or facilitation. This does not necessarily do harm but can result in a missed opportunity or create confusion or frustration when the tool users do not have the right ingredients, collaborators or mentorship in place. While toolkits are an entry point for public sector staff to try a new way, a toolkit designed for experienced practitioners working in a dedicated innovation team may not work for the lone public sector staffer curious to try something new. 

Also, downstream solution development toolkits are best when developed by the solution team itself, based on the particular resources, practices and constraints in their organisation. For example, a usability testing toolkit will look very different in a public sector organisation subject to strict rules about experiments or surveys involving humans.


Toolkits deployed as the new shiny object but without intention or context

Novelty for its own sake is just distracting and people’s brains hurt already

Novelty and ritual changes play important roles in leading a group through a change—if they are introduced with intent. Novelty is fleeting—humans are very good at bounding new things into their existing understanding, categorising them, and moving on. Additional novelty can just create more information to process in this way. This is especially true in organisations with scarcity mindsets and disincentivised learning. Public sector staff can leverage the novelty and external legitimacy of a toolkit to create room in their organisation to address problems in new ways and go beyond organisational silos. And sometimes one just needs to get started and actually do something in order to provoke the system and see what happens so that innovators can understand the problem, situation or organisation. A little conflict or controversy is not a bad thing, but wasting the opportunity is. Toolkits can be useful probes and test balloons, provided that they are done in a safe-to-fail environment, are introduced with intent, and combined with reflection, learning and sense-making.

Organisations detect novelty and respond accordingly—try not to trigger the auto-immune response with a toolkit

If you want to be innovative, you have to break some eggs. But wait, that’s not what I meant.  


Toolkits rarely provide guidance for navigating an organisation’s murky dark matter as they are introduced. Borrowing from the private sector concept of product-market fit: if a toolkit is not intentional about toolkit-mindset fit, and ignores culture and context as well as the history of methods previously deployed in that culture, potential collaborators will be cautious, even suspicious. This can lead to what my former colleague, who earned the nickname Chief Insolence Officer, calls “the organisational auto-immune response.” If people feel like their power or credibility is threatened, they will exercise their creative and innovative nature to subvert, undermine, or simply wait out whatever is introduced. Humans are cunning and fun creatures like that. The toolkit will fit best when it is in a team or organisation’s “adjacent possible,” to borrow a concept from innovation writer Steven Johnson. Otherwise, a team may need additional coaching, capacity and/or time to build internal consensus. So, where does an innovator start, and how can they avoid both groupthink and organisational rejection as they steward the innovation process?


Toolkits are too granular or too broad


Public Sector Innovation Manual, Step 1: Follow the pathway and watch out for bears. Step 2: ? Step 3: Measurable success


Step-by-step manuals for public sector innovation are irrelevant after Step 1

I presume it is obvious why a “how-to” guide for innovation is too rigid for a situation involving novelty or complexity. However, it is enticingly easy for a public sector innovator, given the daily juggling act they must manage, to want to attempt replication of a pilot that worked elsewhere.

In recent years, “playbooks” have emerged as a popular construct for guiding action within a discipline, notably in public sector digital services transformations (see playbook examples). Playbooks typically define the “what” and “why” and include an imperative phrase urging the actor to “do x sort of thing with y in mind.” These enable a broader, more flexible set of specific actions, since the “how” is defined by actors based on their situation. The relationship between guides, manuals, tools, toolkits, playbooks, methods, and methodologies is an interesting topic on its own and will be the subject of a future blog post. While a playbook offers a flexible set of “do’s” and “don’ts,” there is little guidance regarding the sequence or factors an actor might need to consider when unanticipated yet predictable situational realities create roadblocks. The ability to break apart and recombine all the pieces is key. On the other hand, if toolkits are too abstract, they may be neither repeatable nor likely to lead to action.

The secret to scaling innovation is translatability, not replication

We suspect that innovators need different levels of granularity for different phases of an innovation cycle (as well as before a team or organisation enters the cycle—but more on that in another blog post). Downstream innovation, once a tangible idea or solution is conceived, is more about configuration and optimisation of the “knowns,” while upstream innovation is more about contextualisation and pattern-finding through the “unknowns.” Very different tools are needed for each, as well as for orientation between the upstream and downstream. Complex adaptive systems thought leader Dave Snowden writes about granularity quite often, including most recently here, and his work heavily influences our thinking on the topic.

So, our resource’s granularity sweet spot needs to be flexible enough to allow users to build upon it but granular enough to be repeatable and transferrable to others. Is it wishful thinking to believe we can create such an offering?


Toolkits create a sense of learned helplessness among public sector innovators

Outsourcing risk sometimes means outsourcing learning

Toolkits developed by experts may seem like the unmodifiable doctrines of innovation, especially when they are developed by well-known players, delivered in an inflexible format, or require expensive software to edit. Innovators may not see these as modular and breakable, especially those who have not had much experience leading an innovation practice and may feel hesitant to question something produced by experts. Context-based toolkit modification and substitution should feel natural and welcome.

Some tools are stand-alone products that serve as “taster experiences” for a method used by private design firms. In these cases, the user journey of the innovator likely involves a realisation: “I need help from people who have done this before.” If that user happens to have control of sufficient funds, it may benefit a private firm through a consulting contract. Depending on how closely the firm works with the civil servants, the opportunity for the public entity to learn and build up internal capability can be lost. Sometimes, the consultant relationship can reinforce a sense of “learned helplessness” and private firm dependency among the civil service. While private firms can provide needed capacity for solving well-defined problems, there is little incentive for private firms to make themselves unnecessary by enriching the skillset and experience of civil servants.    

"This is going terribly. Let's just call the consultant."


Help us test our thinking

So, can another resource add value for public sector innovators or will it just create noise? We are exploring more deeply some ways of signaling through the noise via adaptable navigation patterns for different public sector innovation contexts, innovator motivations and innovation maturity levels. Since we are developing something new, we want to share early and often to test our reasoning with you. We have a few questions we would like to test but please feel free to provide less structured feedback.

  • Does the idea of an innovation hardware store resonate? If so, how might we approach it? If not, why not?
  • Who else should we talk to about this approach?

Also, if you know of an interesting toolkit, please let us know about it.

Share your reactions, suggestions, insights, and ideas with us on Twitter, LinkedIn, or email.

To stay in the loop on Observatory of Public Sector Innovation happenings, subscribe to our newsletter.


Image credit: Tool image by Dominick Guzzo, 2012