
Shaping the Future of Policy with Behavioural Science: Key Takeaways from the 9th OECD BRAIN Meeting

Around the world, governments are adapting to a rapidly evolving policy environment shaped by new technologies, rising citizen expectations and complex social and economic pressures. These changes demand more than good intentions or strong policy design on paper; they require a deep understanding of how people actually experience government in their everyday lives.

That’s why more than 180 delegates from 44 countries came together in Paris for the 9th meeting of the Behavioural Research in Action International Network (BRAIN). As a global community, we are moving beyond small experiments and working to embed behavioural science as a core capability of modern governance – improving services, strengthening trust and deploying technologies like AI in ways that work for people. 

The behavioural public policy agenda has grown broader and more ambitious. Evidence-based testing and rapid evaluation are helping policies keep pace with change. And as the field scales, international collaboration is essential. No country can do this alone, but together we can turn behavioural science into systematic application and real-world results.

Sludge Audits: Reducing Friction, Enhancing Access 

A central focus of the meeting was how governments can identify and reduce “sludge”: the excessive and unjustified frictions that hinder access to public services and slow down government processes. Building on the success of the International Sludge Academy and the 2024 report Fixing Frictions: Sludge Audits around the World, the OECD continues to support countries in measuring and reducing sludge across both service delivery and internal operations.

To advance this work, the OECD is developing an international sludge audit method: a structured, scalable and internationally comparable approach for diagnosing, quantifying and addressing behavioural barriers and frictions in administrative processes and services. Developed in collaboration with an expert panel chaired by Professor Cass Sunstein, the methodology draws on insights from real-world audits and the latest behavioural science evidence. It equips governments with a shared language, behavioural criteria and practical tools to surface hidden costs, including time, cognitive, emotional and social burdens. The method enables countries to benchmark progress, prioritise reforms and exchange insights across jurisdictions, paving the way for more efficient, equitable and user-friendly public services.
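The OECD method itself is a policy methodology rather than software, but the idea of quantifying hidden burdens can be sketched in a few lines of code. The toy scorer below is purely illustrative: the step names, the 0–3 rating scale and the aggregation into a single score are assumptions invented for this example, not part of the OECD methodology.

```python
# Illustrative sketch only: shows one way auditor ratings across the burden
# dimensions mentioned above (time, cognitive, emotional, social) could be
# aggregated into a comparable score. All names, scales and formulas here
# are invented for illustration and are not the OECD's actual method.
from dataclasses import dataclass

BURDEN_DIMENSIONS = ("time", "cognitive", "emotional", "social")

@dataclass
class ProcessStep:
    name: str
    burdens: dict  # each dimension rated 0 (no friction) to 3 (severe)

def sludge_score(steps):
    """Aggregate per-step, per-dimension ratings into a 0-100 score."""
    max_total = 3 * len(BURDEN_DIMENSIONS) * len(steps)
    total = sum(step.burdens.get(dim, 0)
                for step in steps for dim in BURDEN_DIMENSIONS)
    return round(100 * total / max_total, 1)

# Hypothetical audit of a two-step application process.
steps = [
    ProcessStep("find application form",
                {"time": 2, "cognitive": 1, "emotional": 0, "social": 0}),
    ProcessStep("gather supporting documents",
                {"time": 3, "cognitive": 2, "emotional": 1, "social": 1}),
]
print(sludge_score(steps))  # → 41.7
```

A real audit would of course rest on observed evidence (timings, error rates, drop-off points) rather than a single composite number, but even a crude score like this makes it possible to compare services and track whether reforms reduce friction over time.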

The development of the OECD sludge methodology is also being informed by an ongoing project led by the OECD with the Prime Minister’s Office of Finland and financed by the European Commission. The project focuses on improving the quality and accessibility of public services by identifying and reducing sludge in both national and local service delivery in Finland. The project adapts the method to Finland’s administrative context and tests its use in practice across a range of services – from conducting a sludge audit of an AI-driven chatbot, to examining interactions with the tax administration, to addressing seemingly simple but high-friction tasks such as finding information about local events. A key aim is to build long-term capacity within the Finnish administration to carry out sludge audits and implement the resulting behaviourally informed recommendations. Over time, the expectation is that streamlining these services will increase take-up and effective use of public programmes, strengthen trust and transparency, improve user satisfaction, and support more efficient and resilient continuity of service across the public administration in Finland.  

While sludge audits initially focused on improving citizen-facing services, governments are increasingly applying the same approach to their own internal processes. This underlines both the organisational value of behavioural insights and their contribution to the emerging field of behavioural public administration. Behavioural science complements traditional simplification efforts by identifying and quantifying the human and procedural frictions that make internal workflows more burdensome than they need to be. Administrations can run sludge audits to pinpoint this “beige tape”: the internal administrative burdens that slow down government, delay decision-making and hinder the implementation of innovation and reform. This includes mapping where time pressures, cognitive load, or ambiguity in roles and responsibilities create avoidable friction for public servants. Looking ahead, emerging technologies such as machine learning and AI offer new opportunities to detect, monitor and reduce sludge at scale, helping governments target reforms where they will have the greatest impact.

Governing in the algorithmic age with behavioural science 

AI and emerging technologies are not only transforming government operations but also reshaping how citizens and governments think, decide and interact with AI systems, placing increasing pressure on cognitive and emotional resources. As artificial intelligence becomes increasingly embedded in public sector decision-making, behavioural science offers a critical perspective on how people interact with algorithmic systems. While AI offers significant potential to streamline processes, improve service delivery and boost productivity and creativity, it can also create new cognitive demands, reduce transparency and introduce risks if not carefully managed.

Examples raised at the meeting included public servants coming to rely on AI outputs they cannot fully challenge, or substituting AI-generated proxies for real engagement with citizens. This can quietly scale bias, exclusion and loss of accountability. Speakers also highlighted a broader concern that over-reliance on AI, combined with rising cognitive overload and the spread of misinformation in digital environments, may erode our collective “brain capital”: the attention, judgement and emotional resilience that modern governments and societies depend on. To address these fundamentally behavioural risks, behavioural science can enrich AI governance. It helps policymakers anticipate where people will struggle, when they may over-trust automated systems, and how to design safeguards that keep humans informed, empowered and accountable. In this way, behavioural science helps ensure that AI systems are understandable, trustworthy and aligned with human values.

The human mind is in recession. Technology strains our brain health, our capacity and skills. We do have challenges with brain health, brain skill measures are not doing so well, and this comes at a time when we are investing a lot in artificial intelligence. – Expert at the Network Meeting

In addition to the technology driving AI, it is critical to understand how people interact with AI-generated content, automated decisions and hybrid human–machine workflows, because this is ultimately where questions of fairness, accountability and trust are felt by the public. If citizens do not understand why a decision was made about them, cannot challenge it, or feel it was produced by a system rather than by an accountable institution, confidence in government can erode quickly. Speakers at the meeting noted that citizens may respond differently depending on whether advice or a decision by government appears to come from a human, an AI system, or a combination of both – which has implications for how responsibility is perceived and enforced. Different groups may also experience AI-generated content in distinct ways. For example, young people, who spend a large amount of time online, are especially exposed to the ways AI shapes information, interactions and opportunities. Their perceptions and behaviours will, in turn, influence broader societal trust and the long-term adoption of AI systems.

It is equally important to consider how governments themselves are deploying AI across decision-making, service delivery, and hybrid human–machine operations because these systems can directly shape who receives services, how resources are allocated, and on what terms decisions are made. If AI is introduced without careful design and oversight, it can unintentionally embed bias into decision-making, create ambiguity about who is accountable for an outcome, or exclude people who cannot easily navigate automated processes. Responding to these challenges, the UK Government has developed the Mitigating Hidden AI Risks Toolkit, a practical resource designed to anticipate and address risks that are not always visible through technical testing or legal compliance checks. The toolkit uses structured scenario planning and thought experiments, integrating behavioural science to uncover where AI systems might inadvertently cause confusion, exclusion, or harm. It focuses on the “middle layer” of human–AI interaction: the everyday decisions and workarounds of the people using AI inside government, which ultimately determine whether small frictions, blind spots or incentives scale into systemic problems. By systematically exploring how real people might misunderstand, misuse, or be disadvantaged by AI tools, policymakers can identify vulnerabilities early and adapt designs before scaling. 

Improving Market Fairness: The Importance of Applying Behavioural Science to Competition Policy

As digital platforms and commercial practices become more complex and increasingly personalised, traditional regulatory approaches are struggling to keep pace. Consumers are routinely faced with design choices that steer their behaviour, through dark patterns, biased defaults or manipulative framing, in ways that are difficult to detect, hard to compare across providers and not always easy to challenge. This raises concerns about fairness, transparency and effective competition. In this context, behavioural economics is no longer confined to theory about how consumers and firms behave. It is being applied in practical ways to improve how competition authorities prioritise cases, define markets, assess competitive effects and design remedies. Behavioural science provides a complementary lens by examining how consumers actually make decisions in real settings, often under conditions of limited information, attention and time. 

Regulators are increasingly turning to behavioural science tools to detect and counteract practices such as dark patterns, misleading defaults and excessive choice architectures that reduce consumer welfare. These insights are also being used to improve the design of enforcement communications and compliance strategies, ensuring that businesses understand and respond appropriately to regulatory expectations. By moving beyond legal formalities to address the psychological realities of consumer behaviour, behavioural competition policy aims to restore fairness and autonomy in increasingly complex marketplaces. 

What’s next?

The 9th BRAIN meeting made clear that whether reducing sludge or beige tape, guiding the design and implementation of AI-related public policy, improving market transparency or building institutions that reflect real human experience, behavioural science is proving essential to meeting today’s complex policy challenges. 

Looking ahead, the emphasis will be on scaling interventions, developing shared standards and expanding international cooperation. Behavioural science is no longer just about changing behaviour—it is also about transforming how governments think, plan and act in partnership with the people they serve.  

The 10th BRAIN meeting will take place online on Wednesday the 5th of November. If you’d like to get involved and you’re a government official interested in leveraging behavioural science to improve your work, please reach out to [email protected] and find out more at The Behavioural Research in Action International Network (BRAIN).

This blog is funded by the European Union. Its contents are the sole responsibility of the authors and do not necessarily reflect the views of the European Union.