
This website was created by the OECD Observatory of Public Sector Innovation (OPSI), part of the OECD Public Governance Directorate (GOV).


Algorithmic Transparency Standard

Transparency is a key driver of responsible innovation and improved public trust in governments’ use of data and algorithms. The UK’s Algorithmic Transparency Standard is a recording standard that helps public sector bodies provide clear information about the algorithmic tools they use and why they’re using them. The Standard is one of the world’s first policies for transparency on the use of algorithmic tools in government decision making and is internationally renowned as best practice.

Innovation Summary

Innovation Overview

Algorithmic tools are increasingly being used in the public sector to support many of the highest impact decisions affecting individuals, for example those used in policing, social welfare, healthcare and recruitment. Public attitudes research consistently highlights transparency as a key driver of public trust and so building the practical mechanisms and pathways for transparency is crucial for gaining and maintaining trust. It is also key to enabling organisations to innovate and share knowledge on how to deploy algorithmic tools effectively.

Why is transparency important? The public has a democratic right to explanation and information about how the government operates and makes decisions, in order to understand actions taken, appeal decisions, and hold responsible decision-makers to account. Under the UK GDPR, citizens have the right to information about the use of their personal data as well as to know about the use of automated decision-making, including meaningful information about how decisions are made - the so-called “right to explanation”.

There is insufficient information available on what kinds of algorithmic tools government bodies are using and how. Greater transparency in this field will fulfil the democratic expectation that the government explains how it makes decisions, especially in the context of public-facing services with direct impact on individuals and groups. There is currently no standardised manner of presenting information about the use of algorithms in government and members of the public are unable to easily find information they might need to understand how government agencies are using these technologies. Public bodies that would like to be more transparent about how they are using algorithmic tools often struggle with how to communicate this complex information in an accessible way.

Being open about how algorithmic tools are being used provides an opportunity for government departments and public sector bodies to highlight good practice, facilitate learning and knowledge exchange, and contribute to improvements in the development, design and deployment of algorithmic tools across the public sector. It helps those who build, deploy, use, or regulate these tools to identify any issues early on and mitigate any potential negative impacts.

The Algorithmic Transparency Standard (ATS) addresses these gaps by establishing a standardised way for public bodies to report transparency information about how they are using algorithmic tools in decision-making. It enables these organisations to proactively publish details about the algorithmic tools they use and make this information accessible to the public on gov.uk. It has similarities with related approaches such as datasheets or model logs.

How does the ATS work in practice? The ATS functions as a template that guides organisations using algorithmic tools to complete transparency reports. Currently, the Standard is divided into two tiers. Tier 1 includes a simple, short explanation of how and why the algorithmic tool is being used and instructions on how to find out more information. Tier 2 is divided into five categories:

  1. Information about the owner and responsibility.
  2. Detailed description of the tool and rationale for using it.
  3. Information on the wider decision-making process and human oversight.
  4. Information on the technical specifications and data.
  5. A list of risks, mitigations, and impact assessments conducted.
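To illustrate how the two-tier structure above might be represented as a data record, here is a minimal sketch in Python. The class and field names are illustrative assumptions based on the five Tier 2 categories listed, not the official schema published on GitHub.

```python
# Illustrative sketch of a completed ATS transparency report as a record.
# Field names are assumptions for illustration, not the official template.
from dataclasses import dataclass


@dataclass
class Tier1:
    # Short, plain-language summary aimed at the general public
    how_and_why_used: str
    where_to_find_more: str


@dataclass
class Tier2:
    # The five Tier 2 categories described above
    owner_and_responsibility: dict
    tool_description_and_rationale: dict
    decision_process_and_oversight: dict
    technical_specification_and_data: dict
    risks_mitigations_impact_assessments: list


@dataclass
class TransparencyReport:
    tool_name: str
    organisation: str
    tier1: Tier1
    tier2: Tier2


report = TransparencyReport(
    tool_name="Example triage tool",
    organisation="Example public body",
    tier1=Tier1(
        how_and_why_used="Prioritises incoming casework for human review.",
        where_to_find_more="See the published Tier 2 report.",
    ),
    tier2=Tier2(
        owner_and_responsibility={"senior_owner": "Head of Operations"},
        tool_description_and_rationale={"type": "rules-based scoring"},
        decision_process_and_oversight={"human_in_the_loop": True},
        technical_specification_and_data={"data_sources": ["case records"]},
        risks_mitigations_impact_assessments=["Data protection impact assessment completed"],
    ),
)
print(report.tier2.decision_process_and_oversight["human_in_the_loop"])
```

A structured record like this is what makes the reports machine-readable and comparable across organisations, rather than free-form prose.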

The UK government recognised the need to increase transparency of algorithm-assisted decisions and committed to scoping transparency mechanisms in the National Data Strategy (2020), and to developing a cross-government standard in the National AI Strategy (2021). Throughout 2021, we ran a public engagement exercise as well as a series of workshops to discover what information on algorithm-assisted decision-making in the public sector should be published and in what format.

We published the first version of the Standard in November 2021. In the first half of 2022, we piloted it with organisations across the public sector, seeking to understand how to put it into practice, what could be improved in terms of content and form, and what further support teams might need. We have now published the first 6 transparency reports from the pilots, covering central administrations, police forces and regulators. In October 2022, we published a draft updated Standard on GitHub.

Alongside the pilots, we held an open call for feedback from the public and several roundtable discussions with suppliers of algorithmic solutions to incorporate the perspectives of third party suppliers. These were attended by nearly 100 representatives from the private sector (facilitated by TechUK and the UK’s Crown Commercial Service).

Innovation Description

What Makes Your Project Innovative?

The ATS is innovative as it is one of the world’s first initiatives of its kind and is internationally renowned as best practice. At its core is a deliberative public engagement exercise and co-design process that put members of the public and accessibility at its centre. Increasing algorithmic transparency has been at the forefront of AI ethics conversations globally, with many calls for greater transparency around the use of algorithmic tools in the public sector. Much AI ethics work, including on fairness, accountability, and transparency, has so far been conceptual and theoretical, while practical applications have been more limited, particularly in the public sector. Some existing examples include work done in Helsinki, Amsterdam and New York City. Our Standard is one of the most comprehensive policies and one of the very few undertaken by a national government to enhance transparency on the use of algorithmic/automation tools in government decision making.

What is the current status of your innovation?

We published the first version of the Standard in November 2021 and piloted it across the public sector until August 2022. Throughout the piloting process, we sought to understand what works and what doesn’t, whether any aspects of the Standard were unclear, and what further support teams might need when using it. We gathered evidence and feedback through roundtable discussions, engagement with government bodies, and an open call for feedback from members of the public. Since June 2022, 6 completed transparency reports have been published. We are now seeking more views on the updated Standard and are starting to investigate longer term options for hosting a repository of completed transparency reports, with a focus on accessible design for meaningful transparency.

Innovation Development

Collaborations & Partnerships

The design and development of the Standard occurred in cooperation with different stakeholder groups, including UK government stakeholders, industry, academia, and civil society experts. Moreover, through a public engagement exercise led by the Centre for Data Ethics and Innovation (CDEI) and Britain Thinks, we captured public attitudes to algorithmic transparency which directly informed the final makeup of the Standard.

Users, Stakeholders & Beneficiaries

The Public: the transparency reports will enable members of the public to read about what kinds of algorithmic tools government bodies are using and how. Civil society organisations, journalists and third parties will be able to interpret and translate this information. Public Bodies: being open about how algorithmic tools are being used provides an opportunity for public bodies to highlight good practice, facilitate knowledge exchange, and improve the quality of algorithmic tools.

Innovation Reflections

Results, Outcomes & Impacts

We piloted the ATS with more than 10 teams across the public sector between January and June 2022, with 6 completed reports publicly available.

The pilot process has demonstrated widespread support for algorithmic transparency from pilot partners, who highlighted:

  • Benefits from internal learning and reflection as the use of the ATS invites internal scrutiny.
  • Benefits from providing a template including guidance on potential risks that should be considered during tool development and procurement.

Consultation with members of the public and suppliers of algorithmic tools revealed widespread support (e.g. 97% of suppliers supported the policy).

Over the longer term, the impacts we are aiming for include:

  • Increased public trust in the use of algorithmic tools in the public sector.
  • More responsible innovation and use of algorithmic tools by public bodies.
  • Greater embedding of ethical and transparency considerations into the development and use of algorithmic tools.

Challenges and Failures

Challenges:

  • Articulating the importance of transparency and building momentum for using the Standard. Response: Within the government, we have engaged widely and made clear the benefits of transparency and its importance for innovation. With external suppliers we hosted roundtable discussions to gather views and bring stakeholders into the policy process.
  • Involving different types of stakeholders in the development and iteration of the Standard, going through repeated rounds of discussion, feedback, and iteration. Response: Think carefully about the design of the engagement process and the diversity of participants to get a broad range of perspectives.

Potential failures:

Not having enough ‘high impact’ use cases in the pilot phase, which can demonstrate the benefits of using the Standard for building public trust in the use of algorithmic tools.
Response: focus more resources on talking to a broader range of teams across the public sector and making the case for transparency.

Conditions for Success

Leadership and buy-in:
As this is currently a voluntary standard, uptake depends on the appetite of public sector teams to use the Standard and be transparent about their tools. Creating the motivations and incentives for transparency is crucial for success.

Adoption and uptake:
For this policy to be successful, it is important that public sector teams who are developing and implementing algorithmic tools are aware of the ethical considerations that should go into this process, have assurance processes in place and prioritise embedding ethics into the entire project lifecycle.

Replication

The Standard has so far been featured in various international fora and working groups such as the Open Government Partnership’s Open Algorithms Network. We have been in contact with several officials from different national governments to talk about aligning our policies on algorithmic transparency. We have also been made aware of private companies that have adopted and taken the Standard as inspiration in their own transparency processes such as Wolt (https://explore.wolt.com/fi/fin/transparency).

Lessons Learned

  • Many public sector teams would like to be more transparent and consider ethical questions in their projects from the beginning but might lack the guidance, capabilities or resources to do so.
  • This initiative and others can help to encourage a proactive culture in the public sector around embedding ethics into data and automation projects from the start.
  • Our public engagement exercise found that the general public may not necessarily be interested in examining the content of each individual transparency report themselves, but will be reassured that this information is available openly and can be accessed by experts who can scrutinise it on their behalf.
  • Communicating complex technical information in a way that is easy for the general public to understand is difficult, and something public sector teams may need more support with.

Anything Else?

Links to our key documentation:

The Standard: https://github.com/co-cddo/algorithmic-transparency-standard/blob/main/template_table.md

Guidance: https://github.com/co-cddo/algorithmic-transparency-standard/blob/main/Guidance%20for%20Public%20Sector%20Organisations%E2%80%99%20Use%20of%20the%20Algorithmic%20Transparency%20Standard%20v1.1.pdf

Existing transparency reports on gov.uk: https://www.gov.uk/government/collections/algorithmic-transparency-standard

We are also planning on launching a phase of user testing with members of the public through another engagement exercise in the coming months. This is intended to tease out the impact and success of the project on building public trust in the use of algorithms in government.

Status:

  • Implementation - making the innovation happen
  • Evaluation - understanding whether the innovative initiative has delivered what was needed


Date Published:

9 November 2022
