
Algorithmic Bias Bounties

First Bounty Details

Bias Buccaneers is the first non-profit algorithmic bias bounty organization. We organize competitions that engage a broad global community in identifying and fixing ethical problems in the algorithms all companies use. The long-term vision of Bias Buccaneers is to create global expertise, standards, and verifiable talent in a nascent but rapidly growing field.

Innovation Summary

Innovation Overview

Society is increasingly impacted by the way human biases have become part of, and are even amplified by, artificial intelligence (AI) and machine learning (ML). The underlying causes of bias in automated decision systems can be difficult to identify, and because this is a burgeoning field of research, qualified candidates who can assess and remediate harmful outcomes before an algorithm goes into production may not always be available.

Bug bounties are a standard practice in cybersecurity that has yet to find a footing in the algorithmic bias community. The first-ever bias bounty was spearheaded at Twitter by two of our founders, Jutta Williams and Dr. Rumman Chowdhury. Hosted at DEFCON, the premier hacker conference, the competition drew high-quality applicants from nine countries, and our diverse group of winners included students, researchers, corporate teams, and startups.

While initial one-off events demonstrated enthusiasm for bounties, Bias Buccaneers is the first nonprofit intended to create ongoing bounties that collaborate with public institutions, private researchers, and technology companies to pave the way for transparent and reproducible evaluations of AI systems.

As outlined in the latest US NIST AI Risk Management Framework, bias bounties should be part of any gold-star algorithmic ethics program, but there are few, if any, examples for public or private institutions to learn from. We aim to cultivate an environment where we and others can learn how to grow understanding and capability. Our goals are twofold:

  • Create fun, engaging, and transparent methods of evaluating and addressing algorithmic bias. We will operationalize these methods through bug bounties focusing on specific datasets, algorithms, and applications. These bounties will have real incentives for Crew to be creative in finding potential, high-impact bias risks to be shared with the community.
  • Create standards that are crowd-tested and approved, and therefore, useful. We aim to make it easy and fun for AI engineers and data scientists to adopt and use algorithmic bias, fairness, and explainability standards. Our standards are developed by the experts at the open-source effort AVID, and vetted by a team of AI Risk and Security Experts.
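To make the first goal concrete, a bounty submission might quantify bias with a simple group-fairness metric such as demographic parity difference, the gap in positive-prediction rates between demographic groups. The sketch below is purely illustrative; the group names and model outputs are invented for this example and do not come from any real bounty dataset:

```python
# Illustrative sketch of one bias metric a bounty submission might report:
# demographic parity difference, i.e. the largest gap in positive-prediction
# rates across demographic groups. Groups and predictions are hypothetical.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rate across groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups (1 = approved).
preds = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}

gap = demographic_parity_difference(preds)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

A large gap like this would flag the model for closer inspection; real bounty submissions would pair such metrics with evidence of the harm's real-world impact.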

Our long-term vision is to create worldwide expertise, standards, and verifiable talent in a nascent but rapidly growing field. Armed with the guidance and frameworks developed by our partner, AVID, our 'bias hackers', or "Crew", will be well equipped to tackle future challenges in this area, whether as part of the broader AI ethics community or as leaders and contributors in organizational efforts.

Innovation Description

What Makes Your Project Innovative?

Bias Buccaneers is the first non-profit taking a bottom-up, community-first approach in combating algorithmic bias. Our project is innovative in its programmatic approach to crowdsourced bias detection that rewards and engages a global community. Bias bounties, much like their counterparts in infosec, allow structured public feedback on a myriad of potential flaws in a technical system. Bounties also reduce barriers to entry by providing rewards to successful participants.

Prior approaches have fallen short, as they tend to be top-down in implementation and opaque in practice. Bounties supplement the efforts of existing algorithmic ethics teams by providing global perspectives. Bounties also help policymakers and regulators understand what will and will not work in practice. As a technology non-profit, Bias Buccaneers is more credible than in-house practices and more applied than traditional policy-based approaches.

What is the current status of your innovation?

We launched our first Bias Bounty on October 20th, 2022 at CAMLIS, a global AI security conference; it will conclude one month later. We designed our website (https://biasbounty.ai/) and the technical infrastructure needed to handle submissions from around the world.

In order to encourage community development, we are evaluating the best way to encourage in-person “hackathons” for local groups around the world to meet.

To ensure learnings are codified in standards, we are partnering with the open-source effort AVID (https://www.avidml.org/). AVID is developing an operational taxonomy for all of responsible ML, and an open-source knowledge base of vulnerabilities for ML models and datasets.

We have received significant enthusiasm and already have soft commitments from major companies and conferences to hold at least two more bounties globally in 2023.

Innovation Development

Collaborations & Partnerships

At its core, the purpose of Bias Buccaneers is to bring a diverse group of stakeholders behind a shared mission to eliminate algorithmic bias. Our first bounty engages enterprises (Microsoft, Twitter, Splunk, Oracle), startups (Reality Defender, Robust Intelligence), civil society (AVID), & government (NIST, MITRE) in our planning & development. The bounty program has attracted practitioners ranging from university students to practicing data scientists to ethical hackers in the infosec community.

Users, Stakeholders & Beneficiaries

AI impacts all of us, and Bias Buccaneers allows everyone to be involved in improving it. Citizens are able to take direct action and be rewarded for identifying issues in AI systems that impact them. Government officials are able to observe what auditing approaches will and will not work in practice. Civil society organizations are able to test their bias frameworks at scale on real problems. Companies are able to securely test their algorithms and improve their products.

Innovation Reflections

Results, Outcomes & Impacts

Our first bounty is currently underway. To date, we have observed great enthusiasm from corporations looking for new ways to demonstrate commitment to algorithmic bias education, awareness, and mitigation discovery. We have seen active engagement from students, NGOs, governments and the press who have registered as participants and are actively investigating this important topic.

We believe that, in the future, we will see more commitment to advanced academic research, since there is now a demonstrated career path along which new professionals can enter and grow in this field.

We also believe that, as more applied and practical methods are attempted and documented through these competitions, a new body of work will become available to global ML engineering organizations (public and private) to guide best practices and improve algorithms during design, thereby diminishing the harmful social effects of poorly trained models.

Challenges and Failures

We have had surprisingly few challenges to date. We were met with significant enthusiasm by all stakeholders, and received some seed funding to ensure the first bounty is a success.

The primary challenge we face is education: because bias bounties are a new approach in algorithmic ethics, we must ensure stakeholders understand what a bounty is and how it can drive meaningful change. While we do not expect to fully address this before the first challenge, part of our program evaluation will be writing follow-up briefs for companies and governments to share our findings and next steps.

A secondary challenge we have as a small organization is spreading the word beyond the western world. To mitigate this, we are applying to programs such as this one, and reaching out to like-minded individuals at top-tier universities around the world.

Conditions for Success

Creating a successful ongoing bounty program requires investment in technical infrastructure, leadership, and staff. Currently, the program is self-funded by its founders. To realize the change we imagine, we will need:

  • Secure infrastructure for hosting code, data, & computational capacity. Partnering with industry will mean investing in appropriate secure infrastructure that allows open access.
  • Sponsorship & partnership with trusted regulatory partners interested in testing bias frameworks, & companies interested in ongoing product feedback
  • Dedicated global staff to help us build community, plan & implement bounties, engage with sponsors & conferences, and be our evangelists around the world.
  • Education and growth. Bias bounties are more than fun competitions - they should be integrated into standard ethics practices. Achieving this goal will require ongoing partner education with a wide range of global institutions and individuals.

Replication

The bias bounty concept is inspired by bug bounties in information security (infosec). In infosec, third-party platforms host ongoing bounties by companies and are well-integrated & accepted models of rapidly identifying security issues. We hope that the field of algorithmic ethics will benefit from bounties the way infosec has benefited from bug bounties.

Aspirationally, within organizations, bias bounties can be integrated as part of an ethical risk program to rapidly respond to product failures. For governments and standards bodies, bounties can serve as a testing ground for new standards, and highlight the path forward for effective regulation and legislation. For citizens, bounties allow engagement with & access to technology that was previously inaccessible.

Bounties expand & professionalize a nascent field, as talented individuals cultivate their skill in algorithmic bias detection. As global regulation demands third-party auditors, bounty programs can serve to train workers.

Lessons Learned

Hosting a successful bounty is radically different from hosting a successful ongoing bounty program. When creating an ongoing program, leaders must: maintain the enthusiasm of participants and experts, carry learnings from the competition into implementation, and create novel and sophisticated ongoing bounties that address salient problems. This requires a careful balance of cultivating industry and government relationships that does not ignore the citizen hacker.


Status:

  • Implementation - making the innovation happen


Date Published:

17 January 2023
