Call for Comments: Artificial Intelligence (AI) Primer

Written on 1 August 2019

Announcement: Public consultation has now been extended to 15 September 2019. 


Update: The final version of the AI primer has been published at


As we mentioned in a blog a few months ago, OPSI has been working to develop a “primer” on AI to help public leaders and civil servants navigate the challenges and opportunities associated with the technology and understand how it may help them achieve their missions.

Today, we are excited to launch a public consultation on our initial draft of the primer, which we have tentatively titled Hello, World: Artificial Intelligence and its Use in the Public Sector. “Hello, World!” is often the very first computer program written by someone learning how to code, and we want this primer to help public officials take their first steps in exploring AI. This is the second in a series of primers on topics of interest for the innovation community, following on from the June 2018 Blockchains Unchained.
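For readers who have never seen it, the traditional “Hello, World!” program really is this small. In a language such as Python, it is a single line that prints a greeting to the screen:

```python
# "Hello, World!" - traditionally the first program a new programmer writes.
greeting = "Hello, World!"
print(greeting)
```

That simplicity is the point: it confirms the tools work before anything harder is attempted, and it is the spirit in which this primer approaches AI.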

The AI primer has three purposes:

  1. Help governments understand the definitions of and context for AI, as well as some technical underpinnings and approaches.
  2. Explore how governments and their partners are developing strategies for and using AI for public good.
  3. Understand what implications public leaders and civil servants need to consider when exploring AI.

To make sure we get these right, we need your input. Are you interested in the potential for AI to transform government? Do you have knowledge or expertise in the field of AI that you would like to contribute? Are you skeptical about AI in government? We are interested in hearing your thoughts and feedback on the draft.

Share your comments, feedback, and contributions by 15 September 2019.


You can contribute in three ways:

  1. Adding comments to a collaborative Google Doc.
  2. Adding comments and edits (in tracked changes) directly to a .doc version and e-mailing it to us at [email protected]. A PDF version is also available.
  3. Leaving comments at the end of this blog post.


OPSI is open to all types of constructive feedback through the consultation, including:

  • Does the report strike the right balance between being technically sound and being accessible for civil servants?
  • Are there any gaps, inaccurate statements, or missed opportunities? For instance, there is some debate on whether rules-based approaches should be considered as being truly AI. Did we address this appropriately?
  • Are there additional examples, tools, resources, or guidance that civil servants should be aware of?

As a companion piece for the draft primer, we have also developed an AI Strategies & Public Sector Components page, which discusses each country’s complete or forthcoming national AI strategy, or comparable guiding policies that set forth its strategic vision and approach to AI. This includes a focus on the extent to which each specifically addresses public sector innovation and transformation. The site also includes links to the key strategy and policy documents.

AI is also an issue gaining traction within the OECD, where horizontal teams have integrated AI as part of the Going Digital initiative and have published AI Principles and an OECD Recommendation. In addition, our colleagues from the Digital Economy Policy Division recently published the Artificial Intelligence in Society report, and colleagues in the Digital Government team and the OECD Working Party of Senior Digital Government Officials (E-Leaders) are drafting a working paper on state-of-the-art uses of different kinds of emerging tech (including AI) in governments. OPSI’s AI primer seeks to complement the great work of these teams.

We look forward to hearing your thoughts on the draft, and we hope you can help us ensure that this is a useful tool for public servants to better understand AI, how it can be used in government, and the associated challenges and implications.


Findings from the Primer

AI has a long history and its definition and purpose are context-specific

Although AI has been a hot topic in recent years, it has been researched and discussed for over 70 years. Over that time, there have been several cycles where expectations soared, but then people turned away after the tech failed to live up to its hype. There is no uniformly accepted definition of AI because it means different things to different people, including those of us in the public sector. The primer seeks to provide civil servants with a knowledge base about the history of AI, what it can mean for government, and where it may be going in the future.

AI is technically complex, but civil servants need to know the basics

At a technical level, while there are a variety of forms of AI, all AI today can be classified as “narrow AI”. In other words, it can be leveraged for specific tasks for which computers are well suited, such as understanding text, classifying objects, and understanding spoken language. Machine learning approaches such as “unsupervised learning”, “supervised learning”, “reinforcement learning”, and “deep learning” hold significant potential for a variety of tasks, yet each has its own strengths and limitations. While complex, each of these can be broken down into building blocks. The primer seeks to explain them in a way that provides civil servants with essential details, but doesn’t weigh them down with a level of technical detail that most won’t need.
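To give a flavor of one of those building blocks, the sketch below illustrates “supervised learning” in miniature: the algorithm is handed examples that a human has already labeled, and it labels new inputs by analogy. It is a toy nearest-neighbor classifier written for illustration only; the points and the labels (“routine” vs. “complex”) are invented and do not come from the primer.

```python
# A minimal illustration of supervised learning: a 1-nearest-neighbor
# classifier. Each training example is a (point, label) pair supplied by
# a human; a new point receives the label of its closest neighbor.
import math


def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    closest = min(train, key=lambda ex: math.dist(ex[0], point))
    return closest[1]


# Hand-labeled examples (invented for illustration): two clusters of cases.
examples = [
    ((1.0, 1.0), "routine"), ((1.2, 0.8), "routine"),
    ((8.0, 9.0), "complex"), ((9.0, 8.5), "complex"),
]

print(nearest_neighbor(examples, (1.1, 0.9)))  # → routine
print(nearest_neighbor(examples, (8.5, 9.2)))  # → complex
```

Real systems use far richer models, but the pattern is the same: the quality of the human-provided labels bounds the quality of the predictions, which is why the primer’s discussion of data quality matters.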

Governments hold a special role in the AI ecosystem, and they are taking action

AI holds great promise for the public sector and places governments in a unique position. They are charged with setting national priorities, investments and regulations for AI, but they are also in a position to leverage its immense power to innovate and transform the public sector, redefining the ways in which it designs and implements policies and services. OPSI has done an initial mapping and has identified 38 countries (including the EU) that have launched, or have plans to launch, AI strategies. Of these, 28 have (or plan to have) a strategy that focuses specifically on public sector AI. Many governments have also launched real-world projects that use AI to improve government in many ways, as discussed throughout the report.

While the potential is great, so are the considerations governments must make

Through research and interviews, a number of key considerations have risen to the surface. As discussed in the primer, governments must:

  • Provide support and a clear direction but leave space for flexibility and experimentation.
  • Develop a trustworthy, fair and accountable approach to using AI.
  • Secure ethical access to, and use of, quality data.
  • Ensure government has access to internal and external capability and capacity.

The volume of considerations that civil servants must take into account may seem overwhelming. However, governments have devised approaches to addressing them. This guide discusses these approaches, and the potential exists for some to be adapted for use in other governments and contexts.

AI can support all facets of innovation

As OPSI’s work has shown over the last few years, innovation is not just one thing. Innovation takes different forms, and each should be considered and appreciated in the public sector. OPSI has identified four primary facets to public sector innovation:

  • Mission-oriented innovation sets a clear outcome and overarching objective for achieving a specific mission.
  • Enhancement-oriented innovation upgrades practices, achieves efficiencies and better results, and builds on existing structures.
  • Adaptive innovation tests and tries new approaches in order to respond to a changing operating environment.
  • Anticipatory innovation explores and engages with emergent issues that might shape future priorities and future commitments.

AI is exciting because it is a general-purpose technology with the potential to cut across and touch on multiple facets of innovation. For instance, global leaders already have strategies in place to build AI capacity as a national priority (mission-oriented). AI can be used to make existing processes more efficient and accurate (enhancement-oriented). It can be used to consume unstructured information, such as tweets, to understand citizen opinions (adaptive). Finally, in looking to the future, it will be important to consider and prepare for the implications of AI for society, work, and human purpose (anticipatory).

We want your feedback

We need your thoughts in order to ensure this primer is useful to civil servants. Please submit all comments, feedback, and contributions by 15 September 2019. You can do this in three ways:

  1. Adding comments to a collaborative Google Doc.
  2. Adding comments and edits (in tracked changes) directly to a .doc version and e-mailing it to us at [email protected]. A PDF version is also available.
  3. Leaving comments at the end of this blog post.
  1. Hi,
    Ensure that there is diversity on all Artificial Intelligence (AI) teams, including the programmers and developers who build and validate AI models, in order to minimize bias and maximize equity. The harmful effects of failing to achieve diversity in AI can be seen in New York City, where algorithms used by the police have resulted in the profiling of Black people, and in the many companies building AI tools whose workforces generally include only a small percentage of women. The resulting products are often poorly built algorithms with devastating results, including outcomes that exude racism, sexism, classism, antisemitism, etc. Women and people of color are, at minimum, a necessary check on algorithms before and during their build-out to prevent discrimination and unintended bias in models, especially when the interventions these models are designed for are either punitive (racial profiling by police) or assistive (missing the intended vulnerable populations for health and social services). Overall, AI needs to be fair and equitable if it is to be trusted by the populations it is intended to serve.

  2. Introduction
    The Portuguese Psychologists Association defines who can access and practice Psychology in Portugal.
    We are very interested in AI, and in all the new technologies associated with the 4th Industrial Revolution, due to the impact they have on behavior, and also because knowledge of behavior has a strong impact on these new technologies.

    The first comment that we would like to make is that the different countries developing AI come from different historical, social and cultural perspectives, as well as from a perception of human rights that is not universal. It is therefore necessary that AI systems do not reproduce the biases and prejudices that result from the historical, social and cultural differences of their countries of origin. The robot Sophia is an example of this, since she is a Saudi citizen and appears to have more rights than women in Saudi Arabia.

    In this regard, it is of the utmost importance that whatever developments AI may bring, they should always pass through the scope of respect for fundamental rights, principles and values. To respect for human dignity we would add the idea of helping people to make their own decisions, and we thought the notions of equality, non-discrimination and solidarity, including the rights of persons belonging to minorities, would be useful to the discussion.

    In that vein, and speaking of these principles, we think the following should be added: the principle of responsibility, taking into account Hans Jonas’s perspective and including a principle of social responsibility, and the principle of integrity, a very important one stating that everyone should consider all of these principles before undertaking any course of action.

    On the Principle of Explicability (“Operate transparently”), we consider that responsibility requires helping others to realise their own nature. AI should be responsible for promoting every person’s autonomy and well-being. This autonomy is based on increased self-knowledge, which will help and empower people to make more conscientious decisions of their own. Responsibility is also linked with the conflict between personal and societal interests. The interests of society, as well as the interests and rights of each individual, must be taken into account. The difficulty is that individual interests often collide with societal ones. In such cases, AI must try to eliminate the potential negative consequences for each and find the best possible outcome. Nevertheless, it should be clear that the individual should come first.

    Integrity implies coherently applying these ethical principles of AI in order to make it more and more accessible to the general public. As such, integrity helps to promote acknowledgement of, and trust in, the profession. Integrity as defined here might be compromised whenever an agent allows itself to be influenced by its own interests or beliefs. In the end, one must pay attention to potential conflicts of interest, which may later put the AI in the position of having to disrespect these ethical principles, even if involuntarily.
    AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism. There is the problem of receiving inputs according to what we “like” and the people we “like”: the grounds for decision-making then only appear well founded, since they come from a reduced spectrum with little contradiction, and may therefore cause biases. It is also important to generate new forms of giving consent.

    On potential longer-term concerns, we are worried about several topics: 1) Big Data – we consider that neutral institutions should keep people’s data and use it only in people’s interests; 2) Jobs and Wages – we think it would be useful to find processes by which AI pays taxes, and to develop interests and activities for people; 3) how to deal with machine learning and human enhancement.

    General Comments

    In general terms, it is important to point out the scarce participation of specialists in Psychology, taking into account all the concepts and psychological processes involved and the contributions that Psychological Science can bring to the AI area (in fact there is no psychologist in the Group that makes up the High-Level Experts Group). It is also worth mentioning the diminished emphasis on education and the promotion of digital literacy in AI of citizens, in particular as opposed to the concept of Trust (the latter is referenced 125 times throughout the document, while the words education or skills appear only 7 and 5 times, respectively).

    Specifically, we consider it imperative to address the issue of human enhancement, which is not addressed in this document. Similarly, it is also necessary to address the issue of decision-making – how AI can interfere with the way people usually make decisions. We think that differences across the world, and how AI can affect the relations between countries and powers, should be another topic addressed, as well as Big Data as one of the present big challenges for world governance.

    Finally, we consider it useful to extend the process of analysis and decision making of documents such as these, implying a greater diversity of participants in the process of defining essential concepts for the later streamlining of processes.

  3. As a Code for Canada fellow at the Canada Energy Regulator, I am part of a team that is redesigning a key energy database containing historical energy, environmental, Indigenous, and energy-regulation information for the nation. Much AI is embedded in its functioning, and it relies heavily on an accurate timeline of events. For example, over 50 years of permafrost data for our tundra lies in this database; it must remain accurate over time for us to track changes in the climate through these decades. In my work, one key component of AI policy development which hasn’t been touched on here is the government’s role in data ethnography, which refers to the analysis of how people live with data over time.
    All AI algorithms are trained with datasets, and these are, by necessity, datasets collected from our past. For example, copyright licenses expire after 50 years or more, and works then enter the public domain via open data. This data from books, art, and more has a history and, with it, the many biases of that history, which we may not want to repeat. The issue becomes apparent when we train assistant bots such as Siri and Alexa with female voices, or when Google Images correlates gorillas with African faces. Using datasets from our past repeats that past. Moreover, this data is incomplete and always changing, which is concerning given that data forms the basis of many policy decisions. Understanding where that data comes from, where it has traveled, and its general behavior is key to trusting the data. Measuring this path not only ensures decision-making is conducted in an informed way, but lays the groundwork for assessing algorithmic impact. It gives us a way to interrogate the data and find answers informed by the limits of the data.
    To put words to this ethnographic analysis of AI and algorithms, the Max Planck Institute for Human Development proposes a radical idea: the best way to understand them is to observe their behavior in the wild. In Nature, Rahwan (and 22 colleagues) calls for the inauguration of a new field of science called “machine behavior.” Terms are yet to be created to define this ethnographic analysis of AI and algorithms; machine behavior is only one way to describe it. Another approach is to refer to this kind of data as “thick data”, in which “ethnography and big data analytics can work together to provide a more comprehensive picture of big data, and can thus generate more societal value together than each approach on its own.” To me, this combination asks: what of machine sociology? In what ways are algorithms socialized? How does this impact our decision-making on an individual basis, as a society, as governments? In what ways can we educate, but also facilitate, the public’s access to AI and AI algorithms – can they create their own AIs with their own data? In Estonia, citizens have access to a portal with their own data, which they can access at any time. This record of their data can serve as a governmental ethnographic analysis of each citizen’s data, upon which informed decisions can be made, policy or otherwise. David Eaves of Harvard Kennedy School, in his parliamentary address as evidence in Canada earlier this year, stated that it is “easy to pull disparate information about a citizen all together to get a very clear view about who that person is, and then to offer that information to different parts of government as it’s trying to do its service. This is very different from many other countries…”.
    I’d venture to say that this is just one way to do it; we have yet to re-imagine what this could look like in other countries, in other government contexts, and in our many futures as a global society.
