Several multilateral organizations such as the UN, OECD, EU and G7 are drafting or have published principles and guidelines for an ethical trajectory of AI. The principle of “inclusion” is prevalent in those documents.
This principle is important because AI technology has the potential to exacerbate inequities on many levels: between corporations, governments and citizens; between people with access to the technology and those without; between richer and poorer countries, and so on. Inclusion — be it in research, development, deployment, governance, or simply public discourse — is meant to mitigate some of these effects.
However, it remains unclear what inclusion really means: who should be included, when and where, and how should this be accomplished? Given the global scale and fast pace of technological development and deployment, the answers to these questions may shape the future trajectory of our societies.
There is thus an urgent need for concrete ideas on how to operationalise the principle of inclusion in practice. How would you ensure that relevant stakeholders are at the table when decisions — human or algorithmic — are made?
The more specific and actionable your idea, the better! Very general ideas like “more cooperation between stakeholders” are difficult to implement. Try to clarify: What is the specific issue you address? How do you propose to resolve it? Who should act?
The final selection by the advisory board will be made according to the criteria ‘innovativeness’, ‘inclusiveness potential’ and ‘feasibility’.
(feel free to add additional resources in the comments below)
Definition of AI
There is no universally accepted definition of AI and several governments and organisations have published competing definitions. For the purposes of this project, we use the recently published definition by the European Commission, as it is explicitly framed for a non-expert public, elaborated in detail, and avoids references to ‘human-like’ behaviour:
“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”
Principles and guidelines
The Ethics Guidelines for Trustworthy AI, published in April 2019 by the European Commission, state that “we must enable inclusion and diversity throughout the entire AI system’s life cycle”, and name as stakeholders “those involved in making the products, the users, and other impacted groups”, including “society at large”.
- The G7 leaders issued a “common vision for the future of AI”, including a commitment to “involve women, underrepresented populations and marginalized individuals as creators, stakeholders, leaders and decision-makers at all stages of the development and implementation of AI”.
- The Partnership on AI brings together over 80 industry leaders and non-profits in the AI sector. They have published 8 tenets they commit to, including “actively engage stakeholders” and “striving to understand and respect the interests of all parties that may be impacted by AI advances”. Also read Google’s AI Principles and Microsoft’s AI principles (“inclusiveness”).
- The OECD will publish principles and guidelines in May 2019, with chapters on “inclusive growth” and “fairness”.
- At the 40th International Conference of Data Protection and Privacy Commissioners in 2018, a Declaration on Ethics and Data Protection in Artificial Intelligence was published, including 5 points on the “creation of opportunities for public engagement” (p. 5).
- Other “principles and guidelines” documents include the Asilomar AI principles on ethics (“shared prosperity”, “human values”), the Montreal Declaration for Responsible AI (“democratic participation”, “diversity inclusion”), the Toronto Declaration (based on the protection of human rights), or the AI Universal Guidelines by The Public Voice.
- There are also concrete recommendations on bias and diversity in the AI sector by AI Now.
- The CITRIS Policy Lab at UC Berkeley has compiled a list of ethical AI principles and guidelines from government, industry, standards bodies, NGOs, etc.
The Future Society led a global debate on governing the rise of artificial intelligence that surfaced inspiring ideas like “a multi-stakeholder, interdisciplinary and collective intelligence process for policy making and oversight” and “engage a diverse and inclusive community by building grassroots movements”.
- Summary report of the Global Governance of AI Roundtable 2018 in Dubai, and a few takeaways from this year’s by Marek Havrda.
- The UN High-Level Panel on Digital Cooperation is leading a global consultation process and posits that “including all relevant stakeholders in digital cooperation mechanisms will help ensure that the creation, deployment, and governance of digital technologies is inclusive”. Their report should be published in June 2019.
Proposals for the global governance of AI
- Gasser and Almeida, 2018: a Conceptual Framework for AI Governance to bridge the informational gaps between consumers, developers and policymakers.
- ITU, 2018: Artificial Intelligence for Development Series: Interfaces, Infrastructures, and Institutions for Policymakers and Regulators
- Kemp et al., 2019: a model for “inclusive, reflexive and anticipatory international governance of AI.”
- The Oxford Future of Humanity Institute, 2019: International Standards to Enable Global Coordination in AI Research & Development
- Erdélyi and Goldsmith, 2018: the establishment of an international AI regulatory agency to address ethical and legal issues.
AI Ethics Standards
OCEANIS (“open community for ethics in autonomous and intelligent systems”) brings together actors working on standards in AI.
National governance efforts
- The Future of Life Institute maps national AI policies, alongside great background readings on the promises and challenges of the technology. For a deeper dive, read Jessica Cussins’ CLTC report (with a focus on security).
- Useful overviews of national AI governance efforts can be found in a recent Forbes article and on Medium.
- Singapore’s Proposed Model AI Governance Framework.