Background
Several multilateral organizations, such as the UN, OECD, EU and G7, have recently published principles and guidelines for an ethical trajectory of AI. The principle of “inclusion” features prominently in many of these documents.
This principle is important because AI technology has the potential to exacerbate inequities on many levels: between corporations, governments and citizens; between people with access to the technology and those without; between richer and poorer countries, and so on. Inclusion — be it in research, development, deployment, governance, or simply public discourse — is meant to mitigate some of these effects.
However, it remains unclear what inclusion really means: Who should be included? When and where? And how should this be accomplished? Given the global scale and fast pace of technological development, these questions may shape the future trajectory of our societies.
There is thus an urgent need for concrete ideas on how to operationalise the principle of inclusion in practice. How would you ensure that relevant stakeholders are at the table when decisions — human or algorithmic — are made?
The more specific and actionable your idea, the better! Very general ideas like “more cooperation between stakeholders” are difficult to implement. Try to clarify: What is the specific issue you address? How do you propose to resolve it? Who should act?
The final selection by the advisory board will be made according to the criteria ‘innovativeness’, ‘inclusiveness potential’ and ‘feasibility’.
Further reading
(feel free to add additional resources in the comments below)
Definition of AI
There is no universally accepted definition of AI and several governments and organisations have published competing definitions. For the purposes of this project, we use the recently published definition by the European Commission, as it is explicitly framed for a non-expert public, elaborated in detail, and avoids references to ‘human-like’ behaviour:
“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”
Principles and guidelines
- The UN High-Level Panel on Digital Cooperation published its report The Age of Digital Interdependence in early June 2019, calling for an “inclusive digital economy and society” (“inclusive” is mentioned 59 times in 40 pages).
- The OECD published its Principles on Artificial Intelligence in May 2019, with chapters on “inclusive growth” and fairness, recommending better public access to AI, data sharing, and consideration of social diversity. The principles have already been adopted by over 40 countries.
- The Ethics Guidelines for Trustworthy AI, published in April 2019 by the European Commission, state that “we must enable inclusion and diversity throughout the entire AI system’s life cycle” and mention as stakeholders “those involved in making the products, the users, and other impacted groups”, including “society at large”.
- The G7 leaders issued a “common vision for the future of AI” in 2018, including “involve women, underrepresented populations and marginalized individuals as creators, stakeholders, leaders and decision-makers at all stages of the development and implementation of AI”.
- The Partnership on AI includes over 80 industry leaders and non-profits in the AI sector. They published 8 tenets they commit to, including to “actively engage stakeholders” and to “strive to understand and respect the interests of all parties that may be impacted by AI advances”.
- Many industry leaders have published their own principles and guidelines, e.g. Google’s AI Principles, IBM’s Everyday Ethics for AI and AI Fairness 360, and Microsoft’s AI principles (“inclusiveness”). Leaders of Tencent and Baidu have also discussed their principles in public.
- At the 40th International Conference of Data Protection and Privacy Commissioners in 2018, a Declaration on Ethics and Data Protection in Artificial Intelligence was published, including 5 points on the “creation of opportunities for public engagement” (p. 5).
- Civil-society initiatives include the Asilomar AI principles on ethics (“shared prosperity”, “human values”), the Montreal Declaration for Responsible AI (“democratic participation”, “diversity inclusion”), the Toronto Declaration (based on the protection of human rights), and the AI Universal Guidelines by The Public Voice.
- There are also concrete recommendations on bias and diversity in the AI sector by AI Now.
- The CITRIS Policy Lab at UC Berkeley has compiled a list of ethical AI principles and guidelines from government, industry, standards bodies, NGOs, etc.
Global conversations
- The Future Society led a global debate on governing the rise of artificial intelligence, which surfaced inspiring ideas like “a multi-stakeholder, interdisciplinary and collective intelligence process for policy making and oversight” and “engage a diverse and inclusive community by building grassroots movements”.
- The Berkman Klein Center at Harvard also led a global dialogue on AI. Together with other partners, they organized the Global Symposium on AI Inclusion. Check out their reader on the topic.
- Summary report of the Global Governance of AI Roundtable 2018 in Dubai, and a few takeaways from this year’s edition by Marek Havrda.
Proposals for the global governance of AI
- Gasser and Almeida, 2018: a Conceptual Framework for AI Governance to bridge consumers’, developers’ and policymakers’ informational gaps.
- ITU, 2018: Artificial Intelligence for Development Series: Interfaces, Infrastructures, and Institutions for Policymakers and Regulators
- Kemp et al., 2019: a model for “inclusive, reflexive and anticipatory international governance of AI.”
- The Oxford Future of Humanity Institute, 2019: International Standards to Enable Global Coordination in AI Research & Development
- Erdélyi and Goldsmith, 2018: establishment of an international AI regulatory agency for ethical and legal issues on AI
AI Ethics Standards
- OCEANIS (the “Open Community for Ethics in Autonomous and Intelligent Systems”) brings together actors working on standards in AI.
National governance efforts
- The Future of Life Institute maps national AI policies, alongside great background readings on the promises and challenges of the technology. For a deeper dive, read Jessica Cussins’ CLTC report (with a focus on security).
- Useful overviews of national AI governance efforts can be found in a recent Forbes article and on Medium.
- Singapore’s Proposed Model AI Governance Framework
- France's For a Meaningful Artificial Intelligence - Towards a French and European Strategy