Several multilateral organizations, including the UN, OECD, EU and G7, have recently published principles and guidelines for the ethical development of AI. The principle of “inclusion” features prominently in many of these documents.

This principle is important because AI technology has the potential to exacerbate inequities on many levels: between corporations, governments and citizens; between people with access to the technology and those without; between richer and poorer countries, and so on. Inclusion — be it in research, development, deployment, governance, or simply public discourse — is meant to mitigate some of these effects.

However, it remains unclear what inclusion really means: Who should be included? When and where? And how should this be accomplished? Given the global scale and fast pace of technological development, these questions may shape the future trajectory of our societies.

There is thus an urgent need for concrete ideas on how to operationalise the principle of inclusion in practice. How would you ensure that relevant stakeholders are at the table when decisions — human or algorithmic — are made?

The more specific and actionable your idea, the better! Very general ideas like “more cooperation between stakeholders” are difficult to implement. Try to clarify: What is the specific issue you address? How do you propose to resolve it? Who should act?

The final selection by the advisory board will be done according to the criteria ‘innovativeness’, ‘inclusiveness potential’ and ‘feasibility’.


Further reading

(feel free to add additional resources in the comments below) 


Definition of AI

There is no universally accepted definition of AI and several governments and organisations have published competing definitions. For the purposes of this project, we use the recently published definition by the European Commission, as it is explicitly framed for a non-expert public, elaborated in detail, and avoids references to ‘human-like’ behaviour:

“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.”


Principles and guidelines

Global conversations

  • The Future Society led a global debate on governing the rise of artificial intelligence that surfaced inspiring ideas like “a multi-stakeholder, interdisciplinary and collective intelligence process for policy making and oversight” and “engage a diverse and inclusive community by building grassroots movements”.

  • The Berkman Klein Center at Harvard also led a global dialogue on AI. Together with other partners, they organized the Global Symposium on AI Inclusion. Check out their reader on the topic.

  • Summary report of the Global Governance of AI Roundtable 2018 in Dubai, and a few takeaways from this year’s roundtable by Marek Havrda.

Proposals for the global governance of AI

AI Ethics Standards

National governance efforts