Over the last two years, a multitude of efforts to establish ethical guidelines for the responsible development and use of AI have emerged. These initiatives have come almost exclusively from civil society and the private sector, and predominantly from the West; they often consist of positive, aspirational principles that lack specificity. It is beyond the scope of this text to list them all, but here are some of the more prominent ones:
- Asilomar AI principles (https://futureoflife.org/ai-principles/)
- Montreal Declaration (https://www.montrealdeclaration-responsibleai.com/the-declaration)
- IEEE (https://ethicsinaction.ieee.org/)
- Google (https://www.blog.google/technology/ai/ai-principles/)
- Microsoft (https://www.microsoft.com/en-us/ai/our-approach-to-ai)
- OpenAI (https://blog.openai.com/openai-charter/)
- ACM (https://ethics.acm.org/)
- FAT ML (http://www.fatml.org/resources/principles-for-accountable-algorithms)
- UNI (http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf)
- Japanese Society for Artificial Intelligence (http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf)
Simply adding another initiative has limited value in itself. Similarly, it is impossible to get everyone to agree to the same principles immediately. However, there is still great potential in, first, mapping existing initiatives at the level of individual principles and, second, translating aspirational goals into more specific commitments. This could, for example, involve tasking a small group with aggregating and ordering the principles as well as relevant legislation, and then holding a series of inclusive workshops with relevant global stakeholders from governments, the private sector, academia and civil society on how these principles could be operationalized. Subsequently, a narrower group of experts could write a manual based on these workshops that not only explains the principles but also spells out what they mean in specific cases and which policy instruments exist to enforce them.
Such an approach can be effective because it not only ensures an inclusive stakeholder process but also makes ongoing governance efforts complementary rather than rivalrous (see https://doi.org/10.1093/isq/sqv018). Two recent examples of such mapping processes are 1) the Montreux Document (https://en.wikipedia.org/wiki/Montreux_Document) on pertinent international legal obligations and good practices for states related to operations of private military and security companies during armed conflict, and 2) the Tallinn Manual (https://en.wikipedia.org/wiki/Tallinn_Manual) on the international law applicable to cyber warfare. The former demonstrates how a small country like Switzerland can play a central role in defining norms. The latter is de facto by far the best legal framework for state behavior in cyberspace; however, it suffers from a problem of input legitimacy, as it was drafted by a NATO institution.
This idea is somewhat similar to the Policy Kitchen idea by Forslund et al. of hosting a global workshop on national AI strategies, but it differs in several respects: a) national AI strategies are mostly focused on economic development, whereas this effort focuses on norms and rules; b) civil society and the private sector should be included in the norm-defining process; c) the public and the discussion should be provided with an aggregate of previous norm efforts as an input; and d) there should be a concrete output in the form of AI ethics principles with global input legitimacy and some form of AI policy toolbox that can be used by companies and governments. Having said that, one could also aim to subsume such an effort into an “IPCC” for AI.