In Europe, we now have informed and free choice of cookies. We should provide no less for explicit and implicit AI standards.

To achieve reliable and correct operation, computer system engineers develop explicit or implicit standards. Shannon's original theory of information contains an example of an implicit standard in the form of a dictionary of the English language used for calculating information content. A more recent example is the knowledge graph, such as those contained in search engines: a specific semantic model of the world, expressed in a specific language. Over time, such standards might adapt to conform with the (evolving) expectations of the general public. But standards also shape public discourse. People might still have the freedom to diverge from dictionary definitions of certain words, but the transformative power of AI technology has the potential to enforce standards in every area of life. A soft version of this power might be teaching search engine users which terms can be considered equivalent and which cannot. A hard version might be AI systems interpreting laws based on a standardised interpretation of language and facts.

New standards hold the potential to facilitate consensus building, and thus efficiency, economic prosperity and the propagation of shared values. But human creativity relies in part on openness to a certain level of inconsistency and variation. The tight cooperation of humans with artificial intelligence might therefore also have negative consequences for creativity and innovation. And the power to define and enforce standardised, meaningful models of the world can easily be used to silence dissent, explicitly or implicitly.

Who are the agents of standardisation? Will a limited number of multinational corporations supply them? Will humans have a transparent choice between competing standards? Will national or international rules lead to participation, transparency and choice for individuals and groups? Or will nation states monopolise the standardisation of language, meaning, truth and intelligence?

Right now, artificial intelligence is standardising our languages (through semantic search via dominant search engines), our understanding of the world and the development of our worldviews (through content recommendations), and the definition of objective truth (through the censorship of "false" information). This standardisation process is to a certain degree unavoidable, given the nature of information technology described above. But to preserve an environment that is inclusive not just of people but also of different and dissenting views of the world, we should mandate competition and diversity among AI standards, transparency about where and how these standards are used, and informed and free choice for end users over the AI standards used for and on them.