New possibilities are often accompanied by new dangers. That is why we have taken a closer look at potentially disadvantaged groups and believe that women deserve particular attention.

It quickly became clear to us that it is not easy to single out one disadvantaged group as a whole and then formulate demands on its behalf.
For this reason, we have identified three groups. The first is "disadvantaged groups", i.e. people who are directly affected by decisions made by Artificial Intelligence. These include, above all, women, people with disabilities, people in precarious working or living conditions, and people with limited access to education. The second group is "people affected in military contexts", i.e. people who could suffer from the influence of Artificial Intelligence because they live in war or conflict zones. New technology is usually tested first in the military sphere, and when power is tied to new weapons, this can endanger the civilian population living there. The last group comprises "decision-makers in public agencies". These people, mostly politicians and those working in government and legislation, must rely on industry and its experience because they lack sufficient knowledge of their own. The absence of a legal framework for Artificial Intelligence can create new markets and new centres of power that exert a major influence on legislation and policy-making.
Ultimately, we decided to focus on "women" from the first group. In the following, we set out demands that can protect this group and, above all, involve it, so that progress toward equal rights and equal access is not undermined but strengthened.

Even today, women do not have the same access to the world of work as their male counterparts, whether in management positions or in hiring, where women are rejected because of their age or the possibility that they might become pregnant in the coming years. Despite supposed progress toward equality, such decisions still occur frequently. We asked ourselves whether decisions against women might become even easier to make in the future. Alongside all their possibilities, programmed algorithms raise the suspicion that it could feel morally easier to decide against women when the decision appears to rest on the program rather than on a person.

For this reason, we have developed the following demands for the development and successful integration of Artificial Intelligence:

  • Women-specific information, education, and experience spaces through which experiences can be exchanged, knowledge imparted, and problems overcome.
  • Outreach campaigns that draw attention to possible problems and also enable women to become directly involved in the development of Artificial Intelligence in society, including those who would not otherwise engage with it.
  • Awareness training for developers and all those involved in Artificial Intelligence. It is important to create understanding and awareness from the beginning, so that women are not put at a disadvantage by new developments.
  • An anti-discrimination commissioner with a focus on Artificial Intelligence, which would also help build awareness and expertise.
  • Introduction of quotas for women on future committees concerned with Artificial Intelligence.
  • Monitoring of algorithm-based decision processes. It must not happen, as it already has several times, that algorithms emerge which exclude or disadvantage certain groups from the outset.
  • More productive cooperation and exchange between politicians/decision-makers and experts.
  • Establishment of a founding board to oversee this development.
  • In the development of algorithm-based evaluations, including in medicine, it is also important that gender-specific data is collected. Here, too, there must be no generalization.