Artificial Intelligence (AI) has the potential to influence the everyday lives of many people. Beyond technical circles, there is therefore a broad public debate about the need for greater transparency in the use of AI systems.
A central theme is the accusation that AI systems discriminate against certain groups of people, for example by recognizing dark-skinned people less accurately, by allowing residents of certain neighborhoods to order from online shops only against advance payment, or by offering women lower salaries in hiring processes. In fact, AI systems do only one thing: put very simply, they analyze data from the past and derive conclusions from correlation analysis in order to process new data.
What AI systems cannot do is recognize causal relationships. An AI system cannot "explain" why two employees receive different salaries for comparable work, let alone whether the difference is justified.
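To make the distinction concrete, here is a minimal Python sketch using an invented ice-cream/drowning example that is not from the article. It shows how a purely correlational model produces a statistically sound fit between two variables that merely share a hidden common cause.

```python
import numpy as np

# Hypothetical illustration (not from the article): two variables that
# share a hidden common cause but have no causal link to each other.
rng = np.random.default_rng(0)

temperature = rng.normal(size=1000)                    # hidden confounder
ice_cream = 2.0 * temperature + rng.normal(size=1000)  # driven by temperature
drownings = 1.5 * temperature + rng.normal(size=1000)  # also driven by temperature

# A purely correlational model happily "predicts" one from the other.
r = np.corrcoef(ice_cream, drownings)[0, 1]
slope, intercept = np.polyfit(ice_cream, drownings, deg=1)

print(f"correlation: {r:.2f}")          # strong, roughly 0.7
print(f"fit: drownings ~ {slope:.2f} * ice_cream + {intercept:.2f}")
# The fit is statistically sound, yet says nothing about causation:
# banning ice cream would not prevent a single drowning.
```

The same holds for a model trained on historical salary data: it reproduces the correlations it finds, without any notion of whether they are justified.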
We must not confuse cause and effect in the discussion about the application of AI. AI systems do not create social problems, but they can make existing injustices and disadvantages visible. The use of AI systems carries the risk of entrenching such distortions, but also the chance to recognize and eliminate them.
Social problems cannot be eliminated by AI systems; they can only be solved socially, that is, politically. Our legal system already has rules to prevent discrimination, such as the Federal Act on Equal Treatment (GlBG). These apply to manual decision-making processes just as they do to AI systems. It is therefore not the AI system that discriminates, but the selection of the training data, the classifications, and the attributes on which the assessment is based.
What we need is more expertise and social awareness in the selection of the data with which we train AI systems. For example, an AI system can select candidates to invite for interviews from the pool of incoming job applications. To ensure equal opportunities, the training data in this case must be balanced in terms of age and gender, among other factors. Through continuous quality management, such as questioning the decisions made by the AI, we must ensure that the training data are representative of the intended use and do not contain any unintended (or even intended) biases; a simple balance check of this kind is sketched below. And this must be made transparent in order to build trust among those affected and in the public.
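As a minimal sketch of such a check, the following Python snippet audits training records for group balance before a model is trained. The field names, the toy data, and the 20 percent tolerance threshold are illustrative assumptions, not requirements from the article.

```python
from collections import Counter

def audit_balance(records, attribute, tolerance=0.20):
    """Flag groups whose data share deviates from parity by more than `tolerance`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    parity = 1.0 / len(counts)  # equal share per group
    return {group: (n / total, abs(n / total - parity) > tolerance)
            for group, n in counts.items()}

# Toy records standing in for historical application data (invented values).
applications = [
    {"gender": "f", "age": 29}, {"gender": "m", "age": 41},
    {"gender": "m", "age": 35}, {"gender": "m", "age": 52},
    {"gender": "m", "age": 33}, {"gender": "m", "age": 27},
]

for group, (share, flagged) in audit_balance(applications, "gender").items():
    print(f"{group}: {share:.0%} of records -> {'IMBALANCED' if flagged else 'ok'}")
# Output: f: 17% of records -> IMBALANCED
#         m: 83% of records -> IMBALANCED
```

Such a check belongs in continuous quality management: rerun it whenever the training data are refreshed, and compare the model's actual selection rates per group against these baseline shares.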
And that still requires human intelligence.
By Werner Achtert, Managing Director Public Sector, msg. Published in .public 01-2021.