Algorithms: Study on discrimination risks now available in English
Algorithms have the potential to disadvantage people on the basis of certain personal characteristics. This is the conclusion reached by Carsten Orwat, a researcher at ITAS, in a study first published in 2019 and now available in English.
Whether in the granting of loans, the selection of future personnel, or predictive policing – thanks to comprehensive data sets, algorithms can prepare or execute decisions in more and more areas, sometimes with significant consequences. How great is the risk of discrimination associated with such decisions, and how can it be minimized?
Distinguishing by certain characteristics can be problematic
According to study author Carsten Orwat, algorithms have to distinguish between groups of people in many of the fields mentioned. This becomes problematic, however, when the distinction is based on characteristics such as age, disability, ethnic origin, or gender, or on other legally protected attributes.
Using examples, the study shows that these risks are quite real: there is a risk of unequal treatment, for instance, if algorithms differentiate between potential employees by gender, or if they reject credit applicants because of their ethnic origin.
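To make this kind of risk more concrete, the following minimal Python sketch – purely illustrative and not taken from the study – compares the selection rates of a hypothetical hiring filter across a protected attribute such as gender. The data, field names, and the 80% threshold (a common heuristic known as the "four-fifths rule") are assumptions for the example only.

```python
# Illustrative sketch (not from the study): checking whether an automated
# hiring filter selects applicants at noticeably different rates across a
# protected attribute such as gender. Data and names are hypothetical.

from collections import defaultdict

# Hypothetical decisions produced by some screening algorithm: each record
# holds the protected attribute and whether the applicant was shortlisted.
decisions = [
    {"gender": "female", "shortlisted": True},
    {"gender": "female", "shortlisted": False},
    {"gender": "female", "shortlisted": False},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": False},
]

def selection_rates(records, attribute):
    """Return the share of positive decisions per group of `attribute`."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        positives[group] += int(r["shortlisted"])
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions, "gender")
# Heuristic: flag a potential problem when one group's selection rate
# falls below 80% of the highest group's rate ("four-fifths rule").
highest = max(rates.values())
for group, rate in rates.items():
    flag = "check for unequal treatment" if rate < 0.8 * highest else "ok"
    print(f"{group}: selection rate {rate:.2f} -> {flag}")
```

A check like this only surfaces unequal outcomes for a single attribute; it does not by itself establish or rule out discrimination in the legal sense discussed in the study.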
Recommendations on regulation
The study also identifies ways to minimize these risks. One recommendation is to establish preventive measures, such as advisory services for HR and IT managers. In addition, Carsten Orwat recommends reforms and clarifications in anti-discrimination and data protection law, as well as giving equality bodies access to documentation, in order to better protect the rights of those affected. (20.08.2020)