ARE ALGORITHMS RACIST?
Artificial intelligence systems are supposed to enable more objective and balanced decisions. But they can fail.
In the future, more and more decisions in our society will be handed over to automated systems that not only assess whether we get a job, a loan or an apartment, but also recognize tumors, produce weather forecasts or write texts. These machine learning systems, which are often based on neural networks and labelled “artificial intelligence” (AI), promise to process more information faster and more efficiently. One day, the promise goes, they will even produce fairer results than humans with their “limited” cognitive abilities.
However, teaching machines to make reasoned, context-sensitive decisions requires more than raw computing power. What connects AI and its algorithms to the reality of people’s lives is, above all, data.
Today, therefore, everything revolves around training AI models on ever larger data sets and refining ‘smart’ algorithms so that they pull from the sea of data exactly those pieces that are relevant to the respective “optimization problem” – be it a loan decision or a court verdict – and arrive at the right solution. Ever more detailed mathematical models with countless parameters and millions of arithmetic operations promise an “objectification of the subjective”, that is, clearer and better decisions. But data, which is supposed to standardize values and norms, often develops an authority of its own that is more than questionable.
The sea of data from which the models draw their information offers no guarantee of automated justice. Depending on the data set, it can be full of distortions (biases), discriminatory assumptions and historical quirks, so that even the smartest algorithms often cannot help but reproduce a problematic status quo. AI systems used for facial recognition, for example, frequently fail to recognize people of color correctly because they were trained predominantly on photos of white people. Robert Julian-Borchak Williams was detained by Detroit police for 30 hours in January 2020 although he had done nothing wrong: the AI had failed and misidentified his face, and the police relied on its verdict for far too long. AI-supported evaluation systems likewise reveal problematic biases again and again. At the US company Amazon, women were disadvantaged in hiring processes for years because the training data was tailored to men.

Algorithms, however mathematically formalized, are neither objective nor neutral, because they are based on historical data sets and are programmed by humans. True to the motto ›The limits of my data are the limits of my world‹, they never act apolitically or free from ideology. They not only perpetuate erroneous conclusions, they can even manifest racist discrimination, prejudice and gender injustice, and they do so in a quite systematic way.
A lot of effort is therefore put into teaching algorithms to make socially acceptable decisions, using “selected data sets” or complex tools for assessing fairness. But data alone can only inadequately reflect the plurality, complexity and ambivalence of our societies. Today, more than ever, we have to ask critically where we actually want to hand decision-making power over to algorithms. The idea that fishing in the sea of data could spare us processes of social negotiation or reasoned, value-oriented decisions may seem quite ‘smart’ today. But perhaps it is not so smart after all.
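To make the notion of a “tool for assessing fairness” a little more concrete, here is a minimal sketch in Python of one of the simplest such checks, the demographic parity gap between two groups of applicants. The decision data, the group labels and the choice of metric are illustrative assumptions, not taken from any system mentioned above; real fairness audits juggle many competing metrics, which is precisely why data alone cannot settle what counts as fair.

```python
# Minimal sketch of a demographic parity check (illustrative, hypothetical data).

def selection_rate(decisions):
    """Share of positive decisions (e.g. loan approvals) within a group."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approved, 0 = rejected), split by group.
decisions_group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # applicants coded as group A
decisions_group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # applicants coded as group B

rate_a = selection_rate(decisions_group_a)
rate_b = selection_rate(decisions_group_b)

# Demographic parity gap: how far apart the approval rates are.
# A value near 0 indicates parity on this one metric only; it says nothing
# about other notions of fairness (equal error rates, calibration, ...).
parity_gap = abs(rate_a - rate_b)

print(f"approval rate A: {rate_a:.2f}, approval rate B: {rate_b:.2f}")
print(f"demographic parity gap: {parity_gap:.2f}")
```

Even in this toy example, the single number at the end compresses away everything about why the two groups were treated differently, which is exactly the limitation the column points to.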
This article first appeared in taz.FUTURZWEI, issue 18/2021, as part of the column of the working group for sustainable digitality, to which the authors belong. The working group cooperates with the Digital Ecology Council. In the column, members of the working group question current digital developments and offer socio-ecological perspectives on digitization.