Artificial intelligence models inadvertently acquire gender, race or class biases.
Algorithms are racist. They are sexist. They discriminate by purchasing power, by nationality, on countless grounds. This is something we have come to accept even as artificial intelligence has settled on everyone’s lips.
Once the complaint has been made, artificial intelligence experts try to correct the biases their algorithms are born with, or acquire. But it is far from easy. The problem is that biases go unnoticed, and sometimes they are so deeply embedded in the data the models are trained on that removing them becomes genuinely difficult.
The data used to train the algorithms is usually blamed for these deviations, but bias is also found in the design of the system. When managers decide what they want a machine learning or deep learning model to achieve, they are shaping how the software will behave. The goal itself can be biased rather than fair. A business might optimize for economic benefit, which begins by distinguishing between customers who spend more and those who spend less. It would not be surprising if this translated into differentiation by purchasing power, indirectly discriminating on those grounds.
Of course, the data itself can also be to blame for algorithmic bias. It may not be representative of reality as a whole, or it may simply reflect existing prejudices. The world is far from perfect, and acknowledging those imperfections is the first step toward correcting them. But it is this imperfect information that the algorithms are trained on.
To illustrate with an example that has actually happened: the managers of a human resources department commission a model to help recruit the most suitable candidates. However, the data fed into the algorithm includes past hiring decisions in which men were favored over women. The artificial intelligence can only inherit this bias, one the human resources managers today would probably reject.
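A minimal sketch of how that inheritance works, using invented, synthetic data (the feature names, numbers and choice of a logistic regression are assumptions for illustration, not a description of any real recruitment system):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical candidate features: years of experience and a gender flag
# (1 = man, 0 = woman). Experience is drawn identically for both groups.
gender = rng.integers(0, 2, size=n)
experience = rng.normal(5, 2, size=n)

# Simulated past decisions: equally qualified candidates, but men were
# hired more often. The label itself encodes the historical prejudice.
hired = (experience + 2 * gender + rng.normal(0, 1, size=n)) > 5

X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, hired)

# The model assigns a large positive weight to the gender flag:
# it has simply learned to reproduce the old decisions.
print(dict(zip(["experience", "gender"], model.coef_[0])))
```

Nothing in the training procedure is malicious; the model just finds that gender predicts the historical outcome and uses it.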
When data contains prejudices, it is hard to get rid of them. Some improvements are obvious. In the case of the recruitment algorithm, for example, the gendered terms that appear in the justifications of the decisions used to train the model can be filtered out. But it is very difficult to clean the information completely: other words or forms of expression may also carry the gender bias and go unnoticed.
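A hypothetical sketch of that "obvious" fix, stripping explicitly gendered words from the text before training. The word list and example sentence are invented, and the point is precisely that proxy expressions survive the filter:

```python
import re

GENDERED_TERMS = {"he", "she", "him", "her", "his", "hers",
                  "man", "woman", "male", "female", "mr", "mrs", "ms"}

def strip_gendered_terms(text: str) -> str:
    """Drop tokens that appear in a fixed list of explicitly gendered words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return " ".join(t for t in tokens if t not in GENDERED_TERMS)

justification = "She was rejected because her leadership style seemed too assertive."
print(strip_gendered_terms(justification))
# -> "was rejected because leadership style seemed too assertive"
# A word like "assertive" can still act as a gender proxy and go unnoticed.
```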
The working method
Experts in artificial intelligence invest a great deal of effort in building accurate models that answer a specific problem. From there, it is common practice, if an algorithm works well, to try applying it to another task. The task may be similar, but the social context sometimes changes. The design of the first model does not account for the new environment in which it will operate, so it may carry assumptions that, in their new context, become prejudices.
There is also a two-stage preparation of the model. It is first trained on one data set and then tested on another. But in reality these are usually two slices of the same body of information, divided between the two stages. The artificial intelligence is never exposed to much diversity and is therefore more vulnerable to algorithmic bias.
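A minimal sketch of that common practice, using a synthetic stand-in dataset (the data and model are assumptions for illustration): a single dataset is split in two, so both halves come from the same source and the evaluation never exposes the model to a genuinely different population.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One body of data, divided into a training slice and a test slice.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # looks good, but only on the same distribution

# A stronger check would also evaluate on data collected in a different
# context (another region, period or population), not just a held-out
# slice of the original set.
```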
Finally, we also enter the realm of philosophy. What is neutrality? Obtaining completely unbiased results in a prediction is really a chimera. And that is precisely what artificial intelligence models do: predict the unknown.
Images: comfreak, insspirito