The Redmond company is working to develop a tool to identify biases in algorithms.
As artificial intelligence advances, doubts arise about its adoption and possible misuse. The problem with this technology is that its essence is automation. Just as the assembly line expanded manufacturing capacity, AI allows tasks that previously required a person's supervision to be delegated to technology.
This means that once tasks no longer need human oversight, they can be replicated massively. Staff have costs (you have to pay a worker who, for example, monitors a newspaper to pick out the news that has value for your company), but an AI system has hardly any once developed, so the same software can monitor hundreds of newspapers with a few small adjustments.
The problem is that if every newspaper is monitored by a different person, each will apply their own judgment. Everyone will tend to make some kind of mistake, because no one is perfect. But AI software will replicate the same error across all those hundreds of tasks. Its mistake will always be the same or, rather, it will always be the result of the same bias.
These kinds of deviations are what Microsoft wants to avoid. If, for example, software that monitors news gives less importance to stories with Black protagonists, there is a problem. To tackle it, it is first necessary to be able to identify the biases in the algorithms. The Redmond-based company sees this as the first step, after which those biases must be eliminated.
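To make the idea of "identifying bias" concrete, here is a minimal sketch of one common approach: comparing how often a model selects items associated with different groups. This is an illustrative example only, not Microsoft's actual tool; the data, function names, and the "news importance" scenario are all hypothetical.

```python
# Illustrative sketch: a demographic-parity style check on a hypothetical
# news-ranking model. We compare the rate at which articles about each
# group of protagonists receive a "high importance" label from the model.

from collections import defaultdict

def selection_rates(articles):
    """Share of articles per group that the model marked as important.

    `articles` is a list of (group, marked_important) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, important in articles:
        totals[group] += 1
        if important:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    Values well below 1.0 suggest the model under-selects that group,
    flagging a potential bias worth investigating."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical labelled outputs: (protagonist group, model marked important?)
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(data)          # A: 0.75, B: 0.25
impact = disparate_impact(rates, "A")  # B gets roughly a third of A's rate
```

A large gap between groups does not by itself prove discrimination, but it is the kind of measurable signal a bias-auditing tool can surface for human review.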
A crusade to make technology more neutral
Recently, Facebook also announced a tool to identify biases in algorithms. The problem has become a general concern in the industry: artificial intelligence needs to be as neutral as possible for consumers to trust it.
Microsoft's work follows this same line. After all, algorithms are going to make more and more decisions for us, and those decisions must not be tainted by discrimination or bias.
Images: efes, Pixies