DeepMind has created an instrument to evaluate the reasoning capacity of artificial intelligences.
DeepMind, the Google subsidiary devoted to artificial intelligence research, has created a tool to measure the reasoning ability of these programs. But the goal goes further: the test they have created is designed to resemble the ones we use to evaluate a person’s reasoning.
This means that DeepMind’s goal is to gauge how much general artificial intelligence an algorithm has. This type of intelligence is what would allow a single program to carry out different tasks for which it has not been trained. It is, therefore, the concept behind multifaceted algorithms capable of solving problems of arbitrary kinds.
This way, different programs would not be needed for different tasks; all of them could be unified in a single artificial intelligence. For robotics it would also mean a big leap forward: it would remove the software barrier, leaving hardware as the main limitation.
But the truth is that we are still far from this kind of artificial intelligence, as the test created inside DeepMind has shown. Its AI specialists, authors of the program that beat the world’s best Go player, prepared a test based on abstraction to measure the reasoning capacity of a piece of software.
Abstraction is one of the basic elements of our ability to reason. Tackling problems that are not in front of us, and from which we have received no stimuli, requires extrapolating our knowledge. It is a step that distinguishes us from other animals, to the point that we have called ourselves “the rational animal”.
Abstraction is also key when it comes to evaluating the reasoning scope of a program. The researchers trained several algorithms on specific skills, such as determining the shape of an object in an image, and then gave them an exam asking about the shapes of objects in images they had never seen. The result was positive, with a success rate of around 75%.
The problem came when the researchers ran the real test: the algorithms had to answer questions about the position of objects instead of their shape. Here all the artificial intelligences collapsed, and none of them achieved acceptable results.
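The failure mode described here, a model that masters one attribute and collapses on another, can be illustrated with a toy experiment. The sketch below is a hypothetical illustration, not DeepMind’s actual benchmark: a simple classifier is trained on synthetic puzzles whose answers depend on “shape” features and then evaluated on puzzles governed by “position” features. The feature layout and the use of scikit-learn are my assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_puzzles(n, attribute):
    """Generate toy puzzles whose correct answer depends on one attribute:
    the 'shape' features (dims 0-4) or the 'position' features (dims 5-9)."""
    X = rng.normal(size=(n, 10))
    if attribute == "shape":
        y = (X[:, :5].sum(axis=1) > 0).astype(int)
    else:  # "position"
        y = (X[:, 5:].sum(axis=1) > 0).astype(int)
    return X, y

# Train a simple "solver" only on shape-governed puzzles.
X_train, y_train = make_puzzles(5000, "shape")
model = LogisticRegression().fit(X_train, y_train)

# In-regime test: new puzzles, same governing attribute -> high accuracy.
X_shape, y_shape = make_puzzles(1000, "shape")
print("shape puzzles:   ", model.score(X_shape, y_shape))  # close to 1.0

# Out-of-regime test: answers now depend on position -> chance level (~0.5).
X_pos, y_pos = make_puzzles(1000, "position")
print("position puzzles:", model.score(X_pos, y_pos))
```

The model generalizes to new examples drawn from the regime it was trained on, but drops to chance when the rule shifts to an attribute it never had to attend to, which is the gap between pattern recognition and abstraction that the article describes.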
The bottom line is that, with training, an AI can achieve whatever its creators set out for it. But if it is not trained for a particular task, the program will not be able to handle it reliably. We are, therefore, still far from general artificial intelligence, which, incidentally, is one of DeepMind’s goals.
Images: ColiN00B, DeepMind