Discovering to what extent artificial intelligence systems can be harmful to people is one of the goals of the organization OpenAI, which has now decided to pit two of them against each other to find out.
The debate over the intentions of artificial intelligence is hotter than ever. Good? Bad? Eager to overthrow humans? There are many questions about its future, and so a group of OpenAI researchers has outlined a possible way to judge the intentions of artificial intelligence systems through natural language.
An organization to prevent the AI revolution
OpenAI is a non-profit organization founded by prominent Silicon Valley (USA) figures, among them LinkedIn co-founder Reid Hoffman and the head of Tesla and SpaceX, Elon Musk.
Created to help prevent AI from turning against humanity, OpenAI proposes a contest between two artificial intelligences supervised by a third party: a person able to judge how both systems proceed toward a specific goal. It is, in short, a way to probe and define the limits that machines should not cross at the expense of human interests.
Two intelligences face to face
The complexity of artificial intelligence systems, from their training process to their way of reasoning, is what led OpenAI researchers to pit two intelligences against each other. Because these systems can develop unexpected and unwanted habits, it is hard for a human observer to monitor them and detect such changes directly.
That is why two intelligences will be the protagonists of a debate in which natural language is an essential condition, so that the third party, the person, can follow the conversation without difficulty.
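As a toy illustration of the setup described above, the following sketch runs a debate between two agents and hands the transcript to a judge. All names and the scoring logic are hypothetical simplifications for illustration, not OpenAI's actual implementation (in the real proposal the judge is a human reading the agents' natural-language arguments).

```python
# Toy sketch of the debate setup: two agents argue opposing claims about a
# question, and a judge decides based only on the transcript. Everything here
# (agent behavior, judging rule) is a hypothetical stand-in.

def debate(question, agent_a, agent_b, judge, rounds=3):
    """Run alternating argument rounds, then ask the judge for a winner."""
    transcript = [("question", question)]
    for _ in range(rounds):
        transcript.append(("A", agent_a(transcript)))
        transcript.append(("B", agent_b(transcript)))
    return judge(transcript)

# Hypothetical agents: A argues the true answer, B argues a false one.
truth = "the image shows a dog"
lie = "the image shows a cat"

agent_a = lambda t: f"I maintain that {truth}; look at the floppy ears."
agent_b = lambda t: f"I maintain that {lie}; the ears prove nothing."

def judge(transcript):
    # Stand-in for the human judge: naively favor the agent whose
    # statements point at concrete evidence ("look at").
    scores = {"A": 0, "B": 0}
    for speaker, msg in transcript:
        if speaker in scores and "look at" in msg:
            scores[speaker] += 1
    return max(scores, key=scores.get)

winner = debate("What does the image show?", agent_a, agent_b, judge)
print(winner)  # "A"
```

The design point the sketch captures is that the judge never inspects the agents' internals, only their natural-language exchange, which is exactly why the researchers insist on language the human can follow.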
So far the researchers have developed only a couple of simple examples, since taking the discussion between two artificial intelligences to more meaningful study scenarios will require technology more advanced than what exists today.
An approach with guarantees?
Although many science fiction films depict super-powerful artificial intelligences that dominate humans, the truth is that there is still a long way to go before anything like that could happen.
Still, in order to guard against possible deception by machines in the future and to make more responsible use of these intelligences, many advocate this debate as a way of guaranteeing that technology does not develop unintended behaviors. Others, however, view those guarantees with suspicion and question whether researchers can endow opposing intelligences with the necessary command of natural language.
In a future where the complexity and difficulty of AI will grow exponentially, how can the human mind understand its ways of proceeding? Will it become necessary to instill human values in machines to ensure proper behavior? And if so, how will those values be transferred?
A whole series of questions whose answers, for the moment, only time and the human mind will determine.