Facebook’s artificial intelligence bots are being trained to produce context-appropriate reactions in conversation, one of the significant steps toward achieving truly humanoid machines.
There is still a long way to go before robots look human and can pass among us unnoticed, as in Westworld. Thousands of stages and steps separate us from that humanoid state, and reactions and decision-making are without a doubt among the most important. To that end, Facebook’s Artificial Intelligence Laboratory has developed a bot that, after being trained on hundreds of Skype conversations, has learned to express facial reactions of its own.
With each video, the researchers’ goal was for the bots to learn not only the gestures but also the context behind each change in the face, so they would know that the responses have meaningful precedents. The algorithm behind the bot divided faces into 68 relevant areas, allowing it to detect blinking, mouth movements independent of speech, head nods, and so on. It could thus tell when there was genuine interaction and communication, or when a person was, for example, simply watching a video with no interpersonal communication taking place.
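Facebook has not published the implementation, but the 68-area breakdown matches the standard 68-point facial-landmark convention popularized by libraries such as dlib. As a minimal sketch of how one of the cues mentioned above, blinking, can be read from those landmarks, the snippet below computes an eye aspect ratio over dlib’s 68-point predictor; the model file name and the threshold value are assumptions for illustration, not details of Facebook’s system.

```python
# Illustrative sketch only: detects blinks from the standard 68-point facial
# landmarks via an "eye aspect ratio" heuristic. This is NOT Facebook's
# published method; the model path and threshold are assumptions.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point landmark model (available from dlib.net)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(points):
    """Ratio of eye height to width; drops sharply when the eye closes."""
    a = distance.euclidean(points[1], points[5])
    b = distance.euclidean(points[2], points[4])
    c = distance.euclidean(points[0], points[3])
    return (a + b) / (2.0 * c)

BLINK_THRESHOLD = 0.2  # assumed value; tune per camera and subject

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        # Landmarks 36-41 outline one eye in the 68-point scheme
        eye = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        if eye_aspect_ratio(eye) < BLINK_THRESHOLD:
            print("blink detected")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```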
The bots then moved to a new level: an animated representation of each bot was shown on screen, and the bot had to choose which reaction to perform while watching a person in a video. For example, if the person smiled, the bot’s response was to open its mouth or gesture in a positive way.
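The article does not detail how that choice is made; in Facebook’s system the mapping from observed expression to reaction was learned from data rather than written by hand. Purely as a hypothetical illustration of the idea of selecting a reaction for an observed expression, a hard-coded version might look like this (the labels and reactions are invented):

```python
# Hypothetical illustration only: a hand-written expression-to-reaction table.
# Facebook's bot learned this mapping from Skype conversations; nothing here
# reflects its actual labels or outputs.
REACTIONS = {
    "smile": "open_mouth_and_smile_back",
    "nod": "nod_in_agreement",
    "frown": "tilt_head_with_concern",
}

def choose_reaction(observed_expression: str) -> str:
    """Pick the reaction the on-screen bot should perform."""
    return REACTIONS.get(observed_expression, "hold_neutral_expression")

print(choose_reaction("smile"))  # -> open_mouth_and_smile_back
```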
When people reviewed the animations produced by the bots alongside those of real people, the common verdict was that, while the responses were correct and fit the context, they were too simple to be considered substitutes for genuine human intelligence. For experts, these are still very primitive cases: to succeed, bots will have to gauge not only feelings but also personality, so that they can adapt their responses to each individual.