A team of researchers has created an artificial hand with object-recognition technology to make its movements more agile.
The design and construction of artificial limbs is constantly evolving. 3D printing has lowered production costs, although aesthetics still need polish. Meanwhile, robotics has advanced in the field of motor skills. Another ingredient is artificial intelligence, which is progressing rapidly. A robotic artificial hand equipped with this kind of software can adapt its movements to the wishes of the user.
Normally a robotic limb is guided by electrical impulses. The brain sends a signal to the muscles, and the muscles communicate with the circuits of the bionic arm. The problem is that these arms do not respond quickly, and their agility leaves room for improvement. That is understandable: the communication between the muscles and the robotic part is not as precise as one would like.
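The conventional control loop described above can be sketched in a few lines. This is a hypothetical illustration, not the actual firmware of any prosthesis: the function averages a window of muscle-signal amplitudes and triggers a grip only when activity crosses a threshold, which hints at why the response feels slow.

```python
# Hypothetical sketch of a myoelectric control loop: the arm samples the
# muscle's electrical activity and only acts once it crosses a threshold.
# The sample values and threshold are made up for illustration; a real
# controller would also filter and rectify the raw EMG signal.

def emg_to_command(samples, threshold=0.5):
    """Map a window of EMG amplitude samples to a hand command.

    Averaging over a window is part of why such arms feel sluggish:
    the controller must wait for enough muscle activity to accumulate
    before it is confident the user intends a movement."""
    activity = sum(abs(s) for s in samples) / len(samples)
    return "close_hand" if activity >= threshold else "hold"
```

For example, a strong contraction (`emg_to_command([0.7, 0.8, 0.9])`) would close the hand, while faint background activity (`emg_to_command([0.1, 0.2, 0.1])`) would leave it holding still.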
To solve this, a team of researchers from the University of Newcastle has created an artificial hand with a camera. It is able to recognize objects using artificial intelligence. What these scientists have achieved is a limb with greater agility.
The artificial hand has its own software to process the images from the camera, so it always knows what environment it is in. This is quite useful when a user wants, for example, to grab a cup. The brain thinks of the action first and sends the signal to the muscles, which get ready to grab the cup. The camera detects the cup, recognizes its shape against a database, and sends the robotic system information on how to hold it.
This accelerates the process by which the user grabs the cup. The artificial hand is equipped with a simple camera, from Logitech. The key is the image recognition performed by the system, which is trained to recognize objects that the artificial limb is likely to interact with.
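The camera-to-grasp pipeline described above can be illustrated with a minimal sketch. Everything here is assumed for illustration, including the class names, the `GRASP_FOR_CLASS` table, and the `choose_grasp` function; the article does not describe the Newcastle team's actual software interface.

```python
# Illustrative sketch of the recognition-to-grasp step: once the camera's
# classifier has produced an object class and a confidence score, a lookup
# table maps the class to a grasp posture for the hand. All names and
# values are hypothetical, not the researchers' real API.

# Object class -> grasp posture the hand should pre-shape into.
GRASP_FOR_CLASS = {
    "cup": "tripod",         # thumb plus two fingers around the handle
    "bottle": "power",       # full-hand wrap
    "credit_card": "pinch",  # thumb-index precision grip
}

def choose_grasp(recognized_class: str, confidence: float,
                 threshold: float = 0.8) -> str:
    """Return a grasp posture for the recognized object, or fall back
    to a neutral open hand when the classifier is unsure or the object
    is unknown, leaving control to the user."""
    if confidence < threshold or recognized_class not in GRASP_FOR_CLASS:
        return "open"
    return GRASP_FOR_CLASS[recognized_class]
```

With this design, a confident detection (`choose_grasp("cup", 0.93)`) pre-shapes the hand into a tripod grip, while a low-confidence or unknown detection falls back to an open hand rather than guessing, which is the safer default for a prosthesis.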
The training could be personalized, according to the researchers. If a user's home has cups with half-moon-shaped handles, the system will have to be taught how to hold them; it is not the same as grabbing cups with a different design. To do this, the image of the object is presented to the camera, and the user then makes the movement to grab it.
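One simple way to realize this show-and-grab teaching loop is to store each demonstration and later recall the grasp of the most similar stored object. The sketch below is an assumption on my part, not the researchers' method: feature extraction from the camera image is faked as a plain vector, and matching uses nearest-neighbor distance.

```python
import math

# Hypothetical sketch of personalized training: the user shows the camera
# an object, performs the grasp, and the system stores a feature vector
# together with the grasp that worked. Later, a new image is matched to
# the nearest stored demonstration (1-nearest-neighbor). The feature
# vectors here are made up; a real system would derive them from images.

class PersonalGraspMemory:
    def __init__(self):
        self.examples = []  # list of (feature_vector, grasp) pairs

    def teach(self, features, grasp):
        """Store one demonstration: object features plus the grasp used."""
        self.examples.append((tuple(features), grasp))

    def recall(self, features):
        """Return the grasp of the nearest stored example, or an open
        hand if nothing has been taught yet."""
        if not self.examples:
            return "open"
        nearest = min(self.examples,
                      key=lambda ex: math.dist(ex[0], features))
        return nearest[1]

memory = PersonalGraspMemory()
memory.teach([0.9, 0.2], "half_moon_handle_grip")  # the user's own cups
memory.teach([0.1, 0.8], "power")                  # e.g. a bottle
```

After these two demonstrations, a new image whose features resemble the cup (say `[0.85, 0.25]`) would recall the half-moon handle grip rather than the bottle's power grasp.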
It’s actually a shortcut, a way to automate movement, and it’s similar to what our own body does. When we go to grab a cup, we already know how to do it because our sight gives us the clues. The problem with a robotic limb is that the sense of sight and the motor skills are separate. So why not give vision to the limb itself?