Robots can visualize objects through touch

Researchers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) have developed a new artificial intelligence system that can give robots the ability to connect multiple senses.

The new system can learn to see through touch and to feel through sight, which means robots that learn vision through touch are now within reach.

The researchers describe the AI system, which can generate visual representations of objects from touch signals and predict touch from visual data, in a recently published paper to be presented next week at the Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach, California.

The human sense of touch lets us feel the physical world, and our eyes help us understand the full picture of those tactile signals. Robots programmed only for vision or only for touch, however, have not been able to use these signals interchangeably.

The new AI-based system can create realistic tactile signals from visual inputs, and can predict which object, and which part of it, is being touched directly from tactile input.

The system can predict what touching an object will feel like just by seeing it, and it can create a visual representation of an object from tactile data alone.
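
To make the idea concrete, this kind of cross-modal prediction can be framed as two image-to-image translation networks: one maps a webcam frame to a predicted tactile (GelSight-style) image, the other maps a tactile image back to a visual frame. The sketch below is a minimal, hypothetical PyTorch version with made-up layer sizes and a simple L1 reconstruction loss; it is only an illustration of the concept, not the architecture the MIT paper actually uses.

```python
import torch
import torch.nn as nn

class CrossModalTranslator(nn.Module):
    """Minimal encoder-decoder that maps one image modality to another,
    e.g. a webcam frame -> a predicted tactile image (or the reverse)."""
    def __init__(self, in_channels=3, out_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1),  # H -> H/2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),           # H/2 -> H/4
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # H/4 -> H/2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized image targets
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One network per direction: "see -> feel" and "feel -> see".
vision_to_touch = CrossModalTranslator()
touch_to_vision = CrossModalTranslator()

# Hypothetical paired batch: webcam frames and the tactile images recorded
# at the same moments (batch x channels x height x width).
frames = torch.randn(4, 3, 128, 128)
touches = torch.randn(4, 3, 128, 128)

loss_fn = nn.L1Loss()
loss = loss_fn(vision_to_touch(frames), touches) + loss_fn(touch_to_vision(touches), frames)
loss.backward()  # during training, an optimizer step would follow
```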

The new system could help build a more harmonious relationship between vision and touch in robotics, particularly for recognizing and grasping objects and for better scene understanding, along with aiding seamless integration between people and machines in manufacturing environments.

Yunzhu Li, a PhD student at MIT's Computer Science and Artificial Intelligence Laboratory, said this kind of model can help robots handle real-world objects better: by looking at a scene, it can imagine the feeling of touching a flat surface or a sharp edge, and it can predict interactions with the environment through touch alone.

To train the model, the team used a KUKA robot arm fitted with a special tactile sensor called GelSight, designed by another group at MIT, and had the arm touch 200 household objects, such as tools, household products, and fabrics, a total of 12,000 times.

Using a simple webcam, the team recorded the visual and tactile data, broke the roughly 12,000 video clips into still frames, and compiled what they call VisGel, a dataset of more than three million paired visual and tactile images.
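
As a rough sketch of how such paired data could be organized for training, the snippet below builds (visual frame, tactile frame) pairs from per-clip folders. The directory layout, file names, and the PairedVisionTouchDataset class are illustrative assumptions, not the actual VisGel format.

```python
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class PairedVisionTouchDataset(Dataset):
    """Pairs each webcam frame with the tactile image captured at the same
    time step. Hypothetical layout:
        root/clip_0001/vision/000001.png
        root/clip_0001/touch/000001.png"""
    def __init__(self, root, transform=None):
        self.transform = transform
        self.pairs = []
        for clip_dir in sorted(Path(root).iterdir()):
            vision_dir, touch_dir = clip_dir / "vision", clip_dir / "touch"
            if not (vision_dir.is_dir() and touch_dir.is_dir()):
                continue
            for vision_path in sorted(vision_dir.glob("*.png")):
                touch_path = touch_dir / vision_path.name
                if touch_path.exists():
                    self.pairs.append((vision_path, touch_path))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        vision_path, touch_path = self.pairs[idx]
        frame = Image.open(vision_path).convert("RGB")
        touch = Image.open(touch_path).convert("RGB")
        if self.transform is not None:
            frame, touch = self.transform(frame), self.transform(touch)
        return frame, touch
```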

Li added that bringing the senses of vision and touch together can reduce the data needed for tasks that involve manipulating and grasping objects, though the current dataset only contains examples of interactions in a controlled environment.

This type of artificial intelligence could be used to help robots work efficiently and effectively in low-light environments without needing sophisticated sensors, or as a component of more general systems when combined with techniques that simulate other senses.

The team hopes to improve on this by collecting data in less structured settings, or by using a new tactile glove developed at MIT, in order to increase the size and diversity of the dataset and enhance how convincingly the model can translate between visual and tactile cues.
