Think like a human: what happens if we give a machine a theory of mind?

Last month, a team of self-learning AI players suffered a spectacular defeat at the hands of professional esports players. The show match, held as part of The International, the world championship for the game Dota 2, showed that strategic team thinking still lets humans beat machines.

The participating AI consisted of several algorithms developed by OpenAI, one of whose founders is Elon Musk. The team of digital players, dubbed OpenAI Five, learned to play Dota 2 on its own, through trial and error, competing against one another.

Unlike chess or puzzle games, the popular and fast-growing multiplayer game Dota 2 is considered a much more serious proving ground for artificial intelligence. The game’s overall difficulty is only one factor. It is not enough simply to click the mouse very quickly and issue commands to the character you control. To win, you need intuition and an understanding of what to expect from your opponent next, and you must act on that knowledge so the team can reach its common goal: victory. Computers currently lack this set of abilities.

“The next big step in the development of AI is interaction,” says Dr. Jun Wang of University College London.

At the moment, even the most advanced deep-learning algorithms lack the strategic thinking needed to understand the goals of an opponent, be it an AI or a human.

According to Wang, for AI to succeed, it will need a deep communicative capability that grows out of one of humanity’s key cognitive traits: theory of mind.

A model of mental states as simulation

By the age of four, children usually begin to grasp one fundamental fact: other minds are like their own. They begin to understand that everyone has their own beliefs, desires, emotions, and intentions. And, most importantly, by imagining themselves in other people’s places, they can begin to predict those people’s future behavior and explain it. In a sense, their brain begins to create multiple simulations of itself within itself, substituting itself for other people and placing itself in different environments.

A model of mental states is important for knowing oneself as a person, and it also plays an important role in social interaction. Understanding other people is the key to effective communication and to achieving common goals. The same ability, however, can also be a source of false beliefs, ideas that lead us away from objective truth. And when the ability to use a model of mental states breaks down, as happens in autism, natural “human” skills such as explanation and imagination deteriorate as well.

According to Dr. Alan Winfield, professor of robotics at the University of the West of England, a model of mental states, or “theory of mind,” is the key feature that will eventually allow AI to “understand” people, things, and other robots.

“The idea of putting a simulation inside a robot is actually a great opportunity to give it the ability to predict the future,” says Winfield.

Instead of machine-learning methods, in which multiple layers of neural networks extract fragments of information while “studying” a huge database, Winfield proposes a different approach. Rather than relying on learning, he suggests pre-programming the AI with an internal model of itself and its environment, one that would let it answer simple “what if?” questions.

For example, imagine two robots moving along a narrow corridor. Their AI can simulate the outcome of each possible next action and thus avoid a collision: move left, move right, or continue straight ahead. This internal model essentially acts as a “consequence engine,” a kind of “common sense” that helps steer the AI toward correct actions by predicting how events will unfold.
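To make the “what if?” loop concrete, here is a minimal Python sketch of such a consequence engine for the corridor scenario. The lane-based world model and all the names in it are illustrative assumptions, not code from Winfield’s robots.

```python
ACTIONS = {"left": -1, "straight": 0, "right": +1}
NUM_LANES = 3  # corridor lanes: 0 (left wall), 1 (center), 2 (right wall)

def simulate_step(my_lane, other_lane, my_action, other_action="straight"):
    """Internal model: predict both robots' lanes one step into the future."""
    clamp = lambda lane: min(max(lane, 0), NUM_LANES - 1)
    return clamp(my_lane + ACTIONS[my_action]), clamp(other_lane + ACTIONS[other_action])

def choose_action(my_lane, other_lane):
    """The consequence engine: rehearse each action in simulation, keep a safe one."""
    for action in ("straight", "left", "right"):  # prefer staying on course
        my_next, other_next = simulate_step(my_lane, other_lane, action)
        if my_next != other_next:  # predicted outcome: no collision
            return action
    return "stop"  # no safe move found; halt rather than collide

# Two robots approach head-on in the center lane; ours sidesteps.
print(choose_action(my_lane=1, other_lane=1))  # -> "left"
```

The point is not the toy physics but the structure: the robot evaluates actions inside its internal model before committing to any of them in the real world.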

In a study published earlier this year, Winfield demonstrated a prototype robot capable of exactly this. By anticipating the behavior of others, the robot passed through a corridor without collisions. That in itself is unsurprising, the author notes; in fact, the “attentive” robot that used simulation to solve the problem took 50 percent longer to pass through the corridor. Still, Winfield showed that his method of internal simulation works: “It is a very powerful and interesting starting point in the development of theory of mind for artificial intelligence,” the scientist concluded.

Winfield hopes that AI will eventually gain the ability to mentally re-enact situations. An internal model of itself and of others would let such an AI simulate different scenarios and, more importantly, define specific goals and objectives within each of them.

This differs fundamentally from deep-learning algorithms, which in principle cannot explain why they reached a particular conclusion when solving a problem. The “black box” character of deep learning is a genuine obstacle on the road to trusting such systems. The problem could become especially acute, for example, when developing robot nurses for hospitals or caretakers for the elderly.

An AI armed with a model of mental states could put itself in its owner’s place and correctly understand what is expected of it. It could then determine appropriate solutions, explain those decisions to the human, and carry out its assigned task. The less uncertainty in its decisions, the more trust such robots would earn.

A model of mental states in a neural network

DeepMind takes a different approach. Instead of pre-programming an algorithm with a consequence engine, the company has developed several neural networks that display something like a model of collective psychological behavior.

Its AI algorithm, “ToMnet,” can learn by watching the actions of other neural networks. ToMnet is itself a collective of three neural networks: the first infers the characteristics of other AIs from their recent actions; the second forms a general notion of their current mindset, their beliefs and intentions at a particular moment; the third takes the outputs of the first two and predicts the AI’s next actions based on the situation. As with deep learning, ToMnet becomes more effective as it accumulates experience watching others.
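As a rough illustration, here is a minimal Python (PyTorch) sketch of how such a three-network composition could be wired together. The class names, layer sizes, and trajectory encoding are all assumptions made for the example; this is not DeepMind’s published architecture.

```python
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Summarizes an agent's past trajectories into a 'character' embedding."""
    def __init__(self, obs_dim, char_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, char_dim, batch_first=True)
    def forward(self, past_traj):             # (batch, steps, obs_dim)
        _, h = self.rnn(past_traj)
        return h[-1]                          # (batch, char_dim)

class MentalStateNet(nn.Module):
    """Infers the agent's current beliefs/intentions from the ongoing episode."""
    def __init__(self, obs_dim, char_dim, mental_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + char_dim, mental_dim, batch_first=True)
    def forward(self, current_traj, character):
        steps = current_traj.size(1)
        char = character.unsqueeze(1).expand(-1, steps, -1)
        _, h = self.rnn(torch.cat([current_traj, char], dim=-1))
        return h[-1]

class PredictionNet(nn.Module):
    """Predicts the agent's next action from character + mental state + situation."""
    def __init__(self, char_dim, mental_dim, obs_dim, n_actions):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(char_dim + mental_dim + obs_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, character, mental, current_obs):
        return self.head(torch.cat([character, mental, current_obs], dim=-1))

# Wire the three networks together, ToMnet-style.
obs_dim, char_dim, mental_dim, n_actions = 16, 8, 8, 5
char_net = CharacterNet(obs_dim, char_dim)
mental_net = MentalStateNet(obs_dim, char_dim, mental_dim)
pred_net = PredictionNet(char_dim, mental_dim, obs_dim, n_actions)

past = torch.randn(1, 20, obs_dim)     # observed past episodes
current = torch.randn(1, 5, obs_dim)   # the episode unfolding now
character = char_net(past)
mental = mental_net(current, character)
logits = pred_net(character, mental, current[:, -1])  # next-action scores
```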

In one experiment, ToMnet “watched” three AI agents maneuver around a digital room collecting colored boxes. Each of these AIs had its own peculiarity: one was “blind” and could not perceive the shape and layout of the room; another was “amnesic” and could not remember its recent steps; the third could both see and remember.
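One way to picture these three agent types is as variants of a single agent interface that differ only in what they can perceive or retain. The classes below are purely illustrative, a sketch rather than the experiment’s actual code.

```python
class Agent:
    """Full agent: sees the room layout and remembers its past steps."""
    def __init__(self):
        self.memory = []
    def observe(self, room):
        return room                      # full view of the layout
    def remember(self, step):
        self.memory.append(step)

class BlindAgent(Agent):
    """Cannot perceive the room's shape or layout."""
    def observe(self, room):
        return None                      # no visual input at all

class AmnesicAgent(Agent):
    """Sees the room but cannot retain its recent steps."""
    def remember(self, step):
        self.memory = [step]             # only the very last step survives
```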

After training, ToMnet began to anticipate each AI’s preferences by watching its actions. For example, the “blind” agent constantly moved only along the walls, and ToMnet took note of this. The algorithm was also able to correctly predict each AI’s future behavior and, more importantly, to recognize when an AI held a false belief about its environment.

In one of the tests, the team programmed one of the AIs to be “short-sighted” and then changed the layout of the room. Agents with normal vision quickly adapted to the new layout, but the short-sighted one kept following its original route, falsely believing it was still in the old environment. ToMnet quickly noticed this trait and accurately predicted the agent’s behavior by, in effect, putting itself in the agent’s place.
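The false-belief test can be captured in a toy example: to predict a short-sighted agent, an observer must plan from the agent’s stale map rather than from the true layout of the room. Everything here, the rooms and the greedy planner alike, is a hypothetical sketch.

```python
OLD_ROOM = {"box": (0, 3)}   # the layout the agent memorized
NEW_ROOM = {"box": (3, 0)}   # the experimenters moved the box

def plan_route(start, room):
    """Greedy straight-line route toward where the map says the box is."""
    (x, y), (bx, by) = start, room["box"]
    route = []
    while (x, y) != (bx, by):
        x += (bx > x) - (bx < x)
        y += (by > y) - (by < y)
        route.append((x, y))
    return route

start = (0, 0)
agent_belief = OLD_ROOM              # "short-sighted": never noticed the change
true_world = NEW_ROOM

agent_route = plan_route(start, agent_belief)   # what the agent will actually do
naive_guess = plan_route(start, true_world)     # prediction from the true map
belief_guess = plan_route(start, agent_belief)  # prediction from the agent's belief

print(agent_route == belief_guess)  # True: modeling the belief predicts correctly
print(agent_route == naive_guess)   # False: the true map misleads the observer
```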

According to Dr. Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who was not involved in these studies but has read the conclusions, these results do show that neural networks have a remarkable ability to learn a variety of skills on their own by observing others. At the same time, she says, it is far too early to claim that these AIs have developed an artificial model of mental states.

According to Dr. Josh Tenenbaum of MIT, who also did not take part in the study, ToMnet’s “understanding” is tightly bound to the context of its training environment: the same room, and specific AI agents whose task was limited to collecting boxes. This confinement to narrow limits makes ToMnet less effective at predicting behavior in radically new environments, unlike children, who can adapt to novel situations. The algorithm, the scientist says, cannot model the actions of a completely different AI or of a human.

In any case, the work of Winfield and of DeepMind shows that computers are beginning to display the rudiments of “understanding” one another, even if that understanding is still primitive. And as they keep improving this skill, getting to know one another better and better, the time will come when machines can grasp the complexity and messiness of our own minds.

What do you think: can a machine acquire human cognitive skills? Share your opinion in our Telegram chat.
