A threat to humanity: why do we need intelligent artificial intelligence?

When Norbert Wiener, the father of cybernetics, wrote his book “The Human Use of Human Beings” in 1950, vacuum tubes were still the main electronic building blocks and there were, in fact, only a few computers in existence. Yet he imagined the future we now inhabit with unprecedented precision, erring only in insignificant details.

Before any other philosopher of artificial intelligence, he realized that AI would not merely imitate and replace human beings in many kinds of intellectual activity, but would also change people in the process. “We are but whirlpools in a river of ever-flowing water,” he wrote. “We are not stuff that abides, but patterns that perpetuate themselves.”

For example, when attractive new features abound, we are willing to pay a little and accept a minor cost of doing business in order to gain access to new opportunities. And very soon we become so dependent on the new tools that we lose the ability to exist without them. Options become obligations.

This is a very old story in evolution, and one of its chapters is well known to us. Most mammals can synthesize their own vitamin C, but primates, having switched to a diet consisting mostly of fruit, lost that built-in capability. The self-perpetuating patterns that we call people now depend on clothing, processed food, vitamins, syringes, credit cards, smartphones and the Internet. And tomorrow, if not today, they will depend on artificial intelligence.

Wiener foresaw several problems with this situation that Alan Turing and the other early AI optimists largely overlooked. The real threat, he wrote, was:

…that such machines, though helpless by themselves, may be used by a human being or a bloc of human beings to increase their control over the rest of the race, or that political leaders may attempt to control their populations by means not of the machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.

Obviously, these dangers are now very real.

In the media, for example, innovations in digital audio and video allow us to pay a small price (in the eyes of audiophiles and film lovers) for abandoning analog formats, and in return we get an extremely simple, indeed too simple, way of reproducing recordings with almost no restrictions.

But there is a huge hidden price. Orwell's Ministry of Truth has become a real possibility. AI techniques for creating virtually indistinguishable fake “records” are rendering obsolete the investigative tools we have relied on for the past 150 years.

We will either have to abandon the brief era of photographic evidence and return to the old world, where human memory and trust were the gold standard, or we will have to develop new methods of defense and attack in the battle for truth. One sobering recent example is the fact that destroying a reputation is much cheaper than earning and protecting one. Wiener saw this phenomenon in very broad terms: “In the long run, there is no distinction between arming ourselves and arming our enemies.” The information age has also become the age of disinformation.

What can we do? The key is Wiener's observation that these machines are “helpless by themselves.” We create tools, not colleagues, and the real threat is that we fail to see the difference.

Artificial intelligence in its current manifestations is parasitic on human intelligence. It quite unceremoniously appropriates everything that human creators have produced and extracts the patterns it finds there, including some of our most private habits. These machines do not yet have goals or strategies, they are not capable of self-criticism or innovation; they merely study our databases, without thoughts or goals of their own.

They are, as Wiener would say, helpless not in the sense that they are chained up or restrained; no, they are simply not agents at all, because they have no capacity to “act from reasons,” as Kant would put it.

In the long run, “strong AI,” or artificial general intelligence, is possible in principle but undesirable. The far more limited AI that is possible in practice today is not inherently evil. But it is a threat, partly because it can be mistaken for strong AI.

How strong is today's artificial intelligence?

The gap between today's systems and the science-fiction systems flooding the popular imagination is still huge, although many people, amateurs and professionals alike, tend to underestimate it. Consider IBM's Watson, which may well be worthy of respect in our time.

That supercomputer is the product of a very extensive R&D (research and development) process involving many people and drawing on intelligent design accumulated over many centuries, and it uses thousands of times more energy than a human brain. Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic restrictions of the Jeopardy! rules, but even those rules had to be revised before it could take part. A measure of versatility had to be given up, and a measure of humanity added, to make the show.

Watson is not good company, despite IBM's misleading advertising, which promises conversational AI at something like a human level, and turning Watson into a plausible multi-faceted agent would be like turning a pocket calculator into Watson. Watson could be a good computational core for such an agent, but more in the role of a cerebellum or an amygdala than of a mind: at best a special-purpose subsystem playing a supporting part, but nowhere near a system able to plan and set goals on the basis of its conversational experience.

And why would we want to make a thinking, creative agent out of Watson? Perhaps Turing's brilliant idea, the famous Turing test, has lured us into a trap: we have become obsessed with creating at least the illusion of a real person sitting behind the screen, bridging the “uncanny valley.”

The danger is that ever since Turing set out his task, which was, first and foremost, a task of fooling the judges, AI creators have tried to accomplish it with amusing humanoid puppets, “cartoon” versions designed to charm and disarm the uninitiated. ELIZA, Joseph Weizenbaum's first chatbot, was a shining example of such illusion-making: an extremely simple algorithm that could convince people they were having sincere, heartfelt conversations with another person.
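To see how shallow the trick can be, here is a minimal, purely illustrative sketch in Python of an ELIZA-style exchange: a handful of keyword patterns whose matches are reflected back at the user as questions. The rules and names are invented for illustration and are far cruder than Weizenbaum's actual program.

```python
import re

# Word swaps so an echoed fragment sounds like a reply ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my", "you": "I"}

# A tiny, made-up rule table: keyword pattern -> question template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the user's own phrase."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching reflected question, or a generic prompt."""
    text = utterance.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # fallback that keeps the conversation going

if __name__ == "__main__":
    print(respond("I need a vacation"))              # Why do you need a vacation?
    print(respond("I am worried about my mother."))  # How long have you been worried about your mother?
```

Even a rule table this small can keep a credulous interlocutor talking for a surprisingly long time, and that is the unsettling part.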

Weizenbaum himself was disturbed by the ease with which people were willing to believe it. And if we have learned anything from the annual competition for passing a restricted Turing test, the Loebner Prize, it is that even very intelligent people who are not versed in computer programming fall for these simple tricks with remarkable ease.

Attitudes in the AI field toward such methods range from condemnation to encouragement, and the consensus is that these tricks are not particularly deep but can be useful. A shift in attitude that would be very welcome is a sincere acknowledgment that androids painted up as people are false advertising, something to be condemned, not encouraged.

How can this be achieved? Once we understand that people are beginning to make life-and-death decisions by following the “advice” of AI systems whose inner workings are practically unfathomable, we will see a good reason for those who urge people to trust such systems to start answering to the standards of morality and law.

Artificial intelligence systems are very powerful tools. So powerful that even experts have good reason not to trust their own judgment when “judgments” produced by these tools are on offer. But if the users of these tools are going to profit, financially or otherwise, from popularizing them, they must make sure they know how to do so with full responsibility, maximum control and justification.

Licensing and certifying the operators of such systems, just as we license pharmacists, crane operators and other professionals whose errors and misjudgments can have serious consequences, could, perhaps with the backing of insurance companies and other organizations, oblige the creators of AI systems to go to great lengths in searching out the weaknesses and flaws of their products and in training those who are going to work with them.

One can imagine a kind of reverse Turing test in which it is the judge who is under assessment: until he or she can find the weaknesses, the boundary violations and the gaps in a system, no operating license is granted. Earning certification as such a judge would require serious training. The urge to attribute to an object the human capacity to think, as we habitually do whenever we meet what seems to be an intelligent agent, is very, very strong.

In fact, the ability to resist the urge to treat something anthropomorphic as human is a strange talent. Many people would find the cultivation of such a talent questionable, because even the most pragmatic users of these systems periodically address their tools in a “friendly” way.

No matter how carefully the designers of artificial intelligence eliminate fake “human” touches from their products, we should expect a flourishing of shortcuts, workarounds and tolerated distortions of what the systems, and their operators, actually “understand.” Just as television ads for drugs recite long lists of side effects, and alcohol ads fill the screen with all the fine-print warnings required by law, so the developers of artificial intelligence will abide by the law while excelling at making the warnings easy to ignore.

Why do we need artificial intelligence?

We do not need artificial conscious agents. There are plenty of natural conscious agents, enough to perform whatever tasks are reserved for experts and the privileged few. We need smart tools. Tools have no rights and should not have feelings that can be hurt or that can be “abused.”

One reason not to build artificial conscious agents is that, although they might be autonomous (and in principle they can be as autonomous, self-improving or self-sufficient as any person), they should not, without special permission, share the vulnerability or the mortality of us natural conscious agents.

Daniel Dennett, a professor of philosophy at Tufts University, once set his students a task in a seminar on artificial agents and autonomy: give me the specifications of a robot that could sign a contract with you, not as a surrogate owned by some other person, but in its own right. This is not a question of understanding reasons or of manipulating a pen on paper, but of possessing and deserving legal status and moral responsibility. Small children cannot sign contracts, nor can people with disabilities whose legal status requires them to be under care and places responsibility on their guardians.

The problem for robots that might want to attain such an exalted status is that, like Superman, they are too invulnerable to make such commitments credible. If they renege, what happens? What is the penalty for breaking a promise? Being locked in a cell, or taken apart? Prison is no great inconvenience for an artificial intelligence unless we first install a thirst for freedom that the AI itself cannot ignore or disable, and dismantling an AI does not kill the information stored on its disks and in its software.

The ease of digital recording and transmission, the breakthrough that made software and data, in essence, immortal, is what makes robots invulnerable. If that does not seem obvious, think about how human morality would change if we could make weekly “backups” of people. Jumping off a bridge without a bungee cord on Sunday, after a Friday backup, becomes a rash decision whose untimely outcome you can watch on the recording later.

That is why we should be creating not conscious, humanoid agents, however much we might like to, but rather an entirely new type of creature: a kind of oracle, with no consciousness, no fear of death, no distracting loves and hates, no personality. Mirrors of truth that will almost certainly be contaminated by human lies.

The human use of human beings is about to change, once again, forever, but if we take responsibility for our evolutionary trajectory, we can avoid unnecessary dangers.

Disagree? Tell us what you think in our Telegram chat.
