Building moral machines: who is responsible for the ethics of self-driving cars?

You are driving along the highway when a person suddenly runs into the busy road. Cars are moving all around you, and you have a split second to decide: swerve to avoid the person and risk an accident? Keep going and hope they get out of the way in time? Brake? How would you weigh the odds if a child were buckled into your back seat? In many ways this is the classic "moral dilemma," the trolley problem. It has a million variations that can reveal human biases, but the essence is always the same.

You are in a situation where life and death are at stake, there is no easy choice, and your decision will, in effect, determine who lives and who dies.

The trolley problem and artificial intelligence

A new MIT study, published last week in the journal Nature, attempts to arrive at a workable answer to the trolley problem by involving millions of volunteers. The experiment began in 2014 and proved quite successful, collecting more than 40 million responses from 233 countries and territories, making it one of the largest ethics studies ever conducted.

People can make these decisions unconsciously; it is hard to weigh the ethical and moral background while your car is racing down the road. But in our world such decisions are increasingly made by algorithms, and computers can react faster than we can.

The hypothetical self-driving-car scenario is not the only moral decision algorithms will have to make. Medical algorithms will choose who receives treatment when resources are limited. Automated drones will choose how much "collateral damage" is acceptable in a military conflict.

Not all moral principles are equal

Attempts to "solve" the trolley problem are as varied as the problem itself. How will machines make moral decisions when the foundations of morality are not universally accepted and may admit no solution at all? Who gets to decide whether an algorithm is right or wrong?

The crowdsourcing approach taken by the Moral Machine researchers is quite pragmatic. Ultimately, for the public to accept self-driving cars, it must accept the moral foundation behind their decisions. It would not do for ethicists or lawyers to settle on a decision that ordinary drivers find unacceptable.

The results lead to the curious conclusion that moral priorities (and hence the algorithmic decisions people would endorse) depend on which part of the world you are in.

First of all, the researchers acknowledge that it is impossible to know the frequency or nature of these situations in real life. People involved in accidents often cannot say exactly what happened, and the range of possible situations defies easy classification. To make the problem tractable, it has to be broken down into simplified scenarios in search of universal moral rules and principles.

When you take the survey, you are asked thirteen questions, each requiring a simple yes-or-no choice, designed to narrow the answers down to nine factors.

Should the car swerve into the other lane or stay the course? Should you save the young rather than the old? Women or men? Animals or humans? Should you try to save as many lives as possible, or is one child "worth" two elderly people? Should you save the passengers in the car rather than pedestrians? Those crossing against the rules, or those following them? Should you save people who are physically stronger? What about people with higher social status, such as doctors or businessmen?
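The structure described above can be sketched in code. The following is a purely hypothetical encoding (not the Moral Machine's actual data schema): each dilemma pits two outcomes against each other, and a well-designed scenario varies exactly one attribute, so the respondent's binary choice can be attributed to a single factor.

```python
# Hypothetical encoding of one survey dilemma: two outcomes, each described
# by a few of the attributes the survey varies. The attribute on which the
# outcomes differ is the factor the respondent's choice reveals.
from dataclasses import dataclass

@dataclass
class Outcome:
    count: int      # number of lives at stake
    species: str    # "human" or "animal"
    age: str        # "young", "adult", "elderly"
    role: str       # "passenger" or "pedestrian"

def isolated_factor(a: Outcome, b: Outcome) -> str:
    """Return the single attribute on which the two outcomes differ."""
    diffs = [name for name in ("count", "species", "age", "role")
             if getattr(a, name) != getattr(b, name)]
    if len(diffs) != 1:
        raise ValueError("scenario does not isolate exactly one factor")
    return diffs[0]

# One scenario: stay and hit a young pedestrian, or swerve and hit an elderly one.
stay = Outcome(count=1, species="human", age="young", role="pedestrian")
swerve = Outcome(count=1, species="human", age="elderly", role="pedestrian")
print(isolated_factor(stay, swerve))  # -> age
```

Holding everything but one attribute fixed is what lets thirteen binary questions be boiled down to preferences along nine separate factors.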

In this harsh hypothetical world someone must die, and you answer each of these questions with varying degrees of enthusiasm. Yet making these decisions also reveals deeply rooted cultural norms and biases.

Processing the enormous dataset collected in the survey yields universal rules as well as curious exceptions. The three most dominant factors, averaged across the entire population, were that people preferred to save more lives rather than fewer, humans rather than animals, and the young rather than the old.
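The averaging step can be illustrated with a minimal sketch. This is not the study's actual analysis pipeline; it simply shows how yes/no answers per factor could be averaged into a preference ranking like the one described above, using made-up counts.

```python
# Illustrative only: each response records which factor a question probed and
# whether the respondent chose in favor of it. Averaging the "yes" rate per
# factor gives a rough ranking of moral priorities.
from collections import defaultdict

def rank_factors(responses):
    """responses: iterable of (factor_name, chose_factor: bool) pairs."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for factor, chose in responses:
        total[factor] += 1
        yes[factor] += int(chose)
    rates = {f: yes[f] / total[f] for f in total}
    return sorted(rates, key=rates.get, reverse=True)

# Toy data with invented proportions, just to show the mechanics:
toy = ([("spare more lives", True)] * 9 + [("spare more lives", False)] * 1 +
       [("spare humans over animals", True)] * 8 + [("spare humans over animals", False)] * 2 +
       [("spare the young", True)] * 7 + [("spare the young", False)] * 3)
print(rank_factors(toy))
# -> ['spare more lives', 'spare humans over animals', 'spare the young']
```

The real study fit a far more careful statistical model, but the intuition is the same: each factor's strength is how often respondents chose in its favor, averaged over millions of answers.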

Regional differences

You might agree with those points, but the longer you think about them, the more troubling the moral conclusions become. More respondents chose to save a criminal rather than a cat, but on the whole they preferred to save a dog over a criminal. On the world average, being old was ranked higher than being homeless, yet homeless people were saved less often than the overweight.

And these rules were not universal: respondents from France, the UK and the US favored the young, whereas respondents from China and Taiwan more often saved the elderly. Respondents from Japan preferred to save pedestrians over passengers, while in China passengers were preferred to pedestrians.

The researchers found that responses could be grouped by country into three clusters: the "West," primarily North America and Europe, where morality rests largely on Christian doctrine; the "East," Japan, Taiwan and the Middle East, dominated by Confucianism and Islam; and the "South," including Central and South America along with countries under strong French cultural influence. The southern cluster showed a stronger preference for sparing women than anywhere else; the eastern cluster showed a weaker preference for sparing the young.

Filtering on various respondent attributes yields endlessly interesting patterns. "Very religious" respondents were slightly less likely to prefer saving an animal, but religious and non-religious respondents alike expressed roughly equal preference for saving people of high social status (even though this arguably contradicts some religious doctrines). Both men and women preferred to save women, though men were somewhat less inclined to do so.

Unanswered questions

No one claims that this study somehow "solves" all these weighty moral questions. The study's authors note that crowdsourced online data is biased, and even with a huge sample size the number of questions was limited. What happens when the risks vary depending on your decision? What if the algorithm can calculate that, given the speed at which you were moving, you had only a 50% chance of killing the pedestrians?
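That 50% example suggests a different framing of the problem: weighting each option by estimated probabilities rather than treating every collision as certain death. A toy sketch of such an expected-harm comparison (all numbers invented for illustration):

```python
# Toy expected-harm comparison: instead of assuming every collision is fatal,
# weight each option by the estimated probability of a fatality.
def least_harm(options):
    """options: {name: [(lives_at_risk, p_fatal), ...]}
    Returns the option minimizing expected deaths, plus all scores."""
    scores = {name: sum(n * p for n, p in outcomes)
              for name, outcomes in options.items()}
    return min(scores, key=scores.get), scores

best, scores = least_harm({
    "stay in lane": [(2, 0.5)],  # two pedestrians, 50% chance the impact is fatal
    "swerve":       [(1, 0.9)],  # one passenger at high risk in a crash
})
print(best, scores)  # -> swerve {'stay in lane': 1.0, 'swerve': 0.9}
```

Note how the answer flips compared with the deterministic version of the dilemma: with certain outcomes, saving two pedestrians over one passenger is the "more lives" choice, but once probabilities enter, staying in the lane carries the higher expected toll. This is the kind of risk analysis the quote below argues the debate should move toward.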

Edmond Awad, one of the study's authors, cautioned against over-interpreting the results. The debate, in his view, should move toward risk analysis (who is more or less at risk) rather than deciding who dies and who does not.

But the most important result of the study is the discussion it has ignited. As algorithms begin to make ever more important decisions affecting people's lives, an ongoing conversation about the ethics of AI is essential. Designing an "artificial conscience" should include everyone's opinion. And although the answers are not always easy to find, it is better to try to shape a moral framework for algorithms than to let the algorithms build a world without human oversight.

Agree? Tell us in our Telegram chat.
