Self-driving cars raise new moral dilemmas

Published 26 October 2017
When new technology becomes available, it is not yet part of people’s everyday experience. According to Professor Arto Laitinen, this may lead people to demand significantly better and safer driving performance from a self-driving car than from a human driver. Photo: Jonne Renvall

Who has the moral responsibility if a self-driving car hits someone?

In the near future, self-driving cars will bring about one of the most significant changes in traffic. Many major companies are spending huge sums on developing self-driving technology. The hope is that the new technology will streamline traffic and reduce accidents.

In addition to having great potential, self-driving cars also raise concerns. Arto Laitinen, professor of philosophy at the University of Tampere in Finland, has considered the philosophical questions related to self-driving cars. To begin with, Laitinen points out that a self-driving car must be proven very safe before it can be allowed into traffic.

“When we talk about liability, we should rule one thing out immediately: the car itself can never be held responsible if an accident occurs. We also have to ensure that a responsibility gap is not created,” Laitinen says.

A responsibility gap is a state where nobody is responsible for what happens. Laitinen points out that responsibility is already distributed in many ways.

“For example, if the brakes are defective, the responsibility lies with the manufacturer. However, if the driver has neglected the proper maintenance of the car, he or she bears the responsibility. The important thing is that everyone knows the division of responsibility. When that is the case, people will know what they are committing to when they get in a car,” Laitinen explains.

Laitinen thinks it possible that in some situations the passenger of a self-driving car may be held responsible for an accident, if that is what has been agreed in advance. In other words, by getting into the car the passenger accepts that he or she might be held responsible if the self-driving car is involved in an accident.

“Such scenarios typically involve insurance arrangements that compensate third parties for damages. Interestingly, the manufacturers of self-driving vehicles have been very keen to take on responsibility while the new technology is being introduced. It makes economic sense for them,” Laitinen points out.

People are afraid of the new

New technology is often found intimidating. The English author Douglas Adams identified three age-related stages of reacting to technology: in the first stage, all technology that is in the world when a person is born is normal, ordinary and just a natural part of the way the world works. Anything invented between the ages of fifteen and thirty-five is new, exciting and revolutionary. Lastly, anything invented after a person turns thirty-five is dangerous, trifling and against the natural order of things.

Laitinen understands the human tendency to suspect new technologies and mentions lifts as a case in point.

“The same debate was conducted years ago when lifts ceased to have operators. It was thought that people took a calculated risk when they entered a lift with no operator in it, so I completely understand that people are concerned about self-driving cars,” Laitinen explains.

When risks are part of daily life, people get used to them surprisingly easily. Laitinen also points out that the number of traffic accidents is astonishingly low in comparison with the volume of traffic.

“I sometimes marvel at how so many cars drive around and how people walk so close to them completely untroubled. Airplanes, too, fly on autopilot much of the time,” Laitinen says.

Self-driving cars are equipped with artificial intelligence, which develops the car’s actions on the basis of the information it is fed.

To begin with, traffic regulations are programmed into the car’s computer. Then comes the social learning phase, in which the car observes model behaviour and develops its own actions on that basis.

“Not all artificial intelligence is capable of social learning. However, we can speak of artificial intelligence in the case of self-driving cars because they are capable of learning new ways to operate by following what human drivers get up to behind the wheel,” Laitinen explains.
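The two-phase setup described above can be pictured with a small sketch: programmed traffic rules act as a hard filter, and a learned behaviour model scores the remaining options. Everything below is illustrative and invented for this article (the names TrafficRules-style checks, learned_preference and the toy numbers are all assumptions); real systems are vastly more complex.

```python
# Illustrative sketch only: a hypothetical two-phase decision pipeline.
# Phase 1: hand-coded traffic rules reject illegal actions outright.
# Phase 2: a stand-in for behaviour learned by observing human drivers
# scores the legal actions. All names and values here are invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str          # e.g. "brake", "slow_down", "continue"
    speed_kmh: float   # target speed implied by the action

SPEED_LIMIT_KMH = 50.0  # phase 1: a programmed traffic regulation

def obeys_traffic_rules(action: Action) -> bool:
    """Hard constraint: reject any action that breaks a programmed rule."""
    return action.speed_kmh <= SPEED_LIMIT_KMH

def learned_preference(action: Action) -> float:
    """Phase 2 stand-in: a score the car would have learned by observing
    human drivers; here a toy heuristic favouring moderate speeds."""
    return -abs(action.speed_kmh - 40.0)

def choose_action(candidates: list[Action]) -> Action:
    """Pick the best-scoring action among those the rules allow."""
    legal = [a for a in candidates if obeys_traffic_rules(a)]
    return max(legal, key=learned_preference)

if __name__ == "__main__":
    options = [Action("continue", 60.0), Action("slow_down", 45.0),
               Action("brake", 20.0)]
    print(choose_action(options).name)  # -> "slow_down"
```

The point of the split is that the learned component can never override the programmed rules: whatever habits the car picks up from human drivers, illegal actions are filtered out before the learned preference is consulted.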

How does a computer value human life?

Self-driving cars will inevitably end up deciding between two or more evils in traffic. Sometimes their computer program must make a choice between two lives.

“The solution is simple if there is no clear dilemma: the car must save the human life. However, if the car is put into a position where it has to choose, there is a strong argument for saving the life of the innocent bystander,” Laitinen says.

People who have not made a conscious choice to get into the car are innocent bystanders. The lives of all people are equal in such a situation, but their roles have an impact when the moral issues are resolved.

Things get complicated if the life of one person is weighed against the lives of several people. One of the most famous moral guidelines in philosophy is Immanuel Kant’s categorical imperative. According to its second formulation, people should never be treated merely as a means to an end but always as an end in themselves.

“If the choice is between sacrificing one bystander or five, the principle is to minimise the number of casualties. However, if one person is treated merely as a means to save the others, the situation is unacceptable. If one person is pushed under the wheels of a self-driving car in order to save five people, that person is treated as a means to an end. This is why saving five people by sacrificing one is not morally acceptable in such a case,” Laitinen explains.
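The two principles Laitinen combines here can be expressed as a constrained choice: first filter out any option that uses a person merely as a means, then minimise casualties among what remains. The sketch below is purely illustrative; the data model, and the simplification that “used as a means” could be flagged as a plain boolean, are assumptions made for this example.

```python
# Illustrative sketch only: a toy decision rule combining the two principles
# described above -- minimise casualties, but never select an action that
# uses a person merely as a means to save others. The data model is invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    casualties: int
    uses_person_as_means: bool  # e.g. actively sacrificing someone to shield others

def permissible(outcomes: list[Outcome]) -> list[Outcome]:
    """Kantian side-constraint: filter out 'means to an end' actions first."""
    return [o for o in outcomes if not o.uses_person_as_means]

def choose(outcomes: list[Outcome]) -> Outcome:
    """Among permissible actions, minimise the number of casualties."""
    return min(permissible(outcomes), key=lambda o: o.casualties)

if __name__ == "__main__":
    options = [
        Outcome("swerve_into_bystander", casualties=1, uses_person_as_means=True),
        Outcome("brake_hard", casualties=5, uses_person_as_means=False),
        Outcome("steer_to_verge", casualties=2, uses_person_as_means=False),
    ]
    print(choose(options).action)  # -> "steer_to_verge"
```

Note that the rule deliberately does not pick the option with the fewest casualties overall: sacrificing the one bystander would be ruled out before the counting even begins, which is exactly the structure of the Kantian constraint Laitinen describes.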

According to Laitinen, there is good reason to be concerned about the safety of the cars and their problem-solving skills. Self-driving cars will never become completely safe, which means that people must accept reasonable safety standards.

Laitinen mentions bridge-building as an example. Builders must calculate the strength of a bridge and ensure that it does not collapse under traffic. However, if a meteorite hits the bridge, the contractor is not held responsible for the collapse, because a meteorite strike is extremely unlikely and holding the contractor accountable would be unreasonable.

“The same principle applies to self-driving cars. In order to save the lives of bystanders, we can demand that reasonable measures be taken when the cars are programmed,” Laitinen says.

Making risks an everyday occurrence quiets the fears

“Many people die in traffic accidents every year, and we accept that as a part of life. The car-owning lifestyle is nowadays frowned upon more due to carbon dioxide emissions than because of human fatalities,” Laitinen says.

"My guess is that self-driving cars will replace private cars first", Professor Arto Laitinen says. Photo: Jonne Renvall

When new technology becomes available, it is not yet part of people’s everyday experience. According to Laitinen, this may lead people to demand significantly better and safer driving performance from a self-driving car than from a human driver.

Self-driving cars are likely to profoundly change people’s everyday lives. However, the change is not likely to occur instantaneously.

“There is still much to do. My guess is that self-driving cars will replace private cars first. Further down the line, all cars intended for passenger transport might become jointly owned,” Laitinen says.

Text: Jaakko Kinnunen