Is the MIT Moral Machine Experiment the Best Solution to the Ethical Problems Associated with Autonomous Cars?

Dewangee Agrawal, 2016034

With advances in Artificial Intelligence, autonomous cars are increasingly entering global markets. Companies such as Tesla, Uber and Waymo are investing a large share of their resources in developing them at a rapid pace. The aim is to build fully autonomous cars that can operate on the roads without human assistance and are well equipped to make every decision required while driving. Yet, even as development accelerates, adoption of autonomous cars by the general public has grown more slowly than expected. This can be attributed to the several ethical issues associated with these cars and the concerns people have about them (Lenz et al., 2015).

Ethical Issues

When an accident is about to occur in a human-driven car and an ethical decision has to be made, the driver's instinct takes over and an impulsive decision is made on the spot, whatever it may be. Autonomous cars, however, are self-governed and must be pre-programmed to make their own decisions when faced with an intricate situation, such as which route to take and whether to accelerate or brake. An example of such a scenario is a moral dilemma in which following the predicted path would kill pedestrians while swerving left or right would kill the passengers. Cars are therefore required to make ethical decisions based on a set of rules that specifies whose lives should be prioritised in an impending accident. The Moral Machine Experiment tries to answer the question engineers face: how should the cars be programmed to do the right thing?

Moral Machine Experiment

The Moral Machine is an experiment started at MIT (Massachusetts Institute of Technology), where researchers launched an online platform hosting a simple game. The platform has gathered over 40 million decisions from people in over 230 countries and territories. Over the course of the game, users are presented with 13 scenarios involving a malfunctioning car and can choose to let the car follow its predicted route or swerve left or right. Each move injures or kills different kinds of people or animals. The characters include pets, a baby, a pregnant woman, a little girl and boy, men, women, athletes, doctors, a homeless person and so on. In each scenario, users must make a moral choice: save humans or pets, passengers or pedestrians, or choose among people based on their number, age, gender, fitness level, social status or tendency to adhere to the law.

Figure 1: A scenario from the Moral Machine platform.

An example of a scenario presented to users on the platform is shown in Figure 1, where users must choose between sparing a group of pedestrians and sparing a single pedestrian. The moral choices made by users in such situations are collected. Based on the data gathered, the researchers performed an analysis describing people's global preferences, their individual preferences, and collective preferences depending on the geographic location of users (Awad et al., 2018). For the collective preferences, they divided the world into three clusters - Western, Eastern and Southern - and compiled the results for countries falling into each cluster to compare how culture and geographic location affect preferences. The main aim of the experiment was to find the most widely accepted solution to the moral dilemma associated with autonomous cars.

Global Analysis

The analysis drawn from users' global preferences is illustrated in Figure 2. Several significant observations, all imperative to the programming of autonomous cars, can be drawn from it. The most spared characters are a baby, a little boy, a little girl and a pregnant woman, showing that a majority of people favour protecting the lives of children over others. Most people also chose to prioritise the lives of humans over pets (cats and dogs), the young over the elderly and the passengers over the pedestrians. The decision that gathered the highest consensus, however, was saving the lives of many people over the lives of a few (Awad et al., 2018). This follows directly from the ethical principle of utilitarianism.

Utilitarianism and Moral Machines

Utilitarianism belongs to the class of consequentialist theories, so understanding the former requires a clear picture of the latter. Consequentialism is a moral theory that can serve as a basis for resolving the dilemma: it defines the rightness or wrongness of an action by the outcome it produces. For example, if following its predicted path would cause a car to crash whereas swerving into a different lane would prevent the accident, the car should make the latter choice according to consequentialism, because even though switching lanes abruptly breaks the normal rules of the road, it has a better outcome than causing an accident. Utilitarianism inherits this property of defining the morality of an action by its consequences, with the added aim of maximising benefit for the greatest number of people and minimising harm¹. With respect to autonomous cars, an example would be a car forced to choose between two young girls and an elderly woman: it would protect the two girls, because the utilitarian principle requires the car to cause the minimum damage to life.
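To make this decision rule concrete, the following is a minimal illustrative sketch in Python. The Outcome records, the harm estimates and the utilitarian_choice helper are hypothetical constructs introduced here for illustration, not part of any real vehicle's software; they simply encode "pick the manoeuvre with the least expected damage to life".

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Outcome:
    """One possible manoeuvre and the harm it is expected to cause."""
    action: str                 # e.g. "stay on path", "swerve"
    expected_fatalities: float  # estimated lives lost if this manoeuvre is taken
    expected_injuries: float    # estimated non-fatal injuries


def utilitarian_choice(outcomes: List[Outcome]) -> Outcome:
    """Pick the manoeuvre with the lowest expected harm.

    Fatalities are compared first and injuries break ties, mirroring the
    'minimise damage to life' reading of utilitarianism in the text.
    """
    return min(outcomes, key=lambda o: (o.expected_fatalities, o.expected_injuries))


# The dilemma from the text: two young girls on the predicted path, one elderly woman in the other lane.
options = [
    Outcome("stay on path", expected_fatalities=2, expected_injuries=0),
    Outcome("swerve", expected_fatalities=1, expected_injuries=0),
]
print(utilitarian_choice(options).action)  # -> "swerve"
```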

The Moral Machine Experiment brings forth the idea that the utilitarian solution is the most widely accepted answer to the moral dilemma. This can be inferred from the researchers' global analysis, which gives a higher priority to the lives of children than to those of adults. Prioritising the lives of many people over a few, regardless of their demographic, is another utilitarian decision, since it maximises the good.

However, one challenge with the experiment is the paradox it reveals: people want others to follow consequentialist ethics and maximise the lives saved, but prefer to ride in cars that protect passengers over pedestrians. Situations like these are subjective and depend on the lives at risk on both sides, so the utilitarian principle can handle them too, by estimating the damage on each side and forming a hierarchy. If a substantial number of lives can be saved by sacrificing a few passengers, that option is prioritised; if the damage on both sides is the same, the car is programmed to save the passengers over the pedestrians. This is the least messy resolution of the paradox that still respects the majoritarian preference.
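A rough sketch of this tiered rule is given below. It is again hypothetical and only for illustration; in particular, the margin threshold is an assumption, since the text does not quantify what counts as a "substantial" difference in lives saved.

```python
def resolve_passenger_paradox(passenger_risk: int, pedestrian_risk: int, margin: int = 1) -> str:
    """Tiered rule sketched above.

    Sacrifice the passengers only when doing so spares substantially more
    pedestrians (more than `margin` extra lives); when the damage on both
    sides is comparable, default to protecting the passengers.
    """
    if pedestrian_risk - passenger_risk > margin:
        return "protect pedestrians"  # utilitarian branch
    return "protect passengers"       # tie-breaking default


print(resolve_passenger_paradox(passenger_risk=1, pedestrian_risk=5))  # protect pedestrians
print(resolve_passenger_paradox(passenger_risk=1, pedestrian_risk=1))  # protect passengers
```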

Comparative Analysis: Utilitarianism vs Deontology

Utilitarianism might be the theory deemed acceptable by the Moral Machine Experiment analysis, but deontology is a rival theory that could also be used to resolve the issues surrounding ethical decision-making. Deontological principles define the rightness or wrongness of an action according to a fixed set of rules, regardless of the outcomes. For autonomous cars, an example of a deontological decision would be to always save pedestrians or to always save passengers. Asimov's laws of robotics² are based on the deontological principle, but they do not define how to prioritise lives in the case of an accident. Kant's moral system³ is also deontological; through it he tries to establish a moral code that is universal and absolute.
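In code, such a fixed rule ignores the projected body count entirely. The sketch below is hypothetical (the duty parameter and the option records are invented for this essay, not taken from any of the cited systems) and is meant to contrast with the utilitarian helper shown earlier.

```python
def deontological_choice(options: list, duty: str = "spare pedestrians") -> dict:
    """Pick whichever manoeuvre satisfies a pre-committed duty.

    Unlike the utilitarian rule, the fatality count of each option is never
    consulted: only conformity with the fixed duty matters.
    """
    for option in options:
        if duty in option["satisfies"]:
            return option
    # If no option satisfies the duty, the rule gives no guidance;
    # fall back to the default manoeuvre (stay on the predicted path).
    return options[0]


options = [
    {"action": "stay on path", "satisfies": {"spare passengers"}, "fatalities": 5},
    {"action": "swerve", "satisfies": {"spare pedestrians"}, "fatalities": 1},
]
print(deontological_choice(options)["action"])  # -> "swerve", whatever the fatality counts
```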

An understanding of both deontology and utilitarianism, and a comparison between them, can be developed by looking at two relevant thought experiments (Maurer et al., 2015) describing situations the cars might face and must be pre-programmed for.

In the first hypothetical situation, the car is driving a passenger on a narrow road with sharp turns alongside a cliff. If a school bus carrying 30 children suddenly appeared around a corner, neither the car nor the passenger would be able to foresee the situation, which would be even worse if the bus were speeding. With an accident impending, the car is forced to decide between two choices. It could slam the brakes immediately, which would most likely not stop the car in time and would lead to a head-on collision with the bus, risking the life of the passenger, the bus driver and the 30 children. The other option would be for the car to drive off the cliff just in time to save the bus driver and the children, killing the passenger and destroying the car. The utilitarian solution here is to sacrifice the passenger and save the children. This also resolves the paradox raised by the Moral Machine Experiment, as it puts the lives of children over the adult passenger and the number of lives saved is far greater. A deontological solution, such as the one developed by Arroliga (2016)⁴, would instead save the passenger and risk the lives of 30 innocent children, which seems a selfish and appalling choice in this situation.

The second scenario is a classic trolley problem. Suppose the brakes of a car carrying a passenger on a straight road fail just before a signal. Because the light is red, five pedestrians are crossing the road in the car's lane and one pedestrian is crossing in the next lane. The car is again forced to decide between two options. If it stays in its lane, it kills the five pedestrians; if it switches lanes immediately, it kills the one pedestrian and saves the other five. A utilitarian choice would certainly cause the car to swerve and save the five pedestrians, sacrificing one life to minimise the cost in lives. A deontological solution as suggested by Kant would instead let the car follow its predicted path and let the five pedestrians die, because, according to Kantian ethics, killing someone is a greater crime than letting people die. Unfortunately, Kant fails to consider the importance of saving the lives of those who can be saved: he is against pulling the switch that causes someone's death, but, seen from another angle, he is also against pulling the switch that saves as many people as possible. Another argument Kant's followers raise against utilitarianism is that it is unpredictable, yet they refuse to accept that the need to protect people outweighs the need to maintain predictability. If life presents us with unpredictable situations, we need to find a way to deal with them with minimum losses.

Thus, the absolutism of Kantian ethics can be rejected: the situations faced by autonomous cars are highly context-dependent, and a single universal moral code cannot be applied to cases of such different magnitudes. Following any fixed set of deontological rules may lead to an uninformed, ruthless and unethical choice, which defeats the purpose of the theory and may cost humanity more than it protects. Looking at all the outcomes and weighing the good and bad aspects of each leads to rational decisions, as explained by Bentham, the founder of utilitarianism. It may be unpredictable and involve trading off certain lives, but it will always lead to the greater good, which is essential for smooth functioning.

Majoritarianism as a Solution

The Moral Machine Experiment is, undoubtedly, based on the principle of majoritarianism. The researchers compiled the views of millions of people and formed an analysis keeping a majority consensus in mind.

One of the challenges of the experiment is that collective preferences vary from the global preferences. The majoritarian views within the countries of the Western, Eastern and Southern clusters showed several cultural differences from the global majoritarian views. For example, there was a weaker emphasis on sparing the lives of the young over the elderly in the Eastern countries than in the Southern countries, and a weaker emphasis on prioritising the lives of humans over pets in the Southern countries than in the East and the West. Since these are two of the most important rules the cars need to be trained with, this could create an obstacle for the engineers who program the system. The challenge could, however, easily work in favour of autonomous cars. Since the researchers present their observations on these collective differences, the AI system in the car could be trained to accommodate the cultural variations. The AI would then make different decisions in different parts of the world based on what the people living there consider ethical, leading to even wider appreciation and acceptance of autonomous cars across cultures around the globe.
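One conceivable way to encode such regional variation is a simple table of preference weights per cluster, consulted by the decision logic at runtime. The sketch below is purely illustrative: the weights are invented for this essay and are not taken from the Moral Machine data set.

```python
# Hypothetical per-cluster preference weights (0 = no preference, 1 = strong
# preference). The numbers are invented for illustration only.
CLUSTER_WEIGHTS = {
    "Western":  {"spare_young": 0.8, "spare_humans_over_pets": 0.9},
    "Eastern":  {"spare_young": 0.4, "spare_humans_over_pets": 0.9},
    "Southern": {"spare_young": 0.8, "spare_humans_over_pets": 0.6},
}


def apply_preference(cluster: str, attribute: str, threshold: float = 0.5) -> bool:
    """Return True if the local cluster's preference for `attribute` is strong
    enough (above `threshold`) for the car to enforce it."""
    return CLUSTER_WEIGHTS[cluster][attribute] > threshold


# The same car would enforce "spare the young" in the Western cluster but not
# in the Eastern one, reflecting the cultural variation described above.
print(apply_preference("Western", "spare_young"))  # True
print(apply_preference("Eastern", "spare_young"))  # False
```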

Another challenge is that some of the scenarios show no clear majority, such as choosing between males and females, or distinguishing between people by social status or fitness level. For decisions like these the Moral Machine Experiment fails, and other suitable solutions need to be developed by researchers. One possible solution is to build a more sophisticated AI system by showing it many hours of video of people driving. The AI could learn what humans do when faced with such situations and how they deal with the dilemma. During this training it would also be exposed to best practices in driving, which might lead to fewer accidents and, thus, fewer injuries and fatalities.

Since the Moral Machine Experiment promotes the majoritarian principle, it can be argued that just because a majority of people agree on a choice, it is not necessarily the best solution. But in dilemmas as challenging as those autonomous cars may face, it is almost impossible to find a solution accepted by everyone without exception. The best option is to implement the choice accepted by a majority because, ultimately, these are the people who have to ride in the cars. They must trust the functioning of these cars in order to give up control, and they will only trust the car with their lives if they agree with its decision-making mechanism. This would ultimately lead to wider acceptance and adoption of autonomous cars. If some people completely disagree with the opinion of the majority, one option is to let owners define the car's default settings at the time of purchase. They could specify their preferences as a prioritisation of lives according to their own moral beliefs, combining the technical competence of an autonomous car with the moral values of its owner.
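Such owner-configurable defaults could be as simple as a settings object chosen at purchase. The fields below are hypothetical and only illustrate the idea; they are not part of any existing system.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EthicalDefaults:
    """Hypothetical owner-configurable defaults, set once at purchase time."""
    spare_more_lives: bool = True            # the utilitarian default from the global analysis
    prioritise_children: bool = True
    protect_passengers_on_ties: bool = True
    custom_ranking: List[str] = field(default_factory=list)  # e.g. ["children", "pedestrians"]


# An owner who disagrees with the majority overrides the factory defaults:
my_defaults = EthicalDefaults(protect_passengers_on_ties=False,
                              custom_ranking=["pedestrians", "passengers"])
print(my_defaults)
```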

The Better Choice

It is safe to say that, by far, the Moral Machine Experiment is the best initiative taken to improve ethical decision-making in autonomous cars. It promotes the utilitarian principle as a solution for programming the cars, which appears far more suitable than deontological alternatives such as Kantian ethics or Asimov's laws, given the subjectivity of the situations and the enormous loss of life that can be prevented by maximising the good for the greater number. The experiment rests on the majoritarian principle, which has its drawbacks but ultimately increases trust in and acceptance of self-driving cars. The experiment may not be perfect, but it is the closest we have come to a solution, and future work can extend it to create a more sophisticated AI system that is well trained to handle the moral dilemma.

Notes

1. Utilitarianism is a family of consequentialist ethical theories that promotes actions that maximise happiness and well-being for the individuals affected. The basic idea is to maximise utility, often defined in terms of well-being or related concepts. Jeremy Bentham, the founder of utilitarianism, described utility as "that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness...[or] to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered." Utilitarianism is a version of consequentialism, which holds that the consequences of an action are the only standard of right and wrong; unlike other forms of consequentialism, it considers the interests of all humans equally.

2. Science fiction author Isaac Asimov defined the Three Laws of Robotics, which have influenced thinking about the ethics of artificial intelligence. He modified his laws several times and later added a zeroth law; the laws can be stated as follows:

First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law - A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Zeroth Law - A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Asimov's Laws of Robotics have influenced not only other science fiction writers but also the scientists and engineers developing robotics and AI. In the context of the moral dilemma, however, these laws are not sufficient, as they do not answer the following questions (Gulett, 2019):

● Is the safety of the humans inside the car a higher priority than that of the humans outside the car? The Laws of Robotics do not address setting a priority over which humans not to harm.

● What decision would the self-driving car make if faced with an unavoidable choice between running into a crowd of ten people or driving off a cliff, killing only the one passenger? The Laws provide no way of weighing the number of humans not to harm.

3. The Kantian moral system refers to a deontological ethical theory ascribed to the German philosopher Immanuel Kant. The major moral beliefs established by him include:

a. Good Will - He stated that the only intrinsically good thing is a good will. An action can only be good if its maxim, the principle behind it, is duty to the moral law.

i. Kant also distinguished between perfect and imperfect duties. A perfect duty, such as the duty not to lie, always holds, whereas an imperfect duty, such as the duty to give to charity, is flexible and can be applied at a particular time and place.

b. Categorical Imperative - He states that the moral law binds all people, regardless of their interests or desires. He formulated his categorical imperative in several ways:

i. Universalizability - For an action to be permissible, it must be possible to apply it to all people without a contradiction occurring.

ii. Humanity - Humans are required never to treat others merely as a means to an end, but always, additionally, as ends in themselves.

iii. Autonomy - Rational agents are bound to the moral law by their own will.

iv. Kingdom of Ends - People must act as if the principles of their actions establish a law for a hypothetical kingdom.

4. Arroliga proposes a deontological ethical system in his work by creating a rule set inspired by Asimov's Three Laws of Robotics:

a. Minimize the harm to the driver and passengers of the self-driving car.

b. Minimize the harm to any human outside of the self-driving car, as long as it does not conflict with the first rule.

c. Do not destroy the property of others, as long as it does not conflict with the previous two rules.

d. Follow all of the rules of the road that apply to your current location, as long as it does not conflict with the previous three rules.

He defines "minimize" as "to reduce to the smallest possible amount or degree".

References

Arroliga, E. A. (2016, May 31). Deontological Ethical System For Google's Self-Driving Car [California Polytechnic State University]. https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1088&context=cscsp

Awad, E., Dsouza, S., Kim, R. et al. (2018). The Moral Machine experiment. Nature, 563, 59–64. https://doi.org/10.1038/s41586-018-0637-6

Gulett, M. (2019, Jan 10). Will Asimov's Laws of Robotics Be Applied to Self-Driving Cars? Medium. https://medium.com/datadriveninvestor/will-asimovs-laws-of-robotics-be-applied-to-self-driving-cars-ca7b9be0b775

Halladay, T. (2018, Jan 25). Should your autonomous car protect you? And at what cost? (AI + Kantian Ethics). Medium. https://medium.com/@trumanhalladay/should-your-autonomous-car-protect-you-and-at-what-cost-a-i-kantian-ethics-19689b945b47

Karnouskos, S. (2018). Self-Driving Car Acceptance and the Role of Ethics. IEEE Transactions on Engineering Management, PP(99), 1-14. https://doi.org/10.1109/TEM.2018.2877307

Maurer, M., Gerdes, J. C., Lenz, B., & Winner, H. (2015). Autonomous Driving: Technical, Legal and Social Aspects. Springer, 69–82, 621–641. https://doi.org/10.1007/978-3-662-48847-8

Moral Machine. http://moralmachine.mit.edu/

Wikipedia. (2020, March 24). Consequentialism. https://en.wikipedia.org/wiki/Consequentialism

Wikipedia. (2020, March 19). Kantian ethics. https://en.wikipedia.org/wiki/Kantian_ethics

Wikipedia. (2019, September 29). Majoritarianism. https://en.wikipedia.org/wiki/Majoritarianism

Wikipedia. (2020, March 4). Self-driving car. https://en.wikipedia.org/wiki/Self-driving_car

Wikipedia. (2020, February 22). Three Laws of Robotics. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Wikipedia. (2020, April 2). Utilitarianism. https://en.wikipedia.org/wiki/Utilitarianism