
Machine Ethics

Introduction

Can a machine behave ethically? Which ethical theory should it subscribe to? Is it possible to program an ethical system? I will focus on the paradigm case of the self-driving vehicle. I will contend that to behave ethically in the kinds of situations a self-driving vehicle (and, more generally, any assistance-based artificial intelligence) can be expected to operate in is, to a large extent, to behave legally. I take this position because the nature of the law is that it reflects and reinforces what is commonly accepted to be good and right within the community. I will not consider unjust laws, because it will be assumed that the laws of the road are just, or at least uncontroversial enough to lie outside the scope of this essay. If this is the case, the problem becomes one of designing a system that can follow a set of defined rules while maintaining situational awareness at least equal to that of an average human driver; that is to say, a technical problem. I will consider some possible objections to this simple solution to the ethical obstacles, but will leave the technical solution undiscussed.

The Ethical Problem

The crux of the problem seems to be that academics cannot agree on when and how ethical issues arise, or on which type of ethical framework best allows us to overcome them. The further suggestion is that if we are unable to decide which framework we should follow, then we cannot decide which framework an AI should follow (Brundage 2014). The discussion, particularly around self-driving vehicles, seems biased towards scenarios which make one stop and think ‘how can I possibly make this ethical decision?’ These are generally contrived dichotomies in which one must choose between two outcomes, neither of which is pleasant (Etzioni and Etzioni 2017). A standard variant of the trolley problem, which originates with Foot (1978), goes something like this: an out-of-control trolley (a runaway train carriage) is speeding towards five people tied to the track. You don’t have time to untie them, but there is a fat man next to you who is large enough to halt the trolley. You can push him onto the track, killing him, in order to spare the five other people their fate.

To put this in terms more relevant to the focus here: the AI must decide between hitting a group of five people who have just stepped out into the road and swerving into a cliff, killing you, the passenger. So the major issue seems to be: whose life should the AI prioritise in the case of an unavoidable collision?

These problems are flawed. I offer a simple solution that avoids most of the opportunities for the dichotomy to arise, and I make a claim regarding moral responsibility in the exceedingly rare event that such a scenario does eventuate.

A Solution: The Law as Moral Heuristic

The purpose of the law seems to be to guide the actions of members of the community so as to ensure the safety of everyone. While it is not the case that the law makes a good foundation for morality (for evidence of this we need only look at World War II and the atrocities committed by men who were ‘just following orders’), it is the case that a properly functioning system of laws provides a framework within which one can be reasonably confident that one is acting ethically. In this way the law acts as a kind of heuristic which can be quickly consulted to gauge whether an action is likely to be moral. Obviously there are actions that are not illegal but may still be unethical. For a vehicle the most obvious example concerns fuel consumption. A moral agent who wishes to purchase clothing may want to consider the origin of the materials and their impact on deforestation, or the working conditions of the people who manufactured the garment. These kinds of decisions, though, are not relevant to self-driving vehicles or to robots engaged in manufacturing, and they are in any case a matter of personal preference. Etzioni and Etzioni (2017) suggest that if you wish your AI to make these kinds of decisions, you need only train it on the kinds of decisions that you would make yourself; for self-driving vehicles, however, such questions do not arise.

The kinds of questions that are relevant to self-driving vehicles are ones about whether to speed, whether to tailgate, whether to run a red light. All of these questions are answered by simply asking ‘does the desired action contravene any relevant local laws?’ If we design our systems to follow all of the laws related to their tasks, then we have done away with almost all of the ethical decisions they would otherwise need to make.

As for the self-driving trolley problem, we can further reduce the risk of such incidents ever occurring by training our AI, in addition to following the law, to maintain awareness of any and all potential risks, much the way we train human drivers to do. Currently in South Australia, to acquire a provisional driver’s licence, you must complete 75 hours of supervised driving experience, a list of government-mandated driver training tasks with an accredited instructor, and an examination of your ability to perform in a real-life driving scenario. In addition, you must pass a hazard perception test, demonstrating the capacity to identify and avoid potential hazards such as pedestrians stepping onto the road or other vehicles behaving unexpectedly (SAGov 2017).

If a driver who was obeying all of the laws relevant to the road situation they were in, and who had taken reasonable precautions to account for the actions of others, were to kill one or more persons who had not obeyed those laws, neither the courts nor the public would judge that driver immoral. A suitably trained vehicle should be judged the same way.

To summarise my solution, in order for a machine to behave ethically it should:

1) Obey the laws and/or rules pertaining to its environment at all times.

2) Take reasonable precautions to account for the behaviour of other persons in its environment.
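To make these two principles concrete, the sketch below shows one way they could be composed into a single decision procedure. It is only an illustration: the names (CandidateAction, is_legal, choose_action) and the single speed-limit rule are hypothetical stand-ins for the full body of local road law and the risk estimation a real vehicle would require.

```python
# A minimal, hypothetical sketch of the two principles above; the names and the
# single speed-limit rule are illustrative stand-ins, not a real vehicle API.
from dataclasses import dataclass

@dataclass
class CandidateAction:
    description: str
    speed_kmh: float
    estimated_hazard_risk: float  # 0.0 (no foreseeable risk) to 1.0

def is_legal(action: CandidateAction, speed_limit_kmh: float) -> bool:
    # Principle 1: discard any action that contravenes a relevant rule.
    # A real system would consult the full body of local road law here.
    return action.speed_kmh <= speed_limit_kmh

def choose_action(candidates: list[CandidateAction],
                  speed_limit_kmh: float) -> CandidateAction:
    # Principle 2: among lawful actions, prefer the one with the lowest
    # foreseeable risk arising from the behaviour of others.
    lawful = [a for a in candidates if is_legal(a, speed_limit_kmh)]
    if not lawful:
        raise RuntimeError("no lawful action available")
    return min(lawful, key=lambda a: a.estimated_hazard_risk)

# Example: tailgating is lawful on speed alone but carries a higher estimated
# risk, so the procedure selects the more cautious alternative.
actions = [
    CandidateAction("close the gap to the vehicle ahead", 60.0, 0.4),
    CandidateAction("hold a safe following distance", 55.0, 0.1),
    CandidateAction("overtake across an unbroken line", 70.0, 0.2),
]
print(choose_action(actions, speed_limit_kmh=60.0).description)
```

On this picture the ethically significant work is done by the rule set and the risk estimate; filling those in is an engineering problem rather than an ethical one, which is precisely the point of the solution.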

There remain, however, a few obstacles. What if the choice in the trolley problem is not between the passenger and a group of pedestrians or other drivers, but between two or more groups of pedestrians or drivers? Should the AI prioritise the lives of its passengers? There are also some inevitable objections to the solution, which I address below.

Residual Obstacles

To answer the two questions I have just raised I will appeal to human instinct. Given a scenario in which a pedestrian who was hidden behind some object steps out immediately in front of a driver, leaving that driver no time to make an ethical consideration, collision avoidance will be the instinctive action taken. If the surroundings are such that the only options are to hit the pedestrian or to hit a wall at enough speed that the driver is likely to die, the instinct will instead be to collide with the pedestrian. That is to say, the instinct of the driver will be self-preservation. In the case of two or more groups of pedestrians, this same instinct will act to reduce the potential risk to the driver: taking into account the features of the environment, the driver will collide with whichever group presents the least risk to themselves. McGhee et al. (1999) showed that the majority of participants in their collision avoidance study tended to steer so as to be further from the point of expected impact.
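For illustration only, the following sketch shows how the self-preservation instinct described above, together with the steering tendency reported by McGhee et al. (1999), might be encoded as a simple tie-breaking rule rather than an explicit ethical calculation. The manoeuvre names and risk figures are invented.

```python
# Hypothetical illustration (names and numbers invented): among unavoidable
# emergency manoeuvres, pick the one with the lowest estimated risk to the
# occupant, breaking ties by steering further from the expected impact point.
manoeuvres = [
    # (description, estimated occupant risk, clearance from expected impact in metres)
    ("swerve into wall", 0.9, 4.0),
    ("brake hard in lane", 0.3, 1.0),
    ("brake and steer toward open shoulder", 0.3, 3.5),
]

# Lowest occupant risk first; greater clearance from the impact point wins ties.
best = min(manoeuvres, key=lambda m: (m[1], -m[2]))
print(best[0])  # -> "brake and steer toward open shoulder"
```

The high-risk swerve is never selected, and among equally risky options the vehicle steers further from the expected point of impact.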

Given the computational limitations of AI, it would be unreasonable to expect more of a computer than of a human. While we may think it commendable for a human driver to sacrifice themselves to save others, we can hardly consider it immoral for a driver to avoid harm to themselves in a scenario where they are not at fault.

Other Objections

According to Friedman and Kahn (1992), reliance on machines to make ethical decisions that we would normally make will lead to a situation in which we no longer make our own moral considerations, instead relying on machines to behave morally in our place. While this is more directly relevant to machines designed to assist us in moral decisions, I think it is important to note here that even where vehicles are not self-driving but merely assist with things like collision avoidance, that very assistance would be a marked improvement over our own performance. For truly self-driving vehicles the vehicle would indeed replace our need to make various decisions, but that replacement need not be thought of as detrimental. As long as the way the vehicles operate results in fewer incidents, fatal or otherwise, I would argue that it would instead be immoral to require that humans continue making these decisions.

Intentions seem to hold some relevance in moral decision making, and they certainly hold a lot of relevance in legal decision making. What are the intentions of a program? If I collide with a pedestrian on purpose, then I have intended harm and will be judged, morally and legally, accordingly. If I collide with a pedestrian by accident or because I had no choice, I will be judged to the extent that I broke laws or did not take appropriate care to avoid the potential risk. A program, the purpose of which is to safely transport passengers, could not intend harm, so the only grounds on which it could be judged are the extent to which it follows laws and takes precautions to avoid unintentional harms.

The solution that I have described is one of “bounded morality” (Wallach and Allen 2009, p. 77). Wallach and Allen suggest that such approaches suffer from issues of “…incompatible courses of action, or failing to recommend any course of action”, and that some ethical principles are “computationally intractable”. If these objections hold any force, it can only be as a result of the particular principles being encoded. When applied to a relatively small task domain, as is being discussed here, there are far fewer opportunities for conflicts to arise. When the boundaries being set are the laws relating to the task domain, there can be no decision to be made that lies outside of that domain. And it is in the nature of a properly functioning body of road law that it is not self-contradictory and that it provides the relevant guidance for decision making.

 

Conclusion

The obstacles to self-driving vehicle behaviour thus divide into two broad categories, the ethical and the technical. I have shown that the ethical obstacles can be overcome by appealing to the framework provided by the law, together with an additional guiding principle that keeps behaviour within that framework and reduces potential risk. These principles apply broadly to most scenarios in which we might like AI to behave ethically, and they overcome the vast majority of the non-technical obstacles to machine ethics.

 

References

Brundage, M. (2014), “Limitations and Risks of Machine Ethics”, Journal of Experimental & Theoretical Artificial Intelligence 26(3), pp. 355–372.

Etzioni, A. and Etzioni, O. (2017, forthcoming), “Incorporating Ethics Into Artificial Intelligence”, The Journal of Ethics, pp. 1–16.

Foot, P. (1978), “The Problem of Abortion and the Doctrine of Double Effect”, in Virtues and Vices and Other Essays, Berkeley, CA: University of California Press.

Friedman, B. and Kahn, P. (1992), “Human Agency and Responsible Computing: Implications for Computer System Design”, Journal of Systems and Software 17.

McGhee, D., Mazzae, E. and Baldwin, G. (1999), “Examination of Drivers’ Collision Avoidance Behavior Using Conventional and Antilock Brake Systems on the Iowa Driving Simulator”, Iowa Research Online.

SAGov (2017), “Steps to Getting Your Driver’s Licence”, https://www.sa.gov.au/topics/driving-and-transport/drivers-and-licences/new-drivers

Wallach, W. and Allen, C. (2009), Moral Machines: Teaching Robots Right From Wrong, Oxford: Oxford University Press.
