Introducing Ethical Thinking About Autonomous Vehicles Into an AI Course

A computer science faculty member and a philosophy faculty member collaborated on the development of a one-week introduction to ethics that was integrated into a traditional AI course. The goals were to: (1) encourage students to think about the moral complexities involved in developing accident algorithms for autonomous vehicles, (2) identify what issues need to be addressed in order to develop a satisfactory solution to the moral questions surrounding these algorithms, and (3) offer students an example of how computer scientists and ethicists must work together to solve complex technical and moral problems. The course module introduced Utilitarianism and engaged students in considering the classic “Trolley Problem,” which has gained contemporary relevance with the emergence of autonomous vehicles.

Students used this introduction to ethics in thinking through the implications of their final projects. Results from the module indicate that students gained some fluency with Utilitarianism, including a strong understanding of the Trolley Problem. This short paper argues for the need to provide students with instruction in ethics in AI courses. Given the strong alignment between AI’s decision-theoretic approaches and Utilitarianism, we highlight the difficulty of encouraging AI students to challenge these assumptions.

The Need for Ethical Reasoning in AI

The last decade has seen growing recognition of the individual and societal impact of human-designed AI and machine learning systems. However unwittingly, these algorithms seem to encode biases that many recognize as unfair.

There are many examples. Face recognition systems fail to work properly across all racial groups (Garcia 2016). Predictive algorithms determine individuals’ credit ratings, a hugely consequential judgment, and are typically not open to inspection (Citron and Pasquale 2014). An analysis of Google’s personalized “Ad Settings” system revealed that “setting the gender to female resulted in getting fewer instances of an ad related to high paying jobs” than setting it to male (Datta, Tschantz, and Datta 2015). Abuses abound. The news organization ProPublica discovered pernicious racial biases in widely used, proprietary software that predicts a criminal offender’s likelihood of recidivism, thus guiding judges in assigning the individual’s prison sentence (Angwin et al. 2016).

These are fundamentally hard problems. A related research study assessed several fairness criteria that are used to evaluate these “recidivism prediction instruments.” The study determined that “the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups” (Chouldechova 2017).
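To make the quoted result concrete, the short Python sketch below (ours, not the study’s; the prevalence and error-rate values are purely illustrative) rearranges the definition of positive predictive value to show that a score which is equally well calibrated for two groups, and which has equal false negative rates, must assign them different false positive rates whenever their recidivism prevalence differs.

    # A minimal numerical sketch (ours, not Chouldechova's code) of why the
    # quoted incompatibility holds: if a recidivism score has the same positive
    # predictive value (calibration) and the same false negative rate for two
    # groups, the false positive rate each group experiences is fixed by its
    # recidivism prevalence, so it cannot also be equal across groups.

    def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
        """False positive rate forced by a given prevalence, PPV, and FNR.

        Rearranging the definition of PPV gives
        FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR).
        """
        return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

    # Identical classifier behavior (PPV = 0.7, FNR = 0.3) applied to two
    # groups whose underlying prevalence differs (numbers are illustrative).
    for group, prevalence in [("group A", 0.5), ("group B", 0.3)]:
        print(group, round(implied_fpr(prevalence, ppv=0.7, fnr=0.3), 3))

    # Prints 0.3 for group A and roughly 0.129 for group B: equal PPV and FNR
    # force unequal false positive rates once prevalence differs.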

In the field of autonomous robotics, self-driving cars have captured our attention. They have transformed from science fiction to reality in what seems the blink of an eye. Consumer systems are now in use; e.g., Tesla’s Autopilot is described as having “full self-driving capability” (Tesla 2017). Tesla further asserts that Autopilot drives its cars “at a safety level substantially greater than that of a human driver” (ibid.). Research studies may well demonstrate this claim to be true, but legal and ethical questions remain. What happens when autonomous cars make mistakes? If an autonomous vehicle gets into an accident, who is at fault: the driver or the manufacturer? Further, to what lengths should an autonomous car go to protect its passengers? This question is an intriguing twist on what is known as the “Trolley Problem,” introduced shortly, which forms the basis for our in-class ethics activities with our students.¹

¹ The philosopher Philippa Foot introduced the Trolley Problem in “The Problem of Abortion and the Doctrine of the Double Effect,” which originally appeared in the Oxford Review, No. 5, 1967, and was reprinted in Virtues and Vices and Other Essays in Moral Philosophy, 1978.

Goldsmith and Burton (2017) elaborate on why it is important to teach our students to reason about the ethical implications of the AI systems we create. As they summarize, our students must be prepared so that “they make ethical design and implementation choices, ethical career decisions, and that their software will be programmed to take into account the complexities of acting ethically in the world.”

Teaching Ethics in AI

In a 2016 survey, 40% of faculty reported teaching “ethics and social issues” in their undergraduate AI courses (Wollowski et al.). Compared with other topics in the survey, this represents a strong commitment to the material. However, research on approaches for engaging students in understanding the