A free-riding social dilemma.
In all cases, emphasis added:
Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.
In summary, people think that in theory it’s great to have self-driving cars that would sacrifice the driver in order to save a greater number of strangers in a potential accident situation. But they want other folks to have such cars; for themselves, they want a self-driving car that would safeguard their own personal safety (and that of other passengers, particularly family members) “at all costs.”
People, being what they are, are also dishonest about this:
Participants’ approval of passenger sacrifice was even robust to treatments in which they had to imagine themselves and another person, particularly a family member, in the AV (study three, n = 259 participants). Imagining that a family member was in the AV negatively affected the morality of the sacrifice, as compared with imagining oneself alone in the AV (P = 0.003). But even in that strongly aversive situation, the morality of the sacrifice was still rated above the midpoint of the scale, with a 95% CI of 54 to 66.
In theory they say they can imagine themselves sacrificing their life to save a greater number of others, and although this willingness is decreased if family members were also to be sacrificed, the “morality of sacrifice” was still there. (Note: would this be the same for all ethnies? Der Movement would assert that those of “high-trust hunter gatherer” ancestry would likely be more willing to self-sacrifice. Likely, in general, Gentiles of European descent would be more likely to theoretically endorse such sacrifice than other races.) But – alas! – there is a catch. Despite this moral posturing, these same people would be unwilling to actually buy a self-driving car programmed to sacrifice passengers for a greater number of, e.g., pedestrians. Thus:
This is the classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome. One typical solution in this case is for regulators to enforce the behavior leading to the best global outcome. Indeed, there are many similar societal examples involving trade-off of harm by people and governments (15–17). For example, some citizens object to regulations that require children to be immunized before starting school. In this case, the parental decision-makers choose to minimize the perceived risk of harm to their child while increasing the risk to others. Likewise, recognition of the threats of environmental degradation have prompted government regulations aimed at curtailing harmful behaviors for the greater good. But would people approve of government regulations imposing utilitarian algorithms in AVs, and would they be more likely to buy AVs under such regulations?
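The “classic signature of a social dilemma” described above can be made concrete with a toy payoff model. The numbers below are entirely hypothetical (my own illustration, not figures from the study): each person’s expected cost is their risk as a passenger plus their risk as a bystander, a utilitarian AV slightly raises its own passenger’s risk, and bystander risk falls as more of the AVs on the road are utilitarian.

```python
# Toy model of the AV social dilemma. All numbers are hypothetical
# placeholders chosen only to exhibit the free-rider structure.

PASSENGER_RISK = {"self_protective": 1.0, "utilitarian": 1.5}

def bystander_risk(fraction_utilitarian):
    """Everyone's risk as a bystander falls as more AVs are utilitarian."""
    return 3.0 - 2.0 * fraction_utilitarian

def individual_cost(my_choice, fraction_utilitarian_others):
    """My total expected cost: my passenger risk + my bystander risk."""
    return PASSENGER_RISK[my_choice] + bystander_risk(fraction_utilitarian_others)

# Whatever everyone else does, the self-protective AV is cheaper for me...
for f in (0.0, 0.5, 1.0):
    assert individual_cost("self_protective", f) < individual_cost("utilitarian", f)

# ...yet everyone choosing utilitarian beats everyone choosing self-protective:
all_selfish = individual_cost("self_protective", 0.0)   # 1.0 + 3.0 = 4.0
all_utilitarian = individual_cost("utilitarian", 1.0)   # 1.5 + 1.0 = 2.5
assert all_utilitarian < all_selfish
```

The self-protective choice strictly dominates for each individual, yet the all-utilitarian outcome is better for everyone — the same payoff structure as a prisoner’s dilemma, which is why the authors reach for regulation as the “typical solution.”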
Free-riding! Not only for ethnic nepotism, it seems! Could it be regulated? However:
Our findings suggest that regulation for AVs may be necessary but also counterproductive. Moral algorithms for AVs create a social dilemma (18, 19). Although people tend to agree that everyone would be better off if AVs were utilitarian (in the sense of minimizing the number of casualties on the road), these same people have a personal incentive to ride in AVs that will protect them at all costs. Accordingly, if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so. Regulation may provide a solution to this problem, but regulators will be faced with two difficulties: First, most people seem to disapprove of a regulation that would enforce utilitarian AVs. Second—and a more serious problem—our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether. Thus, car-makers and regulators alike should be considering solutions to these obstacles.
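The authors’ second worry — that lives saved by utilitarian algorithms “may be outnumbered by the deaths caused by delaying the adoption of AVs altogether” — is at bottom a piece of arithmetic. A back-of-the-envelope sketch, with every parameter a made-up placeholder rather than an estimate from the paper:

```python
# Hypothetical back-of-the-envelope model of the regulation trade-off.
# None of these numbers come from the study; they only show the shape
# of the argument.

HUMAN_DRIVER_DEATHS_PER_YEAR = 1000   # baseline road deaths (hypothetical)
AV_RISK_REDUCTION = 0.9               # AVs cut deaths by 90% once adopted
UTILITARIAN_EXTRA_SAVINGS = 20        # extra lives/year saved by utilitarian
                                      # vs. self-protective algorithms
REGULATION_DELAY_YEARS = 3            # adoption delay caused by the mandate
HORIZON_YEARS = 10                    # years of utilitarian operation counted

# Cost of the mandate: years of forgone AV safety gains during the delay.
lives_saved_per_year_by_avs = HUMAN_DRIVER_DEATHS_PER_YEAR * AV_RISK_REDUCTION
delay_cost = REGULATION_DELAY_YEARS * lives_saved_per_year_by_avs      # 2700.0

# Benefit of the mandate: the extra lives utilitarian algorithms save.
utilitarian_benefit = HORIZON_YEARS * UTILITARIAN_EXTRA_SAVINGS        # 200

# The paradox: under these (made-up) parameters the mandate costs more
# lives than it saves.
assert delay_cost > utilitarian_benefit
```

Under these assumptions, three years of delayed adoption forfeits 2,700 lives to gain 200 — the “counterproductive regulation” result. Of course, flip the parameters (a short delay, a large utilitarian gain) and the inequality reverses; the point is only that the trade-off is quantitative, not a matter of principle.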
This is a model for self-sacrifice (in theory) vs. self-preservation (in reality), as well as greater concerns when relatives are involved (familial genetic interests), and the free-riding/tragedy of the commons problem. All food for thought.
And here’s a final question: would people be less willing to “self-sacrifice” in a self-driving car (in theory, only in theory!) if those strangers to be saved were of a different ethny?