Seven Recommendations for Designing Driverless Cars Ethically
“When you come to a fork in the road, take it.”
— Baseball Guru Yogi Berra
It’s not a matter of if, but when, driverless cars will become the primary means of roadway transit. Certainly, accidents have occurred, but that’s no more an insurmountable barrier to their development than it was for the first cars. Eventually, the technology will get so good that crashes are few and far between—perhaps even rarer than they would be with human operators. Just this past October Elon Musk announced Tesla’s newest robotaxi, dubbed “Cybercab.” Currently, at least twenty-one states legally allow driverless cars and no state fully bans the practice. But any new technology brings both good and bad. The internet allowed many people to continue working remotely during Covid lockdowns, enabled telehealth, and made information formerly found only in print media more widely available, but it also brought hacking and other cybercrimes. So too the coming of driverless cars will produce good things but also new challenges, such as determining how to design them ethically.
In this article, I will outline seven major areas of concern for the ethical development of fully autonomous motor vehicles. My focus will primarily be on the design and programming of such cars. Let us examine each question in turn:
1.) How Should Driverless Cars Be Programmed When Encountering No-Win Situations?
Imagine the following scenario: a driverless car enters a tunnel where the posted speed limit is 50 mph. There are access doorways every 200 feet on the sidewalls of the tunnel. Suddenly, as the car approaches within 30 feet of one of the doors, a person bursts out onto the roadway screaming, and behind him are four more persons wearing construction hats, running away from something. The car cannot brake in time. If the car continues forward, however hard it brakes, four persons will die. If the car veers to the right into the tunnel wall, the driver may die. If the car veers to the left, it will collide with oncoming traffic, likely resulting in a multicar pileup. What should be done? What should the car be programmed to do in such a scenario?
These are tough questions, and likely not everyone will be satisfied by my answers. But answer them we must. There’s no such thing as a driverless car without some design-decision input. I propose, then, three rules for such emergency swerving situations (a simple sketch of this priority ordering follows the list):
- Driverless cars should be programmed to hit property over persons—this means that, generally speaking, it’s better to hit sidewalks, fences, other cars, or even buildings than to hurt persons.
- Driverless cars should prioritize the lives of children over those of adults—meaning if the car can reasonably detect that it will hit either a child or an adult, the car generally should swerve to avoid hitting the child.
- In situations where only adult persons will likely be hit (no matter the course of action—whether braking without turning, swerving to the right, or swerving to the left), the car should be programmed to avoid hitting the greater number of persons—meaning that if the car must hit either three people or two, it should avoid hitting the three.
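To make the ordering concrete, here is a minimal sketch in Python of how such a rule hierarchy might be encoded. The names (Outcome, choose_maneuver) and the numbers in the example are my own illustrative assumptions, not a real planning system; the only point is the priority ordering of the three rules.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """Hypothetical prediction for one possible maneuver."""
    maneuver: str       # e.g. "brake_only", "swerve_right", "swerve_left"
    hits_person: bool   # would any person be struck at all?
    children_hit: int   # predicted number of children struck
    adults_hit: int     # predicted number of adults struck

def choose_maneuver(options: List[Outcome]) -> Outcome:
    """Pick a maneuver by the three rules, applied in order:
    1. Prefer hitting property over hitting persons.
    2. Prefer sparing children over sparing adults.
    3. Among what remains, hit the fewest persons."""
    return min(
        options,
        key=lambda o: (
            o.hits_person,                  # rule 1: property-only outcomes sort first
            o.children_hit,                 # rule 2: fewer children struck is better
            o.children_hit + o.adults_hit,  # rule 3: fewer persons struck overall
        ),
    )

# Illustrative example: swerving left damages only a fence, so rule 1 selects it.
options = [
    Outcome("brake_only",   hits_person=True,  children_hit=0, adults_hit=3),
    Outcome("swerve_right", hits_person=True,  children_hit=0, adults_hit=2),
    Outcome("swerve_left",  hits_person=False, children_hit=0, adults_hit=0),
]
print(choose_maneuver(options).maneuver)  # -> "swerve_left"
```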
There’s an old principle in ethics, called double effect, that applies to many of these scenarios. We ought never to intend evil, but what if an action will inevitably produce two side effects, one evil and one good, the evil one not being intended? Take the example of either continuing straight and killing four persons or swerving to the left and likely killing only two. Say there is no other option, as the car cannot brake in time. Can we undertake the act in question? Yes, provided four conditions are met: (i) the action itself, in its object, must be morally good or at least indifferent, (ii) the evil effect must not be intended, (iii) the evil effect must not be a means to attaining the good effect, and (iv) the good effect being brought about must outweigh the badness of the evil effect.
Regarding condition (i)—the act of swerving the car or pumping the brakes is not in itself morally bad. So, condition (i) is easily met. Condition (ii)—that the evil effect not be intended—can also easily be met. The intention is to save as many lives as possible. The way to do this is to swerve away from the four people in front of the car while hitting the brakes. That the car, after swerving, will hit and kill persons need not be intended, nor should it be. Regarding condition (iii)—that the evil effect not be a means to attaining the good effect—the death of the two persons is not a means to saving the lives of the four; the braking and the swerving are. So, condition (iii) is met. Condition (iv)—that the good effect outweigh the bad effect—is also satisfied, as the saving of four lives outweighs the loss of two.
The application of the principle of double effect can be complicated at times, but its basic conditions need not be. For more on it, I recommend Steven Jensen’s book Agent Relative Ethics, where he discusses double effect.
2.) Should the Consumer Have Any Input?
What about a situation in which a driverless car is near a cliff face, and either the car hits a person on the road in front of it, or it swerves and falls off the cliff, killing the passenger in the process? Perhaps, during vehicle setup and registration, a screen should pop up asking the owner whether he would prefer the car to prioritize other lives over his own. At the very least, the owner would then have given informed consent regarding any foreseeable situation in which the AI’s action in an emergency might cause his death.
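As a rough illustration of what such a setup screen might capture, here is a minimal sketch in Python. The OwnerConsentRecord fields and the run_setup_prompt function are hypothetical, not any manufacturer's actual interface; the point is only that the owner's answer, and the fact that he gave it knowingly, gets recorded.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OwnerConsentRecord:
    """Hypothetical record captured once during vehicle setup/registration."""
    owner_name: str
    prioritize_others_over_passenger: bool  # the owner's answer to the setup question
    consent_timestamp: str                  # when informed consent was given

def run_setup_prompt(owner_name: str) -> OwnerConsentRecord:
    """Ask the owner, at setup, whether the car may sacrifice the passenger
    to spare others in an unavoidable emergency, and record the answer."""
    answer = input(
        "In an unavoidable emergency, may the vehicle prioritize the lives of "
        "others over your own? (yes/no): "
    ).strip().lower()
    return OwnerConsentRecord(
        owner_name=owner_name,
        prioritize_others_over_passenger=(answer == "yes"),
        consent_timestamp=datetime.now(timezone.utc).isoformat(),
    )
```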
3.) Should Driverless Cars Be Connected to the Internet?
Connecting driverless cars to the internet may give them additional advantages in foreseeing upcoming road construction, traffic congestion and patterns, and areas typically more prone to deer or kangaroos jumping across the road. But connectivity also makes them more vulnerable to hacking and poses serious dangers to a country’s citizens in times of war. Imagine the USA went to war with China and the next morning every autonomous car took off and drove off the nearest cliff—or worse, into the nearest military base, hospital, or school. If possible, driverless cars should be designed with a fully autonomous (and offline) option for cases of natural disaster or war.
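A minimal sketch of the fallback behavior I have in mind, using hypothetical names like DrivingMode and select_mode: loss of connectivity, or an order to disconnect during a disaster or war, should degrade the car to a self-contained offline mode rather than strand or weaponize it, and a failing autonomy stack should hand control to the driver.

```python
from enum import Enum

class DrivingMode(Enum):
    CONNECTED_AUTONOMOUS = "connected"  # uses live traffic, construction, wildlife data
    OFFLINE_AUTONOMOUS = "offline"      # relies only on onboard sensors and maps
    DRIVER_ONLY = "driver_only"         # human takes the wheel

def select_mode(network_available: bool, emergency_declared: bool,
                autonomy_healthy: bool) -> DrivingMode:
    """Fall back gracefully rather than fail outright."""
    if not autonomy_healthy:
        return DrivingMode.DRIVER_ONLY
    if emergency_declared or not network_available:
        return DrivingMode.OFFLINE_AUTONOMOUS
    return DrivingMode.CONNECTED_AUTONOMOUS
```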
4.) Who Should Be Responsible in Case of an Accident?
If the car is fully autonomous, that is, fully driverless, who should be responsible in the event of an accident? Generally, the company that designed it. This means the automakers and designers should be the ones taking out the insurance policy covering the car while it drives (not the owner, except perhaps for comprehensive coverage).
But what about situations in which important road or safety updates are needed for the car to drive safely? This raises once again the question of whether such vehicles should be connected to the internet. Say they are. In that case, if the driver fails to install significant safety updates despite a reasonable opportunity to do so, then he should bear limited responsibility for any accidents that follow. The exact percentage is debatable, but 10-20% strikes me as reasonable. Everything else should be the responsibility of the automaker, designer, or dealer.
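To make the suggested split concrete, here is a toy calculation in Python. The 15% figure is simply one point inside the 10-20% range mentioned above, and the function name is my own; nothing here is settled policy.

```python
def liability_split(damages: float, owner_skipped_safety_update: bool,
                    owner_share_if_negligent: float = 0.15) -> dict:
    """Toy illustration: the automaker/designer bears liability by default;
    a driver who had a reasonable chance to install a critical safety update
    and did not bears a limited share (here 15%, within the 10-20% range)."""
    owner_share = owner_share_if_negligent if owner_skipped_safety_update else 0.0
    return {
        "owner_pays": round(damages * owner_share, 2),
        "automaker_pays": round(damages * (1 - owner_share), 2),
    }

# Example: $40,000 in damages where the driver ignored a critical update.
print(liability_split(40_000, owner_skipped_safety_update=True))
# -> {'owner_pays': 6000.0, 'automaker_pays': 34000.0}
```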
5.) Should There Even Be Steering Wheels?
Having a backup is always good. We don’t generally need fire extinguishers, but it’s good to have one on hand just in case. We hope never to be subject to a home-invasion robbery, but it’s not necessarily bad to have a shotgun handy for such emergencies. Likewise, we would hope all driverless cars would work perfectly at all times, but inevitably there will be failures, perhaps even total system malfunction at startup. For such cases, a driver mode should be available and mandated in all such vehicles (at least for the next 60 years). The driver ought to be able to take over and command the wheel whenever necessary, or even merely desired.
6.) Should There Be Age Limits?
Some may argue that such vehicles should allow anyone to sit in the driver’s seat, since the vehicle is fully autonomous. I hold that that would be a mistake. Given that such vehicles may need to be commandeered in an emergency, an adult with a driver’s license should be required to sit behind the wheel at all times.
7.) Should There Be Expiration Dates on Fully Autonomous Mode?
Inevitably, software becomes outdated and materials break down. As a driverless car ages, its physical components, such as onboard hardware, brakes, shocks, axles, engine, and chassis, become more likely to deteriorate. I propose that all driverless cars be designed to revert to in-person, driver-only mode after 15 years of use. Perhaps, in some cases, the law could mandate driver-only mode after 15 years. But in the beginning, more caution is better. So perhaps more frequent vehicle inspections should be required for driverless cars, or driver-only mode should be mandated after as few as 10 years.
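A minimal sketch in Python of how such an expiration policy might be expressed; the function name is hypothetical, and the thresholds simply mirror the 10- and 15-year figures above.

```python
from datetime import date

def autonomous_mode_allowed(manufacture_date: date, today: date,
                            cutoff_years: int = 15) -> bool:
    """Return False once the vehicle is older than the cutoff, after which it
    should revert to driver-only mode (15 years proposed; a stricter early
    policy might use 10 years, alongside more frequent inspections)."""
    age_years = (today - manufacture_date).days / 365.25
    return age_years < cutoff_years

# Example: a car built in 2012, checked in 2025, fails a stricter 10-year cutoff.
print(autonomous_mode_allowed(date(2012, 6, 1), date(2025, 6, 1), cutoff_years=10))  # -> False
```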
Whatever the outcome, driverless cars are coming. Engineers, programmers, and designers of fully autonomous vehicles should take heed and do their best to design them ethically.
John Skalko, PhD, is a philosopher living in Massachusetts, who has written on the ethics of ventilators, artificial nutrition & hydration, assisted suicide, and lying (among other topics). He has presented at conferences in Missouri, Michigan, Kansas, Louisiana, California, New York, and Massachusetts, as well as in Poland and Spain.