Trolley AI

I. What will be the vehicle’s code of ethics when confronted with a modern variation of “the trolley problem”?

            The trolley problem is a classic ethical dilemma with many variations. The basic question is this:

            You are standing by a track switch when you see a trolley headed toward five workers on the track. If you throw the switch, the trolley will careen into the lone bystander on the other track. What do you do?

            According to Himmelreich, the trolley problem is “the systematic search for a principled answer” that explains the difference between the two choices in the question above. Variations of the question explore how people answer under different conditions, such as knowing the social worth of the individuals involved or taking an active rather than a passive stance (Himmelreich, 2018).

            The difference between being an active entity and a passive entity may be central to the ethics of autonomous vehicles. An active autonomous vehicle would choose to divert into the single bystander in the trolley question above, which may be interpreted as utilitarian: the aim of the active choice is to minimize total harm. A passive autonomous vehicle would decline to intervene, which aligns more closely with a deontological view that treats doing harm as worse than allowing it.

            The trolley problem is a popular thought experiment, but is it applicable to day-to-day driving? Himmelreich presents several arguments against the utility of trolley problems for autonomous vehicles.

            Autonomous vehicles are designed around dense sensor suites and tightly integrated data, and they are expected to prove themselves safer than human drivers by an order of magnitude. This brings about the first of Himmelreich’s arguments: that the trolley problem is oversimplified, and that such a situation is not a “technical possibility.” Autonomous vehicles should not face trolley-like situations because their technology is designed to keep those situations from arising. The combination of high-powered computers and sensors means that most, if not all, potential collisions can be anticipated in advance, especially collisions that would seriously endanger occupants and bystanders. The risk of collision falls dramatically once human drivers are removed from the roads and only AVs are driving.
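
            To make the idea of anticipating collisions concrete, the minimal sketch below shows the kind of time-to-collision check an AV planner might run continuously. The constant-velocity model and the names (Track, time_to_collision) are illustrative assumptions, not drawn from any real AV stack.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    """A sensed object: position (m) and velocity (m/s) along one axis."""
    position: float
    velocity: float

def time_to_collision(ego: Track, other: Track) -> Optional[float]:
    """Seconds until the gap closes under constant velocities, or None.

    Constant-velocity model: gap(t) = gap_now - closing_rate * t.
    """
    gap = other.position - ego.position
    closing_rate = ego.velocity - other.velocity
    if closing_rate <= 0:          # gap is not shrinking; no predicted collision
        return None
    return gap / closing_rate

# A planner can flag any track whose TTC drops below a safety threshold
# long before a human driver could even perceive the hazard.
ego = Track(position=0.0, velocity=25.0)        # ~90 km/h
pedestrian = Track(position=60.0, velocity=0.0)
ttc = time_to_collision(ego, pedestrian)
print(f"time to collision: {ttc:.1f} s")        # 2.4 s of margin to brake or steer
```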

            The trolley problem relies on putting the ethicist in a situation where the accident is unavoidable. In reality, that situation should not occur if the AV has enough information to operate. The other issue with the trolley problem is that the ethicist (in this case the AV) must be able to allocate harm.

            Allocating harm is difficult, both ethically and technically. To allocate harm ethically, the entity would need some metric by which to compare all of the potential victims, and that metric shifts with the entity’s personal feelings, cultural ideals, and so on. Candidate metrics include education, age, sex, family status, or criminal history; in essence, the metric places a value on each victim’s utility. Even assuming an accurate value could be assigned from the available information, the challenge is gathering that information, planning, and acting on the decision within the fraction of a second before the collision.
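
            A back-of-the-envelope sketch of this timing objection follows. Every latency figure is an assumption chosen only to illustrate the argument, not a measurement of any real system, and the pipeline stages are hypothetical.

```python
def decision_budget(speed_mps: float, distance_m: float) -> float:
    """Seconds available before impact at constant speed."""
    return distance_m / speed_mps

# Illustrative (assumed) latencies for a hypothetical harm-allocation pipeline:
SENSE_S    = 0.10   # fuse camera/lidar/radar into tracked objects
IDENTIFY_S = 0.50   # infer the attributes a "social worth" metric would need
PLAN_S     = 0.10   # evaluate candidate maneuvers
ACTUATE_S  = 0.30   # brakes and steering physically respond

budget = decision_budget(speed_mps=20.0, distance_m=15.0)   # 0.75 s to impact
needed = SENSE_S + IDENTIFY_S + PLAN_S + ACTUATE_S          # 1.00 s required
print(f"budget {budget:.2f} s vs needed {needed:.2f} s")
# The identification step alone blows the budget: there is no time to
# "allocate harm", only to brake or steer along precomputed safe options.
```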

            Being able to allocate harm and act on that allocation within a collision scenario assumes conditions which, in practice, leave the AV unable to act. There are three situations that prevent AVs from facing a trolley situation (a toy model in code follows the list):

            1. If an AV is moving slowly enough, all accidents become avoidable, because the algorithm has enough time to make the critical observations and decisions needed to avoid them.

            2. If the AV is moving at high speed, it does not have time to analyze, decide, and act on a risk decision. There is no way to weigh value or make moral judgments at such speeds.

            3. If the AV experiences a full system failure, there is no ability to allocate harm because there is no control at all; the vehicle cannot act and is reduced to a purely passive entity.
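
            The toy model below classifies a scenario into the three situations above. It assumes simple constant-deceleration braking (v²/2a) and a fixed perception-and-planning latency; all thresholds and names are hypothetical.

```python
def stopping_distance(speed_mps: float, decel_mps2: float = 8.0) -> float:
    """Braking distance v^2 / (2a), assuming dry-road deceleration."""
    return speed_mps ** 2 / (2 * decel_mps2)

def regime(speed_mps: float, hazard_m: float,
           pipeline_latency_s: float = 0.5,
           systems_ok: bool = True) -> str:
    """Classify which of the three situations applies."""
    if not systems_ok:
        return "3: full system failure - no control, purely passive"
    if stopping_distance(speed_mps) < hazard_m:
        return "1: slow enough - the collision is simply avoided"
    if hazard_m / speed_mps < pipeline_latency_s:
        return "2: too fast - impact arrives before any moral deliberation"
    return "emergency braking / evasive maneuver; no harm allocation involved"

print(regime(10.0, 30.0))                    # situation 1
print(regime(30.0, 10.0))                    # situation 2
print(regime(30.0, 10.0, systems_ok=False))  # situation 3
```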

            The situations above do not say how an AV should answer a trolley question; they merely show that the trolley question is not applicable to AVs. Trolley-like situations should be settled by the laws that govern vehicle use, not by individual moral choices. Trolley questions also lose their force if safety is built into the design of AVs, as all current development seems to indicate. With enough safety engineering and data optimization, collision situations can be reduced and handled on a case-by-case basis. If the worry is whether AVs will act unethically, the question that should really be asked is “can an AV drive more safely than a human?”, and I believe time will show the answer to be a resounding “yes.”

References:

Himmelreich, J. (2018). Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice, 21(3), 669-684. https://doi.org/10.1007/s10677-018-9896-4
