Author | M. Martínez Euklidiadas
Cities of the future are expected to be full of all types of self-driving vehicles. The technology itself is largely developed already; that is not the real obstacle to self-driving cars in cities. The main problem is deciding which ethics should be applied, how they should be programmed and how responsibility should be distributed in the event of an accident. Who is at fault if someone is run over?
Ethics for self-driving cars? Of course, but which ones?
All manufacturers working on self-driving vehicles agree that ethics must be built into how self-driving cars are programmed, particularly in city centers, and some countries will require this before issuing vehicle registration documents. The United States, Singapore, South Korea and China are among the countries with vehicles in the test phase. The main problem we face as a society is that there is no single ethical framework, and we are not exactly sure how to program one.
The theory of ethical decisions
Let us imagine that we have access to advanced mathematics and a way of translating ethical concepts into software, so that we are capable of implementing machine ethics. The issue is choosing which ethics to implement. Should the car minimize harm? Harm to the driver, or to other road users? These and a few thousand other related questions need to be answered before thousands of cars can be programmed to drive around cities.
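As a purely illustrative sketch (not any manufacturer's actual system), one way to "translate an ethical concept into software" is to encode it as a cost function that scores candidate maneuvers. Every name, weight and harm estimate below is a hypothetical placeholder; choosing those numbers is precisely the ethical problem described above.

```python
# Hypothetical sketch: a utilitarian "minimize expected harm" rule encoded as a cost function.
# All names and weights are assumptions for illustration, not a real driving system.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str                # e.g. "brake_hard", "swerve_left"
    p_harm_occupants: float      # estimated probability of harming vehicle occupants
    p_harm_pedestrians: float    # estimated probability of harming pedestrians
    p_harm_other_vehicles: float

def expected_harm(o: Outcome, w_occupants=1.0, w_pedestrians=1.0, w_others=1.0) -> float:
    """Weighted expected harm. The weights ARE the ethical choice:
    setting w_occupants < w_pedestrians prioritizes people outside the car."""
    return (w_occupants * o.p_harm_occupants
            + w_pedestrians * o.p_harm_pedestrians
            + w_others * o.p_harm_other_vehicles)

def choose_maneuver(candidates: list[Outcome]) -> Outcome:
    # Pick the maneuver with the lowest weighted expected harm.
    return min(candidates, key=expected_harm)

if __name__ == "__main__":
    options = [
        Outcome("brake_hard", 0.10, 0.30, 0.05),
        Outcome("swerve_left", 0.25, 0.05, 0.20),
    ]
    print(choose_maneuver(options).maneuver)  # the answer depends entirely on the chosen weights
```

Even this toy version exposes the dilemma: change the weights and the "right" maneuver changes with them.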
To shed a little light on the debate, the MIT Media Lab designed an exercise called ‘Moral Machine’, which generated a series of dilemma scenarios in which participants had to choose an outcome. The result? There is not just one ethical principle; there are many, and none is clearly better than the rest. Some are more widely accepted in particular regions, but there is no universal ethical principle.
How to program rational moral reasoning
Let’s suppose that we know which moral principle to program. How do we do it? Expert systems must be programmed down to the very last detail and are not flexible, while AI-based systems are black boxes in which it is not possible to determine how the car reached its decision. Without that information, how will we know who is to blame in the event of an accident?
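To make the contrast concrete, here is a hypothetical sketch (all names invented for illustration): an expert-system-style rule list whose decision path can be read line by line, next to a stand-in for an opaque learned model whose output offers no such trace.

```python
# Hypothetical contrast between a traceable rule-based decision and an opaque learned one.
# Everything here is illustrative; no real autonomous-driving stack is shown.

def expert_system_decision(scene: dict) -> tuple[str, list[str]]:
    """Hand-written rules: rigid, but every decision comes with its reasoning trace."""
    trace = []
    if scene["pedestrian_in_path"]:
        trace.append("rule 1: pedestrian in path -> emergency brake")
        return "emergency_brake", trace
    if scene["obstacle_distance_m"] < 10:
        trace.append("rule 2: obstacle closer than 10 m -> brake")
        return "brake", trace
    trace.append("default rule: no hazard detected -> continue")
    return "continue", trace

def learned_model_decision(scene: dict) -> str:
    """Stand-in for a neural network: it returns an action, but not the 'why'."""
    # In a real system this would be millions of learned parameters;
    # reconstructing the reason for any single output is the 'black box' problem.
    return "brake"

scene = {"pedestrian_in_path": False, "obstacle_distance_m": 7.5}
action, why = expert_system_decision(scene)
print(action, why)                     # brake ['rule 2: obstacle closer than 10 m -> brake']
print(learned_model_decision(scene))   # brake (but we cannot say which factor decided it)
```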
Accountability and responsibility
"The responsibility of program authors may be imposed. The traceability of the code development must be guaranteed", according to José Ignacio Latorre, an AI expert, in the book ‘Ética para las máquinas’ (Ethics for Machines) (2019). Accountability is imperative in self-driving.
A few years ago, the American justice system found a woman guilty of negligent homicide: in March 2018, she had been driving an Uber vehicle with the automated driving functions disabled. The case was extremely complicated, but the driver was found guilty of having ‘switched off’ the semi-autonomous car’s mind. And what if it had been turned on?
It is likely that self-driving vehicles will involve some form of shared responsibility, similar to the system used in construction, where liability is divided between the construction company, the engineering firm that drew up the plans and the project management, among others. If a self-driving car hits someone, the blame may be distributed among the stakeholders:
● The designers of the city's roads
● The companies that built the roads
● The designers of smart roads, if applicable
● The vehicle manufacturer
● The vehicle designers
● The AI programmers
● The driver, if any
● The victim, depending on their behavior prior to the accident
These roles are, in turn, subdivided among hundreds or thousands of people, so it is essential that laws establish the degree of responsibility of each stakeholder when self-driving cars are driving around cities.
Images | Daesun Kim, Phuoc Anh Dang