
How will our ethical choices be determined by self-driving cars?

Artificial intelligence (AI) is already making decisions in business, healthcare, and manufacturing. But AI algorithms usually get help from people, who apply the checks and make the final call.

What if AI systems had to make decisions on their own, decisions that could mean life or death for humans?

Pop culture has long portrayed our general distrust of AI. In the 2004 sci-fi film I, Robot, Detective Del Spooner (played by Will Smith) is wary of robots after one of them rescues him from a car crash while leaving a 12-year-old girl to drown. He says:

I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was someone’s baby – 11% is more than enough. A human would have known.

Unlike humans, robots have no moral conscience; they simply follow the “ethics” programmed into them. At the same time, human morality is highly variable. The “right” thing to do in any situation will depend on who you ask.

For machines to reach their full potential and genuinely help us, we need to make sure that they behave ethically. The question then becomes: how do the ethics of AI developers and engineers influence the decisions made by AI?

The future of self-driving

Imagine a future with fully autonomous self-driving cars. If everything works as planned, the morning commute will be an opportunity to prepare for the day’s meetings, keep up with the news, or sit back and relax.

But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a decision in a split second. It can swerve into a nearby pole and kill the passenger, or carry on and kill the pedestrian ahead.

The computer controlling the car will only have access to limited information collected by the car’s sensors and will have to make a decision based on this. As dramatic as it may sound, we are only a few years away from potentially being faced with such dilemmas.
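To make that split-second calculation concrete, here is a minimal, purely illustrative sketch of the kind of decision rule involved. It assumes the controller reduces each option to rough survival estimates and then minimises expected deaths; the names and figures are invented and do not reflect any real vehicle’s logic.

```python
# Toy illustration only: choosing between two bad options by minimising
# expected deaths, given rough survival estimates derived from sensor data.
# The figures below are invented and do not reflect any real system.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    survival: dict[str, float]  # person -> estimated probability of survival

def expected_deaths(option: Option) -> float:
    """Expected number of deaths if this option is chosen."""
    return sum(1.0 - p for p in option.survival.values())

swerve = Option("swerve into the pole", {"passenger": 0.60})
carry_on = Option("carry on straight", {"pedestrian": 0.20})

decision = min([swerve, carry_on], key=expected_deaths)
print(decision.name)  # -> "swerve into the pole" (0.40 expected deaths vs 0.80)
```

Even in this trivial form, every element of the rule (which people are counted, how the probabilities are estimated, and what exactly is being minimised) is an ethical choice, not just a technical one.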

Self-driving cars will generally provide safer driving, but accidents will be inevitable, especially in the foreseeable future when these cars share the roads with human drivers and other road users.

Tesla does not yet produce fully autonomous cars, although it is considering doing so. In a collision, Tesla cars do not automatically operate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control.

In other words, the driver’s actions are not overridden, even if the driver is the one causing the collision. Instead, if the car detects a potential collision, it alerts the driver to take action.

In “autopilot” mode, however, the car should automatically brake for pedestrians. Some argue that if the car can prevent a collision, then there is a moral obligation for it to override the driver’s actions in every scenario. But would we want an autonomous car to make this decision?

What is a life worth?

What if a car’s computer could assess the relative “worth” of its own passenger and the pedestrian? If its decision took that value into account, it would technically just be performing a cost-benefit analysis.
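As a purely hypothetical sketch of what such a cost-benefit calculation could look like, the toy rule above can be extended with a per-person weight. The weights below are arbitrary; the point is that changing them, rather than anything about the physical situation, can flip the decision.

```python
# Hypothetical sketch: a cost-benefit rule that weights each person's risk by
# an assigned "worth". All numbers are invented for illustration.

def expected_loss(prob_of_death: float, assigned_worth: float) -> float:
    return prob_of_death * assigned_worth

# The same physical situation under two weighting schemes.
equal = {
    "swerve (passenger at risk)": expected_loss(0.40, 1.0),
    "carry on (pedestrian at risk)": expected_loss(0.80, 1.0),
}
unequal = {
    "swerve (passenger at risk)": expected_loss(0.40, 1.0),
    "carry on (pedestrian at risk)": expected_loss(0.80, 0.4),
}

print(min(equal, key=equal.get))      # -> swerve: lower expected loss with equal worth
print(min(unequal, key=unequal.get))  # -> carry on: the weighting alone flips the choice
```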

It may sound alarming, but there are already technologies in development that could allow this to happen. For example, the recently renamed Meta (formerly Facebook) has very advanced facial recognition that can easily identify individuals in a scene.

If this data were incorporated into an autonomous vehicle’s AI system, the algorithm could assign a monetary value to each life. This possibility is described in a large 2018 study conducted by experts at the Massachusetts Institute of Technology and colleagues.

Using the Moral Machine experiment, researchers came up with various self-driving car scenarios that required participants to decide whether to kill a homeless pedestrian or an executive pedestrian.

The results revealed that participants’ choices depended on the level of economic inequality in their country: more economic inequality meant they were more likely to sacrifice the homeless pedestrian.

Although not as advanced, such data aggregation is already used in China’s social credit system, which determines people’s social entitlements.

Another area where we’ll see AI making decisions that could save or harm humans is the healthcare industry. Experts are increasingly developing AI to detect abnormalities in medical imaging and to help physicians prioritize medical care.

For now, doctors have the final say, but as these technologies become more advanced, what will happen when a doctor and an AI algorithm don’t make the same diagnosis?

Another example is an automated medication reminder system. How should the system react if a patient refuses to take their medication? And how does this affect patient autonomy and the overall responsibility of the system?

AI-powered drones and weapons raise ethical concerns as well, since they can make the decision to kill. There are conflicting views on whether such technologies should be banned outright or merely regulated. For example, the use of autonomous drones could be limited to surveillance.

Some have called for military robots to be ethically programmed. But this raises questions about the programmer’s accountability in the event that a drone mistakenly kills civilians.

Philosophical dilemmas

There have been many philosophical debates regarding the ethical decisions AI will need to make. The classic example is the trolley problem.

People often find it difficult to make decisions that could change their lives. When evaluating how we react to such situations, one study reported that choices can vary depending on a range of factors, including the respondent’s age, gender and culture.

When it comes to AI systems, the processes of training algorithms are critical to their functioning in the real world. A system developed in one country can be influenced by the opinions, politics, ethics and morals of that country, rendering it unsuitable for use in another place and time.

If the system was controlling an aircraft or guiding a missile, you would want a high level of confidence that it was trained on data representative of the environment in which it will be used.

Examples of failures and biases in the implementation of the technology include a racist soap dispenser and inappropriate automatic labeling of images.

AI is not inherently “good” or “bad”. The effects it will have on people will depend on the ethics of its developers. So, to get the most out of it, we will need to reach a consensus on what we consider to be “ethical”.

While private companies, public organizations and research institutes have their own guidelines for ethical AI, the United Nations has recommended developing a global standard-setting instrument to provide a worldwide framework for ethical AI and to ensure the protection of human rights.


Jumana Abu-Khalaf, Computer and Security Researcher, Edith Cowan University and Paul Haskell-Dowland, Professor of Cybersecurity Practice, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.