Giving algorithms a sense of uncertainty could make them more ethical

Algorithms are best at pursuing a single mathematical objective, but humans often want multiple incompatible things.

By Karen Hao, senior AI reporter at MIT Technology Review
Algorithms are increasingly being used to make ethical decisions. Perhaps the best example of this is a high-tech take on the ethical dilemma known as the trolley problem: if a self-driving car cannot stop itself from killing one of two pedestrians, how should the car’s control software choose who lives and who dies?
In reality, this conundrum isn’t a very realistic depiction of how self-driving cars behave. But many other systems that are already here, or not far off, will have to make all sorts of real ethical trade-offs. Assessment tools currently used in the criminal justice system must weigh risks to society against harms to individual defendants; autonomous weapons will need to weigh the lives of soldiers against those of civilians.
The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.
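To make that concrete, here is a minimal sketch, with entirely hypothetical plans and numbers, of how a single objective yields one clear winner while competing objectives only get collapsed into one number by an arbitrary weight:

```python
# Minimal sketch with hypothetical plans and numbers. Each candidate
# action is scored on two goals that pull in opposite directions.
actions = {
    "plan_a": {"soldier_lives_saved": 9, "civilian_deaths": 4},
    "plan_b": {"soldier_lives_saved": 5, "civilian_deaths": 1},
    "plan_c": {"soldier_lives_saved": 2, "civilian_deaths": 0},
}

# One objective: a well-posed problem with one clear winner.
best = max(actions, key=lambda a: actions[a]["soldier_lives_saved"])
print(best)  # plan_a

# Two objectives: the usual fix is to collapse them into one number with
# a weight, but the answer now depends entirely on that arbitrary weight.
# The trade-off has not been solved, only hidden inside a parameter.
def scalarized(name, civilian_weight):
    s = actions[name]
    return s["soldier_lives_saved"] - civilian_weight * s["civilian_deaths"]

for w in (0.5, 2.0, 10.0):
    best = max(actions, key=lambda a: scalarized(a, w))
    print(f"civilian_weight={w}: choose {best}")
# 0.5 -> plan_a, 2.0 -> plan_b, 10.0 -> plan_c
```

Each weight is defensible to someone, and each picks a different action; that is what it means for no satisfactory single solution to exist.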
“We as humans want multiple incompatible things,” says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue...
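The idea in the headline, giving the algorithm uncertainty about its own objective, can be sketched loosely. The toy version below is an illustration of the general approach, not the formalism in Eckersley’s paper: instead of committing to one trade-off weight, the system holds a distribution over plausible weights and defers to humans when the sampled weightings disagree about the best action. The weight range and the 90 percent stability threshold are both hypothetical.

```python
import random

# Loose illustration (not the formalism in Eckersley's paper): keep
# uncertainty over the trade-off weight itself, and notice when the
# "best" action is unstable under that uncertainty.

# Hypothetical candidate actions, scored on two competing objectives.
actions = {
    "plan_a": {"soldier_lives_saved": 9, "civilian_deaths": 4},
    "plan_b": {"soldier_lives_saved": 5, "civilian_deaths": 1},
    "plan_c": {"soldier_lives_saved": 2, "civilian_deaths": 0},
}

def scalarized(name, civilian_weight):
    s = actions[name]
    return s["soldier_lives_saved"] - civilian_weight * s["civilian_deaths"]

def decide(n_samples=1000, seed=0):
    rng = random.Random(seed)
    votes = {name: 0 for name in actions}
    for _ in range(n_samples):
        # A distribution over plausible weights, instead of one point value.
        w = rng.uniform(0.5, 10.0)
        winner = max(actions, key=lambda a: scalarized(a, w))
        votes[winner] += 1
    top, count = max(votes.items(), key=lambda kv: kv[1])
    if count / n_samples < 0.9:
        # No action wins robustly across the weightings: escalate.
        return "defer_to_human", votes
    return top, votes

print(decide())
# -> ('defer_to_human', {'plan_a': ..., 'plan_b': ..., 'plan_c': ...})
```

The point of the uncertainty is not to dodge the decision but to surface which decisions are genuinely weight-sensitive and should not be made by the machine alone.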
Carla Gomes, a professor of computer science at Cornell University, has experimented with similar techniques in her work. In one project, she’s been developing an automated system to evaluate the impact of new hydroelectric dam projects in the Amazon River basin. The dams provide a source of clean energy. But they also profoundly alter sections of the river and disrupt wildlife ecosystems.
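Problems like this are typically framed as multi-objective optimization, and a standard building block is the Pareto frontier: the set of options that cannot be improved on one objective without giving ground on another. Here is a minimal sketch with invented dam figures; the actual analysis by Gomes’s group models whole river networks, not a flat list of options.

```python
# Minimal Pareto-frontier sketch with invented numbers.
# Tuples: (name, energy_gwh, ecosystem_disruption)
# Goal: maximize energy, minimize disruption.
dams = [
    ("dam_1", 120, 8),
    ("dam_2", 90, 3),
    ("dam_3", 60, 4),   # dominated by dam_2: less energy, more disruption
    ("dam_4", 150, 9),
    ("dam_5", 40, 1),
]

def dominates(a, b):
    """a dominates b if it is at least as good on both objectives
    and strictly better on at least one."""
    return (a[1] >= b[1] and a[2] <= b[2]) and (a[1] > b[1] or a[2] < b[2])

pareto = [d for d in dams if not any(dominates(o, d) for o in dams)]
print([d[0] for d in pareto])  # ['dam_1', 'dam_2', 'dam_4', 'dam_5']
```

The frontier does not pick a winner; it strips out the indefensible options and hands the remaining trade-off back to people.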