Building public trust in artificial intelligence systems is essential
By the editorial board
“F*** the algorithm!” became one of the catchphrases of 2020, encapsulating the fear that humanity is being subordinated to technology. Whether it was British school students complaining about their A-level grades or Stanford Medical Center staff highlighting the unfairness of vaccination priorities, people understandably rail against the idea of faceless machines stripping humans of agency. This is an issue that will only grow in prominence as artificial intelligence becomes ubiquitous in the computer systems that power our modern world.
To some extent, these fears are based on a misconception. Humans are still the ones who exercise judgment, and algorithms do exactly what they are designed to do: discriminate. Whether they do so in a positive or a negative way depends on the humans who write these algorithms and interpret and act upon their output. It may on occasion be convenient for a government official or an executive to blame some “rogue” algorithm for their mistakes. But we should not be fooled by this rhetoric. We should hold those who deploy AI systems legally and morally accountable for the outcomes they produce.
Artificial intelligence is no more than a technological tool, like any other. It is a powerful general-purpose technology, akin to electricity, that enables other technologies to work more effectively...
Many tech companies publicly profess to take such data discrimination issues seriously and have published ethics codes governing the use of AI. But it is hard for outsiders to know how deeply they embed ethical considerations in their design and decision-making processes.
Source: Financial Times