Photo: whiteMocca/Shutterstock
Unfortunately, new research finds that Twitter trolls aren't the only way that AI devices can learn racist language. In fact, any artificial intelligence that learns from human language is likely to come away biased in the same ways that humans are, according to the scientists.
The researchers experimented with a widely used machine-learning system called Global Vectors for Word Representation (GloVe) and found that every sort of human bias they tested showed up in the artificial system.
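In an embedding model like GloVe, each word is represented as a list of numbers, and associations show up as some words sitting measurably closer to "pleasant" or "unpleasant" terms than others. The toy sketch below illustrates that kind of measurement with made-up four-dimensional vectors and placeholder words; it is not the researchers' code or data, and real GloVe vectors have hundreds of dimensions learned from billions of words of text.

```python
# A minimal sketch of probing word embeddings for association bias.
# The vectors below are made-up stand-ins for real GloVe vectors.
import numpy as np

# Hypothetical 4-dimensional "embeddings" for a handful of words.
embeddings = {
    "flower":     np.array([0.9, 0.1, 0.3, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.2, 0.1]),
    "pleasant":   np.array([0.8, 0.2, 0.4, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1, 0.2]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    """How much closer a word sits to 'pleasant' than to 'unpleasant'.
    A positive score means the word leans toward the pleasant terms."""
    w = embeddings[word]
    return cosine(w, embeddings["pleasant"]) - cosine(w, embeddings["unpleasant"])

for word in ("flower", "insect"):
    print(f"{word}: {association(word):+.3f}")
```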
"It was astonishing to see all the results that were embedded in these models," said Aylin Caliskan, a postdoctoral researcher in computer science at Princeton University. Even AI devices that are "trained" on supposedly neutral texts like Wikipedia or news articles came to reflect common human biases, she told Live Science...
Unbiasing AI
The new study, published online today (April 12) in the journal Science, is not surprising, said Sorelle Friedler, a computer scientist at Haverford College who was not involved in the research. It is, however, important, she said.
"This is using a standard underlying method that many systems are then built off of," Friedler told Live Science. In other words, biases are likely to infiltrate any AI that uses GloVe, or that learns from human language in general.
Friedler is involved in an emerging field of research called Fairness, Accountability and Transparency in Machine Learning. There are no easy ways to solve these problems, she said. In some cases, programmers might be able to explicitly tell the system to automatically disregard specific stereotypes, she said. In any case involving nuance, humans may need to be looped in to make sure the machine doesn't run amok. The solutions will likely vary, depending on what the AI is designed to do, Caliskan said — are they for search applications, for decision making or for something else?
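One mitigation researchers have explored is to identify a direction in the embedding space that captures a stereotype and then remove that component from other word vectors. The sketch below shows that projection step with made-up three-dimensional vectors; it illustrates the general idea only, is not the approach endorsed by the study's authors, and on its own does not make a system fair.

```python
# A minimal sketch of projecting a learned "bias direction" out of word vectors.
# Vectors and word choices are illustrative; real systems would use full
# GloVe/word2vec embeddings and curated word lists.
import numpy as np

def neutralize(vector: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Return the component of `vector` orthogonal to `bias_direction`."""
    unit = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, unit) * unit

# Made-up stand-in vectors; in practice these come from a trained model.
he = np.array([0.7, 0.1, 0.2])
she = np.array([0.1, 0.7, 0.2])
engineer = np.array([0.6, 0.2, 0.5])   # hypothetically skewed toward "he"

gender_direction = he - she
debiased_engineer = neutralize(engineer, gender_direction)

# Before: "engineer" is closer to "he"; after neutralizing, it is equidistant.
print(np.dot(engineer, he) - np.dot(engineer, she))
print(round(np.dot(debiased_engineer, he) - np.dot(debiased_engineer, she), 6))
```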
"There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of people", says Zoe Kleinman, Technology reporter, BBC News.
Source: Live Science