Claire Cain Miller, a writer for The Upshot, the Times site about politics, economics and everyday life, summarizes: "Algorithms have become one of the most powerful arbiters in our lives. They make decisions about the news we read, the jobs we get, the people we meet, the schools we attend and the ads we see."
Yet there is growing evidence that algorithms and other types of software can discriminate. The people who write them incorporate their biases, and algorithms often learn from human behavior, so they reflect the biases we hold. For instance, research has shown that ad-targeting algorithms have served ads for high-paying jobs to men but not to women, and ads for high-interest loans to people in low-income neighborhoods.
Cynthia Dwork, a computer scientist at Microsoft Research in Silicon Valley, is one of the leading thinkers on these issues. In an Upshot interview, which has been edited, she discussed how algorithms learn to discriminate, who’s responsible when they do, and the trade-offs between fairness and privacy.
Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?
A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, trained on the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.
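
To make the mechanism concrete, here is a minimal sketch in Python, not from the interview and using entirely synthetic data. Two applicant groups are equally qualified, but the hypothetical historical committee admitted one group at half the rate; a model fit to those records reproduces the gap. The group labels, scores, and admit rates below are illustrative assumptions, not figures from the article.

    # Sketch only: synthetic data, hypothetical university. Shows a model
    # trained on biased historical labels reproducing the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two applicant groups with identical qualification distributions.
    group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (hypothetical)
    score = rng.normal(70, 10, n)      # qualification score, same for both groups

    # Biased historical committee: equally qualified group-B applicants
    # were admitted at half the rate (assumed bias encoded in the labels).
    p_admit = 1.0 / (1.0 + np.exp(-(score - 70.0) / 5.0))
    p_admit = np.where(group == 1, 0.5 * p_admit, p_admit)
    admitted = rng.random(n) < p_admit   # the "historical records"

    # Train on the biased records, as the hypothetical university would.
    X = np.column_stack([score, group])
    model = LogisticRegression().fit(X, admitted)

    # Same qualification score, different group: the fitted model now
    # predicts a lower admit probability for group B.
    probe = np.array([[75.0, 0.0], [75.0, 1.0]])
    print(model.predict_proba(probe)[:, 1])   # group B's probability is lower

Because the bias lives in the labels themselves, swapping in a different model would not remove it; the algorithm faithfully learns the discrimination it was shown.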
Source: New York Times