But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

Take a small example from last year: Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional. But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

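To make the data problem concrete, the sketch below trains a toy classifier on synthetic data in which one group vastly outnumbers the other, then scores it on balanced test sets. This is a minimal illustration under invented assumptions, not a reconstruction of Google's or anyone else's photo-labeling pipeline; the group names, feature distributions and sample sizes are all hypothetical.

```python
# Minimal sketch of a skewed-training-set effect. This is NOT any vendor's
# photo-labeling system: the features, group centers and labels below are
# entirely synthetic and exist only to illustrate the data problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    """Generate n two-dimensional 'feature vectors' clustered around a
    group-specific center, with a binary label tied to the first feature."""
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    y = (X[:, 0] > center[0]).astype(int)
    return X, y

# Training set: group A is overwhelmingly represented, group B barely is.
X_a, y_a = make_group(5000, center=(0.0, 0.0))
X_b, y_b = make_group(50, center=(3.0, 3.0))
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Balanced test sets expose the gap: the model does well on the group it
# mostly saw during training and much worse on the underrepresented one.
for name, center in [("group A", (0.0, 0.0)), ("group B", (3.0, 3.0))]:
    X_test, y_test = make_group(2000, center=center)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```
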
A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

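The disparity ProPublica described is a gap in error rates between groups: who is wrongly flagged as high risk (false positives) and who is wrongly rated low risk (false negatives). The sketch below shows how such an audit can be computed from labeled outcomes; the handful of records is invented for illustration and is not the COMPAS data ProPublica analyzed.

```python
# Minimal sketch of an error-rate audit in the spirit of ProPublica's analysis:
# compare false positive rates (wrongly flagged high risk) and false negative
# rates (wrongly labeled low risk) across groups. These records are invented
# for illustration; they are NOT the data ProPublica examined.
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", True,  False),
    ("white", False, False), ("white", False, True),
    ("white", True,  True),  ("white", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, flagged, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not flagged:
            c["fn"] += 1   # labeled low risk, but did reoffend
    else:
        c["neg"] += 1
        if flagged:
            c["fp"] += 1   # flagged high risk, but did not reoffend

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
```
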
The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret; they are proprietary information. Judges rely on these machine-driven risk assessments to varying degrees, and some may discount them entirely, but there is little any of them can do to understand the logic behind them.

Source: New York Times