Wednesday, April 17, 2019

The artificial intelligence field is too white and too male, researchers say | Tech - The Verge

A new report explores AI's 'diversity crisis,' writes Colin Lecher, Senior Reporter at The Verge.

Photo: Alex Castro / The Verge

The artificial intelligence industry is facing a “diversity crisis,” researchers from the AI Now Institute said in a report released today, raising key questions about the direction of the field.

Women and people of color are deeply underrepresented, the report found, citing studies showing that about 80 percent of AI professors are men, while just 15 percent of AI research staff at Facebook and 10 percent at Google are women. People of color are also sidelined, making up only a fraction of staff at major tech companies. The result is a workforce frequently driven by white and male perspectives, building tools that often affect other groups of people. “This is not the diversity of people that are being affected by these systems,” AI Now Institute co-director Meredith Whittaker says.

Worse, plans to address the problem by fixing the “pipeline” of potential job candidates have largely failed. “Despite many decades of ‘pipeline studies’ that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry,” the researchers write...

The lack of diversity, while a hurdle across the tech industry, presents specific dangers in AI, where potentially biased technology, like facial recognition, can disproportionately affect historically marginalized groups. Tools like a program that scans faces to determine sexuality, introduced in 2017, echo injustices of the past, the researchers write. Rigorous testing is needed. But more than that, the makers of AI tools have to be willing to not build the riskiest projects. “We need to know that these systems are safe as well as fair,” AI Now Institute co-director Kate Crawford says.

Source: The Verge