Tuesday, October 08, 2019

Removing Human Bias from Predictive Modeling | Technology - Knowledge@Wharton

Predictive modeling is supposed to be neutral, a way to help remove personal prejudices from decision-making. But the algorithms are packed with the same biases that are built into the real-world data used to create them. Wharton statistics professor James Johndrow has developed a method to remove those biases. His latest research, “An Algorithm for Removing Sensitive Information: Application to Race-independent Recidivism Prediction,” focuses on removing information on race in data that predicts recidivism, but the method can be applied beyond the criminal justice system. He spoke to Knowledge@Wharton about his paper, which is co-authored with his wife, Kristian Lum, lead statistician with the Human Rights Data Analysis Group.
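To make the general idea concrete, here is a minimal, hypothetical sketch of one common way to strip information about a protected attribute from predictors before fitting a risk model: regress each predictor on the attribute and keep only the residuals. This illustrates the spirit of the approach, not necessarily the exact procedure in Johndrow and Lum's paper; all data and variable names below are made up.

```python
# Illustrative sketch only: remove linear association with a protected
# attribute from the predictors, then fit the risk model on the adjusted
# predictors. This is a generic "residualize then predict" approach, not
# the paper's exact algorithm. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: X = predictors (e.g., prior history features),
# z = protected attribute encoded 0/1, y = outcome label (e.g., recidivism).
n = 1000
z = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 3)) + 0.8 * z[:, None]        # predictors correlated with z
y = (X[:, 0] + rng.normal(size=n) > 0.5).astype(int)  # outcome driven by predictors

# Step 1: regress each predictor on the protected attribute and keep the
# residuals, so the adjusted predictors carry no linear information about z.
adjuster = LinearRegression().fit(z.reshape(-1, 1), X)
X_adjusted = X - adjuster.predict(z.reshape(-1, 1))

# Step 2: fit the risk model on the adjusted predictors only.
risk_model = LogisticRegression().fit(X_adjusted, y)
scores = risk_model.predict_proba(X_adjusted)[:, 1]

# Sanity check: correlation between each adjusted predictor and z should be ~0.
print(np.corrcoef(X_adjusted[:, 0], z)[0, 1])
```

The design choice here is to do the adjustment once, upstream of whatever model is fit afterward, so that the downstream predictions cannot lean on the protected attribute through correlated proxies, at least not through their linear components.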

Photo: Knowledge@Wharton

Knowledge@Wharton: Predictive modeling is becoming an increasingly popular way to assist human decision-makers, but it’s not perfect. What are some of the drawbacks?

James Johndrow: There has been a lot more attention to this lately, partly because so much is being automated. There's more and more interest in automatic scoring and automatic decision-making, or at least partly automatic decision-making. The area that I have been especially interested in — and this is a lot of work that I do with my wife — is criminal justice.
In criminal justice, algorithms are used heavily for things like deciding who will need to post bail to get out of jail pre-trial versus who will simply be released on their own recognizance. At the heart of this is the idea of risk assessment: trying to see who is most likely, for example, to show up to their court dates.
The potential problem is that these algorithms are trained on data found in the real world. That data can be the by-product of a person's history of interaction with the court system or with the police. The algorithms and their predictions can bake in all of this human stuff that is going on, so there has been a lot more attention lately to making sure that certain groups aren't discriminated against by these algorithms. For example, is the algorithm less accurate for certain groups, or is it recommending that minorities be released less often than white people? Those are the kinds of things that people pay a lot of attention to, and there is a particular literature on this that we were working in when we thought of writing this paper.
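As a companion to the concerns raised above, here is a small, hypothetical sketch of the kind of group-level audit being described: comparing release rates and wrongful-detention rates across groups for a given set of algorithmic recommendations. The data and names are made up for illustration; real audits of pretrial risk tools are considerably more involved.

```python
# Illustrative sketch only: audit an algorithm's recommendations for the
# group-level disparities described above. All data and names are hypothetical.
import numpy as np

def audit_by_group(appeared, released, group):
    """Print, for each group, the release rate and the share of people who
    would have appeared in court but were recommended for detention."""
    for g in np.unique(group):
        mask = group == g
        release_rate = released[mask].mean()
        would_appear = appeared[mask] == 1
        # Among people who would have appeared, how many did the algorithm
        # recommend detaining?
        detained_despite_appearing = (released[mask][would_appear] == 0).mean()
        print(f"group={g}: release rate={release_rate:.2f}, "
              f"detained-despite-appearing rate={detained_despite_appearing:.2f}")

# Toy example with synthetic outcomes and recommendations.
rng = np.random.default_rng(1)
n = 500
group = rng.choice(["A", "B"], size=n)
appeared = rng.integers(0, 2, size=n)   # 1 = showed up to court dates
released = rng.integers(0, 2, size=n)   # 1 = algorithm recommended release
audit_by_group(appeared, released, group)
```

Large gaps between groups on either number would be the kind of disparity the interview is pointing at, whether it comes from the model itself or from the historical data it was trained on.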
Read more... 

Source: Knowledge@Wharton