
Algorithms can help combat, not cure, unconscious bias

Filed under: Unbiasing
Our brains are great at filling in the gaps, but sometimes our unconscious biases get in the way. Algorithms hold promise as a way to inform decisions like hiring more impartially, but these artificial systems can inherit unintended biases from their human creators.

Ifeoma Ajunwa, assistant professor at the University of the District of Columbia David A. Clarke School of Law, spoke at Google’s re:Work 2016 event about the potential and pitfalls of algorithmically assisted hiring, including how such systems can perpetuate bias.

Many studies have shown how unconscious biases influence our decision making, especially when we assess other people. Silicon Valley is turning to technological solutions to make more impartial decisions, but Ajunwa cautioned that these approaches are not foolproof. “As the research of Solon Barocas and Andrew Selbst has also shown, algorithms are not without bias. Algorithms can and do inherit the biases of their human creators,” Ajunwa explained. “Even if we can get hiring algorithms to behave in perfectly predictable ways, we must question whether their results are truly accurate or whether they're merely replicating a biased status quo.”

Ajunwa and her co-authors examined this problem in their paper "Hiring By Algorithm." “The solution when companies use algorithms in the hiring process is to evaluate the training data to see if it will return a result that has a disparate impact on certain groups of people,” Ajunwa explained. “If it does, then the training data for the algorithm can actually be repaired to prevent that disparate impact before the algorithm is applied to it. The benefit of this type of data repair is that the hiring decision can then be assured to be both fair and accurate.”
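
To make this concrete, here is a minimal sketch of the kind of check the paper describes, using the “four-fifths rule” from U.S. employment guidelines: a selection rate for one group below 80% of another group’s rate is treated as evidence of disparate impact. The records, field names, and threshold handling are illustrative assumptions, not the paper’s actual procedure.

# A minimal sketch of checking training data for disparate impact
# before a hiring algorithm is trained on it. All data is hypothetical.

def disparate_impact_ratio(records, group_key, outcome_key, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    def selection_rate(group):
        rows = [r for r in records if r[group_key] == group]
        if not rows:
            return 0.0
        return sum(1 for r in rows if r[outcome_key]) / len(rows)

    ref_rate = selection_rate(reference)
    if ref_rate == 0:
        return float("inf")
    return selection_rate(protected) / ref_rate

# Hypothetical historical hiring records used to train a model.
training_data = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

ratio = disparate_impact_ratio(training_data, "group", "hired",
                               protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 for this data
if ratio < 0.8:  # the four-fifths threshold
    print("Training data shows disparate impact; consider repairing it before use.")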

Ajunwa believes that algorithmically assisted decision making holds promise, but it is not without tradeoffs. By examining how the underlying data may yield biased results, organizations can actively “repair” the data, correcting for historical inequities and producing non-discriminatory outcomes.
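
As one illustration of what such a repair can look like, the sketch below uses reweighing (Kamiran and Calders, 2012), a common preprocessing technique that weights training examples so that group membership and hiring outcome become statistically independent. This is an assumed stand-in for the paper’s repair method, not a description of it.

# Reweighing: weight each record by P(group) * P(outcome) / P(group, outcome),
# so that group and outcome are independent in the weighted data.
# All records here are hypothetical.
from collections import Counter

def reweighing_weights(records, group_key, outcome_key):
    """Return one weight per record so group and outcome decouple."""
    n = len(records)
    group_counts = Counter(r[group_key] for r in records)
    outcome_counts = Counter(r[outcome_key] for r in records)
    joint_counts = Counter((r[group_key], r[outcome_key]) for r in records)

    weights = []
    for r in records:
        g, y = r[group_key], r[outcome_key]
        expected = (group_counts[g] / n) * (outcome_counts[y] / n)
        observed = joint_counts[(g, y)] / n
        weights.append(expected / observed)
    return weights

# Hypothetical historical records: group B was rarely hired.
records = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

for record, weight in zip(records, reweighing_weights(records, "group", "hired")):
    print(record, round(weight, 2))
# Underrepresented combinations (hired in group B, not hired in group A)
# get weight 2.0, so a model trained on the weighted data no longer
# learns the historical imbalance.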