Racial Bias and Artificial Intelligence

November 21, 2017

CFJ's Response to CBC Members' Letter to Tech Industry 


Two members of the Congressional Black Caucus are urging the Internet Association "to stop the spread of racial and gender bias through technology and adopt practices and policies to hold members accountable when it comes to the use of Artificial Intelligence (AI) technology."


In a November 20 letter, Representatives Emanuel Cleaver II (D-MO) and Bonnie Watson Coleman (D-NJ) write that "As AI is leading to autonomy … issues of ethics, safety, racial and gender bias, knowledge sharing, and privacy are at a critical point." They warn that "If these issues go unchecked Congress will be left with few options and will demand increased regulations to address these issues."


Cleaver and Watson Coleman express a set of disparate concerns, but one of those concerns is clearly that automated decision-making systems based on artificial intelligence will have a negative impact on minorities. Committee for Justice president Curt Levey and co-author Ryan Hagemann described these concerns in a November 13 Wall Street Journal op-ed:


"Concerns about why a machine-learning system reaches a particular decision are greatest when the stakes are highest. For example, risk-assessment models relying on artificial intelligence are being used in criminal sentencing and bail determinations in Wisconsin and other states. Former Attorney General Eric Holder and others worry that such models disproportionately hurt racial minorities."


Because AI systems, like humans, rely on correlations in the real world – for example, the higher crime rates in low-income neighborhoods – they can have a disparate impact on particular groups despite the lack of any explicit or intentional bias. However, critics should be wary of restricting AI-based decision-making precisely because the alternative is human decision-making. It is relatively easy to ensure that AI systems are free of explicit or intentional bias, but the same cannot be said of humans.


Critics should also consider that AI can often produce more accurate decision-making, which benefits us all. For example, New Jersey's automated system for recommending bail determinations has resulted in bail being set for far fewer arrestees, because the system allows more accurate prediction of who is likely to show up for their next court appearance.


Many of those concerned with bias in AI systems believe the solution is government-mandated "transparency," often including public disclosure of these systems’ parameters or computer code. Levey and Hagemann take issue with the heavy-handed regulation suggested by the Representatives' letter and with mandated transparency in particular:


"Transparency sounds nice, but it’s not necessarily helpful and may be harmful. … A better solution is to make artificial intelligence accountable. … [T]he former does not involve disclosure of a system’s inner workings. Instead, accountability should include explainability, confidence measures, procedural regularity, and responsibility. … One of us, Curt Levey, had experience with this two decades ago as a scientist at HNC Software … [where] he developed a patented technology providing reasons and confidence measures for the decisions made by neural networks."


Levey and Hagemann conclude that "Further advances in artificial intelligence promise many more benefits for mankind, but only if society avoids strangling this burgeoning technology with burdensome and unnecessary transparency regulations."

