
Racial Bias and Artificial Intelligence

CFJ's Response to CBC Members' Letter to Tech Industry



Two members of the Congressional Black Caucus are urging the Internet Association "to stop the spread of racial and gender bias through technology and adopt practices and policies to hold members accountable when it comes to the use of Artificial Intelligence (AI) technology."


In a November 20 letter, Representatives Emanuel Cleaver II (D-MO) and Bonnie Watson Coleman (D-NJ) write that "As AI is leading to autonomy … issues of ethics, safety, racial and gender bias, knowledge sharing, and privacy are at a critical point." They warn that "If these issues go unchecked Congress will be left with few options and will demand increased regulations to address these issues."


Cleaver and Watson Coleman express a set of disparate concerns, but one of those concerns is clearly that automated decision-making systems based on artificial intelligence will have a negative impact on minorities. Committee for Justice president Curt Levey and co-author Ryan Hagemann described these concerns in a November 13 Wall Street Journal op-ed:


"Concerns about why a machine-learning system reaches a particular decision are greatest when the stakes are highest. For example, risk-assessment models relying on artificial intelligence are being used in criminal sentencing and bail determinations in Wisconsin and other states. Former Attorney General Eric Holder and others worry that such models disproportionately hurt racial minorities."


Because AI systems, like humans, rely on correlations in the real world (for example, the higher crime rates in low-income neighborhoods), they can have a disparate impact on particular groups despite the lack of any explicit or intentional bias. However, critics should be wary of restricting AI-based decision-making precisely because the alternative is human decision-making. It is relatively easy to ensure that AI systems are free of explicit or intentional bias, but the same cannot be said of humans.
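
To make that mechanism concrete, here is a minimal, hypothetical sketch (not drawn from the op-ed or the letter) of how a classifier trained with no access to a protected attribute can still produce disparate outcomes through a correlated proxy feature. All data, feature names, and parameters below are invented for illustration.

```python
# Hypothetical sketch: a model trained WITHOUT the protected attribute
# can still disadvantage one group via a correlated proxy feature.
# All data, feature names, and parameters are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                             # protected attribute; never shown to the model
income = rng.normal(loc=2.0 - group, scale=1.0, size=n)   # group 1 skews lower-income (the proxy)
p_outcome = 1 / (1 + np.exp(income - 1.0))                # assumed: lower income -> higher base rate
outcome = (rng.random(n) < p_outcome).astype(int)

X = income.reshape(-1, 1)                                 # the model sees only the income proxy
model = LogisticRegression().fit(X, outcome)
flagged = model.predict(X)

# Disparate impact shows up even though "group" was never a feature:
for g in (0, 1):
    print(f"group {g}: fraction flagged high-risk = {flagged[group == g].mean():.2f}")
```

In this toy setup the model flags a substantially larger share of group 1 than group 0, even though group membership was withheld from training; the disparity flows entirely through the correlated income feature.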


Critics should also consider that AI can often produce more accurate decision-making, which benefits us all. For example, New Jersey's automated system for recommending bail determinations has resulted in bail being set for far fewer arrestees because the system allows more accurate prediction of who is likely to show up for their next court appearance.


Many of those concerned with bias in AI systems believe the solution is government-mandated "transparency," often including public disclosure of these systems’ parameters or computer code. Levey and Hagemann take issue with the heavy-handed regulation suggested by the Representatives' letter and with mandated transparency in particular:


"Transparency sounds nice, but it’s not necessarily helpful and may be harmful. … A better solution is to make artificial intelligence accountable. … [T]he former does not involve disclosure of a system’s inner workings. Instead, accountability should include explainability, confidence measures, procedural regularity, and responsibility. … One of us, Curt Levey, had experience with this two decades ago as a scientist at HNC Software … [where] he developed a patented technology providing reasons and confidence measures for the decisions made by neural networks."


Levey and Hagemann conclude that "Further advances in artificial intelligence promise many more benefits for mankind, but only if society avoids strangling this burgeoning technology with burdensome and unnecessary transparency regulations."
