You can also download the episode on the Federalist Society website.
Description:
Federalist Society Teleforum: Is Artificial Intelligence Biased? And What Should We Do About It?
Journalists and academics seem convinced that artificial intelligence is often biased against women and racial minorities. If Washington’s new facial recognition law is a guide, legislators see the same problem. But is it true? It’s not hard to find patterns in AI decisions that have a disparate impact on protected groups. Is this bias? And if so, whose?
Do we assume the worst about decisions with a disparate impact – applying a kind of misanthropomorphism to the machine – or can we objectively analyze the factors behind those decisions? If bias boils down to not producing proportionate results for each protected class, is the only remedy a “proportionate result” constraint on AI processing – in effect, racial, ethnic, and gender quotas on every corner of life that AI touches?
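For readers unfamiliar with the term, the “disparate impact” patterns the description refers to are usually identified by comparing selection rates across groups. The short Python sketch below is only an illustration of that ratio test; the group labels, numbers, and the rule-of-thumb 0.8 threshold are assumptions for the example, not anything taken from the teleforum itself.

```python
# Illustrative sketch: measuring disparate impact as the ratio of each group's
# selection rate to the most-favored group's rate. All data here is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, approved_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratios(decisions):
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 is a common rule-of-thumb threshold for disparate impact."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical model outputs: (group, was the applicant approved?)
decisions = [("A", True)] * 60 + [("A", False)] * 40 + \
            [("B", True)] * 40 + [("B", False)] * 60

print(disparate_impact_ratios(decisions))
# {'A': 1.0, 'B': 0.666...} -- group B is approved at two-thirds of group A's rate,
# the kind of pattern the description calls a disparate impact.
```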
Sponsor: The Federalist Society Regulatory Transparency Project
Featuring:
Curt Levey, President, The Committee for Justice
Stewart Baker, Partner, Steptoe & Johnson LLP
Nicholas Weaver, Researcher, International Computer Science Institute and Lecturer, UC Berkeley