Algorithms With Minds of Their Own
What is the best policy for holding artificial intelligence systems accountable? In an op-ed for The Wall Street Journal, Committee for Justice president Curt Levey and Niskanen Center director of technology policy Ryan Hagemann explain how regulators can learn to stop worrying and love artificial intelligence.
They write: “Everyone wants to know: Will artificial intelligence doom mankind—or save the world? But this is the wrong question. In the near future, the biggest challenge to human control and acceptance of artificial intelligence is the technology’s complexity and opacity, not its potential to turn against us like HAL in ‘2001: A Space Odyssey.’ This ‘black box’ problem arises from the trait that makes artificial intelligence so powerful: its ability to learn and improve from experience without explicit instructions.”
They conclude: “[U]ntil recently the success of systems like Falcon went underreported. Artificial-intelligence pioneer John McCarthy noted decades ago, ‘As soon as it works, no one calls it AI anymore.’ Further advances in artificial intelligence promise many more benefits for mankind, but only if society avoids strangling this burgeoning technology with burdensome and unnecessary transparency regulations.”
Read more in The Wall Street Journal.