If humans can't even agree on what they value, how can we have truly ethical AIs?

Great podcast from the Future of Life Institute. Blurb below:

What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can’t even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration?

Meia Chita-Tegmark and Lucas Perry talk about the value alignment problem: the challenge of aligning the goals and actions of AI systems with the goals and intentions of humans. 

Topics discussed in this episode include:

  • how AGI (artificial general intelligence) can inform human values,
  • the role of psychology in value alignment,
  • how the value alignment problem includes ethics, technical safety research, and international coordination,
  • and the possibility of creating suffering risks (s-risks).

There's so much activity in this area. See the ten commandments of ethical AI (given by a clergyman), or this piece from Android Authority on the complexities of ethics and AI. Wikipedia's coverage of the issue is comprehensive.

And see this typically authoritative entry from Bookforum's Omnivore blog, from the end of last year:

John Salvatier, Katja Grace, Allan Dafoe, and Owain Evans (Oxford) and Baobao Zhang (Yale): When Will AI Exceed Human Performance? Evidence from AI Experts. Andy Fitch interviews Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies. Seth D. Baum on social choice ethics in artificial intelligence. Will AI enable the third stage of life on Earth? Artificial intelligence is getting more powerful, and it’s about to be everywhere. Silicon Valley luminaries are busily preparing for when robots take over. Stunning AI breakthrough takes us one step closer to the Singularity. SingularityNET’s Ben Goertzel has a grand vision for the future of AI.

How worried should we be about artificial intelligence? Self-driving cars, rogue nuke launches, evil AI: What tech threats you should (and shouldn’t) worry about. Hackers have already started to weaponize artificial intelligence. Phil Torres on why superintelligence is a threat that should be taken seriously. The dark secret at the heart of AI: No one really knows how the most advanced algorithms do what they do — that could be a problem. How artificial intelligence learns to be racist: Simple — it’s mimicking us. The darkness at the end of the tunnel: Shuja Haider on artificial intelligence and neoreaction. An AI god will emerge by 2042 and write its own bible — will you worship it?

The rise of AI is sparking an international arms race: Sean Illing interviews Peter W. Singer, author of Wired for War: The Robotics Revolution and Conflict in the 21st Century. China’s AI awakening: The West shouldn’t fear China’s artificial-intelligence revolution — it should copy it. AI power will lead to world domination, says Vladimir Putin (and more). Maureen Dowd on Elon Musk’s billion-dollar crusade to stop the A.I. apocalypse. The real danger to civilization isn’t AI — it’s runaway capitalism.

Ryan Calo (Washington): Artificial Intelligence Policy: A Primer and Roadmap. Oren Etzioni on how to regulate artificial intelligence.