AI: A Future for Humans?
Science sleuths Adam Rutherford and Hannah Fry unwrap some of the biggest ideas presented by AI visionary Stuart Russell in this year's BBC Reith Lectures.
As huge tech companies race to develop ever more powerful AI systems, the creation of super-intelligent machines seems almost inevitable. But what happens when, one day, we set these advanced AIs loose? How can we be sure they'll have humanity's best interests in their cold silicon hearts?
Inspired by Stuart Russell's fourth and final Reith lecture, AI-expert Hannah Fry and AI-curious Adam Rutherford imagine how we might build an artificial mind that knows what's good for us and always does the right thing.
Can we 'programme' machine intelligence to always be aligned with the values of its human creators? Will it be suitably governed by a really, really long list of rules - or will it need a set of broad moral principles to guide its behaviour? If so, whose morals should we pick?
On hand to help Fry and Rutherford unpick the ethical quandaries of our fast-approaching future are Adrian Weller, Programme Director for AI at The Alan Turing Institute, and Brian Christian, author of The Alignment Problem.
Producer - Melanie Brown
Assistant Producer - Ilan Goodman
Broadcasts
- Wed 22 Dec 2021 11:00, BBC Radio 4
- Sun 2 Jan 2022 21:30, BBC Radio 4