Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership.
Follow TED!
Twitter: https://twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks
The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: https://go.ted.com/eliezeryudkowsky