AI Safety with Shazeda Ahmed

57:06
 
Content provided by Ellie Anderson, Ph.D. and David Peña-Guzmán, Ph.D. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Ellie Anderson, Ph.D. and David Peña-Guzmán, Ph.D. or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal.

Welcome your robot overlords! In episode 101 of Overthink, Ellie and David speak with Dr. Shazeda Ahmed, a specialist in AI safety, to dive into the philosophy guiding artificial intelligence. With the rise of LLMs like ChatGPT, the lofty utilitarian principles of Effective Altruism have taken the tech-world spotlight by storm. Many who work on AI safety and ethics worry about the dangers of AI, from how automation might put entire categories of workers out of a job to how future forms of AI might pose a catastrophic “existential risk” for humanity as a whole. And yet, optimistic CEOs portray AI as the beginning of an easy, technology-assisted utopia. Who is right about AI: the doomers or the utopians? And whose voices are part of the conversation in the first place? Is AI risk talk spearheaded by well-meaning experts or investor billionaires? And can philosophy guide discussions about AI toward the right thing to do?

Check out the episode's extended cut here!


Nick Bostrom, Superintelligence
Adrian Daub, What Tech Calls Thinking
Virginia Eubanks, Automating Inequality
Mollie Gleiberman, “Effective Altruism and the strategic ambiguity of ‘doing good’”
Matthew Jones and Chris Wiggins, How Data Happened
William MacAskill, What We Owe the Future
Toby Ord, The Precipice
Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality”
Inioluwa Deborah Raji and Roel Dobbe, “Concrete Problems in AI Safety, Revisited”
Peter Singer, Animal Liberation
Amia Srinivasan, “Stop the Robot Apocalypse”

Support the show

Patreon | patreon.com/overthinkpodcast
Website | overthinkpodcast.com
Instagram & Twitter | @overthink_pod
Email | dearoverthink@gmail.com
YouTube | Overthink podcast
