
Keeping AI Safe: The UPStarts Podcast Episode 159

16:40
 
Content provided by Zain Raza. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Zain Raza or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal.
There are plenty of reasons to be scared of AI: automation displacing workers, a self-aware robot becoming an existential threat to humanity, and so on. Most of these fears still live in the world of science fiction, at least for now. On this episode of The UPStarts Podcast, however, Zain Raza discusses one fear that is all too real: the risk of hackers taking over our AI systems and machines. Attacks like these are already far too common to dismiss. Just as hackers have gone after WhatsApp, banks, and other institutions, what is to stop them from going after our AI systems once we depend on them to make decisions with our data that put our lives at risk (e.g., on the road, in business, or in the operating room)? Zain walks through a major flaw in deep learning algorithms that hackers can currently exploit, how it is exploited, and new research from MIT that points to a possible way of protecting against this weakness. As always, enjoy the episode!

CONNECT WITH ZAIN RAZA:
Twitter: @ZainRaz14
Instagram: zraza_theupstart
LinkedIn: https://www.linkedin.com/in/zain-raza-989817143/

HELPFUL LINKS:
MIT Technology Review article: https://www.technologyreview.com/s/613555/how-we-might-protect-ourselves-from-malicious-ai/

---
Send in a voice message: https://podcasters.spotify.com/pod/show/upstarts/message
Support this podcast: https://podcasters.spotify.com/pod/show/upstarts/support
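The "major flaw" referenced here is, in all likelihood, adversarial examples: inputs perturbed almost imperceptibly so that a trained model confidently misclassifies them. As a rough illustration only (not code from the episode or the MIT research), here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch; the model, image, label, and epsilon values are hypothetical placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example: nudge every pixel slightly
    in the direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)        # loss w.r.t. the true label
    loss.backward()                                     # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()   # one signed-gradient step
    return adversarial.clamp(0, 1).detach()             # keep pixels in a valid range

Defenses like those covered in the linked MIT Technology Review piece generally aim to make models robust to exactly this kind of perturbation, for example by training on adversarially perturbed inputs.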

193 episodes
