Content provided by Anton Chuvakin. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Anton Chuvakin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models

29:04

Manage episode 380593266 series 2892548

Guest:

  • Kathryn Shih, Group Product Manager, LLM Lead in Google Cloud Security

Topics:

  • Could you give our audience the quick version of what is an LLM and what things can they do vs not do? Is this “baby AGI” or is this a glorified “autocomplete”?

  • Let’s talk about the different ways to tune the models, and when we think about tuning what are the ways that attackers might influence or steal our data?

  • Can you help our security-leader listeners have the right vocabulary and concepts to reason about the risk of their information a) going into an LLM and b) getting regurgitated by one?

  • How do I keep the output of a model safe, and what questions do I need to ask a vendor to understand if they’re a) talking nonsense or b) actually keeping their output safe?

  • Are hallucinations inherent to LLMs and can they ever be fixed?

  • So there are risks to data, new opportunities for attacks, and hallucinations. How do we identify good opportunities in this area given the risks?
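The "keeping model output safe" question above can be made concrete with a small sketch. The following Python guard validates untrusted LLM output before it reaches downstream systems; the secret patterns, the `ALLOWED_KEYS` schema, and the function name are illustrative assumptions for this sketch, not anything prescribed in the episode.

```python
import json
import re

# Illustrative patterns only: real deployments would use a broader,
# maintained set of secret detectors.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # PEM private key header
]

# Hypothetical response schema the application expects back from the model.
ALLOWED_KEYS = {"summary", "severity"}

def guard_llm_output(raw: str) -> dict:
    """Validate untrusted model output before acting on it."""
    # 1. Refuse output that looks like it regurgitates a secret.
    for pattern in SECRET_PATTERNS:
        if pattern.search(raw):
            raise ValueError("model output appears to contain a secret")
    # 2. Require well-formed JSON (json.JSONDecodeError subclasses ValueError).
    data = json.loads(raw)
    # 3. Enforce the expected schema so unexpected fields are rejected.
    if not isinstance(data, dict) or set(data) - ALLOWED_KEYS:
        raise ValueError("model output does not match the expected schema")
    return data

print(guard_llm_output('{"summary": "patched", "severity": "low"}'))
```

Treating the model's response as untrusted input, the same way you would treat user input, is the design point: validation happens on the way out of the model, regardless of how the prompt was constructed.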

Resources:


