
Content provided by Data Driven. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Data Driven or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://zh.player.fm/legal

Niv Braun on AI Security Measures and Emerging Threats

53:11
 


In today's episode, we're thrilled to have Niv Braun, co-founder and CEO of Noma Security, join us as we tackle some pressing issues in AI security.

With the rapid adoption of generative AI technologies, the landscape of data security is evolving at breakneck speed. We'll explore the increasing need to secure systems that handle sensitive AI data and pipelines, the rise of AI security careers, and the looming threats of adversarial attacks, model "hallucinations," and more. Niv will share his insights on how companies like Noma Security are working tirelessly to mitigate these risks without hindering innovation.

We'll also dive into real-world incidents, such as compromised open-source models and the infamous PyTorch breach, to illustrate the critical need for improved security measures. From the importance of continuous monitoring to the development of safer formats and the adoption of a zero trust approach, this episode is packed with valuable advice for organizations navigating the complex world of AI security.

So, whether you're a data scientist, AI engineer, or simply an enthusiast eager to learn more about the intersection of AI and security, this episode promises to offer a wealth of information and practical tips to help you stay ahead in this rapidly changing field. Tune in and join the conversation as we uncover the state of AI security and what it means for the future of technology.

Quotable Moments

00:00 Security spotlight shifts to data and AI.

03:36 Protect against misconfigurations, adversarial attacks, new risks.

09:17 Compromised model with undetectable data leaks.

12:07 Manual parsing needed for valid, malicious code detection.

15:44 Concerns over Hugging Face models may affect jobs.

20:00 Combines self-developed and third-party AI models.

20:55 Ensure models don't use sensitive or unauthorized data.

25:55 Zero Trust: mindset, philosophy, implementation, security framework.

30:51 LLM attacks will have significantly higher impact.

34:23 Need better security awareness, exposed secrets risk.

35:50 Be organized with visibility and governance.

39:51 Red teaming for AI security and safety.

44:33 Gen AI primarily used by consumers, not businesses.

47:57 Providing model guardrails and runtime protection services.

50:53 Ensure flexible, configurable architecture for varied needs.

52:35 AI, security, innovation discussed by Niv Braun.

