EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You
Manage episode 433820956 series 2892548
Guest:
David LaBianca, Senior Engineering Director, Google
Topics:
The universe of AI risks is broad and deep. We’ve made a lot of headway with our SAIF framework: can you give us a) a 90 second tour of SAIF and b) share how it’s gotten so much traction and c) talk about where we go next with it?
The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term?
Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about?
How do we plan to work with existing organizations, such as Frontier Model Forum (FMF) and Open Source Security Foundation (OpenSSF)? Does this also complement emerging AI security standards?
AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape?
What do we expect to see out of CoSAI work and when? What should people be looking forward to and what are you most looking forward to releasing from the group?
We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them? In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it?
An off-the-cuff question: how do you do AI governance well?
Resources:
190 episodes