
Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal

ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt

35:33
 

Manage episode 375063826 series 3461851


This week we’re talking about the role of fairness in AI/ML. It is becoming increasingly apparent that incorporating fairness into our AI systems and machine learning models while mitigating bias and potential harms is a critical challenge. Not only that, it’s a challenge that demands a collective effort to ensure the responsible, secure, and equitable development of AI and machine learning systems.

But what does this actually mean in practice? To find out, we spoke with Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI. In this week’s episode, Nick reviews some key principles related to model governance and fairness, from things like accountability and ownership all the way to model deployment and monitoring.

He also discusses real-life examples of machine learning algorithms that have demonstrated bias and disparity, along with how those outcomes can harm individuals or groups.
Later in the episode, Nick offers insightful advice for organizations that are assessing their AI security risk related to algorithmic disparities and unfair models.
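To make the episode's theme of "measuring algorithmic disparities" concrete, here is a minimal, illustrative sketch of one widely used disparity metric, the adverse impact ratio. The function and data below are hypothetical examples, not drawn from the episode or from any SolasAI tooling:

```python
# Illustrative only: a minimal sketch of one common disparity measure,
# the adverse impact ratio (AIR). The data and function are hypothetical.

def adverse_impact_ratio(approvals, group_labels, protected, reference):
    """Ratio of the protected group's favorable-outcome rate to the
    reference group's rate. Results below ~0.80 are commonly flagged
    for review under the 'four-fifths rule'."""
    def rate(group):
        outcomes = [a for a, g in zip(approvals, group_labels) if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# Hypothetical model decisions: 1 = approved, 0 = denied
approvals    = [1, 0, 1, 0, 0,  1, 1, 1, 1, 0]
group_labels = ["A"] * 5 + ["B"] * 5

air = adverse_impact_ratio(approvals, group_labels, protected="A", reference="B")
print(f"Adverse impact ratio: {air:.2f}")  # group A: 2/5 vs. group B: 4/5 -> 0.50
```

A ratio of 0.50 falls well below the four-fifths threshold, so a model producing these outcomes would typically be escalated for review. In practice, measurement like this is the starting point for mitigation, such as searching for less discriminatory alternative models.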

Additional tools and resources to check out:
AI Radar
ModelScan
NB Defense
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr: The World’s First AI/Machine Learning Bug Bounty Platform

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

