
156 | Visualizing Fairness in Machine Learning with Yongsu Ahn and Alex Cabrera

43:04
 
Content is provided by Enrico Bertini and Moritz Stefaner. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Enrico Bertini and Moritz Stefaner or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal.

In this episode we talk with PhD students Yongsu Ahn and Alex Cabrera about two separate data visualization systems they developed to help people analyze machine learning models for the potential biases they may have. The systems, called FairSight and FairVis, have slightly different goals: FairSight focuses on models that generate rankings (e.g., in school admissions), while FairVis focuses more on comparing fairness metrics. With them we explore the world of “machine bias,” trying to understand what it is and what role visualization can play in its detection and mitigation.
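For context on what a “fairness metric” is, here is a minimal, hypothetical Python sketch of two common group-fairness measures (demographic parity difference and equal opportunity difference), the kind of per-group comparison that tools like FairVis surface. It is illustrative only and not code from FairSight or FairVis:

    # Illustrative sketch only -- not code from FairSight or FairVis.
    # Computes two common group-fairness metrics for binary predictions
    # over two demographic groups (encoded 0 and 1).
    import numpy as np

    def demographic_parity_diff(y_pred, group):
        # Difference in positive-prediction rates between the two groups.
        return y_pred[group == 1].mean() - y_pred[group == 0].mean()

    def equal_opportunity_diff(y_true, y_pred, group):
        # Difference in true-positive rates between the two groups.
        tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
        return tpr1 - tpr0

    # Toy data: 8 individuals, 4 per group.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(demographic_parity_diff(y_pred, group))         # 0.25
    print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.33

A value of 0 on either metric would mean the model treats the two groups identically by that measure; nonzero values are the kind of disparity these visualization systems are designed to make visible.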

[Our podcast is fully listener-supported. That’s why you don’t have to listen to ads! Please consider becoming a supporter on Patreon or sending us a one-time donation through PayPal. And thank you!]

Enjoy the show!

Links:


Related episodes


Chapters

1. Welcome to Data Stories! (00:00:33)

2. Our podcast is listener-supported, please consider making a donation (00:01:07)

3. Our topic today: Bias and fairness in machine learning (00:01:41)

4. Our guests: Alex Cabrera (00:02:48)

5. and Yongsu Ahn (00:03:14)

6. How to define 'fairness' and 'bias' in machine learning? (00:03:54)

7. Examples of discrimination in machine learning (00:08:49)

8. What is FairSight? (00:13:22)

9. What is FairVis? (00:17:00)

10. Do you have advice on how to get started with the topic? (00:38:32)

11. Get in touch with us and support us on Patreon (00:52:10)

170 episodes
