Content is provided by the Integrity Institute. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by the Integrity Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

GPT4: Eldritch abomination or intern? A discussion with OpenAI

1:18:15
 
 


OpenAI, creators of ChatGPT, join the show! In November 2022, ChatGPT upended the tech (and larger) world with a chatbot that passes not only the Turing test, but the bar exam. In this episode, we talk with Dave Willner and Todor Markov, integrity professionals at OpenAI, about how they make large language models safer for all.

Dave Willner is the Head of Trust and Safety at OpenAI. He previously was Head of Community Policy at both Airbnb and Facebook, where he built the teams that wrote the community guidelines and oversaw the internal policies to enforce them.

Todor Markov is a deep learning researcher at OpenAI. He builds content moderation tools for ChatGPT and GPT-4. He graduated from Stanford with a Master’s in Statistics and a Bachelor’s in Symbolic Systems.

Alice Hunsberger hosts the episode. She is the VP of Customer Experience at Grindr, where she leads customer support, insights, and trust and safety. Previously, she worked at OkCupid as Director & Global Head of Customer Experience.

Sahar Massachi is a visiting host today. He is the co-founder and Executive Director of the Integrity Institute. A past fellow of the Berkman Klein Center, Sahar is currently an advisory committee member for the Louis D. Brandeis Legacy Fund for Social Justice, a StartingBloc fellow, and a Roddenbery Fellow.

They discuss what content moderation looks like for ChatGPT, why T&S stands for Tradeoffs and Sadness, and how integrity workers can help OpenAI.

They also chat about the red-teaming process for GPT-4, overlaps between platform integrity and AI integrity, their favorite GPT jailbreaks, and how moderating GPTs is basically like teaching an Eldritch Abomination.

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta’s or any other entity’s views.
