
On the front lines of the disinformation fight with Áine Kerr

39:55
Content provided by New Thinking. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by New Thinking or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

In the fight against disinformation, the last line of defense between audiences and malicious falsehoods is made up of “trust and safety” teams, also known as content moderators. Some are employed by social media platforms like Facebook and Spotify, but increasingly the platforms outsource the work of identifying and countering dangerous lies to fact-checking organizations like Kinzen, a fast-growing Irish company.

In this episode of In Reality, host Eric Schurenberg sits down with Áine Kerr, co-founder and COO of Kinzen. Áine is a serial risk-taker with extensive experience at the intersection of journalism and technology, most recently as global head of journalism partnerships at Facebook.

Kinzen helps platforms, policymakers, and other defenders “get ahead and stay ahead” of false and hateful content on video, podcast, and text platforms. The company uses artificial intelligence to sniff out objectionable content and then, when needed, brings in human reviewers to judge context and nuance. This “human-in-the-loop” technology, as Kinzen calls it, minimizes errors while still allowing fact-checking at social media scale.

In the recent Brazilian elections, for example, Áine explains that disinformation actors came to realize that phrases like “election fraud” and “rigged election” were alerting content moderators, who could take down their false claims. So the actors began substituting seemingly innocuous phrases like “we are campaigning for clean elections.” Kinzen’s human moderators spotted the change and helped authorities intercept the false messages.

Áine and Eric also dive into the many reasons someone might share harmful content online, ranging from sheer amoral greed to ideological commitment. She ends with a warning that the spreaders of disinformation currently have the upper hand: it is always easier to spread lies than to counteract them. The allies of truth (researchers, social media platforms, entrepreneurs, and fact-checking organizations like hers) need to get better at coordinating their efforts to fight back, or democracy will remain at existential risk around the world.

Website - free episode transcripts
www.in-reality.fm

Produced by Sound Sapien
soundsapien.com

Alliance for Trust in Media
alliancefortrust.com
