
The Resonance Test 90: Responsible AI with David Goodis and Martin Lopatka

32:48
 
Content provided by The EPAM Continuum Podcast Network and EPAM Continuum. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by The EPAM Continuum Podcast Network and EPAM Continuum or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal
Responsible AI isn’t about laying down the law. Creating responsible AI systems and policies is necessarily an iterative, longitudinal endeavor. Doing it right requires constant conversation among people with diverse kinds of expertise, experience, and attitudes. Which is exactly what today’s episode of *The Resonance Test* embodies. We bring to the virtual table David Goodis, Partner at INQ Law, and Martin Lopatka, Managing Principal of AI Consulting at EPAM, and ask them to lay down their cards. Turns out, they are holding insights as sharp as diamonds.

This well-balanced pair begins by talking about definitions. Goodis mentions the recent Canadian draft legislation to regulate AI, which asks “What is harm?” because, he says, “What we're trying to do is minimize harm or avoid harm.” The legislation casts harm as physical or psychological harm, damage to a person's property (“Suppose that could include intellectual property,” Goodis says), and any economic loss to a person.

This leads Lopatka to wonder whether there should be “a differentiation in the way that we legislate fully autonomous systems that are just part of automated pipelines.” What happens, he wonders, when there is an inherently symbiotic system between AI and humans, where “the design is intended to augment human reasoning or activities in any way”? Goodis is comforted when a human is looped in and isn’t merely saying: “Hey, AI system, go ahead and make that decision about David, can he get the bank loan, yes or no?” This nudges Lopatka to respond: “The inverse is, I would say, true for myself. I feel like putting a human in the loop can often be a way to shunt off responsibility for inherent choices that are made in the way that AI systems are designed.” He wonders if more scrutiny is needed in designing the systems that present results to human decision-makers. We also need to examine how those systems operate, says Goodis, pointing out that while an AI system might not be “really making the decision,” it might be “*steering* that decision or influencing that decision in a way that maybe we're not comfortable with.”

This episode will prepare you to think about informed consent (“It's impossible to expect that people have actually even read, let alone *comprehended,* the terms of services that they are supposedly accepting,” says Lopatka), the role of corporate oversight, the need to educate users about risk, and the shared obligation involved in building responsible AI.

One fascinating exchange centered on the topic of autonomy, toward which Lopatka suggests that a user might have mixed feelings. “Maybe I will object to one use [of personal data] but not another and subscribe to the value proposition that by allowing an organization to process my data in a particular way, there is an upside for me in terms of things like personalized services or efficiency gains for myself. But I may have a conscientious objection to [other] things.” To which Goodis reasonably asks: “I like your idea, but how do you implement that?” There is no final answer, obviously, but at one point, Goodis suggests a reasonable starting point: “Maybe it is a combination of consent versus ensuring organizations act in an ethical manner.”

This is a conversation for everyone to hear. So listen, and join Goodis and Lopatka in this important dialogue.

Host: Alison Kotin
Engineer: Kyp Pilalas
Producer: Ken Gordon