Content provided by Scott Logic. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Scott Logic or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

Will we ever be able to secure GenAI?

35:21
Episode 427835266 · Series 3322243

In this episode, Oliver Cronk, Doro Hinrichs and Kira Clark from Scott Logic are joined by Peter Gostev, Head of AI at Moonpig. Together, they explore whether we can ever really trust and secure Generative AI (GenAI), while sharing stories from the front line about getting to grips with this rapidly evolving technology.

With its human-like, non-deterministic nature, GenAI frustrates traditional pass/fail approaches to software testing. The panellists explore ways to tackle this, and discuss Scott Logic’s Spy Logic project which helps development teams investigate defensive measures against prompt injection attacks on a Large Language Model.

Looking to the future, they ask whether risk mitigation measures will ever be effective – and what impact this will have on product and service design – before offering pragmatic advice on what organisations can do to navigate this terrain.

Links from this episode


From the podcast Beyond the Hype (21 episodes)
 