Content provided by Risk Insights: Yusuf Moolla. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Risk Insights: Yusuf Moolla or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal

0. Introduction

1:41
 

A brief intro to the podcast.
If you have suggestions for topics you'd like me to cover, feel free to reach out via email: yusuf@riskinsights.com.au
About this podcast
A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.
Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).

27 episodes

All episodes

 
Spoken (by a human) version of this article.
TL;DR (TL;DL?)
Testing is a core step for algorithmic integrity.
Testing involves various stages, from developer self-checks to UAT. Where these happen will depend on whether the system is built in-house or bought.
Testing needs to cover several integrity aspects, including accuracy, fairness, security, privacy, and performance.
Continuous testing is needed for AI systems; it differs from traditional testing because these newer systems can change without code changes.
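To make the idea of repeatable, automated integrity checks concrete, here is a minimal Python sketch of a recurring test that covers two of the aspects above: accuracy and a simple group-rate comparison as a rough fairness signal. The field names, thresholds, and sample data are illustrative assumptions, not taken from the episode.

```python
# Minimal sketch of a recurring integrity check: accuracy plus a simple
# group-rate comparison. Thresholds and field names are illustrative only.
from dataclasses import dataclass

import pandas as pd

ACCURACY_FLOOR = 0.90         # assumed tolerance, set per model and risk appetite
MAX_APPROVAL_RATE_GAP = 0.05  # assumed fairness tolerance


@dataclass
class IntegrityResult:
    accuracy: float
    approval_rate_gap: float
    passed: bool


def run_integrity_check(df: pd.DataFrame) -> IntegrityResult:
    """df needs columns: 'predicted', 'actual', 'group' (e.g. a cohort label)."""
    accuracy = (df["predicted"] == df["actual"]).mean()
    # Approval rate per group, then the spread between best- and worst-treated groups.
    rates = df.groupby("group")["predicted"].mean()
    gap = float(rates.max() - rates.min())
    passed = bool(accuracy >= ACCURACY_FLOOR and gap <= MAX_APPROVAL_RATE_GAP)
    return IntegrityResult(float(accuracy), gap, passed)


if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "predicted": [1, 0, 1, 1, 0, 1],
            "actual":    [1, 0, 1, 0, 0, 1],
            "group":     ["A", "A", "A", "B", "B", "B"],
        }
    )
    print(run_integrity_check(sample))
```

A check like this could be scheduled to run on fresh production samples, which is one practical way to implement the continuous testing described in the episode.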
 
Spoken (by a human) version of this article.
One question that comes up often is: "How do we obtain assurance about third-party products or services?" Depending on the nature of the relationship, and what you need assurance for, the answer can vary widely. This article lays out the options, considerations, and key steps to take.
TL;DR (TL;DL?)
Third-party assurance for algorithm integrity varies based on the nature of the relationship and specific needs, with several options available.
Key factors to consider include the importance and risk level of the service/product, regulatory expectations, complexity, transparency, and frequency of updates.
Standardised assurance frameworks for algorithm integrity are still emerging; adopt a risk-based approach and consider sector-specific standards like CPS 230 (Australia).
 
Navigating AI Audits with Dr. Shea Brown
Dr. Shea Brown is Founder and CEO of BABL AI. BABL specializes in auditing and certifying AI systems, consulting on responsible AI practices, and offering online education. Shea shares his journey from astrophysics to AI auditing, the core services provided by BABL AI (including compliance audits, technical testing, and risk assessments), and the importance of governance in AI. He also addresses the challenges posed by generative AI, the need for continuous upskilling in AI literacy, and the role of organizations like the IAAA and ForHumanity in building consensus and standards in AI auditing. Finally, Shea provides insights on third-party risks, in-house AI developments, and key skills needed for effective AI governance.
Chapter Markers
00:00 Introduction to Dr. Shea Brown and BABL AI
00:36 The Journey from Astrophysics to AI Auditing
02:22 Core Services and Compliance Audits at BABL
03:57 Educational Initiatives and AI Literacy
05:48 Collaborations and Professional Organizations
08:57 Approach to AI Audits and Readiness
17:29 Challenges with Generative AI in Audits
29:21 Trends in AI Deployment and Risk Assessment
34:53 Skills and Training for AI Governance
40:15 Conclusion and Contact Information
 
Spoken (by a human) version of this article.
AI literacy is growing in importance (e.g., EU AI Act, IAIS). AI literacy needs vary across roles. Even "AI professionals" need AI risk training.
Links
EU AI Act: The European Union Artificial Intelligence Act - specific expectation about "AI literacy".
IAIS: The International Association of Insurance Supervisors is developing a guidance paper on the supervision of AI.
 
Navigating AI Governance and Compliance
Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN and an expert in cybersecurity and AI compliance with over 25 years of experience. Patrick shares his career journey, discusses his passion for educating executives and directors on effective governance, and explains the critical role of management systems like ISO 42001 in AI compliance. We discuss the complexities of AI governance, risk assessment, and the importance of clear organizational context. Patrick also highlights the challenges and benefits of AI assurance and offers insights into the changing landscape of AI standards and regulations.
Chapter Markers
00:00 Introduction
00:23 Patrick's Career Journey
02:31 Focus on AI Governance
04:19 Importance of Education and Internal Training
08:08 Involvement in Industry Associations
14:13 AI Standards and Governance
20:06 Challenges with Preparing for AI Certification
28:04 Future of AI Assurance
 
Mitigating AI Risks
Ryan Carrier is founder and executive director of ForHumanity, a non-profit focused on mitigating the risks associated with AI, autonomous, and algorithmic systems. With 25 years of experience in financial services, Ryan discusses ForHumanity's mission to analyze and mitigate the downside risks of AI to benefit society. The conversation includes insights on the foundation of ForHumanity, the role of independent AI audits, educational programs offered by the ForHumanity AI Education and Training Center, AI governance, and the development of audit certification schemes. Ryan also highlights the importance of AI literacy, stakeholder management, and the future of AI governance and compliance.
Chapter Markers
00:00 Introduction to Ryan Carrier and ForHumanity
00:57 Ryan's Background and Journey to AI
02:10 Founding ForHumanity: Mission and Early Challenges
05:15 Developing Independent Audits for AI
08:02 ForHumanity's Role and Activities
17:26 Education Programs and Certifications
29:21 AI Literacy and Future of Independent Audits
42:06 Getting Involved with ForHumanity
 
Spoken (by a human) version of this article.
Public AI audit reports aren't universally required; they mainly apply to high-risk applications and/or specific jurisdictions. The push for transparency primarily concerns independent audits, not internal reviews. Prepare by implementing ethical AI practices and conducting regular reviews.
Note: High-risk AI systems in banking and insurance are subject to specific requirements.
Links
AI and algorithm audit guidelines vary widely and are not universally applicable. We discussed this in a previous article, outlining how the appropriateness of audit guidance depends on your circumstances.
Audit vs Review: we explored this topic in depth in a previous article.
 
Spoken (by a human) version of this article.
Knowing the basics of substantive testing vs. controls testing can help you determine whether a review will meet your needs. Substantive testing directly identifies errors or unfairness, while controls testing evaluates governance effectiveness. The results and conclusions are different. Understanding these differences can also help you anticipate the extent of your team's involvement during the review process.
Links
This article details a (largely) substantive testing method for accuracy reviews.
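As a rough illustration of the substantive approach, the sketch below independently recomputes the expected outcome for each record and compares it with what the system produced; the exceptions are what a reviewer would then investigate. The fee formula, field names, and tolerance are hypothetical, not taken from the article.

```python
# Minimal sketch of substantive testing: independently recompute the expected
# result for each record and compare it with the system's output.
import pandas as pd

TOLERANCE = 0.01  # acceptable rounding difference, assumed


def expected_fee(balance: float, rate: float, cap: float) -> float:
    """Independent re-implementation of a documented fee rule (hypothetical)."""
    return round(min(balance * rate, cap), 2)


def substantive_test(records: pd.DataFrame) -> pd.DataFrame:
    """records needs columns: balance, rate, cap, system_fee."""
    recomputed = records.apply(
        lambda r: expected_fee(r["balance"], r["rate"], r["cap"]), axis=1
    )
    diff = (records["system_fee"] - recomputed).abs()
    exceptions = records.assign(expected=recomputed, difference=diff)
    return exceptions[exceptions["difference"] > TOLERANCE]


if __name__ == "__main__":
    data = pd.DataFrame(
        {
            "balance": [10_000.0, 250_000.0],
            "rate": [0.002, 0.002],
            "cap": [100.0, 100.0],
            "system_fee": [20.00, 120.00],  # second record is overcharged
        }
    )
    print(substantive_test(data))  # lists only the records that don't reconcile
```

Controls testing, by contrast, would examine whether the approval, testing, and change-management steps around that fee calculation operated as intended, rather than recomputing the fees themselves.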
 
Spoken (by a human) version of this article.
Ongoing education helps everyone understand their role in responsibly developing and using algorithmic systems. Regulators and standard-setting bodies emphasise the need for AI literacy across all organisational levels.
Links
ForHumanity - join the growing community here.
ForHumanity - free courses here.
IAIS: The International Association of Insurance Supervisors is developing a guidance paper on the supervision of AI.
DNB: De Nederlandsche Bank - 6 general principles for the use of AI in the financial sector.
ASIC: The Australian Securities & Investments Commission - report.
NIST: The National Institute of Standards and Technology - AI Risk Management Framework.
EU AI Act: The European Union Artificial Intelligence Act - specific expectation about "AI literacy".
 
Spoken (by a human) version of this article.
The terminology - "audit" vs "review" - is important, but clarity about deliverables is more important when commissioning algorithm integrity assessments. Audits are formal, with an opinion or conclusion that can often be shared externally. Reviews come in various forms and typically produce recommendations for internal use. Regardless of the terminology you use, when commissioning an assessment, clearly define and document the expected deliverable, including the report content and intended distribution, to ensure expectations are met.
 
Spoken (by a human) version of this article.
Outcome-focused accuracy reviews directly verify results, offering more robust assurance than process-focused methods. This approach can catch translation errors, unintended consequences, and edge cases that process reviews might miss. While more time-consuming and complex, outcome-focused reviews provide deeper insights into system reliability and accuracy. This article explains why verifying outcomes is preferred over tracing through processes, and how it works.
 
Spoken (by a human) version of this article.
Documentation makes it easier to consistently maintain algorithm integrity. This is well known. But there are lots of types of documents to prepare, and often the first hurdle is just thinking about where to start. So this simple guide is meant to help do exactly that - get going.
 
Spoken (by a human) version of this article.
Banks and insurers are increasingly using external data; using it beyond its intended purpose can be risky (e.g., discriminatory). Emerging regulations and regulatory guidance emphasise the need for active oversight by boards and senior management to ensure responsible use of external data. Keeping the customer top of mind, asking the right questions, and focusing on the intended purpose of the data can help reduce the risk.
Law and guideline mentioned in the article:
Colorado's External Consumer Data and Information Sources (ECDIS) law
New York's proposed circular letter.
 
Spoken (by a human) version of this article.
Banks and insurers sometimes lose sight of their customer-centric purpose when assessing AI/algorithm risks, focusing instead on regular business risks and regulatory concerns. Regulators are noticing this disconnect. This article aims to outline why the disconnect happens and how we can fix it.
Report mentioned in the article: ASIC, REP 798 Beware the gap: Governance arrangements in the face of AI innovation.
 
Spoken (by a human) version of this article.
With algorithmic systems, a change can trigger a cascade of unintended consequences, potentially compromising fairness, accountability, and public trust. So managing changes is important. But if you use the wrong framework, your change control process may tick the boxes while being both ineffective and inefficient. This article outlines a potential solution: a risk-focused, principles-based approach to change control for algorithmic systems.
Resource mentioned in the article: ISA 315 guideline for general IT controls.
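One way to picture a risk-focused, principles-based approach is to triage each proposed change with a few questions and scale the required controls to the answers. The sketch below is a simplified illustration; the questions, tiers, and required steps are assumptions, not the framework from the article.

```python
# Minimal sketch of risk-focused change triage: a few principle-based questions
# decide how much scrutiny a proposed change to an algorithmic system receives.
from dataclasses import dataclass


@dataclass
class ProposedChange:
    description: str
    affects_customer_outcomes: bool   # could decisions about customers change?
    alters_model_or_rules: bool       # logic/model/threshold change vs cosmetic
    uses_new_data_source: bool        # new external or personal data introduced?


def risk_tier(change: ProposedChange) -> str:
    score = sum(
        [
            change.affects_customer_outcomes,
            change.alters_model_or_rules,
            change.uses_new_data_source,
        ]
    )
    return {0: "low", 1: "medium", 2: "high", 3: "high"}[score]


REQUIRED_STEPS = {
    "low": ["peer review"],
    "medium": ["peer review", "pre-release testing", "documented approval"],
    "high": ["peer review", "pre-release testing", "fairness/accuracy re-check",
             "documented approval", "post-release monitoring"],
}

if __name__ == "__main__":
    change = ProposedChange("Adjust credit score cut-off", True, True, False)
    tier = risk_tier(change)
    print(tier, "->", REQUIRED_STEPS[tier])
```

The point of the sketch is the principle, not the scoring: scrutiny scales with the potential impact of the change rather than with a fixed checklist applied uniformly to every change.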
 
Spoken (by a human) version of this article.
The integrity of algorithmic systems goes beyond accuracy and fairness. In Episode 4, we outlined 10 key aspects of algorithm integrity. Number 5 in that list (not in order of importance) is Security: the algorithmic system needs to be protected from unauthorised access, manipulation and exploitation. In this episode, we explore one important sub-component of this: deprovisioning user access.
Link from article: U.S. National Coordinator for Critical Infrastructure Security and Resilience (CISA) advisory.
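A common way to test deprovisioning is to reconcile the system's active user list against HR's leaver list and flag accounts that remain active after an exit. The sketch below shows one minimal version of that check; the field names and the one-day grace period are assumptions for illustration.

```python
# Minimal sketch of a deprovisioning check: reconcile active system users
# against HR leavers and flag accounts that should have been removed.
from datetime import date, timedelta

import pandas as pd

GRACE_PERIOD = timedelta(days=1)  # assumed: access should end within a day of exit


def stale_accounts(active_users: pd.DataFrame, leavers: pd.DataFrame) -> pd.DataFrame:
    """active_users: columns user_id, last_login. leavers: columns user_id, exit_date."""
    merged = active_users.merge(leavers, on="user_id", how="inner")
    overdue = merged["exit_date"] + GRACE_PERIOD < pd.Timestamp(date.today())
    return merged[overdue]


if __name__ == "__main__":
    active = pd.DataFrame(
        {"user_id": ["u1", "u2", "u3"], "last_login": pd.to_datetime(["2025-01-05"] * 3)}
    )
    exits = pd.DataFrame(
        {"user_id": ["u2"], "exit_date": pd.to_datetime(["2024-12-01"])}
    )
    print(stale_accounts(active, exits))  # u2 still has access after leaving
```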
 
Spoken (by a human) version of this article.
When we're checking for fairness in our algorithmic systems (incl. processes, models, rules), we often ask: what are the personal characteristics or attributes that, if used, could lead to discrimination? This article provides a basic framework for identifying and categorising these attributes.
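As a starting point for that kind of identification exercise, a simple scan of dataset column names against working lists of protected attributes and common proxies can flag candidates for closer review. The sketch below is illustrative only; the lists are assumptions and are not the categorisation framework from the article.

```python
# Minimal sketch: scan a dataset's columns against a working list of protected
# attributes and common proxies, flagging anything that needs a closer look.
PROTECTED = {"gender", "age", "disability", "race", "religion", "marital_status"}
POSSIBLE_PROXIES = {"postcode", "suburb", "occupation", "first_name", "school"}


def classify_columns(columns: list[str]) -> dict[str, str]:
    """Label each column as protected, possible proxy, or unclassified (needs review)."""
    labels = {}
    for col in columns:
        name = col.lower()
        if any(p in name for p in PROTECTED):
            labels[col] = "protected attribute"
        elif any(p in name for p in POSSIBLE_PROXIES):
            labels[col] = "possible proxy - investigate"
        else:
            labels[col] = "unclassified - review"
    return labels


if __name__ == "__main__":
    dataset_columns = ["customer_id", "Postcode", "annual_income", "gender_code"]
    for column, label in classify_columns(dataset_columns).items():
        print(f"{column}: {label}")
```

A name-based scan is only a first pass; attributes can also enter indirectly through joined datasets or derived features, which is why a categorisation framework matters.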
 
Spoken (by a human) version of this article.
Legislation isn't the silver bullet for algorithmic integrity. Is it useful? Sure. It helps provide clarity and can reduce ambiguity. And once a law is passed, we must comply. However:
existing legislation may already apply
new algorithm-focused laws can be too narrow or quickly outdated
standards can be confusing, and may not cover what we need
"best practice" frameworks help, but they're not always the best (and there are several, so they can't all be "best").
In short, these instruments are helpful. But we need to know what we're getting - what they cover, what they don't cover, etc.
 
Spoken (by a human) version of this article.
Even in discussions among AI governance professionals, there seems to be a silent "gen" before AI. With the rapid progress - or rather, prominence - of generative AI capabilities, these have taken centre stage. Amidst this excitement, we mustn't lose sight of the established algorithms and data-enabled workflows driving core business decisions. These range from simple rules-based systems to complex machine learning models, each playing a crucial role in our operations. In this episode, we'll examine why we need to keep an eye on established algorithmic systems, and how.
 
Spoken (by a human) version of this article.
In a previous article, we discussed algorithmic fairness, and how seemingly neutral data points can become proxies for protected attributes. In this article, we explore a concrete example of a proxy used in insurance and banking algorithms: postcodes. We've used Australian terminology and data, but the concept will apply to most countries. Using Australian Bureau of Statistics (ABS) Census data, the article demonstrates how postcodes can serve as hidden proxies for gender, disability status and citizenship.
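To illustrate the kind of analysis involved, the sketch below joins postcode-level census proportions onto model decisions and checks whether approval rates move with a protected-attribute proportion. The numbers are made up; in practice the proportions would come from ABS Census tables and the decisions from your own system.

```python
# Minimal sketch of a proxy check: join postcode-level census proportions onto
# model decisions, then see whether approval rates track a protected-attribute
# proportion. All values below are fabricated for illustration.
import pandas as pd

# Hypothetical postcode-level proportions (in practice, derived from ABS Census tables).
census = pd.DataFrame(
    {
        "postcode": ["2000", "2170", "2770", "3000"],
        "pct_with_disability": [0.04, 0.09, 0.12, 0.05],
    }
)

# Hypothetical model decisions.
decisions = pd.DataFrame(
    {
        "postcode": ["2000", "2000", "2170", "2170", "2770", "2770", "3000", "3000"],
        "approved": [1, 1, 1, 0, 0, 0, 1, 1],
    }
)

approval_by_postcode = decisions.groupby("postcode")["approved"].mean().reset_index()
merged = approval_by_postcode.merge(census, on="postcode")

# A strong negative correlation would suggest postcode is acting as a proxy.
print(merged)
print("correlation:", merged["approved"].corr(merged["pct_with_disability"]))
```

A correlation on its own doesn't prove unfair treatment, but it is a cheap early signal that a seemingly neutral field warrants deeper fairness analysis.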
 
Spoken (by a human) version of this article.
When we talk about security in algorithmic systems, it's easy to focus solely on keeping the bad guys out. But there's another side to this coin that's just as important: making sure the right people can get in. This article aims to explain how security and access work together for better algorithm integrity.
 
Spoken (by a human) version of this article.
Fairness in algorithmic systems is a multi-faceted, and developing, topic. In Episode 4, we explored ten key aspects to consider when scoping an algorithm integrity audit. One aspect was fairness, with this in the description: "...The design ensures equitable treatment...". This raises an important question: shouldn't we aim for equal, rather than equitable, treatment? This episode aims to shed light on the distinctions between equal and equitable treatment in algorithmic systems, while acknowledging that our understanding of fairness is still developing and subject to ongoing debate.
 
Spoken (by a human) version of this article.
In Episode 1, we explored the challenges of placing undue reliance on audits. One potential solution that we outlined is a clear scope, particularly regarding the audit objective. In this episode, we focus on algorithm integrity as the broad audit objective. While it's easy to assert that an algorithm has integrity, confirming this assertion is a bit more complex. To help simplify this, this episode breaks it down into a set of key areas to consider.
 
Spoken (by a human) version of this article.
AI and algorithm audits help ensure ethical and accurate data processing, preventing harm and disadvantage. However, the guidelines are not yet mature, and quite disparate. This can make the audit process confusing, and quite daunting - how do you wade through it all to find the information that you need when deciding how to commission your audit? Fortunately, there is a solution: narrowing the guidelines down based on relevance. Not all existing guidelines are universally applicable. This can vary based on your situation, including:
The specific context of your industry
The nature of your deployment
The characteristics of the system being audited
Whether the audit is internal or external
Who produces the guidance and for whom it is intended
This article will help you distinguish between audit guidance that applies to your situation and guidance that may not be relevant to your industry, deployment, or organizational needs.
https://riskinsights.com.au/blog/audit-guidance-context-matters
 
Spoken (by a human) version of this article.
The motivation(s) for commissioning a review can determine how effective it will be. Consider a personal health check-up. Sometimes we undergo medical check-ups because we don't have a choice; we need to - for example, for workplace requirements or for insurance. At other times, we choose to undergo such check-ups because we want to maintain optimal health and catch any potential issues early. Often, our approach differs depending on whether we are forced to, or choose to. The motivations - need vs want - can define how we prioritise them, what our interactions with the medical professional are, and how we view the results. Our engagement and satisfaction levels are generally higher when we choose than when we are forced. The same holds for reviews and audits.
https://riskinsights.com.au/blog/need-vs-want-an-audit
 
Spoken (by a human) version of this article.
One common issue with audits is undue reliance. Can you rely on the audit report to tell you what you need to know? Could you be relying on it too much?
https://riskinsights.com.au/blog/reliable-audits
 
A brief intro to the podcast. If you have suggestions for topics you'd like me to cover, feel free to reach out via email: yusuf@riskinsights.com.au
 