False Diagnosis Risk? Hallucinations in OpenAI’s Whisper Pose a Serious Problem
00:00
OpenAI's Whisper has a tendency to produce "hallucinations," or fabricated content, including inflammatory language and false medical advice.
One machine learning engineer found hallucinations in about half of the 100 hours of Whisper transcriptions they reviewed, while another developer reported hallucinations in nearly all of 26,000 transcripts created with Whisper.
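For listeners who want to screen their own Whisper output the way these reviewers did, the open-source openai-whisper package exposes per-segment confidence metadata that can flag passages worth a manual check. The sketch below is illustrative only: the file name consultation.wav is hypothetical, and the thresholds mirror the library's default decoding fallbacks rather than validated hallucination detectors.

```python
# Minimal sketch: flag Whisper segments that may warrant manual review.
# Assumes the open-source `openai-whisper` package (pip install openai-whisper)
# and a hypothetical local file "consultation.wav"; the thresholds are
# illustrative heuristics, not proven hallucination detectors.
import whisper

model = whisper.load_model("base")            # smaller models are faster but less accurate
result = model.transcribe("consultation.wav")

for seg in result["segments"]:
    suspect = (
        seg["avg_logprob"] < -1.0             # low average token log-probability
        or seg["no_speech_prob"] > 0.6        # model suspects there is no speech here
        or seg["compression_ratio"] > 2.4     # highly repetitive text, a common failure mode
    )
    if suspect:
        print(f'[{seg["start"]:.1f}s-{seg["end"]:.1f}s] REVIEW: {seg["text"].strip()}')
```

These cut-offs match the defaults Whisper itself uses to decide when to retry decoding, so they catch only some failure modes; that limitation is why the engineers cited above still reviewed transcripts by hand.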
04:01
Whisper is now being used in the medical field: Nabla relies on it to transcribe and summarize patient-doctor interactions. Nabla's tool, however, deletes the original audio for data-security reasons, so transcriptions cannot be verified against the recordings and clinicians have no way to check a transcript's accuracy.
05:51
California legislator Rebecca Bauer-Kahan recently refused to sign a form that would have let her child’s medical network share consultation audio with outside vendors, including Microsoft Azure, underscoring the privacy concerns surrounding AI-powered healthcare tools.
07:06
Experts, including former OpenAI employees, have called for government oversight of AI to prevent misuse.
OpenAI has acknowledged the hallucination problem and is working to address it, though the issue persists.
The application of Whisper in sensitive areas, especially healthcare, raises significant concerns about accuracy, privacy, and the need for regulatory oversight.