Speechmatics CTO - Next-Generation Speech Recognition

1:46:23

Will Williams is CTO of Speechmatics in Cambridge. In this sponsored episode, he shares deep technical insights into modern speech recognition technology and system architecture. The episode covers several key technical areas:

* Speechmatics' hybrid approach to ASR, which focuses on unsupervised learning methods and achieves comparable results with 100x less data than fully supervised approaches. Williams explains why this is more efficient and generalizable than end-to-end models like Whisper (a toy masked-prediction sketch follows this list).

* Their production architecture implementing multiple operating points for different latency-accuracy trade-offs, with careful latency padding (up to 1.8 seconds) to ensure a consistent user experience (see the latency-padding sketch after this list). The system uses lattice-based decoding with language model integration for improved accuracy.

* The challenges and solutions in real-time ASR, including their approach to diarization (speaker identification), handling cross-talk, and implicit source separation. Williams explains why these problems remain difficult even with modern deep learning approaches (a toy diarization-by-clustering sketch follows this list).

* Their testing and deployment infrastructure, including the use of mirrored environments for catching edge cases in production, and their strategy of maintaining global models rather than allowing customer-specific fine-tuning.

* Technical evolution in ASR, from the early days of custom CUDA kernels and manual memory management to modern frameworks, with Williams critiquing current PyTorch memory management and arguing for more efficient direct memory allocation in production systems (see the buffer-reuse sketch after this list).
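
For readers who want to poke at the ideas above, a few illustrative sketches follow. None of them are Speechmatics' code; every module, shape, and threshold is a made-up placeholder. First, the self-supervised angle: one common way to learn from unlabeled audio is masked prediction, in which an encoder is trained to reconstruct feature frames it never saw. A minimal PyTorch sketch of that objective:

```python
# Toy masked-prediction pretraining step on unlabeled acoustic features
# (wav2vec-style in spirit). All shapes and modules are hypothetical.
import torch
import torch.nn as nn

batch, frames, feat_dim = 8, 200, 80

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
    num_layers=2,
)
predictor = nn.Linear(feat_dim, feat_dim)

features = torch.randn(batch, frames, feat_dim)   # unlabeled acoustic features
mask = torch.rand(batch, frames) < 0.15           # hide ~15% of frames

masked_input = features.clone()
masked_input[mask] = 0.0                          # zero out the masked frames

pred = predictor(encoder(masked_input))

# The encoder is trained to reconstruct only the frames it never saw.
loss = nn.functional.mse_loss(pred[mask], features[mask])
loss.backward()
print(f"masked-prediction loss: {loss.item():.3f}")
```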
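
Second, the latency-padding idea: if each finalized word is held back until a fixed target delay has elapsed (the episode mentions up to 1.8 seconds), the user sees a steady cadence even though the recognizer finalizes some words faster than others. The scheduling policy below is a hypothetical illustration, not Speechmatics' implementation:

```python
# Fixed-delay output padding for streaming ASR: each finalized word is released
# only once a target delay (e.g. 1.8 s) has passed since the audio it covers,
# so fast and slow finalizations look the same to the user. Illustrative only.
from dataclasses import dataclass

TARGET_DELAY_S = 1.8  # hypothetical operating point, matching the episode's example

@dataclass
class FinalizedWord:
    text: str
    audio_end_s: float  # when the word ended in the audio stream
    ready_at_s: float   # when the recognizer actually finalized it

def release_time(word: FinalizedWord) -> float:
    """Earliest time at which the word may be shown to the user."""
    return max(word.ready_at_s, word.audio_end_s + TARGET_DELAY_S)

words = [
    FinalizedWord("hello", audio_end_s=0.4, ready_at_s=0.9),  # finalized quickly
    FinalizedWord("world", audio_end_s=0.9, ready_at_s=2.5),  # finalized slowly
]

for w in words:
    t = release_time(w)
    print(f"{w.text!r}: released at {t:.1f}s, perceived delay {t - w.audio_end_s:.1f}s")
```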
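
Third, diarization: one standard framing (again, not necessarily Speechmatics' approach) is to cluster per-segment speaker embeddings; cross-talk is hard precisely because an overlapped segment does not belong cleanly to a single cluster. A toy sketch with synthetic embeddings:

```python
# Toy diarization-by-clustering: assign speaker labels by clustering
# per-segment speaker embeddings. Real systems add overlap/cross-talk handling
# and online constraints; this is an illustration, not Speechmatics' method.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Fake embeddings from a speaker encoder: two speakers, five segments each.
speaker_a = rng.normal(loc=0.0, scale=0.1, size=(5, 16))
speaker_b = rng.normal(loc=1.0, scale=0.1, size=(5, 16))
segment_embeddings = np.vstack([speaker_a, speaker_b])

# Cluster without fixing the speaker count; the threshold is a made-up value.
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=2.0)
labels = clustering.fit_predict(segment_embeddings)

print("speaker label per segment:", labels)
```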
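
Finally, the memory-management point: a streaming inference loop can reuse a single pre-allocated input buffer instead of materializing a fresh tensor for every chunk, cutting allocator traffic. This is a generic illustration of buffer reuse, not Williams' exact argument or Speechmatics' code:

```python
# Buffer-reuse sketch for streaming inference in PyTorch: write each incoming
# feature chunk into one pre-allocated input tensor instead of allocating a new
# one per step, reducing allocator churn. Generic illustration only.
import torch

chunk_frames, feat_dim = 50, 80
model = torch.nn.Sequential(
    torch.nn.Linear(feat_dim, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 40),
).eval()

# Allocate the input buffer once, up front.
input_buffer = torch.empty(1, chunk_frames, feat_dim)

def feature_chunks():
    """Stand-in for an audio front end producing fixed-size feature chunks."""
    for _ in range(10):
        yield torch.randn(chunk_frames, feat_dim)

with torch.no_grad():
    for chunk in feature_chunks():
        input_buffer[0].copy_(chunk)   # reuse the same memory every step
        logits = model(input_buffer)   # shape (1, chunk_frames, 40)
```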

Get coding with their API:

https://www.speechmatics.com/

DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)?

MLST is sponsored by Tufa Labs:

Focus: ARC, LLMs, test-time compute, active inference, system 2 reasoning, and more.

Interested? Apply for an ML research position: benjamin@tufa.ai

TOC

1. ASR Core Technology & Real-time Architecture

[00:00:00] 1.1 ASR and Diarization Fundamentals

[00:05:25] 1.2 Real-time Conversational AI Architecture

[00:09:21] 1.3 Neural Network Streaming Implementation

[00:12:49] 1.4 Multi-modal System Integration

2. Production System Optimization

[00:29:38] 2.1 Production Deployment and Testing Infrastructure

[00:35:40] 2.2 Model Architecture and Deployment Strategy

[00:37:12] 2.3 Latency-Accuracy Trade-offs

[00:39:15] 2.4 Language Model Integration

[00:40:32] 2.5 Lattice-based Decoding Architecture

3. Performance Evaluation & Ethical Considerations

[00:44:00] 3.1 ASR Performance Metrics and Capabilities

[00:46:35] 3.2 AI Regulation and Evaluation Methods

[00:51:09] 3.3 Benchmark and Testing Challenges

[00:54:30] 3.4 Real-world Implementation Metrics

[01:00:51] 3.5 Ethics and Privacy Considerations

4. ASR Technical Evolution

[01:09:00] 4.1 WER Calculation and Evaluation Methodologies

[01:10:21] 4.2 Supervised vs Self-Supervised Learning Approaches

[01:21:02] 4.3 Temporal Learning and Feature Processing

[01:24:45] 4.4 Feature Engineering to Automated ML

5. Enterprise Implementation & Scale

[01:27:55] 5.1 Future AI Systems and Adaptation

[01:31:52] 5.2 Technical Foundations and History

[01:34:53] 5.3 Infrastructure and Team Scaling

[01:38:05] 5.4 Research and Talent Strategy

[01:41:11] 5.5 Engineering Practice Evolution

Shownotes:

https://www.dropbox.com/scl/fi/d94b1jcgph9o8au8shdym/Speechmatics.pdf?rlkey=bi55wvktzomzx0y5sic6jz99y&st=6qwofv8t&dl=0
