
Content provided by EDGE AI FOUNDATION. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by EDGE AI FOUNDATION or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

Generative AI on NXP Microprocessors

28:44
 

Manage episode 523414600 series 3574631

Stepping into a future where AI doesn't require the cloud, NXP is revolutionizing edge computing by bringing generative AI directly to microprocessors. Alberto Alvarez offers an illuminating journey through NXP's approach to private, secure, and efficient AI inference that operates entirely at the edge.
The heart of NXP's innovation is their eIQ GenAI Flow, a comprehensive software pipeline designed for i.MX SoCs that enables both fine-tuning and optimization of AI models. This dual capability allows developers to adapt openly available large language models for specific use cases without compromising data privacy, while also tackling the challenge of memory footprint through quantization techniques that maintain model accuracy. The conversational AI implementation creates a seamless experience by combining wake word detection, speech recognition, language processing with retrieval-augmented generation, and natural speech synthesis—all accelerated by NXP's Neutron NPU.
Most striking is NXP's partnership with Kinara, which introduces truly groundbreaking multimodal AI capabilities running entirely at the edge. Their demonstration of the LLaVA model—combining Llama 3's 8 billion parameters with CLIP vision encoding—showcases the ability to process both images and language queries without any cloud connectivity. Imagine industrial systems analyzing visual scenes, detecting subtle anomalies like water spills, and providing spoken reports—all while keeping sensitive data completely private. With quantization reducing these massive models to manageable 4-bit and 8-bit precision, NXP is making previously impossible edge AI applications a practical reality.
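To make the 4-bit/8-bit point concrete, here is a minimal sketch of symmetric int8 weight quantization—the kind of size-for-accuracy trade-off mentioned above. The helper names and sample values are illustrative; NXP's eIQ tooling performs this with its own converters.

```python
# Symmetric per-tensor int8 quantization: map float weights onto
# [-127, 127] with a single scale, so each weight needs 1 byte
# instead of 4 (float32) -- a 4x memory reduction.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float values for use during inference.
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.89]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The rounding error per weight is bounded by half the scale step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Four-bit schemes push the same idea further (16 levels instead of 255), usually with per-group scales to keep the rounding error—and hence the accuracy loss—small enough for 8B-parameter models to fit edge memory budgets.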
Ready to experience the future of edge intelligence? Explore NXP's application code hub to start building with eIQ GenAI resources on compatible hardware and discover how your next project can harness the power of generative AI without surrendering privacy or security to the cloud.

Send us a text

Support the show

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org


Chapters

1. Introduction to GenAI at NXP (00:00:00)

2. eIQ GenAI Flow Pipeline Overview (00:01:12)

3. ASR, LLM and RAG Components (00:04:45)

4. Multimodal GenAI Edge Demo with Kinara (00:07:57)

5. LLaVA Model Performance & Application (00:14:10)

6. Future of Edge AI at NXP (00:24:45)


