
LLMs - Fancy Autocorrect or can they actually Reason?

14:58

Send us a text

In this episode, we discuss the limitations of Large Language Models (LLMs) in areas like deductive reasoning, analogy-making, and ethical judgment. While today’s AI models excel at recognizing statistical patterns in vast datasets, they lack genuine understanding or an internal model of the world. Researchers are tackling these challenges through innovations such as causal AI, inference-time computing, and neuro-symbolic approaches, all aimed at enabling AI to move beyond mere pattern recognition towards true reasoning.

We explore how these emerging technologies, including causal inference, inference-time computing, and neuro-symbolic integration, are pushing AI closer to human-like, “System 2” reasoning. Will these advancements finally bridge the gap between AI imitation and genuine reasoning? Tune in as we dive into the future of artificial intelligence and explore what it will take for machines to truly think.
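To give a feel for what "inference-time computing" can mean in practice, here is a minimal, purely illustrative Python sketch of self-consistency sampling: the same question is answered several times and the majority answer wins. This is an assumption-laden sketch for the curious listener, not material from the episode; the sample_answer stub is a hypothetical stand-in for a real LLM call.

    import random
    from collections import Counter

    def sample_answer(question: str) -> str:
        # Hypothetical stand-in for one sampled LLM completion; a real system
        # would call a language model at a non-zero temperature here.
        return random.choice(["42", "42", "41"])

    def self_consistency(question: str, n_samples: int = 9) -> str:
        # Spend extra compute at inference time: draw several independent
        # reasoning paths and return the majority-vote answer.
        votes = Counter(sample_answer(question) for _ in range(n_samples))
        return votes.most_common(1)[0][0]

    print(self_consistency("What is 6 * 7?"))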

Support the show

If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or just provide feedback. We love feedback!

