Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://zh.player.fm/legal

Venkatesh Rao: Protocols, Intelligence, and Scaling

Duration: 2:18:35

“There is this move from generality in a relative sense of ‘we are not as specialized as insects’ to generality in the sense of omnipotent, omniscient, godlike capabilities. And I think there's something very dangerous that happens there, which is you start thinking of the word ‘general’ in completely unhinged ways.”

In episode 114 of The Gradient Podcast, Daniel Bashir speaks to Venkatesh Rao.

Venkatesh is a writer and consultant. He has been writing the widely read Ribbonfarm blog since 2007, and more recently, the popular Ribbonfarm Studio Substack newsletter. He is the author of Tempo, a book on timing and decision-making, and is currently working on his second book, on the foundations of temporality. He has been an independent consultant since 2011, supporting senior executives in the technology industry. His work in recent years has focused on the AI, semiconductor, sustainability, and protocol technology sectors. He holds a PhD in control theory (2003) from the University of Michigan. He is currently based in the Seattle area and enjoys dabbling in robotics in his spare time. You can learn more about his work at venkateshrao.com.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Origins of Ribbonfarm and Venkat’s academic background

* (04:23) Voice and recurring themes in Venkat’s work

* (11:45) Patch models and multi-agent systems: integrating philosophy of language, balancing realism with tractability

* (21:00) More on abstractions vs. tractability in Venkat’s work

* (29:07) Scaling of industrial value systems, characterizing AI as a discipline

* (39:25) Emergent science, intelligence and abstractions, presuppositions in science, generality and universality, cameras and engines

* (55:05) Psychometric terms

* (1:09:07) Inductive biases (yes I mentioned the No Free Lunch Theorem and then just talked about the definition of inductive bias and not the actual theorem 🤡)

* (1:18:13) LLM training and efficiency, comparing LLMs to humans

* (1:23:35) Experiential age, analogies for knowledge transfer

* (1:30:50) More clarification on the analogy

* (1:37:20) Massed Muddler Intelligence and protocols

* (1:38:40) Introducing protocols and the Summer of Protocols

* (1:49:15) Evolution of protocols, hardness

* (1:54:20) LLMs, protocols, time, future visions, and progress

* (2:01:33) Protocols, drifting from value systems, friction, compiling explicit knowledge

* (2:14:23) Directions for ML people in protocols research

* (2:18:05) Outro

Links:

* Venkat’s Twitter and homepage

* Mediocre Computing

* Summer of Protocols and 2024 Call for Applications (apply!)

* Essays discussed

* Patch models and their applications to multivehicle command and control

* From Mediocre Computing

* Text is All You Need

* Magic, Mundanity, and Deep Protocolization

* A Camera, Not an Engine

* Massed Muddler Intelligence

* On protocols

* The Unreasonable Sufficiency of Protocols

* Protocols Don’t Build Pyramids

* Protocols in (Emergency) Time

* Atoms, Institutions, Blockchains


Get full access to The Gradient at thegradientpub.substack.com/subscribe