Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

3:42:36

Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.

Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability" - attempting to uncover and understand the internal structures and algorithms that emerge within these networks.
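As background for the sparse-autoencoder discussion later in the episode, here is a minimal PyTorch sketch of the core idea: train a wide, L1-regularised autoencoder on a model's internal activations so that each activation vector is reconstructed from a small number of active "features". The layer sizes, the L1 coefficient, and the plain ReLU-plus-L1 formulation are illustrative assumptions, not the specific setup used by the DeepMind team.

```python
# Minimal sparse autoencoder (SAE) sketch: decompose activations into a wider,
# sparse feature basis. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_features: int = 4096, l1_coeff: float = 1e-3):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)   # activation -> feature space
        self.dec = nn.Linear(d_features, d_model)   # feature space -> reconstruction
        self.l1_coeff = l1_coeff

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.enc(acts))           # non-negative, hopefully sparse codes
        recon = self.dec(feats)
        recon_loss = (recon - acts).pow(2).mean()    # reconstruct the original activation
        sparsity_loss = feats.abs().mean()           # L1 penalty encourages few active features
        loss = recon_loss + self.l1_coeff * sparsity_loss
        return recon, feats, loss

# Toy usage: random vectors standing in for a transformer layer's activations.
sae = SparseAutoencoder()
fake_acts = torch.randn(32, 512)
_, feats, loss = sae(fake_acts)
print(feats.shape, loss.item())
```

The trade-off between reconstruction error and sparsity (and alternatives to the L1 penalty, such as top-k activation functions) is exactly what much of the later conversation digs into.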

SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

***

SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DON'T MISS!):

https://www.dropbox.com/scl/fi/36dvtfl3v3p56hbi30im7/NeelShow.pdf?rlkey=pq8t7lyv2z60knlifyy17jdtx&st=kiutudhc&dl=0

We riff on:

* How neural networks develop meaningful internal representations beyond simple pattern matching

* The effectiveness of chain-of-thought prompting and why it improves model performance (see the prompt sketch after this list)

* The importance of hands-on coding over extensive paper reading for new researchers

* His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind

* The role of mechanistic interpretability in AI safety
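For readers unfamiliar with chain-of-thought prompting (second topic above), here is a toy illustration of the prompt format. The questions and the `call_model` placeholder are hypothetical examples, not taken from the episode.

```python
# Toy illustration of chain-of-thought (CoT) prompting: the CoT prompt includes
# a worked example whose answer spells out its reasoning, nudging the model to
# do the same. `call_model` is a hypothetical placeholder, not a real client.

QUESTION = "A cyclist covers 36 km in 90 minutes. What is their speed in km/h?"

direct_prompt = f"Q: {QUESTION}\nA:"

cot_prompt = (
    "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
    "A: Let's think step by step. 45 minutes is 0.75 hours, and "
    "speed = distance / time = 60 / 0.75 = 80, so the answer is 80 km/h.\n\n"
    f"Q: {QUESTION}\n"
    "A: Let's think step by step."
)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat/completion client you use."""
    raise NotImplementedError

if __name__ == "__main__":
    # With a real model behind call_model, the CoT prompt typically elicits the
    # intermediate steps (90 min = 1.5 h, 36 / 1.5 = 24 km/h) before the answer.
    print(direct_prompt)
    print(cot_prompt)
```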

NEEL NANDA:

https://www.neelnanda.io/

https://scholar.google.com/citations?user=GLnX3MkAAAAJ&hl=en

https://x.com/NeelNanda5

Interviewer - Tim Scarfe

TOC:

1. Part 1: Introduction

[00:00:00] 1.1 Introduction and Core Concepts Overview

2. Part 2: Outside Interview

[00:06:45] 2.1 Mechanistic Interpretability Foundations

3. Part 3: Main Interview

[00:32:52] 3.1 Mechanistic Interpretability

4. Neural Architecture and Circuits

[01:00:31] 4.1 Biological Evolution Parallels

[01:04:03] 4.2 Universal Circuit Patterns and Induction Heads

[01:11:07] 4.3 Entity Detection and Knowledge Boundaries

[01:14:26] 4.4 Mechanistic Interpretability and Activation Patching

5. Model Behavior Analysis

[01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification

[01:33:27] 5.2 Model Personas and RLHF Behavior Modification

[01:36:28] 5.3 Steering Vectors and Linear Representations

[01:40:00] 5.4 Hallucinations and Model Uncertainty

6. Sparse Autoencoder Architecture

[01:44:54] 6.1 Architecture and Mathematical Foundations

[02:22:03] 6.2 Core Challenges and Solutions

[02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations

[02:34:41] 6.4 Research Applications in Transformer Circuit Analysis

7. Feature Learning and Scaling

[02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters

[03:02:46] 7.2 Scaling Laws and Training Stability

[03:11:00] 7.3 Feature Identification and Bias Correction

[03:19:52] 7.4 Training Dynamics Analysis Methods

8. Engineering Implementation

[03:23:48] 8.1 Scale and Infrastructure Requirements

[03:25:20] 8.2 Computational Requirements and Storage

[03:35:22] 8.3 Chain-of-Thought Reasoning Implementation

[03:37:15] 8.4 Latent Structure Inference in Language Models
