Content provided by LessWrong. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal.
“Gradient Routing: Masking Gradients to Localize Computation in Neural Networks” by cloud, Jacob G-W, Evzen, Joseph Miller, TurnTrout
Episode 454603164, series 3364760
We present gradient routing, a way of controlling where learning happens in neural networks. Gradient routing applies masks to limit the flow of gradients during backpropagation. By supplying different masks for different data points, the user can induce specialized subcomponents within a model. We think gradient routing has the potential to train safer AI systems, for example, by making them more transparent, or by enabling the removal or monitoring of sensitive capabilities.
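The core mechanism can be illustrated with a toy example. The sketch below is purely illustrative (it is not the authors' implementation, and the `train` function, data layout, and routing table are all assumptions for this toy): during backpropagation, a per-example mask zeroes the gradient for parameters that should not learn from that example, so different parameters specialize to different subsets of the data even though the forward pass always uses all of them.

```python
# Toy sketch of gradient routing (illustrative, not the authors' code).
# The model is y = (w0 + w1) * x. The forward pass uses BOTH parameters,
# but a per-example mask routes gradients so that label-0 examples only
# update w0 and label-1 examples only update w1.

def train(data, steps=200, lr=0.1):
    """data: list of (x, y, label) triples; returns the learned [w0, w1]."""
    w = [0.0, 0.0]
    # Routing table: which parameters may learn from which labels.
    masks = {0: (1.0, 0.0), 1: (0.0, 1.0)}
    for _ in range(steps):
        for x, y, label in data:
            pred = (w[0] + w[1]) * x          # forward pass: all parameters
            grad = 2.0 * (pred - y) * x       # d(squared error)/d(w_i)
            m = masks[label]
            w[0] -= lr * m[0] * grad          # backward pass: masked updates
            w[1] -= lr * m[1] * grad
    return w

# All examples follow y = 2x, so w0 + w1 converges to 2, but how that sum
# is split between w0 and w1 reflects which labels each was trained on.
```

Because learning from label-1 data has been localized into `w[1]`, zeroing out `w[1]` after training removes most of what was learned from that data while leaving the label-0 contribution intact, a toy version of the unlearning-by-ablation results described below.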
In this post, we:
- Show how to implement gradient routing.
- Briefly state the main results from our paper, on...
- Controlling the latent space learned by an MNIST autoencoder so that different subspaces specialize to different digits;
- Localizing computation in language models: (a) inducing axis-aligned features and (b) demonstrating that information can be localized then removed by ablation, even when data is imperfectly labeled; and
- Scaling oversight to efficiently train a reinforcement learning policy even with [...]
Outline:
(01:48) Gradient routing
(03:02) MNIST latent space splitting
(04:31) Localizing capabilities in language models
(04:36) Steering scalar
(05:46) Robust unlearning
(09:06) Unlearning virology
(10:38) Scalable oversight via localization
(15:28) Key takeaways
(15:32) Absorption
(17:04) Localization avoids Goodharting
(18:02) Key limitations
(19:47) Alignment implications
(19:51) Robust removal of harmful capabilities
(20:19) Scalable oversight
(21:36) Specialized AI
(22:52) Conclusion
The original text contained 1 footnote which was omitted from this narration.
---
First published:
December 6th, 2024
Source:
https://www.lesswrong.com/posts/nLRKKCTtwQgvozLTN/gradient-routing-masking-gradients-to-localize-computation
---
Narrated by TYPE III AUDIO.
---