
The Lottery Ticket Hypothesis

Linear Digressions

19:45

Episode 254315967 (series 74115)

Content provided by Ben Jaffe and Katie Malone. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Ben Jaffe and Katie Malone or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

Recent research into neural networks reveals that sometimes, not all parts of the neural net are equally responsible for the performance of the network overall. Instead, it seems like (in some neural nets, at least) there are smaller subnetworks present where most of the predictive power resides. The fascinating thing is that, for some of these subnetworks (so-called “winning lottery tickets”), it’s not the training process that makes them good at their classification or regression tasks: they just happened to be initialized in a way that was very effective. This changes the way we think about what training might be doing, in a pretty fundamental way. Sometimes, instead of crafting a good fit out of whole cloth, training might be finding the parts of the network that had predictive power to begin with, and isolating and strengthening them. This research is pretty recent, having only come to prominence in the last year, but it nonetheless challenges our notions about what it means to train a machine learning model.
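
To make the idea concrete, here is a minimal sketch (in Python with NumPy) of the iterative “train, prune, rewind” loop that lottery-ticket experiments use. This is not the authors’ code: the train() helper, the layer size, and the pruning schedule below are placeholder assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def train(weights, mask):
    # Placeholder for ordinary gradient training; only unpruned
    # (mask == 1) weights are allowed to change. Real code would
    # run SGD on a task; here we fake an update with small noise.
    return weights + mask * rng.normal(scale=0.01, size=weights.shape)

# 1. Randomly initialize the network and remember that initialization.
w_init = rng.normal(size=(256, 256))
mask = np.ones_like(w_init)          # 1 = weight kept, 0 = pruned

prune_fraction = 0.2                 # prune 20% of survivors per round (assumed schedule)
for round_num in range(5):
    # 2. Train the masked network to completion.
    w_trained = train(w_init * mask, mask)

    # 3. Prune the smallest-magnitude surviving weights.
    survivors = np.abs(w_trained[mask == 1])
    threshold = np.quantile(survivors, prune_fraction)
    mask = mask * (np.abs(w_trained) > threshold)

    # 4. Rewind: reset the surviving weights to their ORIGINAL
    #    initial values. This rewound subnetwork is the candidate
    #    "winning ticket" that gets retrained from scratch.
    w_ticket = w_init * mask
    print(f"round {round_num}: {mask.mean():.1%} of weights remain")

The punchline of the hypothesis is step 4: the surviving weights are reset to their original initialization, and that small, rewound subnetwork, trained on its own, can often match the full network’s accuracy, which is what suggests the initialization (and not only training) deserves the credit.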