
PyTorch Developer Podcast

Half precision

18:00
 
Content provided by PyTorch, Edward Yang, and Team PyTorch.

In this episode I talk about the reduced precision floating point formats float16 (aka half precision) and bfloat16. I'll discuss what floating point numbers are, how these two formats differ, and some of the practical considerations that arise when you are working with numeric code in PyTorch that also needs to work in reduced precision. Did you know that we do all CUDA computations in float32, even if the source tensors are stored as float16? Now you know!
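To make the format difference concrete, here is a minimal PyTorch sketch (my own illustration, not code from the episode). float16 spends its 16 bits as 1 sign / 5 exponent / 10 mantissa bits, while bfloat16 uses 1 sign / 8 exponent / 7 mantissa bits, so float16 has more precision and bfloat16 has more range:

```python
import torch

# torch.finfo reports the numeric limits of each dtype.
for dtype in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dtype)
    print(f"{dtype}: eps={info.eps}, max={info.max}")
# torch.float16:  eps~9.77e-04, max=65504.0
# torch.bfloat16: eps~7.81e-03, max~3.39e+38
# torch.float32:  eps~1.19e-07, max~3.40e+38

# Range: 70000 overflows float16 but fits easily in bfloat16.
print(torch.tensor(70000.0).to(torch.float16))   # inf
print(torch.tensor(70000.0).to(torch.bfloat16))  # ~70144 (coarse, but finite)

# Precision: 1 + 2**-10 is exactly representable in float16,
# but rounds away in bfloat16, which has only 7 mantissa bits.
print(torch.tensor(1.0009765625).to(torch.float16))   # 1.0010
print(torch.tensor(1.0009765625).to(torch.bfloat16))  # 1.0
```

The "compute in float32" point can also be demonstrated directly. A running float16 sum of ones stalls at 2048, because the spacing between adjacent float16 values there is 2.0 and adding 1.0 rounds back down; accumulating in float32 and casting at the end, which is roughly what the episode says happens for CUDA computations, avoids this:

```python
import torch

ones = torch.ones(4096, dtype=torch.float16)

# Naive float16 accumulation: stalls once the total reaches 2048.
acc = torch.tensor(0.0, dtype=torch.float16)
for v in ones:
    acc = acc + v
print(acc)  # tensor(2048., dtype=torch.float16) -- not 4096!

# float32 accumulation, cast back at the end: exact here.
acc32 = torch.zeros((), dtype=torch.float32)
for v in ones:
    acc32 = acc32 + v.float()
print(acc32.to(torch.float16))  # tensor(4096., dtype=torch.float16)
```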

Further reading.

83 episodes

