
OpenAI’s Sora team thinks we’ve only seen the "GPT-1 of video models"

31:24
 
Content provided by Conviction. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Conviction or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

AI-generated videos are not just leveled-up image generators; they could be a big step forward on the path to AGI. This week on No Priors, the team from Sora is here to discuss OpenAI’s recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips that are up to a minute long.

Sora team leads Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The generative video model isn’t yet available for public use, but the examples of its work are very impressive. Still, they believe we’re in the GPT-1 era of AI video models, and they’re focused on a slow rollout to ensure the model offers real value to users and, more importantly, that every possible safety measure is in place to prevent deepfakes and misinformation. They also discuss what they’re learning from implementing diffusion transformers, why they believe video generation takes us one step closer to AGI, and why entertainment may not be this tool’s main use case in the future.

Show Links:

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @_tim_brooks | @billpeeb | @model_mechanic

Show Notes:

(0:00) Sora team introduction

(1:05) Simulating the world with Sora

(2:25) Building the most valuable consumer product

(5:50) Alternative use cases and simulation capabilities

(8:41) Diffusion transformers explanation

(10:15) Scaling laws for video

(13:08) Applying end-to-end deep learning to video

(15:30) Tuning the visual aesthetic of Sora

(17:08) The road to “desktop Pixar” for everyone

(20:12) Safety for visual models

(22:34) Limitations of Sora

(25:04) Learning from how Sora is learning

(29:32) The biggest misconceptions about video models
