EXAONE 3.0: An Expert AI for Everyone (with Hyeongu Yun)
In this episode of Neural Search Talks, we welcome Hyeongu Yun from LG AI Research to discuss the newest addition to the EXAONE Universe: EXAONE 3.0. The model demonstrates strong capabilities in both English and Korean, excelling not only in real-world instruction-following scenarios but also achieving impressive results in math and coding benchmarks. Hyeongu shares the team's approach to the development of this model, revealing key training factors that contributed to its success while also highlighting the challenges they faced along the way. We close this episode off with a look at EXAONE's future, as well as Hyeongu's perspective on the evolving role of AI systems.
Check out the Zeta Alpha Neural Discovery platform. Subscribe to the Zeta Alpha calendar to not miss out on any of our events!

Sources:
- https://lgresearch.ai/blog/view?seq=460
- https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct
- https://arxiv.org/abs/2408.03541

Timestamps:
0:00 Intro by Jakub Zavrel
1:37 The journey of the EXAONE project
4:34 The main challenges in the development of EXAONE 3.0
6:37 The secret to achieving great bilingual performance in English & Korean
7:51 How EXAONE 3.0 stacks up against other open-source models
9:20 The trade-off between instruction-following and reasoning skills
12:32 How retrieval and generative models will evolve in the future
16:36 Open sourcing and user feedback on EXAONE
19:20 The role of synthetic data in model training
20:57 The role of LLMs as evaluators
23:16 Outro