Content provided by Machine Learning Street Talk (MLST). All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal

Jeff Clune - Agent AI Needs Darwin

2:00:13
 

Manage episode 459198458 series 2803422

AI professor Jeff Clune ruminates on open-ended evolutionary algorithms—systems designed to generate novel and interesting outcomes forever. Drawing inspiration from nature’s boundless creativity, Clune and his collaborators aim to build “Darwin Complete” search spaces, where any computable environment can be simulated. By harnessing the power of large language models and reinforcement learning, these AI agents continuously develop new skills, explore uncharted domains, and even cooperate with one another in complex tasks.

SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?

They are hosting an event in Zurich on January 9th with the ARChitects, join if you can.

Go to https://tufalabs.ai/

***

A central theme throughout Clune’s work is “interestingness”: an elusive quality that nudges AI agents toward genuinely original discoveries. Rather than rely on narrowly defined metrics—which often fail due to Goodhart’s Law—Clune employs language models to serve as proxies for human judgment. In doing so, he ensures that “interesting” always reflects authentic novelty, opening the door to unending innovation.

Yet with these extraordinary possibilities come equally significant risks. Clune argues that we need AI safety measures, particularly as the technology matures into powerful, open-ended forms. Potential pitfalls include agents inadvertently causing harm or malicious actors subverting AI’s capabilities for destructive ends. To mitigate these risks, Clune advocates prudent governance involving democratic coalitions, regulation of cutting-edge models, and global alignment protocols.

Jeff Clune:

https://x.com/jeffclune

http://jeffclune.com/

(Interviewer: Tim Scarfe)

TOC:

1. Introduction

[00:00:00] 1.1 Overview and Opening Thoughts

2. Sponsorship

[00:03:00] 2.1 TufaAI Labs and CentML

3. Evolutionary AI Foundations

[00:04:12] 3.1 Open-Ended Algorithm Development and Abstraction Approaches

[00:07:56] 3.2 Novel Intelligence Forms and Serendipitous Discovery

[00:11:46] 3.3 Frontier Models and the 'Interestingness' Problem

[00:30:36] 3.4 Darwin Complete Systems and Evolutionary Search Spaces

4. System Architecture and Learning

[00:37:35] 4.1 Code Generation vs Neural Networks Comparison

[00:41:04] 4.2 Thought Cloning and Behavioral Learning Systems

[00:47:00] 4.3 Language Emergence in AI Systems

[00:50:23] 4.4 AI Interpretability and Safety Monitoring Techniques

5. AI Safety and Governance

[00:53:56] 5.1 Language Model Consistency and Belief Systems

[00:57:00] 5.2 AI Safety Challenges and Alignment Limitations

[01:02:07] 5.3 Open Source AI Development and Value Alignment

[01:08:19] 5.4 Global AI Governance and Development Control

6. Advanced AI Systems and Evolution

[01:16:55] 6.1 Agent Systems and Performance Evaluation

[01:22:45] 6.2 Continuous Learning Challenges and In-Context Solutions

[01:26:46] 6.3 Evolution Algorithms and Environment Generation

[01:35:36] 6.4 Evolutionary Biology Insights and Experiments

[01:48:08] 6.5 Personal Journey from Philosophy to AI Research

Shownotes:

We craft detailed show notes for each episode, with a high-quality transcript, references, and the best parts bolded.

https://www.dropbox.com/scl/fi/fz43pdoc5wq5jh7vsnujl/JEFFCLUNE.pdf?rlkey=uu0e70ix9zo6g5xn6amykffpm&st=k2scxteu&dl=0
