Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com
Love Causal Bandits Podcast?
Help us bring more quality content: Support the show
Video version of this episode is available here
Causal Inference with LLMs and Reinforcement Learning Agents?
Do LLMs have a world model?
Can they reason causally?
What's the connection between LLMs, reinforcement learning, and causality?
Andrew Lampinen, PhD (Google DeepMind) shares insights from his research on LLMs, reinforcement learning, causal inference, and generalizable agents.
We also discuss the nature of intelligence and rationality, and how they interact with evolutionary fitness.
Join us in the journey!
Recorded on Dec 1, 2023 in London, UK.
About The Guest
Andrew Lampinen, PhD is a Senior Research Scientist at Google DeepMind. He holds a PhD in Cognitive Psychology from Stanford University. He's interested in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment.
Connect with Andrew:
- Andrew on Twitter/X
- Andrew's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).
Connect with Alex:
- Alex on the Internet
Links
Papers
- Lampinen et al. (2023) - "Passive learning of active causal strategies in agents and language models" (https://arxiv.org/pdf/2305.16183.pdf)
- Dasgupta, Lampinen, et al. (2022) - "Language models show human-like content effects on reasoning tasks" (https://arxiv.org/abs/2207.07051)
- Santoro, Lampinen, et al. (2021) - "Symbolic behaviour in artificial intelligence" (https://www.researchgate.net/publication/349125191_Symbolic_Behaviour_in_Artificial_Intelligence)
Rumi.ai
All-in-one meeting tool with real-time transcription & searchable Meeting Memory™
Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Chapters
1. Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com (00:00:00)
2. [Ad] Rumi.ai (00:22:02)
3. (Cont.) Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com (00:22:51)