Against pausing AI research, with Pedro Domingos

34:09

Content provided by London Futurists. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by London Futurists or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?
Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slowdown desirable, given that AI can also lead to many very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slowdown is desirable, is it practical?
Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".
That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.
Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Topics addressed in this episode include:
*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressures, e.g. genes and memes working at cross purposes
*) The “genie problem” (or “King Midas problem”) of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and what might that level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Public Domain Dedication
