Content provided by The Nonlinear Fund. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://zh.player.fm/legal

LW - An AI Race With China Can Be Better Than Not Racing by niplav

16:49
Manage episode 426831749 series 3337129
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An AI Race With China Can Be Better Than Not Racing, published by niplav on July 2, 2024 on LessWrong.

Frustrated by all your bad takes, I write a Monte-Carlo analysis of whether a transformative-AI race between the PRC and the USA would be good. To my surprise, I find that it is better than not racing. Advocating for an international project to build TAI instead of racing turns out to be good if the probability of such advocacy succeeding is 20%.

A common scheme for a conversation about pausing the development of transformative AI goes like this:

Abdullah: "I think we should pause the development of TAI, because if we don't, it seems plausible that humanity will be disempowered by advanced AI systems."

Benjamin: "Ah, if by 'we' you refer to the United States (and its allies, which probably don't stand a chance on their own to develop TAI), then the current geopolitical rival of the US, namely the PRC, will achieve TAI first. That would be bad."

Abdullah: "I don't see how the US getting TAI first changes anything about the fact that we don't know how to align superintelligent AI systems - I'd rather not race to be the first person to kill everyone."

Benjamin: "Ah, so now you're retreating back into your cozy little motte: earlier you said that 'it seems plausible that humanity will be disempowered', now you're acting like doom and gloom is certain. You don't seem to be able to make up your mind about how risky you think the whole enterprise is, and I have very concrete geopolitical enemies at my (semiconductor manufacturer's) doorstep that I have to worry about. Come back with better arguments."

This dynamic is a bit frustrating. Here's how I'd like Abdullah to respond:

Abdullah: "You're right, you're right. I was insufficiently precise in my statements, and I apologize for that.
Instead, let us manifest the dream of the great philosopher: Calculemus!"

At a basic level, we want to estimate how much worse (or, perhaps, better) it would be for the United States to completely cede the race for TAI to the PRC. I will exclude other countries as contenders in the scramble for TAI, since I want to keep this analysis simple, but that doesn't mean that I don't think they matter. (Although, honestly, the list of serious contenders is pretty short.) For this, we have to estimate multiple quantities:

1. In worlds in which the US and PRC race for TAI:
   1. The time until the US/PRC builds TAI.
   2. The probability of extinction due to TAI, if the US is in the lead.
   3. The probability of extinction due to TAI, if the PRC is in the lead.
   4. The value of the worlds in which the US builds aligned TAI first.
   5. The value of the worlds in which the PRC builds aligned TAI first.
2. In worlds where the US tries to convince other countries (including the PRC) not to build TAI, potentially using force, and still tries to prevent TAI-induced disempowerment by doing alignment research and sharing alignment-favoring research results:
   1. The time until the PRC builds TAI.
   2. The probability of extinction caused by TAI.
   3. The value of worlds in which the PRC builds aligned TAI.
3. The value of worlds where extinction occurs (which I'll fix at 0).
4. As a reference point, the value of hypothetical worlds in which there is a multinational exclusive AGI consortium that builds TAI first, without any time pressure, for which I'll fix the mean value at 1.

To properly quantify uncertainty, I'll use the Monte-Carlo estimation library squigglepy (no relation to any office supplies or internals of neural networks). We start, as usual, with housekeeping: as already said, we fix the value of extinction at 0 and the value of a multinational AGI consortium-led TAI at 1 (I'll just call the consortium "MAGIC" from here on).
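The structure of the estimate above can be sketched in plain Python. This is an illustrative Monte-Carlo sketch only: every probability and value below is a made-up placeholder, not the post's actual squigglepy estimates, and the sampling functions are hypothetical stand-ins for the distributions the author actually fits.

```python
import random

random.seed(0)
N = 100_000  # number of Monte-Carlo samples

V_EXTINCTION = 0.0   # value of extinction worlds (fixed at 0, as in the post)
V_MAGIC_MEAN = 1.0   # reference value of a MAGIC-led TAI world (fixed mean of 1)

def sample_race_world():
    """One sampled world in which the US and PRC race for TAI.

    All parameters are hypothetical placeholders for illustration.
    """
    p_us_leads = 0.7                    # assumed: chance the US wins the race
    p_doom_us, p_doom_prc = 0.15, 0.25  # assumed extinction probabilities by leader
    v_us, v_prc = 0.95, 0.8             # assumed values of aligned-TAI worlds
    if random.random() < p_us_leads:
        return V_EXTINCTION if random.random() < p_doom_us else v_us
    return V_EXTINCTION if random.random() < p_doom_prc else v_prc

def sample_no_race_world():
    """One sampled world in which the US cedes the race and the PRC builds TAI."""
    p_doom = 0.2       # assumed extinction probability without race pressure
    v_prc_solo = 0.85  # assumed value if the PRC builds aligned TAI alone
    return V_EXTINCTION if random.random() < p_doom else v_prc_solo

ev_race = sum(sample_race_world() for _ in range(N)) / N
ev_no_race = sum(sample_no_race_world() for _ in range(N)) / N
print(f"E[value | race]    ~ {ev_race:.3f}")
print(f"E[value | no race] ~ {ev_no_race:.3f}")
```

With these placeholder numbers the comparison is just two expected values on the 0-to-1 scale anchored by extinction and MAGIC; the post's actual analysis replaces the point parameters with full distributions via squigglepy.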
That is not to say that the MAGIC-led TAI future is the best possible TAI future...