Do AI As Engineering Instead
Content provided by Matt Arnold. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Matt Arnold or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://zh.player.fm/legal.
Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering

This episode has a lot of links! Here they are.

Michael Nielsen’s “The role of ‘explanation’ in AI”: https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
Subbarao Kambhampati’s “Changing the Nature of AI Research”: https://dl.acm.org/doi/pdf/10.1145/3546954
Chris Olah and his collaborators, “Thread: Circuits”: distill.pub/2020/circuits/ and “An Overview of Early Vision in InceptionV1”: distill.pub/2020/circuits/early-vision/
Dai et al., “Knowledge Neurons in Pretrained Transformers”: https://arxiv.org/pdf/2104.08696.pdf
Meng et al., “Locating and Editing Factual Associations in GPT”: rome.baulab.info and “Mass-Editing Memory in a Transformer”: https://arxiv.org/pdf/2210.07229.pdf
François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800
Neel Nanda’s “Longlist of Theories of Impact for Interpretability”: https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability
Zachary C. Lipton’s “The Mythos of Model Interpretability”: https://arxiv.org/abs/1606.03490
Meng et al., “Locating and Editing Factual Associations in GPT”: https://arxiv.org/pdf/2202.05262.pdf
Belrose et al., “Eliciting Latent Predictions from Transformers with the Tuned Lens”: https://arxiv.org/abs/2303.08112
“Progress measures for grokking via mechanistic interpretability”: https://arxiv.org/abs/2301.05217
Conmy et al., “Towards Automated Circuit Discovery for Mechanistic Interpretability”: https://arxiv.org/abs/2304.14997
Elhage et al., “Softmax Linear Units”: transformer-circuits.pub/2022/solu/index.html
Filan et al., “Clusterability in Neural Networks”: https://arxiv.org/pdf/2103.03386.pdf
Cammarata et al., “Curve circuits”: distill.pub/2020/circuits/curve-circuits/

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
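The two Meng et al. papers above turn on one idea: an MLP layer can be treated as a linear key-value memory, so a stored fact can be rewritten with a rank-one weight update. As a hedged toy sketch of that idea only (the four-dimensional vectors and single matrix below are invented stand-ins, not anything from the papers’ actual models):

```python
# Toy illustration of rank-one model editing (the idea behind ROME).
# The vectors and matrix are invented stand-ins, not real model weights.
import numpy as np

k_paris = np.array([1.0, 0.0, 0.0, 0.0])   # "key": encoding of the subject
v_france = np.array([0.0, 1.0, 0.0, 0.0])  # "value": encoding of the fact

# A linear associative memory storing key -> value (note ||k_paris|| = 1).
W = np.outer(v_france, k_paris)
assert np.allclose(W @ k_paris, v_france)

# "Editing" the fact: a rank-one update swaps in a new value for this key
# while leaving responses to orthogonal keys unchanged.
v_italy = np.array([0.0, 0.0, 1.0, 0.0])
W += np.outer(v_italy - v_france, k_paris)
assert np.allclose(W @ k_paris, v_italy)
```

The papers locate which layer stores a given association and solve for the update under constraints; this sketch only shows why a single targeted weight change can rewrite one memory.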
All episodes
This concludes “Gradient Dissent”, the companion document to “Better Without AI”. Thank you so much for listening!

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators

John McCarthy’s paper “Programs with common sense”: http://www-formal.stanford.edu/jmc/mcc59/mcc59.html
Harry Frankfurt, “On Bullshit”: https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20
Petroni et al., “Language Models as Knowledge Bases?”: https://aclanthology.org/D19-1250/
Gwern Branwen, “The Scaling Hypothesis”: gwern.net/scaling-hypothesis
Rich Sutton’s “Bitter Lesson”: www.incompleteideas.net/IncIdeas/BitterLesson.html
Guu et al.’s “Retrieval augmented language model pre-training” (REALM): http://proceedings.mlr.press/v119/guu20a/guu20a.pdf
Borgeaud et al.’s “Improving language models by retrieving from trillions of tokens” (RETRO): https://arxiv.org/pdf/2112.04426.pdf
Izacard et al., “Few-shot Learning with Retrieval Augmented Language Models”: https://arxiv.org/pdf/2208.03299.pdf
Chirag Shah and Emily M. Bender, “Situating Search”: https://dl.acm.org/doi/10.1145/3498366.3505816
David Chapman’s original version of the proposal he puts forth in this episode: twitter.com/Meaningness/status/1576195630891819008
Lan et al., “Copy Is All You Need”: https://arxiv.org/abs/2307.06962
Mitchell A. Gordon’s “RETRO Is Blazingly Fast”: https://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html
Min et al.’s “Silo Language Models”: https://arxiv.org/pdf/2308.04430.pdf
W. Daniel Hillis, The Connection Machine, 1986: https://www.amazon.com/dp/0262081571/?tag=meaningness-20
Ouyang et al., “Training language models to follow instructions with human feedback”: https://arxiv.org/abs/2203.02155
Ronen Eldan and Yuanzhi Li, “TinyStories: How Small Can Language Models Be and Still Speak Coherent English?”: https://arxiv.org/pdf/2305.07759.pdf
Li et al., “Textbooks Are All You Need II: phi-1.5 technical report”: https://arxiv.org/abs/2309.05463
Henderson et al., “Foundation Models and Fair Use”: https://arxiv.org/abs/2303.15715
Authors Guild v. Google: https://en.wikipedia.org/wiki/Authors_Guild%2C_Inc._v._Google%2C_Inc.
Abhishek Nagaraj and Imke Reimers, “Digitization and the Market for Physical Works: Evidence from the Google Books Project”: https://www.aeaweb.org/articles?id=10.1257/pol.20210702

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
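Several of the links above (REALM, RETRO, the Silo models) share one mechanism: instead of memorizing facts in network weights, look relevant text up in a corpus at generation time. A minimal sketch of that lookup step, with a toy corpus and a deliberately crude word-overlap score standing in for the papers’ learned dense retrievers:

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation.
# The corpus and scoring function are toy stand-ins for illustration only.
from collections import Counter

corpus = [
    "The Connection Machine was designed by W. Daniel Hillis.",
    "REALM augments language model pre-training with a neural retriever.",
    "RETRO retrieves from trillions of tokens while generating.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance: count overlapping words. Real systems use learned
    embeddings and approximate nearest-neighbor search instead."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str) -> str:
    """Return the best-matching passage; a generator would then condition
    on this text rather than relying on facts memorized in its weights."""
    return max(corpus, key=lambda doc: score(query, doc))

print(retrieve("who designed the Connection Machine"))
# -> "The Connection Machine was designed by W. Daniel Hillis."
```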
Analysis of image classifiers demonstrates that it is possible to understand backprop networks at the task-relevant run-time algorithmic level. In these systems, at least, networks gain their power from deploying massive parallelism to check for the presence of a vast number of simple, shallow patterns. https://betterwithout.ai/images-surface-features

This episode has a lot of links:

David Chapman’s earliest public mention, in February 2016, of image classifiers probably using color and texture in ways that “cheat”: twitter.com/Meaningness/status/698688687341572096
Jordana Cepelewicz’s “Where we see shapes, AI sees textures,” Quanta Magazine, July 1, 2019: https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/
“Suddenly, a leopard print sofa appears”, May 2015: https://web.archive.org/web/20150622084852/http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html
“Understanding How Image Quality Affects Deep Neural Networks”, April 2016: https://arxiv.org/abs/1604.04004
Goodfellow et al., “Explaining and Harnessing Adversarial Examples,” December 2014: https://arxiv.org/abs/1412.6572
“Universal adversarial perturbations,” October 2016: https://arxiv.org/pdf/1610.08401v1.pdf
“Exploring the Landscape of Spatial Robustness,” December 2017: https://arxiv.org/abs/1712.02779
“Overinterpretation reveals image classification model pathologies,” NeurIPS 2021: https://proceedings.neurips.cc/paper/2021/file/8217bb4e7fa0541e0f5e04fea764ab91-Paper.pdf
“Approximating CNNs with Bag-of-Local-Features Models Works Surprisingly Well on ImageNet,” ICLR 2019: https://openreview.net/forum?id=SkfMWhAqYQ
Baker et al.’s “Deep convolutional networks do not classify based on global object shape,” PLOS Computational Biology, 2018: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006613
François Chollet’s Twitter threads about AI producing images of horses with extra legs: twitter.com/fchollet/status/1573836241875120128 and twitter.com/fchollet/status/1573843774803161090
“Zoom In: An Introduction to Circuits,” 2020: https://distill.pub/2020/circuits/zoom-in/
Geirhos et al., “ImageNet-Trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness,” ICLR 2019: https://openreview.net/forum?id=Bygh9j09KX
Dehghani et al., “Scaling Vision Transformers to 22 Billion Parameters,” 2023: https://arxiv.org/abs/2302.05442
Hasson et al., “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks,” February 2020: https://www.gwern.net/docs/ai/scaling/2020-hasson.pdf

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
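As a toy illustration of that thesis only (the random 3×3 filters and the additive voting rule below are invented stand-ins, not anything analyzed in the episode), here is what “massive parallelism over simple, shallow patterns” looks like in code: each check is purely local, and class evidence is just a sum of pattern hits, with no representation of global shape:

```python
# Toy sketch: classification as parallel checks for simple local patterns.
# Filters and voting rule are invented; real networks learn thousands of both.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))               # stand-in grayscale image
filters = rng.standard_normal((8, 3, 3))   # 8 tiny local pattern detectors

def local_responses(img: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Slide one 3x3 filter over the image: each check sees only a 3x3
    patch, so it can detect texture but not global object shape."""
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * f)
    return out

# Evidence per pattern: sum of rectified local hits everywhere in the image.
# This is why texture cues like leopard print can dominate: many local hits,
# no notion of whether they form a sofa or a leopard.
evidence = np.array([local_responses(image, f).clip(min=0).sum() for f in filters])
print("per-pattern evidence:", evidence.round(2))
```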
Few AI experiments constitute meaningful tests of hypotheses. As a branch of machine learning research, AI science has concentrated on black-box investigation of training-time phenomena. The best of this work has been scientifically excellent. However, the hypotheses tested are mainly irrelevant to user and societal concerns. https://betterwithout.ai/AI-as-science

This chapter references Chapman’s essay “How should we evaluate progress in AI?”: https://metarationality.com/artificial-intelligence-progress
“Troubling Trends in Machine Learning Scholarship”, Zachary C. Lipton and Jacob Steinhardt: https://arxiv.org/abs/1807.03341

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
Do AI As Science And Engineering Instead - We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies. https://betterwithout.ai/science-engineering-vs-AI

Run-Time Task-Relevant Algorithmic Understanding - The type of scientific and engineering understanding most relevant to AI safety is run-time, task-relevant, and algorithmic. That can lead to more reliable, safer systems. Unfortunately, gaining such understanding has been neglected in AI research, so currently we have little. https://betterwithout.ai/AI-algorithmic-level

For more information, see David Chapman’s 2017 essay “How should we evaluate progress in AI?”: https://betterwithout.ai/artificial-intelligence-progress

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
Current AI results from experimental variation of mechanisms, unguided by theoretical principles. That has produced systems that can do amazing things. On the other hand, they are extremely error-prone and therefore unsafe. Backpropaganda, a collection of misleading ways of talking about “neural networks,” justifies continuing in this misguided direction. https://betterwithout.ai/backpropaganda

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
The conclusion of this chapter. So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data-hungry, and inherently limited in capabilities. In short: they are bad. https://betterwithout.ai/artificial-neurons-considered-harmful

Sayash Kapoor and Arvind Narayanan’s “The bait and switch behind AI risk prediction tools”: https://aisnakeoil.substack.com/p/the-bait-and-switch-behind-ai-risk
A video titled “Latent Space Walk”: https://www.youtube.com/watch?v=bPgwwvjtX_g
Another video showing a walk through latent space: https://www.youtube.com/watch?v=YnXiM97ZvOM

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
This begins “Gradient Dissent”, the companion material to “Better Without AI”. The neural network and GPT technologies that power current artificial intelligence are exceptionally error-prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives. https://betterwithout.ai/gradient-dissent

Artificial Neurons Considered Harmful, Part 1 - So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data-hungry, and inherently limited in capabilities. In short: they are bad. https://betterwithout.ai/artificial-neurons-considered-harmful

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
The five short chapters in this episode are the conclusion of the main body of Better Without AI. Next, we’ll begin the book’s appendix, Gradient Dissent.

Cozy Futurism - If we knew we’d never get flying cars, most people wouldn’t care. What do we care about? https://betterwithout.ai/cozy-futurism
Meaningful Futurism - Likeable futures are meaningful, not just materially comfortable. Bringing one about requires imagining it. I invite you to do that! https://betterwithout.ai/meaningful-future
The Inescapable: Politics - No realistic approach to future AI can avoid questions of power and social organization. https://betterwithout.ai/inescapable-AI-politics
Responsibility: https://betterwithout.ai/responsibility
This is about you: https://betterwithout.ai/about-you

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
A Future We Would Like - The most important questions are not about technology but about us. What sorts of future would we like? What role could AI play in getting us there, and also in that world? What is your own role in helping that happen? https://betterwithout.ai/a-future-we-would-like

How AI Destroyed The Future - We are doing a terrible job of thinking about the most important question because unimaginably powerful evil artificial intelligences are controlling our brains. https://betterwithout.ai/AI-destroyed-the-future

A One-Bit Future - Superintelligence scenarios reduce the future to infinitely good or infinitely bad. Both are possible, but we cannot reason about or act toward them. Messy, complicated, good-and-bad futures are probably more likely, and in any case are more feasible to influence. https://betterwithout.ai/one-bit-future

This episode mentions David Chapman’s essay “Vaster Than Ideology” for getting AI out of your head.
Text link: https://meaningness.com/vaster-than-ideology
Episode link: https://fluidity.libsyn.com/vaster-than-ideology

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
Stop obstructing scientific progress! We already know how to dramatically accelerate science: by getting out of the way. https://betterwithout.ai/stop-obstructing-science

How to science better. What do exceptional scientists do differently from mediocre ones? Can we train currently-mediocre ones to do better? https://betterwithout.ai/better-science-without-AI

Scenius: upgrading science FTW. Empirically, breakthroughs that enable great progress depend on particular, uncommon social constellations and accompanying social practices. Let’s encourage these! https://betterwithout.ai/human-scenius-vs-artificial-genius

Matt Clancy reviews the evidence for scientific progress slowing, with citations and graphs: https://twitter.com/mattsclancy/status/1612440718177603584
“Scenius, or Communal Genius”, Kevin Kelly, The Technium: https://kk.org/thetechnium/scenius-or-comm/…
Progress requires experimentation. Suggested ways AI could speed progress by automating experiments appear mistaken. https://betterwithout.ai/limits-to-induction

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
Forgive the sound quality on this episode; I recorded it live in front of an audience on a platform floating in a lake during the 2024 solar eclipse. This is a standalone essay by David Chapman on metarationality.com. How scientific research is like cunnilingus: a phenomenology of epistemology. https://metarationality.com/going-down-on-the-phenomenon

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
What Is The Role Of Intelligence In Science? Actually, what are “science” and “intelligence”? Precise, explicit definitions aren’t necessary, but discussions of Transformative AI seem to depend implicitly on particular models of both. It matters if those models are wrong. https://betterwithout.ai/intelligence-in-science

Katja Grace, “Counterarguments to the basic AI x-risk case”: https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/

What Do Unusually Intelligent People Do? If we want to know what a superintelligent AI might do, and how, it could help to investigate what the most intelligent humans do, and how. If we want to know how to dramatically accelerate science and technology development, it could help to investigate what the best scientists and technologists do, and how. https://betterwithout.ai/what-intelligent-people-do

Patrick Collison and Tyler Cowen, “We Need a New Science of Progress,” The Atlantic, July 30, 2019: https://www.theatlantic.com/science/archive/2019/07/we-need-new-science-progress/594946/
Gwern Branwen, “Catnip immunity and alternatives”: https://www.gwern.net/Catnip#optimal-catnip-alternative-selection-solving-the-mdp

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.