Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82)
“We should not make technology so that we can be stupid. We should make technology so we can be even smarter… not just make the machine more intelligent, but enhance the overall intelligence—especially human intelligence.”
–Pat Pataranutaporn

About Pat Pataranutaporn
Pat Pataranutaporn is Co-Director of MIT Media Lab’s new Advancing Humans with AI (AHA) research program, alongside Pattie Maes. In addition to extensive academic publications, his research has been featured in Scientific American, MIT Technology Review, The Washington Post, The Wall Street Journal, and other leading publications. His work has been named to TIME’s “Best Inventions” lists and Fast Company’s “World Changing Ideas.”
What you will learn
Reimagining AI as a tool for human flourishing
Exploring the Future You project and long-term thinking
Boosting motivation through personalized AI learning
Enhancing critical thinking with question-based AI prompts
Designing agents that collaborate, not dominate
Preventing collective intelligence from becoming uniform
Launching AHA to measure AI’s real impact on people
Transcript
Ross Dawson: Pat, it is wonderful to have you on the show.
Pat Pataranutaporn: Thank you so much. It’s awesome to be here. Thanks for having me.
Ross: There’s so much to dive into, but as a starting point: you focus on human flourishing with AI. So what does that mean? Paint the big picture of AI and how it can help us flourish in who we are and in our humanity.
Pat: Yeah, that’s a great question. So I’m a researcher at MIT Media Lab. I’ve been working on human-AI interaction since before it was cool—before ChatGPT took off, right?
So we have been asking this question for a long time: when we focus on artificial intelligence, what does it mean for people? What does it mean for humanity?
I think today, a lot of conversation is about how we can make models better, how we can make technology smarter and smarter. But does that mean that we can be stupid? Does it mean that we can just let the machine be the smart one and let it take over?
That is not the vision that we have at MIT. We believe that technology should make humans better.
So I think the idea of human flourishing is an umbrella term that we use to describe different areas where we think AI could enhance the human experience.
For me in particular, I focus on three areas: how AI can enhance human wisdom, enhancing wonder, and well-being. So: 3 W’s—wisdom, wonder, and well-being.
We work on many projects to look into these areas. For example, how AI could allow a person to talk to their future self, so that they can think in the longer term, to see that future more vividly. That’s about enhancing wonder and wisdom.
We think a lot about how AI can help people think more critically and analyze information that they encounter on a daily basis in a more comprehensive way.
And you know well-being, we have many projects that look at how AI can improve human mental health, positive thinking, and things like that.
But in the end, we also study AI that doesn’t lead to human flourishing, to balance it out. We study in what contexts human-AI interaction leads to negative outcomes—like people becoming lonelier, or developing false memories, or absorbing misinformation, and things like that.
As scientists, we’re not overly optimistic or pessimistic. We’re trying to understand what’s going on and how we can design a better future for everyone. That’s what we’re trying to focus on. Yeah?
Ross: Fabulous. And as you say, there are many, many different projects and domains of research which you’re delving into. So I’d like to start to dive into some of those.
One that you mentioned was the Future You project. So I’d love to hear about what that is, how you created it, and what the impact was on people being able to interact with their future selves.
Pat: Totally. So, I mean, as I said, right, the idea of human flourishing is really exciting for us. And in order to flourish, like, you cannot think short term. You need to think long term and be able to sort of imagine: how would you get there, right?
So as a kid, I was interested in sort of a time machine. Like, I loved dinosaurs. I wanted to go back into the past and also go into the future, see what would happen in the future, like the exciting future we might have. So I really love this idea of, like, having a time machine.
And of course, we cannot build a real time machine yet, but we can make a simulation of one that uses a person’s personal data and extrapolates it, along with other data, to see: okay, if the person has these current behaviors and things they care about, what would happen down the road—what would happen in the future.
So we built an AI simulation that is a digital twin of a person. And we first ask people to kind of provide us with some basic information: their aspiration, things that they want to achieve in the future. And then we use the current behavior that they have to kind of create what we call a synthetic memory, or a memory that that person might have in the future, right?
So normally, memory is something that you already experienced. But in this case, because we want to simulate the future self, we need to build memory that you did not experience yet but might actually experience in the future.
So we use a language model, combined with the information that the person gives us, to create this sort of intermediary representation of the person’s experience, and then feed that into a model that allows us to create human-like conversation.
And then we also age the image of the person. So when the person uploads the image, we also use a visual model that can kind of create an older representation of that person. And then combine these together, we are creating an AI-simulated future self that people can have a conversation with.
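To make that pipeline concrete, here is a minimal sketch of the flow Pat describes—survey answers become a synthetic memory, which then conditions a conversational future self. This is an illustration only, not the actual Future You implementation; the prompts and the `llm` and `age_portrait` helpers are hypothetical stand-ins.

```python
# Minimal sketch of the "Future You" pipeline described above.
# NOT the actual MIT implementation: the prompts, the `llm` callable,
# and the `age_portrait` helper are hypothetical stand-ins.

def build_synthetic_memory(llm, profile: dict) -> str:
    """Extrapolate a plausible future memory from the user's survey answers."""
    prompt = (
        "Given this person's current life and aspirations:\n"
        f"- Age: {profile['age']}\n"
        f"- Current focus: {profile['current_focus']}\n"
        f"- Aspiration: {profile['aspiration']}\n"
        "Write a short first-person memory this person might plausibly hold "
        "30 years from now, consistent with the details above."
    )
    return llm(prompt)

def future_self_system_prompt(memory: str, profile: dict) -> str:
    """Condition a chat model to speak as the user's older future self."""
    return (
        f"You are the 60-year-old future self of a {profile['age']}-year-old. "
        f"One memory you carry: {memory} "
        "Speak warmly, in the first person, and make clear you are one "
        "plausible future, not a prophecy."
    )

# A separate vision model would age the uploaded portrait, e.g.:
# aged = age_portrait(photo, target_age=60)  # hypothetical helper
```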
So we have been working with psychologists—Professor Hal Hershfield from UCLA—who studies the concept of future self-continuity, a psychological concept that measures how vividly a person can imagine their future self. And he has shown that if you can increase this future self-continuity, people tend to have better mental health, better financial savings, and better decisions, because they can think for the long term, right?
So we did this experiment where we created this future self system and then tested it with people and compared it with a regular chatbot and having no intervention at all. And we have shown that this future self intervention can increase future self-continuity and also reduce people’s anxiety as well.
So they become much more of a future thinker—not only think about today’s situation, but can see the possibility of the future and have better mental health overall. So I think this is really exciting for us, because we built a new type of system, but also really showed that it had a positive impact in the real world.
Ross: What were the ranges of ages of people who were involved in this research?
Pat: Yeah, so right now, the prototype that we developed is for a younger population—people who have just finished college or high school, people who still need to think about what their future might look like and who would still benefit from the ability to think in the longer term.
And right now, we actually have a public demo that everyone can use. So people can go to our website and start to use it. You can also volunteer your data for research as well. So this is sort of an in-the-wild, or real-world, study. That’s what we are doing right now.

So if people would like to volunteer their data, then we can also use it to do future research on this topic. But right now, the system has been used by people in over 190 countries, and we are really excited for this research to be out in the real world with people using it.
Ross: Fabulous. We’ll have the link in the show notes.
So, one of the other interesting aspects raised across your research is the potential positive impact of AI on motivation. I think that’s a really interesting point. Because, classically, if you think about the future of education, AI can have custom learning pathways and so on. But the role of the human teachers, of course, is to inspire and to motivate and to engage and so on.
So I’d love to hear about how you’re using AI to develop people’s positive motivation.
Pat: Yeah, that’s a really great question. And I totally agree with you that the role of the teacher is to inspire and create this sort of positive reinforcement or positive encouragement for the student, right? We are not trying to replace that.
Our research is trying to see what kind of tools the teacher can use to improve student motivation, right? And I think today, a lot of people have been asking, like, well, we have AI that can do so many things—why do we need to learn, right?
And we believe at MIT that learning is not just for the benefit of getting a job or for the benefit that you will have a good life, but it’s good for personal growth, and it’s also a fun process, right? Learning something allows you to feel excited about your life—like, oh, you can now do this, even though AI can do that.
I mean, a car can also go from one place to another place, but that doesn’t mean we should stop walking, right? Or you can go to a restaurant and a professional chef can cook for you, but it’s also a very fun thing to cook at home, right? With your loved ones or with your family, right?
So I think learning is a really important process of being human, and AI could make that process even more interesting and even more personal, right?
We really emphasize a lot on the idea of personalized learning, which means that learning can be tailored to each individual. People are very different, right? We learn in different ways. We care about different things.
And learning is also about connecting the dots—between things that we already know and new things that we haven’t learned before. How do we connect those dots better?
So we have built many AI systems that try to address these.
The first project we looked at was what happens if we can create virtual characters that can work with teachers to help students learn new materials. They can be a guest lecturer, they could be a virtual tutor that students can interact with in addition to their real teacher, right?
And we showed that by creating characters based on the people that students like and admire—like, at that time, I think people liked Elon Musk a lot (I don’t know about now; I think we would have a different story)—but at that time, Elon Musk was a hero to many people.
So we showed that if you learn from virtual Elon Musk, people have a higher level of learning motivation, and they want to learn more advanced material compared to a generic AI.
So personalization, in this case, really helped enhance the feeling of a personal connection, as well as learning motivation and a positive learning experience. We have shown this across different educational measures.
Another project we did was looking at examples, right? When you learn things, you want examples to help you understand the concept, right? Sometimes concepts can be very abstract, but when you have examples, that’s when you can start to connect it with the real world.
Here we showed that if we use AI to create examples that resonate with the student’s interests—like if they love Harry Potter, or, I don’t know, like Kim Kardashian, or whatever—Minecraft or whatever things that people like these days, right? Well, I feel like an old person now, but yeah, things that people care about.
If you create an example using elements that people care about, we can also make the lesson more accessible and exciting for people as well, right?
So this is a way that AI could make learning more positive and more fun and engaging for students. Yeah.
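As a rough illustration of that idea—interest-conditioned example generation—here is a minimal sketch. The prompt wording and the `llm` callable are assumptions for illustration, not the study’s actual materials.

```python
# Sketch of interest-conditioned example generation, in the spirit of the
# personalization work described above. The prompt and the `llm` callable
# are assumptions for illustration.

def personalized_example(llm, concept: str, student_interest: str) -> str:
    """Explain an abstract concept through something the student already loves."""
    prompt = (
        f"Explain the concept '{concept}' to a student using one concrete, "
        f"accurate example drawn from {student_interest}. Keep it to two or "
        "three sentences and make the mapping to the concept explicit."
    )
    return llm(prompt)

# e.g., personalized_example(llm, "recursion", "Minecraft")
```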
Ross: So one of the domains you’ve looked at is augmented reasoning. And so I think it’s a particularly interesting point now. In the last six months or so, we’ve all talked about reasoning models with large language models—or perhaps “reasoning” in quotation marks.
And there are also studies that have shown in various guises that people do seem to be reducing their cognitive engagement sometimes, whether they’re overusing LLMs or using them in the wrong ways. So I’d love to hear about your research in how we can use AI to augment reasoning as well as critical thinking capabilities.
Pat: That’s a great question. I mean, that’s going back to what I said, right? Like, what does it mean for humans to have smart models around us? Does it mean we can be stupid?
I think that’s a degradation of humans, right? We should not make technology so that we can be stupid. We should make technology so we can be even smarter, right?
So I think making the end goal a machine or a model that does the reasoning for us, rather than one that enhances our reasoning capability—I think that’s the wrong goal, right? And again, if you have the wrong outcome or the wrong measurement, you’re going to get the wrong thing.
So first of all, you need to align the goal in the right direction.
That’s why, in my PhD research, I really want to focus on things that ultimately have positive impact on people. AI models continue to advance, but sometimes humans don’t advance with the AI models, right?
So in this case, reasoning is something that’s very, very critical. You can trace it back to ancient Greek. Socrates talked a lot about the importance of questioning and asking the right question, and always using this critical thinking process—not trusting things at face value, right?
We have been working on systems—again, the outcome of human-AI interaction can be influenced by both human behavior and AI behavior, right? So we can design AI systems that engage people in critical thinking rather than doing the critical thinking for them. That could be very dangerous, right?
These systems right now don’t really have real reasoning capability. They’re doing simulated reasoning. And sometimes they get it right because, on the internet, people have already expressed reasoning and thinking processes. If you repeat that, you can get to the right answer.
I mean, the internet is bigger than we imagined. I think that’s what the language models show us—that there’s always something on the internet that allows you to get to the right answer. You have powerful models that can learn those patterns, right?
So these models are doing simulated reasoning, which means they don’t have real understanding. Many people have shown that right now—that even though these systems perform very well on benchmarks, in the real world they still fail, especially with things that are very unique and very critical, right?
So in that case, the model, instead of doing the reasoning for us, could make us have better reasoning by teaching us the critical thinking process. And there are many processes for that. Many schools of thought.
We have looked at two processes. One of them is in a project called Wearable Reasoner. We made a wearable device—smart glasses—with an AI agent that verifies the statements people listen to, and identifies and flags when a statement has no evidence to support it, right?
This is really, really important—especially if you love political speeches, or you love watching advertisements or TikTok. Because right now, social media is filled with statements that sound so convincing but have no evidence whatsoever.
So this type of system can help flag that. Because, as humans, we tend to follow along—if things sound reasonable, sound correct, sound persuasive, we tend to go with them. But something that sounds persuasive or sounds correct isn’t necessarily correct, right?

A statement can use all sorts of heuristics and fallacies to get you to fall into that trap. So our system—the AI—can follow along with us and help flag that for us.

We have shown that when people wear these glasses, when the AI helps them think through the statements they listen to, people tend to agree more with statements that are well-reasoned and have evidence to support them, right?

So we showed that we can nudge people to pay more attention to the evidence behind the information they encounter.
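To ground this, here is a minimal sketch of the evidence-flagging step in a Wearable Reasoner-style system. It is a hypothetical illustration—the published system’s prompts and pipeline may differ, and `llm` is any text-completion callable you supply.

```python
# Sketch of the claim-flagging step in a Wearable Reasoner-style system.
# Hypothetical illustration: the published system's prompts and pipeline
# may differ, and `llm` is any text-completion callable you supply.

import json

def flag_unsupported_claims(llm, heard_speech: str) -> list[dict]:
    """Split heard speech into claims and keep those offered without evidence."""
    prompt = (
        "Extract each factual claim from the speech below. For each claim, "
        "note whether the speaker gave any supporting evidence. Reply as a "
        'JSON list of {"claim": ..., "has_evidence": true/false}.\n\n'
        f"Speech: {heard_speech}"
    )
    claims = json.loads(llm(prompt))  # assumes the model returns valid JSON
    # Surface only the claims asserted without support, e.g., on the glasses' display.
    return [c for c in claims if not c["has_evidence"]]
```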
That’s one project.
Another project—we borrowed the technique from Socrates, the ancient Greek philosopher. We showed that if the AI doesn’t give the answer to people right away but rather asks a question back—it’s kind of counterintuitive, right, because people need to arrive at that information for themselves—
We showed that when the AI asked questions, it improved people’s ability to discern true information from false information better than AI giving the correct answer.
Which some people might ask: why is that the case?
And I think it’s because people already have the ability. Many of us already have the ability to discern information. We are just being distracted by other things.
So when the AI asks a question, it can help us focus on things that matter—especially if the AI frames the information in a way that makes us think, right?
For example, take a statement like: “Video games lead to people becoming more violent,” where the evidence is “a gamer attacked another gamer last week.”

If the AI starts to frame that into: “If one gamer attacks another person, does that mean that every gamer will become violent after playing video games?”
And then you start to realize that, oh, now there’s an overgeneralization. You’re using the example of one to overgeneralize to everyone, right?
If the AI frames the statement into a question like this, some people will be able to come up with the answer and discern for themselves. And this not only allows them to reach the right and correct answer but also strengthens their process as well, right?
It’s kind of like AI creating or scaffolding our critical thinking so that our critical thinking muscle can be strengthened, right?
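A minimal sketch of that Socratic reframing step might look like the following. The prompt wording is an assumption for illustration, not the study’s actual prompt, and `llm` is any text-completion callable.

```python
# Sketch of the Socratic intervention described above: instead of a verdict,
# the AI returns one question that exposes the weak step in a claim.
# The prompt wording is an assumption, not the study's actual prompt.

def socratic_question(llm, claim: str, cited_support: str) -> str:
    """Turn a claim and its supposed support into a critical-thinking question."""
    prompt = (
        "A reader just saw this claim and its supporting example.\n"
        f"Claim: {claim}\n"
        f"Support: {cited_support}\n"
        "Do NOT say whether the claim is true. Instead, ask one short question "
        "that leads the reader to examine the reasoning themselves—for "
        "example, probing overgeneralization, missing base rates, or causation."
    )
    return llm(prompt)

# Mirroring the transcript's example:
# socratic_question(llm,
#     "Video games lead to people becoming more violent",
#     "A gamer attacked another gamer last week")
# -> e.g., "If one gamer attacks another, does that tell us what every gamer will do?"
```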
So I think this is a really important area of research. And there is much more research coming out showing how we can design AI systems that enhance critical thinking rather than doing the critical thinking for us.
Ross: So in a number of other domains, there’s been research showing that while in some contexts AI can produce superior cognition or better thinking abilities, when the AI is withdrawn, people revert back.

So the goal is not only enhancement while using the AI, but after the AI—so that even when you don’t have the AI, your critical thinking is still enhanced.
So has that been demonstrated, or is that something you would look at?
Pat: Yeah, that’s a really important question. We haven’t yet run a study in that sort of domain—what happens when people stop using the AI, or when the AI is removed—but that’s something that is part of the research roadmap that we are working on.
At MIT right now, there’s a new research effort called AHA. We want to create aha moments, but AHA also stands for Advancing Humans with AI. And the emphasis is on advancing humans, right? AI is the part that’s supposed to help humans advance. So the focus is on the humans.
We have looked at different research areas. We’ve already been doing a lot of work in this, but we are creating this roadmap for what future AI researchers need to focus on—and this is part of it.
This is the point that you just mentioned: the idea of looking at what happens when the AI is removed from the equation, or when people no longer have access to the technology. What happens to their cognitive process and their skills? That is a really important part that is part of our roadmap.
And so, for the audience out there—this April 10 is when we are launching this AHA research program at MIT. We have a symposium that everyone can watch. It’s going to be streamed online on the MIT Media Lab website. You can go to aha.media.mit.edu and see this symposium.
The theme of this symposium is: Can we design AI for human flourishing? And we have great speakers from OpenAI, Microsoft. We have great thinkers like Geraldine, Tristan Harris, Sherry Turkle, Arianna Huffington, and many amazing people who are joining us to really ask this question.
And we hope that this kind of conversation will inspire the larger AI research community and people in the industry to ask the important question of AI for human flourishing—not just AI for AI’s sake, or for technological advancement’s sake.
Ross: Yeah, I’ve just looked at the agenda and the speakers—this is mind-boggling. Looks like an extraordinary conference, and I’m very much looking forward to seeing the impact that that has.
So one of the other things I’m very interested in is this intersection of agents—AI agents, multi-agent systems—and collective intelligence. As I often say, and as you very much manifest in your work, this is not just multi-agent as a stack of different AI agents. It’s saying: well, there are human agents and there are AI agents—so how can you pull these together to get a collective intelligence that manifests the best of both? A group of people and AI working together.
So I’d love to hear about your directions and research in that space.
Pat: Yeah, there is a lot of work that we are doing. And in fact, my PhD advisor, Professor Pattie Maes, is credited as one of the pioneers of software agents. And she is actually receiving the Lifetime Achievement Award from ACM SIGCHI, the special interest group on human-computer interaction—that’s in a couple of months, actually.
So it’s awesome and amazing that she’s being recognized as the pioneer of this field.
But the question of agents, I think, is really interesting, because right now, the terminology is very broad. AI is a broad term. AGI is an even broader term. And “agent”—I don’t know what the definition is, right?
I mean, some people argue that it’s a type of system that can take action on behalf of the user, so the user doesn’t need to supervise. This means doing things autonomously. But there are different degrees of autonomy—like things that may require human approval, or things that can just do things on their own. And it can be in the physical world, or the digital world, or in between, right?
So the definition of agent is pretty broad. But I think, again, going back to the question of what is the human experience of interacting with this agent—are we losing our agency or the sense of ownership?
We have many projects that look into and investigate that.
For example, in one project, we designed new form factors, or new interaction paradigms, for interacting with agents. This is a project we worked on with KBTG, which is one of the largest banks in Asia, where we’re trying to help people with financial decisions.
If you ask a chatbot, you need to pass back and forth a lot of information—like you need a bank statement, or your savings, or all these accounts. A chatbot is not the right modality.
You could have an AI agent that interacts with people in the task—like if you’re planning your financial spending, or investment, or whatever. The AI could be another hand or another pointer on screen. You have your pointer, right? But the AI can be another pointer, and then you can talk to that pointer, and you can feel like there are two agents interacting with one another.
And we showed that—even just changing, using the same exact model—but changing the way that information is flowing and visualized to the user, and the way the user can interact with the agent, rather than going from one screen, then going to the chatbot, typing something, and then going back…
Now, the agent has access to what the user is doing in real time. And because it’s another pointer, it can point and highlight things that are important at the moment to help steer the user toward things that are critical, or things they should pay attention to, right?
We showed that this type of interaction reduces cognitive load and makes people actually enjoy the process even more.
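Here is a minimal sketch of that “second pointer” interaction loop—the agent observes the live screen state and decides what to highlight, rather than waiting in a separate chat window. All of the types, the prompt, and the `llm` callable are hypothetical illustrations, not the KBTG system’s actual design.

```python
# Sketch of the "second pointer" paradigm: the agent shares the user's live
# screen state and highlights elements instead of chatting in a separate silo.
# All types, the prompt, and the `llm` callable are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class ScreenElement:
    element_id: str   # e.g., "monthly_savings_field"
    label: str
    value: str        # current on-screen value

@dataclass
class PointerAction:
    element_id: str   # where the agent's pointer moves
    spoken_note: str  # what the agent says while pointing

def agent_step(llm, visible: list[ScreenElement], user_goal: str) -> PointerAction:
    """Given the live screen state, decide what to point at and what to say."""
    state = "\n".join(f"{e.element_id}: {e.label} = {e.value}" for e in visible)
    prompt = (
        f"The user's goal: {user_goal}\n"
        f"On-screen elements:\n{state}\n"
        "Pick the single element most worth the user's attention right now, "
        "plus one short sentence to say while pointing at it. "
        "Reply exactly as: element_id | sentence"
    )
    element_id, note = llm(prompt).split("|", 1)  # assumes the model complies
    return PointerAction(element_id.strip(), note.strip())
```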
So I think the idea of an agent is not a system by itself. It’s also the interaction between human and agent—and how can we design it so that it feels like a collaborative, positive collaboration, rather than a delegation that feels like people are losing some agency and autonomy, right?
So I think this is a really, really important question that we need to investigate. Yeah?
Ross: Well, the thing is, it’s a relationship of trust, essentially—between you and it. There’s the nature of the interface between the human and the agent they’re trusting to act on their behalf—trusting that it does things well, that it represents them well, that it checks nothing’s missed.

And so this requires a rich, essentially emotional, interface between the two. I think that becomes a key part when we move into multi-agent systems, where you have multiple agents, each with their defined roles or capabilities, interacting.

Of course, MIT also has a Center for Collective Intelligence. I’d love to wonder aloud what the intersections between your work and the Center for Collective Intelligence might be.
Pat: Well, one thing that I think both of our research groups focus on is the idea of intelligence not as something that already lives in the technology, but as something that happens collectively—at the societal level, or at the collective level.

I think that should be the ultimate goal of whatever we do, right? We should not just make the machine more intelligent, but ask how we enhance the overall intelligence.
And I think the question also is: how do we diversify human intelligence as well, right? Because you can be intelligent in a narrow area, but in the real world, problems are very complex. You don’t want everyone to think in the same way.
I mean, there are studies showing that on the individual level, AI can make people’s essays better. But if you look across different essays written by people assisted by AI, they start to look the same—which means that there is an individual gain, but a collective loss, right?
And I think that’s a big problem, right? Because now everyone is thinking in the same way. Well, maybe everyone is a little bit better, but if they’re all the same, then we have no diverse solution to the bigger problems.
So one project we looked into is how to use an AI that has the opposite values to a person—to help people think more diversely.

If you like something, the AI could like the other thing, and then the idea lands somewhere in between. Or, if you are very deep into one thing, the AI could represent a broader type of intelligence that gets you out of your rabbit hole, basically.
Or, if you are very broad, maybe the AI will go in deep in one direction—so complementing your intelligence in a way.
And we have shown that this type of AI system can really drive collaboration in a direction that is very diverse—very different from the user.
But at the same time, if you have an AI that is similar to the person—like has the same value, same type of intelligence—it can make them go even deeper. In the sense that if you have a bias toward a certain topic, and the AI also has a bias in the same topic as you, it can make that go even further.
So again, it’s really about the interaction—and what type of intelligence do we want our people to interact with? And what are the outcomes that we care about, whether it’s individual or collective?
I think these are design choices that need to be studied and evaluated empirically. Yeah.
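A minimal sketch of how one might steer an assistant to complement rather than mirror the user, in the spirit of the project Pat describes—the stance wording below is an assumption for illustration, not the study’s actual condition:

```python
# Sketch of steering an assistant to complement rather than mirror the user,
# in the spirit of the project described above. The stance wording is an
# assumption for illustration, not the study's actual condition.

def complementary_system_prompt(values: str, style: str) -> str:
    """Build a persona that deliberately offsets the user's known leanings."""
    return (
        f"The user values {values} and tends to think {style} "
        "(deep in one area vs. broad across many). Adopt the complementary "
        "stance: if the user goes deep, broaden the frame; if the user is "
        "broad, push depth in one direction; when the user leans toward a "
        "position, surface the strongest good-faith considerations on the "
        "other side. Collaborate; don't antagonize."
    )

# The mirroring condition—same values, same style—would be the same template
# with "complementary" flipped to "matching", which, per the finding above,
# can amplify an existing bias instead of offsetting it.
```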
Ross: That’s fantastic. I mean, I have a very deep belief in human uniqueness. I think we’re all far more unique than almost anybody realizes. And society basically makes us look, and be, more the same.

So AI is perhaps an even stronger force pulling us toward sameness—society already does that. But to that point: I may have a unique way of thinking, or unique perspectives—and you’re talking about things where we can actually draw out and amplify and augment what is most unique and individual about each of us.
Pat: Right, totally. And I mean, the former CEO of Google said at one point: why would a person want to talk to another person when you can talk to an AI that is like a hundred thousand—a million—people at the same time, right?
But I feel like that’s a boring thing. Because the AI could take on any direction. It doesn’t have an opinion of its own, right?
But because a human is limited to their own life experience up to that point, it gives us a unique perspective, right? When something is everything, everywhere, all at once, it’s generic and has no perspective of its own.

I think each individual person—the things they’re living through, the things that influence their life, the things they grew up with—has that sort of story that makes them unique. To me, that is more interesting, and I think it’s what we should preserve—not try to average everything out.
So for me, this is the thing we should amplify.
And again, I talk a lot about human-AI interaction, because I feel like the interaction is the key—not just the model capability, but how it interacts with people. What features, what modality it actually uses to communicate with people.
And I think this question of interaction is so interdisciplinary. You need to learn a lot about human behavior, psychology, AI engineering, system design, and all of that, right?
So I think that’s the most exciting field to be in.
Ross: Yeah, it’s fantastic. So in the years to come, what do you find most exciting about what the Advancing Humans with AI group could do?
Pat: Well, I mean, there are many big ideas, or aha moments, that we want to create—definitely. We actually have an exciting project being announced tomorrow with one of the largest AI organizations or companies in the world. So please watch out for that—there’s new, exciting research in that direction, happening at scale. It’s a big project launching tomorrow, March 21, so if you’re hearing this after that, look it up.
I think one thing that we are working on is—we’re collaborating with many organizations, trying to focus and make them not just think about AGI, but think about HGI: Human General Intelligence. You know, what would happen to human general intelligence? We want everyone to flourish—not machines to flourish. We want people to flourish, right? To kind of steer many of the organizations, many of the AI companies, into thinking this way.
And in order to do that, we first need a new type of benchmark, right? We have a lot of benchmarks on AI capabilities, but we don’t have any benchmarks on what happens to people after using the AI, right? So we need new benchmarks that can really show whether the AI makes people depressed, or empowers and enhances these human qualities—these human experiences. We need to design new ways to measure that, especially while people are using the AI.
Second, we need to create an observatory that allows us to observe how people are evolving—or co-evolving—with AI around the world. Because AI affects different groups of people differently, right?

We had a study showing that—and this is kind of funny—people talk about AI bias toward certain genders, ethnicities, and so on. We did a study showing that if you remove all of those factors, the AI will still have a bias based on a person’s name alone—or just the last name, right? If you have a famous last name, like Trump or Musk, the AI tends to favor you more than people who have a generic or common last name. And this is kind of crazy to me, because you can get rid of all the demographic information that we say causes bias, and the name of a person alone can already lead to that bias.
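A minimal probe in the spirit of that study might look like the sketch below—identical queries that differ only in the surname. The task wording, names, and scoring are assumptions for illustration, not the study’s materials.

```python
# Minimal probe in the spirit of the name-bias study mentioned above:
# identical queries that differ only in the surname. The task wording,
# names, and scoring are assumptions for illustration, not the study's materials.

FAMOUS = ["Musk", "Trump"]
GENERIC = ["Smith", "Nguyen"]

def probe_name_bias(llm, first_name: str = "Alex") -> dict[str, str]:
    """Ask the model the same favorability question, varying only the surname."""
    results = {}
    for last in FAMOUS + GENERIC:
        prompt = (
            f"{first_name} {last} has applied for a small business loan with "
            "an average credit history. Answer in one word: Favorable or "
            "Unfavorable?"
        )
        results[last] = llm(prompt).strip()
    # Compare favorability rates between famous and generic surnames.
    return results
```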
So we know that AI affects people differently. We need to design this type of observatory, which we will deploy around the world to measure the impact of AI on people over time—and whether that leads to human flourishing or makes things worse. We don’t have empirical evidence for that right now. People are in two camps: the optimistic camp, saying AI is going to bring prosperity, we don’t need to care, we don’t need to regulate; and another group saying AI is going to be the worst thing—existential crisis, human extinction—so we need to regulate it, kill it, stop it. But we don’t have real scientific, empirical evidence on humans at scale.
So that’s another thing that MIT’s Advancing Humans with AI program is going to do. We’re going to try to establish this observatory so that we can inform people with scientific evidence.
And finally, what I think is the most exciting thing: right now, we have so many papers published on AI—more than any human can read, maybe more than any AI can be trained on. Every minute there’s a new paper being published, right? And people don’t know what is going on. Maybe they know a little bit about their own area, or maybe some papers become very famous. So we want to design an Atlas of Human-AI Interaction—a new type of AI for science that allows us to piece together the different research papers that come out, so that we have a comprehensive view of what is being researched.

What are we over-researching right now? We had a preliminary version of this Atlas, and it showed that people right now do a lot of research on trust and explanation—but less so on other aspects, like loneliness. For example, the question of whether AI chatbots might make people lonely—very little research has gone into that.
So we have this engine that’s always running. When new papers are being published, the knowledge is put into this knowledge tree. So we see what areas are growing, what areas are not growing, every day. And we see this evolve as the research field evolves. Then I think we will be able to have a better comprehension of when AI leads to human flourishing—or when it doesn’t—and see what is being researched, what is being developed, in real time.
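A minimal sketch of the kind of always-running ingestion loop Pat describes might look like this. The topic list, the prompt, and the `llm` callable are hypothetical; the real Atlas pipeline is not detailed here.

```python
# Sketch of the always-running ingestion loop behind an "Atlas" of human-AI
# interaction research. The topic list, the prompt, and the `llm` and paper
# sources are hypothetical; the real Atlas pipeline is not detailed here.

from collections import Counter

TOPICS = ["trust", "explanation", "loneliness", "critical thinking",
          "motivation", "well-being", "false memories"]

def classify_paper(llm, abstract: str) -> str:
    """Assign a new paper to one branch of the knowledge tree."""
    prompt = (
        f"Topics: {', '.join(TOPICS)}\n"
        f"Abstract: {abstract}\n"
        "Reply with the single best-matching topic from the list."
    )
    return llm(prompt).strip().lower()

def update_atlas(llm, new_abstracts: list[str]) -> Counter:
    """Count papers per topic to see which areas grow and which stay thin."""
    return Counter(classify_paper(llm, a) for a in new_abstracts)
```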
So these are the three moonshot ideas that we care about right now at MIT Media Lab. Yeah.
Ross Dawson: Fantastic. I love your work—both you and all of your colleagues. This is so important. I’m very grateful for what you’re doing, and thanks so much for sharing your work on The Amplifying Cognition Show.
Pat Pataranutaporn: Thank you so much. And I’m glad that you are doing this show to help people think more about this idea of amplifying human cognition. I think that’s an important question and an important challenge for this century and the future century as well.
So thank you for having me. Bye.