Episode 70 - Joel David Hamkins

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast with no quiz at the end. I'm Evelyn Lamb, one of your hosts. I'm a freelance math and science writer in Salt Lake City, Utah, and this is your other host.
Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. I like your background, Evelyn. Our listeners can't see it. But it looks like you had a nice camping trip somewhere in Utah.
EL: Yes, that's actually a couple years ago — we’re going to describe this in great detail for all the listeners who can't see it, no — this was a camping trip in Dinosaur National Monument on the Utah-Colorado border, which is a really cool place to visit.
KK: Excellent. It’s beautiful over there. Yeah. So how's things?
EL: Oh, not too bad. A bit smoky here, we're getting a lot of wildfire smoke from the west coast. And it, you know, makes some of those outdoor activities that are so fun, a little less fun. So I hope it clears out.
KK: Right. Yeah, well, my big adventure lately was getting my son settled into his new apartment in Vancouver. And it was my first time on an airplane in a year and a half. That was weird. And then of course, Vancouver, such a lovely city, though. We had a good time. So he's all set up and starting grad school and his nice new adventure.
EL: Yeah. And he's kind of, almost as far as he can be in a populated place in North America from you.
KK: That’s correct. That's, like, almost 3000 miles. That's far.
EL: Yeah.
KK: It’s great. Anyway, enough about that.
EL: Yeah. Well, today, we're very happy to have Joel David Hamkins join us. Joel, would you like to introduce yourself and say a little bit about yourself?
Joel David Hamkins: Yeah, sure. I am a mathematician and philosopher at the University of Oxford. Actually it’s something of an identity crisis that I have, whether I'm a mathematician or a philosopher, because my original training was in mathematics. My PhD was in mathematics. And for many, many years, I counted myself as a mathematician. But somehow, over the years, my work became increasingly concerned with philosophical issues, and I managed somehow to turn myself into a philosopher. And so here in Oxford, my main appointment is in the philosophy faculty, although I'm also affiliated with the mathematics department. And so I don't really know what I am, whether a mathematician or a philosopher. I do work in mathematical logic and philosophical logic, really all parts of logic, and especially connected with the mathematics and philosophy of the infinite.
KK: Well, math is just kind of, it's just applied philosophy anyway, right?
EL: This might be a dangerous question. But what are — like, do you feel a big cultural difference between being in a math department and a philosophy department?
JDH: Oh, there's huge cultural differences between math and philosophy, I mean, on many different issues, and they come up again and again, when I'm teaching especially, or interacting with colleagues and so on. I mean, for example, there's a completely different attitude about reading original works. In philosophy, it is very important to read the original authors, but in mathematics, we tend to read the newer accounts, even of old theorems, and maybe for good reason, because oftentimes, those newer accounts, I think, become improved with greater understanding or more connections with other work and so on. Okay, but one can certainly understand the value of reading the original authors. And there's many other issues like that, cultural differences between math and philosophy.
EL: Yeah, well, when I was at the University of Utah, I taught math history a couple times. And I was trying to use some original sources there. And it is very difficult to read original sources in math. I don't know if part of that is just because we aren't used to it. But part of it, I do feel like the language, and the way we talk about things, changes a lot really quickly. It makes it so that even reading papers from the 1920s or something, you sometimes feel like, “What are they talking about?” And then you find out, oh, they're just talking about degree four polynomials, but they're using terms that you just don't use anymore.
JDH: Yeah, I think that's absolutely right. And oftentimes, though, it's really interesting when there's an old work that uses what we consider to be modern notation. Like if you look at Cantor's original writings on the ordinals, say, it's completely contemporary. His notation, he writes things that contemporary set theorists would be able to understand easily, even to the point of using the same Greek letters, alpha and beta, to represent ordinals, and so on, which is what we still do today. And so it's quite remarkable when the original authors’ notation survives. That's amazing, I think.
EL: Yeah. So we invited you here to share your favorite theorem. What have you decided to share with us today?
JDH: Well, I found it really difficult to decide what my favorite theorem is. But I want to tell you about one of my most favorite theorems, which is the fundamental theorem of finite games. So this is the theorem, it was proved by Zermelo in 1913. And it's the theorem that asserts that in any two-player finite game of perfect information, one of the players has a winning strategy, or else both players have drawing strategies if it's a game that allows for draws. So that's the theorem.
EL: So what is the definition of a game? This is, like, the most mathematical thing to ask, you know, like, yeah, okay, games, we've all played them starting from when we were, you know, two years old or something. But now we need to sit down and define it.
JDH: Yeah. Well, that's one of the things that really excites me about this theorem, because it forces you to grapple with exactly that question. What is a game? What is a finite game? What does it mean? And it's not an easy question to answer. I mean, the theorem itself, I think, is something that you might think is obvious. For example, if you think about the game of chess, well, maybe you would think it's obvious that, look, either one of the players has a winning strategy, or they both have drawing strategies. But when it comes to actually proving that fact, then maybe it's not so obvious, even for such a game as chess. And then you're forced to grapple with exactly the question that Evelyn just asked, you know, what is a finite game? What is a strategy? What is a winning strategy? What does it mean to have a winning strategy, and so on?
So there's a wonderful paradox that surrounds the issue of finite games called the hypergame paradox. So if you have a naive account of what a finite game is, maybe you think a finite game is a game so that all plays of the game finish in finitely many moves or something. Okay, so that seems like a kind of reasonable definition of a finite game. And then there's this game called hypergame. And the way that you play hypergame is, say, if you and I are going to play hypergame, then the first player, maybe you go first, you choose a finite game. And then we play that game. And that's how you play hypergame. So the first player gets to pick which finite game you're going to play. And then you play that game. And if we said every finite game is a game, so that all plays end in finitely many moves, then it seems like this hypergame would be a finite game, because you picked a finite game, and then we play it, and then the game would have to end in finitely many moves. So it seems like hypergame itself is a finite game. But then, the paradox is that if hypergame is a finite game, then you could pick hypergame as your first move.
KK: Right.
JDH: Okay, but then we play hypergame. But we just said when playing hypergame, it's allowed to play hypergame as the first move. So then I would pick hypergame as my first move in that game. And then the next move would be to start playing hypergame. And then you could say, hypergame again. And then I could say hypergame, and so on, and we could all just say hypergame, all day long, forever. But that would be an infinite play. And so what's going on? Because it seems contradictory. We proved first, that every play of hypergame ends in finitely many moves, but then we exhibited a play that didn't end. So that's a kind of paradox that results from being naive about the answer to Evelyn's question, what is a finite game? If you're not clear on what a finite game is, then you're going to end up in this kind of Russell paradox situation.
KK: Exactly. Yeah. That's what I was thinking of, the sets that don't contain themselves. Right.
JDH: Right, exactly. It's actually a bit closer to what's called the Burali-Forti paradox, which is the paradox that the class of all ordinals is well ordered, but it's bigger than any given ordinal. And so it's something like that. So if you think a lot about what a finite game is, then you're going to be led to the concept of a game tree. And a game tree is the sort of tree of all possible positions that you might get to. So there's the initial position at the start of the game, and the first player has some options that they can move to, and those are sort of the child nodes of that root node. And then those nodes lead to further nodes for the choices of the second player, and so on. And so you get this game tree. And the thing about a finite game is that — well, one reasonable definition of finite game is that the whole game tree should be finite. So that in finitely many moves, you're going to end up at a leaf node of the tree, and every leaf node should be labeled as a win for one of the players or the other, or as a draw, or whatever the outcomes are. And so if you have the finite tree conception of what it means to be a finite game, then hypergame is not a finite game, because at the first move, there are infinitely many different games you could choose, and so the game tree of hypergame won't be a finite tree, it will be an infinite tree. It will be infinitely branching at its first node.
And so if you have this game tree conception, then a finite game can mean a game with a finite game tree. And then we can understand the fundamental theorem of finite games. So my favorite theorem is the assertion that in any finite game like that, with a finite game tree, one of the players has a winning strategy, or both players have drawing strategies. And what does it mean to be a strategy? I mean, what is a strategy? If you think about chess strategies, or strategies as they're talked about conventionally, then oftentimes, people just mean a kind of heuristic, you know, the strategy of control the center or something, but in mathematics, we want a more precise notion of strategy. So controlling the center isn't really a strategy; that’s just a heuristic. It doesn't tell you actually what to do. So a strategy is a function on the game tree that tells you exactly which moves to make whenever it's your turn. And then a play of the game is basically a branch through the game tree. And it conforms with this strategy if, whenever it was your turn at a node in the game tree, it did what the strategy was telling you to do at that node. So a strategy is winning if all the plays that conform with that strategy end up in a win for that player.
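For readers who want to see these definitions concretely, here is a minimal sketch (mine, not from the episode) of a finite game tree, a strategy as a function on the tree, and the check that a strategy is winning in the sense just described. The tiny example game and all names in it are invented for illustration.

```python
# A finite game tree: internal nodes record whose turn it is and the available
# moves; leaves record the outcome ("P1", "P2", or "draw").

def leaf(outcome):
    return {"outcome": outcome}

def node(player, moves):
    # moves: dict mapping a move label to a child node
    return {"player": player, "moves": moves}

# A tiny invented game: player 1 moves first, then player 2, then it ends.
game = node("P1", {
    "a": node("P2", {"x": leaf("P1"), "y": leaf("draw")}),
    "b": node("P2", {"x": leaf("P2"), "y": leaf("P2")}),
})

# A strategy for player 1: a function from player-1 nodes to a chosen move.
def p1_strategy(n):
    return "a"  # the root is the only P1 node in this toy game

def is_winning_for_p1(n, strategy):
    """True if every play conforming to `strategy` ends in a win for P1."""
    if "outcome" in n:                      # leaf: the play has ended
        return n["outcome"] == "P1"
    if n["player"] == "P1":                 # P1 follows the strategy
        return is_winning_for_p1(n["moves"][strategy(n)], strategy)
    # P2 may reply however it likes, so P1 must win against every reply.
    return all(is_winning_for_p1(c, strategy) for c in n["moves"].values())

print(is_winning_for_p1(game, p1_strategy))  # False: P2 can answer "a" with "y" and draw
```

In this toy game neither player can force a win, but each can force at least a draw, which is exactly the second alternative in the theorem.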
KK: Okay, so is this the point of view Zermelo took when he proved this?
JDH: So yes, Zermelo didn't quite have it all together. And this is maybe related to the fact that we don't actually read Zermelo’s original paper now when we want to prove the fundamental theorem of finite games, because we have a much richer understanding, I think, of this theorem now. I mean, for example, in my proof-writing book, I gave three different proofs of this theorem. And Zermelo didn't have the concept of a game tree, or even of a finite game in terms of game trees, like I just described. Rather, he was focused specifically on the game of chess, and he was thinking about positions in chess as pictures of the board with the pieces and where they are. But nowadays, we don't really think of positions like that, and there's a kind of problem with thinking about positions like that. Because if you think about a position in chess like a photograph of the board, then you don't even know whose turn it is, really, because the same position can arise and it could be different players’ turns, and you don't know, for example, whether the king has moved yet or not. But that's a very important thing, because if the king has moved already, then castling is no longer an option for that player.
KK: Right.
JDH: Or you need to know what the previous move was, in order to apply the en passant rule correctly, and so on. So if you just have the board, you don't know whether en passant is allowed or not. I mean, en passant is one of these finicky rules about pawn captures: if the previous player just moved a pawn two steps, and your pawn is in a certain situation, then you can capture it anyway. So I mean, it's a technical thing, it doesn't matter too much. But the point is that just knowing the photograph of the board doesn't tell you whose turn it is, and it doesn't tell you all the information that you need to know in order to know what the valid moves are. So nowadays we think of a position in a game as a node in the game tree, and that has all of the information that you need.
So Zermelo’s proof was concerned with games that had the property that there were only finitely many possible configurations, like chess. There are only finitely many situations to be in, in chess. And he argued on the basis of that, but really, it amounts to arguing with the finiteness of the game tree in the end.
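To make the point about positions concrete, here is a sketch (again mine, not from the episode) of what a node in a chess game tree has to carry beyond a photograph of the board; the extra fields are the same ones that standard FEN notation records.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Position:
    """A chess game-tree node: the board alone is not enough."""
    board: Dict[Tuple[int, int], str]   # square -> piece, e.g. (0, 4): "K"
    white_to_move: bool                 # whose turn it is
    castling_rights: str                # e.g. "KQkq", or "" if none remain
    en_passant_target: Optional[Tuple[int, int]] = None  # set only right after a two-square pawn move
    halfmove_clock: int = 0             # moves since a pawn move or capture, for the 50-move rule

    def legal_moves(self):
        # A real move generator would have to consult every field above,
        # not just `board` -- which is exactly the point being made.
        raise NotImplementedError
```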
KK: Yeah. So is this at all related to Nash's equilibrium theorem? So I was scrolling Twitter the other day, as one does when one has nothing else to do, and I saw a tweet that had, it was a screenshot. And it was the entire paper that Nash published in the Proceedings of the National Academy proving his equilibrium theorem. I mean, it's a remarkable thing that it's just a few paragraphs. Is there any connection here?
JDH: Right, so actually, about this nomenclature: there are three different subjects connected with game theory. There’s game theory itself, and the Nash equilibrium and so on is usually considered part of game theory. And then there's another subject standing next to it, which is often called combinatorial game theory, or it's sometimes also called the theory of games. And this is the study of actual games like Nim, or chess, or Go, and so on, and the Conway game values are part of this subject. And then there's a third subject, which I call the logic of games, which is the study of things like the fundamental theorem of finite games and sort of the logical properties. So to my way of thinking, the Nash equilibrium is not directly connected with the theory of games, but rather is a core concept of game theory, which is studying things like the stability of probabilistic strategies, and so on, whereas combinatorial game theory isn't usually about those probabilistic strategies, but about sort of logically perfect strategies, and optimal play, and so on. And that isn't, to my way of thinking, so much connected with the Nash equilibrium.
KK: So fine, I hand you a finite game. It has a winning strategy. Can you ever hope to find it?
JDH: Oh, I see. Is it computable? Well, yeah, this is the big issue. Even in chess, for example, I mean, chess has a finite game tree. And so in principle, in the sense of computability theory, you can prove there is a computable strategy, because it's a finite game. So the strategy is a finite function, and every finite function like that is definitely computable. And we can even say more about how to find the strategy. I mean, one of the proofs is this back-propagation proof: you look at the game tree, and you propagate. You know that the leaf nodes, the terminal nodes, are labeled as a win, and you can propagate that information up the tree. But it turns out that the game tree of chess is so enormous…
KK: Right.
JDH: That it wouldn't fit in the universe, even if you used every single atom to represent a node of the tree. And so in that sense, you can never write a computer program that would compute perfect chess play. It's just too big, the strategy. If you're really talking about the strategy on the whole game tree, then the game tree is just too enormous to fit in the universe. And so it's not a practical matter. But theoretically, like in terms of Turing computability or something, then of course it's computable. There are some other issues. For example, I've done a lot of work with infinitary games. I mean, this connects with my interest in infinity. And it turns out that there are some positions, say, in infinite chess, which I've studied. We identified computable positions in infinite chess for which White has a winning strategy, but there's no computable winning strategy. So if the players play computably, in other words, according to a computable procedure, then it will be a draw. So it's very interesting, this interaction between optimal play and computable play: they don't always align.
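Here is a minimal sketch of that back-propagation (backward induction) idea, in the toy game-tree format from the earlier sketch: labels at the leaves are propagated up the tree to decide, at every node, the best outcome a player can force. On chess's actual tree this is hopeless for exactly the size reasons just described, but on a small finite tree it is a few lines.

```python
# Backward induction on the toy game-tree format used above:
# leaves are {"outcome": ...}, internal nodes are {"player": ..., "moves": {...}}.

RANK = {"win": 2, "draw": 1, "loss": 0}

def best_outcome(n, player):
    """Best outcome the given player ("P1" or "P2") can force from node n."""
    if "outcome" in n:
        if n["outcome"] == "draw":
            return "draw"
        return "win" if n["outcome"] == player else "loss"
    results = [best_outcome(c, player) for c in n["moves"].values()]
    if n["player"] == player:
        return max(results, key=RANK.get)   # the player to move picks the best option
    return min(results, key=RANK.get)       # the opponent picks the worst option for us

# On the `game` tree defined in the earlier sketch, neither player can force a
# win, so by the fundamental theorem both have drawing strategies:
# best_outcome(game, "P1") == best_outcome(game, "P2") == "draw"
```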
EL: So I've got to ask you to back up a little bit. What is infinite chess?
KK: Yeah.
JDH: Oh, I see infinite chess. So imagine a chessboard without any border. It just goes forever in all four directions.
EL: So you start with the same, you know, don't start with an infinite number — or, yeah, I’ll let you keep going.
JDH: So of course, infinite chess. I mean, we can imagine playing infinite chess, you know, at a café or something, but it's not a game that you sit down in a café to play. It's a game where mathematicians think, “What would it be like to play if the board looked like this, or if it looked like that?” And so there's no standard starting position. You present a starting position, and you say, “Well, this position has a very interesting property.” So Richard Stanley asked the question, for example, on Mathoverflow. And that sparked my interest in this question. Well, one of the things that we proved was that you can have positions in infinite chess where White has a winning strategy, so it's going to win in finitely many moves, it’s going to make checkmate in finitely many moves. But it's not mate in n for any n, for any finite n. In other words, Black can make it take as long as he wants, but it's hopeless. In other words, Black can say, “Oh, I know you're going to win this time, but it's going to take you 100 moves.” Or he could say, “This time, it's going to take you a million moves.” For any number, he can delay it that long, but still White is going to win in finitely many moves following the winning strategy. So these are called games with game value omega, and then we produced positions with higher ordinal game values, omega squared, omega cubed, and so on; the current record is omega to the fourth. So these transfinite game values come in. And so it's really quite fascinating how that happens.
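For the curious, the ordinal game values he mentions are usually defined by a recursion along roughly the following lines (this is a paraphrase of the convention used in the transfinite-game-values literature, and details vary slightly between presentations):

```latex
% Value of a position p, in a game that White wins by reaching a winning
% terminal position in finitely many moves:
\begin{align*}
\operatorname{val}(p) &= 0
  && \text{if } p \text{ is already a win for White},\\
\operatorname{val}(p) &= \min\{\operatorname{val}(q)+1 : q \text{ reachable by a White move from } p\}
  && \text{if White is to move},\\
\operatorname{val}(p) &= \sup\{\operatorname{val}(q) : q \text{ reachable by a Black move from } p\}
  && \text{if Black is to move},
\end{align*}
% provided the values on the right exist. A position has value omega exactly
% when Black, to move, can steer the remaining value to any finite n he likes,
% even though every choice still leaves White with a win in finitely many moves.
```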
KK: Well, knowing that you like logic and philosophy so much, now I see why you like these games, right? This is kind of a logic puzzle, in some sense. Yeah. Yeah.
EL: Well, and what made you choose this theorem as your favorite theorem?
JDH: Well, I mean, there's a lot of things about it to like. First of all, what I mentioned already: it forces you to get clear on the definitions, which I find to be interesting. And also it has many different proofs. I mean, as I mentioned, there are already three different proofs, three different elementary proofs. But some of those proofs lead immediately to stronger theorems. For example, there's a slightly more relaxed notion of finite game, where you have a game tree so that all plays are finite, but the tree itself doesn't have to be finite. So this would be what's called a well-founded tree. And then these would be called the clopen games, because in the product topology, the winning condition amounts to a clopen set in that case. And then the Gale-Stewart theorem, proved in the ‘50s, is that infinitely long games whose winning condition is an open set are also determined: one of the players has a winning strategy. That's open determinacy. And then Tony Martin generalized that to Borel determinacy. So Borel games also have this property: infinite games whose winning condition is a Borel set in the product space are determined, one of the players has a winning strategy. And then you can ask, well, maybe all games are determined, in the sense that one of the players has a winning strategy, whether or not the winning condition is Borel. And this is called the axiom of determinacy. And it's refutable from the axiom of choice, for games on omega, say; you can refute it. But it turns out that if you drop the axiom of choice, then that axiom has enormous consistency strength. It has large cardinal strength. In large cardinal set theory, the strength of the axiom of determinacy is infinitely many Woodin cardinals, if you've ever heard of these large cardinals. And for example, under AD, the axiom of determinacy, it follows that every set of reals is Lebesgue measurable, and every set of reals has the property of Baire. And so there's all these amazing set-theoretic regularity consequences from that axiom, which is just about playing these games.
And so, the way I view the whole topic, the fundamental theorem of finite games just leads you on this walkway to these extremely deep ideas that come along much later. And I just find that whole thing so fascinating. That's why I like it so much.
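A compact statement of the open-determinacy step he describes, in the standard Gale-Stewart formulation (my paraphrase, not a quotation from the episode):

```latex
% Two players alternately choose elements of a set X, building an infinite
% sequence x in X^omega; player I wins iff x lands in the payoff set A.
\[
  \text{Gale-Stewart (1953): if } A \subseteq X^{\omega} \text{ is open (or closed)}
  \text{ in the product topology, then the game } G(A) \text{ is determined.}
\]
% Martin (1975) extended this to all Borel payoff sets (Borel determinacy),
% and the axiom of determinacy (AD) asserts determinacy for every payoff set
% contained in omega^omega, which contradicts the axiom of choice but has the
% regularity consequences mentioned above.
```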
EL: So I guess I want to pull you out of these extremely deep things into something much more shallow, which is, are there games like, games that kids play, or that like, people play, that are infinite games by their nature?
JDH: Oh, I see. I mean, I've studied quite a number of different infinite games. For example, well, I have a master's student now; he just wrote his master's dissertation on infinite checkers, but we're in the UK, so we call it infinite draughts. But I guess ordinary checkers is usually just on an eight by eight board, and so that doesn't count. So I've done some work on infinite Connect Four, I mean, infinitary versions of Connect Four, and infinite Sudoku and infinite Go. And there are infinite analogs of many of these games that are quite interesting.
EL: I guess what I'm wondering is, so, in my mind, maybe there's a game where you can, like, go back and forth forever. And there's no rule in the game that says you can't just like walk towards the opponent and then walk backwards. And then the game tree might not be finite, or maybe I don't quite understand how the game tree would work.
JDH: No, no, you're absolutely right about that. Yeah, in that situation, the game tree would be infinite. I mean, it's related — for example, in chess, there's this threefold repetition rule. If you repeat the situation three times, then it's a draw. But the actual rule isn't that it's automatically a draw, but that either player is allowed to call it a draw. Right? And that's a difference. Because if both players choose not to call a draw, then actually the game tree of chess would be infinite, because you could just keep moving back and forth forever. And there's also another rule, which is not so well known unless you play a lot of tournament chess, which is the 50-move rule. And this is the rule they use in chess tournaments, where if there are 50 moves without a pawn move or a capture, then it's a draw. And the reason for that is — I mean, of course, I view both of those rules as kind of practical rules, just to have an end of the game so that it doesn't just go on forever in the way that you described. But when we were deciding on the rules for infinite chess, we just got rid of those rules, and we thought, look, if you want to play forever, that's a draw. So any infinite play is a draw, is the real rule, to my way of thinking. And the reason why we have the threefold repetition rule and the 50-move rule is just that those are proxies for the real rule, which is that infinite play is a draw. But it doesn't quite answer your question. I'm sorry, I don't know any children's games that are just naturally infinite already. Name the biggest number.
EL: Yeah, well I’ve also been sitting here wondering, like, can you make a game — like other children's games that aren't maybe on a board? Tic tac toe, of course, is the first thing I think, but then what about Duck Duck Goose [for the Minnesotans: Duck Duck Grey Duck] or Simon Says, or these other children's games that aren't really board games? You know, can you even work those into the framework of these finite games, or infinite games, or not? I don’t know.
JDH: There is this game of Nim, where you play with stacks of coins and you remove coins from one stack or another. It's a beautiful game with a really nice resolution to it in terms of the strategy, but that game has infinitary versions where the stacks are allowed to have infinite ordinal heights. And basically, the classic proof of the Nim strategy works just as well for ordinals. And so if you think about the sort of balancing strategy, where you write the stack sizes in base two, and so on, well, ordinals have a base two representation in ordinal arithmetic also, and you can still carry out the balancing strategy even when the stacks have infinite height.
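The balancing strategy he alludes to is the classic nim-sum rule for ordinary finite Nim: XOR the pile sizes, and the position is a loss for the player to move exactly when the XOR is zero; otherwise some move makes it zero. A minimal sketch of the finite case (the infinite version he describes replaces binary XOR with base-2 ordinal arithmetic and is not attempted here):

```python
from functools import reduce
from operator import xor

def nim_winning_move(piles):
    """Return (pile_index, new_size) for a move leaving the nim-sum zero,
    or None if the position is already losing for the player to move."""
    s = reduce(xor, piles, 0)
    if s == 0:
        return None                  # balanced: every move unbalances it
    for i, p in enumerate(piles):
        target = p ^ s               # size that would rebalance via this pile
        if target < p:               # must be a legal removal
            return i, target
    return None                      # not reached when s != 0

print(nim_winning_move([3, 4, 5]))   # (0, 1), since 1 ^ 4 ^ 5 == 0
```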
EL: It never occurred to me to think about infinite ordinal height Nim.
KK: Well, you never have that many toothpicks, right?
EL: Yeah. Yeah. Limited by my environment, I guess. I must say that I've spent a lot of my mathematical career sort of avoiding things with too much infinity in them, because they're very intimidating to me. So maybe we have kind of different mathematical outlooks.
KK: Right. Yeah. So the other thing we like to do on this podcast is invite our guests to pair their theorem with something. So what pairs well, with the fundamental theorem of finite games?
JDH: Well, there's only one possible answer to this, and I worry that maybe I'm cheating by saying, of course, I have to pair it with the game of chess.
KK: Sure.
JDH: Because Zermelo’s theorem was really focused on chess, and he proved that, look, in chess, either White or Black has a winning strategy, or else both of them have drawing strategies. And I never played a lot of chess when I was a child, but when I had kids, they got involved in the scholastic chess scene in New York, which is quite hyperactive, and fascinating. And so my kids were playing a lot of chess, and I went to hundreds of chess tournaments and so on, and so I started playing chess, and I learned a huge amount of chess because they had such great coaches at their schools and so on. But actually, I'm a pretty mediocre chess player even after having played now for so many years. And one of my coauthors on the infinite chess papers that I wrote is quite a talented chess player. He's a national master, Cory Evans. He was a philosophy graduate student when I met him at the City University of New York, which is where my appointment was at the time. And so I got to meet a lot of really talented chess players, and it was really great working with him on that infinite chess stuff, because I realized that actual chess knowledge is really focused on the 8 by 8 board, and that once you go to these much bigger boards, even the chess grandmasters become a little bit at sea. And so I would know what I'm trying to do mathematically to create these positions with high game values, and I would show them this crazy position with, you know, 20 bishops and hundreds of rooks and so on. And I would talk a little about it, and he would say, hang on, this pawn is hanging here, it's totally unprotected. And it would completely ruin my position. So their chess ability, their chess-reading ability, was such that they could look at these crazy infinite positions and point out flaws with the position. And that was really something that was important for our collaboration. These chess positions are so finicky, these huge, infinite ones. And so many details are riding on whether the pieces are protected properly, and whether — because oftentimes, you have to argue that the play has to proceed according to this main line. And if you want to prove the theorem, you have to really prove that. And if there's some little upset that means that the flow of play isn't exactly like what you thought, then the whole argument basically falls apart. And so it really did depend on all of that. So I really had a great time interacting with a lot of these talented chess players. It was really fantastic.
KK: I’m a lousy chess player.
EL: Is there a lot of interest among chess players in the mathematical study of the game of chess? Even leaving aside the infinite versions, but, you know, the finite version. I assume there are theorems being proved about regular chess. Do players care about them much?
JDH: Well, some of them definitely do. And actually, there's a huge overlap, of course, between chess players and mathematicians.
EL: Oh, yeah, that's true.
JDH: And so maybe a lot of the interest is coming from that overlap. But, for example, there was a problem that I had asked, I think I asked it on Mathoverflow. Take chess pieces. On an empty board, take a full set of chess pieces and just throw them at the board. So you get some position. What's the chance that it's a legal position? So in other words, a random assignment of pieces. And you can make some calculations and prove some interesting things about the likelihood that it's a legal position. In other words, a legal position meaning one that could in principle arise in a game, in a legal game, right. And…
EL: Do you happen to recall any ballpark, you know, is this, like a 1% chance?
JDH: It’s way less than 1%. It’s exceedingly unlikely.
EL: Okay.
JDH: If you insist on all the pieces, then there haven't been any captures. So the pawns have to be sort of perfect: there has to be one in each column, opposing each other. And already just because of that, it's extremely unlikely to happen if you have all the pieces. And then some other people answered on Mathoverflow, I think giving better bounds when you don't have all the pieces and so on. But it wasn't quite open. But I think the general conclusion was that it's extremely unlikely that you get a legal position.
KK: Well, that makes sense. Given the complexity of the moves, it would be pretty remarkable if a random placement would actually work.
JDH: There are some amazing — there’s a book by Raymond Smullyan about the “chess detective,” and he has many of these scenarios. He gives you a chess position, and you have to deduce: what was the previous move? Because these positions are often extremely strange. You think, “How could that possibly arise?” So there's a sort of logic to it. I mean, he's a logician. And so they're sort of chess logic puzzles, to figure out what the previous move was. And there's often a story associated with the game, you know, so it must have been Black who was the murderer because… It’s really some fascinating work that way. I really like that.
KK: Well, this has been informative. I've certainly learned a lot today.
EL: Yeah.
KK: So we like to give our guests a chance to advertise. Where can we find you online? And if there's anything you want to promote, we're happy to let you do it.
JDH: Oh, I see. Well, you can find me online, I have a blog, jdh.hamkins.org. And also, I'm on Twitter, and also on Mathoverflow. And I just published a number of books. One of them I mentioned already; it's called Proof and the Art of Mathematics. And this is a book for aspiring mathematicians to learn how to write proofs, Proof and the Art of Mathematics, with MIT Press. And I have another book, a philosophy book, called Lectures on the Philosophy of Mathematics, also with MIT Press. And that is a book that I use for my lectures on the philosophy of mathematics here in Oxford. And it's, I would say, a kind of grounded-in-mathematics perspective on issues in the philosophy of mathematics.
EL: Yeah, and if I can praise you a little bit, I will say something that I have enjoyed ever since I've been following you is that some of the things you write are about, like, very technical, you know, deep mathematical things. But you've also had some really neat, like, puzzles that you've shared with children and stuff like that. I remember I was working on a Math Circle project one time about paper folding and cutting, and you had a fun one; I think it was, you show someone a configuration of holes in a piece of paper and say, can you fold the paper so that you just have to punch one hole in the folded paper to get the holes looking like this? Or something like that. And so it kind of spans a big range of mathematical sophistication, and what level you want to jump into something. So I think that's something fun, and other people who might be looking for activities like that might enjoy it.
JDH: Thank you so much. I'm so glad to hear you mention that project. Those projects are all available on my blog if you click on the Math for Kids link, which is one of the buttons on my blog. And they all arose because I was going into my daughter's school every year, or a couple times a year, with these different projects, including that one and a number of other ones. So I have about a dozen or more math for kids projects on my blog.
KK: Very cool.
EL: Well, thanks for joining us. I enjoyed talking about chess, a game that I have probably played, you know, 10 times in my life.
JDH: Well, it's a pleasure to be here. Thank you so much for having me.
KK: Yeah. Thanks.
[outro]
In this episode of the podcast, we were happy to talk with Joel David Hamkins, a mathematician and philosopher (or is that philosopher and mathematician?) at the University of Oxford, about the fundamental theorem of finite games. Here are some links you might enjoy perusing after you listen to the episode.

His website, Twitter, and Mathoverflow pages
On his website, check out Math for Kids for some fun activities for all ages
His books Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics
The Wikipedia page about the fundamental theorem of finite games
The PBS Infinite Series episode on infinite chess
The Mathoverflow question and answers about legal chess board positions

  continue reading

93集单集

Artwork

Episode 70 - Joel David Hamkins

My Favorite Theorem

345 subscribers

published

icon分享
 
Manage episode 302868105 series 1516226
内容由Kevin Knudson and Evelyn Lamb提供。所有播客内容(包括剧集、图形和播客描述)均由 Kevin Knudson and Evelyn Lamb 或其播客平台合作伙伴直接上传和提供。如果您认为有人在未经您许可的情况下使用您的受版权保护的作品,您可以按照此处概述的流程进行操作https://zh.player.fm/legal

Evelyn Lamb: Hello, and welcome to my favorite theorem, a math podcast with no quiz at the end. I'm Evelyn Lamb, one of your hosts. I'm a freelance math and science writer in Salt Lake City, Utah, and this is your other host.
Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. I like your background, Evelyn. Our listeners can't can't see it. But it looks like you had a nice camping trip somewhere in Utah.
EL: Yes, that's actually a couple years ago — we’re going to describe this in great detail for all the listeners who can't see it, no — this was a camping trip in Dinosaur National Monument on the Utah-Colorado border, which is a really cool place to visit.
KK: Excellent. It’s beautiful over there. Yeah. So how's things?
EL: Oh, not too bad. A bit smoky here, we're getting a lot of wildfire smoke from the west coast. And it, you know, makes some of those outdoor activities that are so fun, a little less fun. So I hope it clears out.
KK: Right. Yeah, well, my big adventure lately was getting my son settled into his new apartment in Vancouver. And it was my first time on an airplane in a year and a half. That was weird. And then of course, Vancouver, such a lovely city, though. We had a good time. So he's all set up and starting grad school and his nice new adventure.
EL: Yeah. And he's kind of, almost as far as he can be in a populated place in North America from you.
KK: That’s correct. That's, like, almost 3000 miles. That's far.
EL: Yeah.
KK: It’s great. Anyway, enough about that.
EL: Yeah. Well, today, we're very happy to have Joel David Hamkins join us. Joel, would you like to introduce yourself and say a little bit about yourself?
Joel David Hamkins: Yeah, sure. I am a mathematician and philosopher at the University of Oxford. Actually it’s something of an identity crisis that I have, whether I'm a mathematician or a philosopher, because my original training was in mathematics. My PhD was in mathematics. And for many, many years, I counted myself as a mathematician. But somehow, over the years, my work became increasingly concerned with philosophical issues, and I managed somehow to turn myself into a philosopher. And so here in Oxford, my main appointment is in the philosophy faculty, although I'm also affiliated with the mathematics department. And so I don't really know what I am, whether a mathematician or a philosopher. I do work in mathematical logic and philosophical logic, really all parts of logic, and especially connected with the mathematics and philosophy of the infinite.
KK: Well, math is just kind of, it's just applied philosophy anyway, right?
EL: This might be a dangerous question. But what are — like, do you feel a big cultural difference between being in a math department and a philosophy department?
JDH: Oh, there's huge cultural differences between math and philosophy, I mean, on many different issues, and they come up again and again, when I'm teaching especially, or interacting with colleagues and so on. I mean, for example, there's a completely different attitude about reading original works. In philosophy, this is very important to read the original authors, but in mathematics, we tend to read the newer accounts, even of old theorems, and maybe for good reason, because oftentimes, those newer accounts, I think, become improved with greater understanding or more connections with other work and so on. Okay, but one can certainly understand the value of reading the original authors. And there's many other issues like that, cultural differences between math and philosophy.
EL: Yeah, well, I, when I was at the University of Utah, I taught math history a couple times. And I was trying to use some original sources there. And it is very difficult to read original sources in math. I don't know if part of that is just because we aren't used to it. But part of it, I do feel like the language, and the way we talk about things changes a lot really quickly. It makes it so that even reading papers from the 1920s or something, you sometimes feel like, “What are they talking about?” And then you find out Oh, they're just talking about degree four polynomials, but they're using terms that you just don't use anymore?
JDH: Yeah, I think that's absolutely right. And oftentimes, though, it's really interesting when there's an old work that uses what we consider to be modern notation. Like if you look at Cantor's original writings on the ordinals, say, it's completely contemporary. His notation, he writes things that contemporary set theorists would be able to understand easily, and even even to the point of using the same Greek letters, alpha and beta to represent ordinals, and so on, which is what we still do today. And so it's quite remarkable when the original authors’ notation survives. That's amazing, I think.
EL: Yeah. So we invited you here to share your favorite theorem. What have you decided to share with us today?
JDH: Well, I found it really difficult to decide what my favorite theorem is. But I want to tell you about one of my most favorite theorems, which is the fundamental theorem of finite games. So this is the theorem, it was proved by Zermelo in 1913. And it's the theorem that asserts that in any two player finite game of perfect information, one of the players has a winning strategy, or else, both players have drawing strategies if it's a game that allows for draws. So that's the theorem.
EL: So what is the definition of a game? This is, like, the most mathematical thing to set, you know, like, yeah, okay, games, we've all played them starting from when we were, you know, two years old or something. But now we need to sit down and define it.
JDH: Yeah. Well, that's one of the things that really excites me about this theorem, because it forces you to grapple with exactly that question. What is a game? What is a finite game? What does it mean? And it's not an easy question to answer. I mean, the theorem itself, I think, is something that you might think is obvious. For example, if you think about the game of chess, well, maybe you would think it's obvious that, look, either one of the players has a winning strategy, or they both have drawing strategies. But when it comes to actually proving that fact, then maybe it's not so obvious, even for such a game as chess. And then you're forced to grapple with exactly the question that Evelyn just asked, you know, what is a finite game? What is a strategy? What is a winning strategy? What does it mean to have a winning strategy, and so on?
So there's a wonderful paradox that surrounds the issue of finite games called the hypergame paradox. So if you have a naive account of what a finite game is, maybe you think a finite game is a game so that all plays of the game finish in finitely many moves or something. Okay, so that seems like a kind of reasonable definition of a finite game. And then there's this game called hypergame. And the way that you play hypergame is, say, if you and I are going to play hyper game, then the first player, maybe you go first, you choose a finite game. And then we play that game. And that's how you play hypergame. So the first player gets to pick which finite game you're going to play. And then you play that game. And if we said every finite game is a game, so that all plays end in finitely many moves, then it seems like this hypergame would be a finite game, because you picked a finite game, and then we play it, and then the game would have to end in finitely many moves. So it seems like hyper game itself is a finite game. But then, the paradox is that if hypergame is a finite game, then you could pick hypergame as your first move.
KK: Right.
JDH: Okay, but then we play hypergame. But we just said when playing hypergame, it's allowed to play hypergame as the first move. So then I would pick hypergame as my first move in that game. And then the next move would be to start playing hypergame. And then you could say, hypergame again. And then I could say hypergame, and so on, and we could all just say hypergame, all day long, forever. But that would be an infinite play. And so what's going on? Because it seems contradictory. We proved first, that every play of hyper game ends in finitely many moves, but then we exhibited a play that didn't end. So that's a kind of paradox that results from being naive about the answer to Evelyn's question, what is a finite game? If you're not clear on what a finite game is, then you're going to end up in this kind of Russell paradox situation.
KK: Exactly. Yeah. That's what I was thinking of, the sets that don't contain themselves. Right.
JDH: Right, exactly. It's actually a bit closer to the what's called the Burali-Forti paradox, which is the paradox of the class of all ordinals is well ordered, but it's bigger than any given ordinal. And so it's something like that. So if you think a lot about what a finite game is, then you're going to be led to the concept of a game tree. And a game tree is the sort of tree of all possible positions that you might get to. So there's the initial position at the start of the game, and the first player has some options that they can move to, and those are the sort of the child nodes of that root node. And then those nodes lead to further nodes for the choices of the second player, and so on. And so you get this game tree. And the thing about a finite game is that — well, one reasonable definition of finite game is that the whole game tree should be finite. So that in finite infinitely many moves, you're going to end up at a leaf node of the tree, and every leaf node should be labeled as a win for one of the players or the other, or as a draw, or whatever the outcomes are. And so if you have the finite tree conception of what it means to be a finite game, then hypergame is not a finite game, because at the first move, there are infinitely many different games you could choose, and so the game tree of hyper game won't be a finite tree, it will be an infinite tree. It will be infinitely branching at its first node.
And so if you have this game tree conception, then a finite game can mean a game with a finite game tree. And then we can understand the fundamental theorem of finite games. So my favorite theorem is the assertion that in any finite game like that, with a finite game tree, then one of the players has a winning strategy, or both players have drawing strategies. And what does it mean to be a strategy? I mean, what is a strategy? If you think about chess strategies or strategies as they're talked about conventionally, then oftentimes, people just mean I kind of heuristic, you know, the strategy of control the center or something, but in mathematics, we want a more precise notion of strategy. So controlling the center isn't really a strategy; that’s just a heuristic. It doesn't tell you actually what to do. So a strategy is a function on the game tree that tells you exactly which moves to make whenever it's your turn. And then a play of the game is basically a branch through the game tree. And it conforms with this strategy if whenever it was your turn at a node in the game tree, then it did, what the strategy was telling you to do at that node. So a strategy is winning if all the plays that conform with that strategy end up in a win for that player.
KK: Okay, so is this the point of view Zermelo took when he proved this?
JDH: So yes, Zermelo didn't have, he didn't quite have it all together. And this is maybe related to the fact that we don't actually read Zermelo’s original paper now when we want to prove the fundamental theorem of finite games because we have a much richer understanding, I think, of this theorem now. I mean, for example, in my in my proof-writing book, I gave three different proofs of this theorem. And Zermelo didn't have the concept of a game tree, or even a finite game in terms of game trees, like I just described. Rather, he was focused specifically on the game of chess, and he was thinking about positions in chess as pictures of the board with the pieces and where they are. But nowadays, we don't really think of positions like that, and there's a kind of problem with thinking about positions like that. Because if you think about a position in chess, like a photograph of the board, then you don't even know whose turn it is, really, because the same position can arise, and it could be different players’ turns, and you don't know, for example, whether the king has moved yet or not, but that's a very important thing, because if the king has moved already, then castling is no longer an option for that player.
KK: Right.
JDH: Or, or you need to know also what the previous move was, in order to apply the en passant rule correctly, and so on. So if you just have the board, you don't know whether en passant is allowed or not. I mean, en passant is one of these finicky rules with the pawn captures where you can take it if the previous player had moved two steps, and your pawn is in a certain situation, then you can capture anyway. So I mean, it's a technical thing, it doesn't matter too much. But the point is that just knowing the photograph of the board doesn't tell you doesn't tell you whose turn it is, and it doesn't tell you all the information that you need to know in order to know what the valid moves are. So we think of now a position in a game is a node in the game tree, and that has all the all of the information that you need.
So Zermelo’s proof was concerned with games that had the property that there were only finitely many possible configurations, like chess. There are only finitely many situations to be in, in chess. And he argued on the basis of that, but really, it amounts to arguing with the finiteness of the game tree in the end.
KK: Yeah. So is this at all related to Nash's equilibrium theorem? So I was scrolling Twitter the other day, as one does when one has nothing else to do, and I saw a tweet that had, it was a screenshot. And it was the entire paper that Nash published in the Proceedings of the National Academy proving his equilibrium theorem. I mean, it's a remarkable thing that it's just a few paragraphs. Is there any connection here?
JDH: Right, so actually, this kind of nomenclature, there are three different subjects connected with game theory. There’s game theory, which, Nash equilibrium and so on is usually considered part of game theory. And then there's another subject standing next to it, which is often called combinatorial game theory, or it's sometimes also called the the theory of games. And this is the the study of actual games like Nim, or chess, or Go, and so on, or the the Conway game values are part of this subject. And then there's a third subject, which I call the the logic of games, which is a study of things like the fundamental theorem of finite games, and the sort of the logical properties. So to my way of thinking, the Nash equilibrium is not directly connected with the theory of games, but rather is a core concept of game theory, which is studying things like the stability of probabilistic strategies, and so on, whereas combinatorial game theory isn't usually about those combinatorial strategies, but about sort of logically perfect strategies, and optimal play, and so on. And that isn't so much connected to my way of thinking with the Nash equilibrium.
KK: So fine, I hand you a finite game. It has a winning strategy. Can you ever hope to find it?
JDH: Oh, I see. Is it computable? Well, yeah, this is the big issue. Even in chess, for example, I mean, chess has a finite game tree. And so in principle, we can, in the sense of computability theory, there is a computable strategy you can prove, because it's a finite game. So the strategy is a finite function, and every finite function like that is definitely computable. And we can even say more about how to find the strategy. I mean, one of the proofs is this backpropagation proof, you look at the game tree, and you propagate. You know that the leaf nodes, the terminal nodes are labeled as a win, and you can propagate that information up the tree. But it turns out that the game tree of chess is so enormous…
KK: Right.
JDH: That it wouldn't fit in the universe, even if you used every single atom to represent a node of the tree. And so in that sense, you can never write a computer program that would compute the perfect chess play. It's just too big, the strategy. If you're really talking about the strategy on the whole game tree, then the game tree is just too enormous to fit in the universe. And so it's not a practical matter. But theoretically, like in terms of Turing computability or something, then of course it's computable. There are some other issues. For example, I've done a lot of work with infinitary games. I mean, this connects my interest with infinity. And it turns out that there are some, some positions, say, in infinite chess, which I've studied, we identified computable positions in infinite chess, for which White has a winning strategy, but but there's no computable winning strategy. So if the players play computably, in other words, according to a computable procedure, then it will be a draw. So it's very interesting, this interaction between optimal play, and computable optimal plays not always, they don't always align.
EL: So I've got to ask you to back up a little bit. What is infinite chess?
KK: Yeah.
JDH: Oh, I see infinite chess. So imagine a chessboard without any border. It just goes forever in all four directions.
EL: So you start with the same, you know, don't start with infinite number — or, yeah, I’ll let you keep going.
JDH: So of course, infinite chess. I mean, we can imagine playing infinite chess, you know, at a café or something, but it's not a game that that you sit down in a café to play. It's a game that mathematicians think, “What would it be like to play if this if the board looked like this, or if it looked like that?” And so there's no standard starting position. You present as a starting position, and you say, “Well, in this position, it has a very interesting property.” So Richard Stanley asked the question, for example, on mathoverflow. And that sparked my interest in this question. Well, one of the things that we proved was that you can have positions in infinite chess that White has a winning strategy, so it's going to win in finitely many moves, it’s going to make checkmate in finitely many moves. But it's not made in n for any n, for any finite n. In other words, Black can make it take as long as he wants, but it's hopeless. In other words, Black can say, “Oh, this time, I know you're going to win this time, but it's going to take you 100 moves.” Or he could say, “This time, it's going to take you a million moves.” For any number, he can delay it that long, but still White is going to win infinitely many moves following the winning strategy. So these are called games with game value omega, and, and then we produce positions with higher game, higher ordinal game values, omega squared, omega cube, and so on, the current record is omega to the fourth. So these transfinite game values come in. And so it's really quite fascinating how that happens.
KK: Well, knowing that you like logic and philosophy so much, now I see why you like these games, right? This is kind of a logic puzzle, in some sense. Yeah. Yeah.
EL: Well, and what made you choose this theorem as your favorite theorem?
JDH: Well, I mean, there's a lot of things about it to like. First of all, what I mentioned already, it forces you to get clear on the definitions, which I find to be interesting. And also it has many different proofs. I mean, as I mentioned, there are already three different proofs, three different elementary proofs. But some of those proofs lead immediately to to stronger theorems. For example, there's a slightly more relaxed notion of finite game, where you have a game tree, so that all plays are finite, but the tree itself doesn't have to be finite. So this would be what's called a well-founded tree. And then these would be called the clopen games, because in the in the product topology, the winning condition, amounts to a clopen set in that case. And then the Gale-Stewart theorem proved in the ‘50s, is that infinitely long games whose winning condition is an open set, those are also determined: one of the players has a winning strategy, open determinacy. And then Tony Martin generalized that to Borel determinacy. So Borel games also have this property. So infinite games whose winning condition is a Borel set in the product space are determined, one of the players has a winning strategy. And then if you ask, Well, maybe all games are determined, in the sense that one of the players has a winning strategy, you whether or not the winning condition is Borel or not. And this is called the axiom of determinacy. And it's refutable from the axiom of choice for games on omega, say, you can refute it. But it turns out that if you drop the axiom of choice, then the consistency strength of that axiom has enormous strength. It has large cardinal strength. In large cardinal set theory, the strength of the axiom of determinacy is infinitely many wooden cardinals, if you've ever heard of these large cardinals. And for example, under AD, the axiom of determinacy, it follows that every set of reals is Lebesgue measurable, and every set of reals has the property of Baire. And so there's all these amazing regularity set theoretic consequences from that axiom, which is just about playing these games.
And so the way I view the whole topic is that the fundamental theorem of finite games just leads you on this walkway to these extremely deep ideas that come along much later. And I just find that whole thing so fascinating. That's why I like it so much.
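[Editor's note: as a companion to the theorem under discussion, here is a short Python sketch of our own of the standard backward-induction argument on a finite game tree, with draws omitted for simplicity: stamp each terminal node with its winner, then propagate upward, since the player to move wins exactly when some move leads to a position they win. This illustrates the general idea; it is not code from any of the proofs Hamkins mentions.]

```python
# Backward induction on a finite two-player game tree with no draws.
# Each node is either a terminal node carrying its winner, or an internal node
# listing the positions the player to move can reach. The function returns the
# player who has a winning strategy from the given node, which always exists,
# as the fundamental theorem of finite games asserts.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    player: str                      # "White" or "Black": whose turn it is here
    winner: Optional[str] = None     # set on terminal nodes only
    children: List["Node"] = field(default_factory=list)

def opponent(player: str) -> str:
    return "Black" if player == "White" else "White"

def value(node: Node) -> str:
    """Return the player with a winning strategy from this node."""
    if node.winner is not None:      # terminal position: the winner is stamped on it
        return node.winner
    child_values = [value(child) for child in node.children]
    # The player to move wins iff some move leads to a position they win.
    return node.player if node.player in child_values else opponent(node.player)

if __name__ == "__main__":
    # A tiny tree: White to move, one move loses immediately, the other leads to a win.
    losing = Node(player="Black", winner="Black")
    winning = Node(player="Black", children=[Node(player="White", winner="White")])
    root = Node(player="White", children=[losing, winning])
    print(value(root))   # -> "White"
```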
EL: So I guess I want to pull you out of these extremely deep things into something much more shallow, which is, are there games like, games that kids play, or that like, people play, that are infinite games by their nature?
JDH: Oh, I see. I mean, I've studied quite a number of different infinite games. For example, I have a master's student now who just wrote his master's dissertation on infinite checkers, but we're in the UK, so we call it infinite draughts. I guess ordinary checkers is usually just on an eight-by-eight board, so that doesn't count. I've also done some work on infinite Connect Four, I mean, infinitary versions of Connect Four, and on infinite Sudoku and infinite Go. And there are infinite analogs of many of these games that are quite interesting.
EL: I guess what I'm wondering is, so, in my mind, maybe there's a game where you can, like, go back and forth forever. And there's no rule in the game that says you can't just like walk towards the opponent and then walk backwards. And then the game tree might not be finite, or maybe I don't quite understand how the game tree would work.
JDH: No, no, you're absolutely right about that. Yeah, in that situation, the game tree would be infinite. It's related, for example, to the threefold repetition rule in chess: if the same position is repeated three times, then it's a draw. But the actual rule isn't that it's automatically a draw; it's that either player is allowed to call it a draw. And that's a difference, because if both players choose not to call the draw, then the game tree of chess actually would be infinite, because you could just keep moving back and forth forever. There's also another rule, which is not so well known unless you play a lot of tournament chess, which is the 50-move rule. This is the rule they use in chess tournaments, where if there are 50 moves without a pawn move or a capture, then it's a draw. I view both of those rules as kind of practical rules just to make the game end, so that it doesn't just go on forever in the way that you described. But when we were deciding on the rules for infinite chess, we just got rid of those rules, and we thought, look, if you want to play forever, that's a draw. So “any infinite play is a draw” is the real rule, to my way of thinking. And the reason why we have the threefold repetition rule and the 50-move rule is just that those are proxies for the real rule, which is that infinite play is a draw. But that doesn't quite answer your question. I'm sorry, I don't know any children's games that are just naturally infinite already. Name the biggest number, maybe.
EL: Yeah, well, I've also been sitting here wondering: can you make something out of other children's games that aren't on a board? Tic-tac-toe, of course, is the first thing I think of, but then what about Duck Duck Goose [for the Minnesotans: Duck Duck Grey Duck] or Simon Says, or these other children's games that aren't really board games? Can you even work those into the framework of these finite games, or infinite games, or not? I don't know.
JDH: There is this game of Nim, where you play with stacks of coins and remove coins from one stack or another. It's a beautiful game with a really nice resolution in terms of the strategy, and that game has infinitary versions where the stacks are allowed to have infinite ordinal heights. Basically, the classic proof of the Nim strategy works just as well for ordinals. So if you think about the balancing strategy, where you look at the stacks in base two and so on, well, ordinals also have a base-two representation in ordinal arithmetic, and you can still carry out the balancing strategy even when the stacks have infinite height.
EL: It never occurred to me to think about infinite ordinal height Nim.
KK: Well, you never have that many toothpicks, right?
EL: Yeah. Limited by my environment, I guess. I must say that I've spent a lot of my mathematical career sort of avoiding things with too much infinity in them, because they're very intimidating to me. So maybe we have kind of different mathematical outlooks.
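[Editor's note: since ordinary finite Nim came up just above, here is a minimal Python sketch of our own of the classical balancing strategy: the player to move wins exactly when the bitwise XOR (the "nim-sum") of the stack heights is nonzero, and the winning move restores the nim-sum to zero. The ordinal version Hamkins describes replaces base-two sums of natural numbers with base-two representations of ordinals.]

```python
# Classical finite Nim: the player to move wins iff the XOR ("nim-sum") of the
# stack heights is nonzero, and the winning move rebalances the nim-sum to zero.
from functools import reduce
from operator import xor
from typing import List, Optional, Tuple

def nim_sum(stacks: List[int]) -> int:
    return reduce(xor, stacks, 0)

def winning_move(stacks: List[int]) -> Optional[Tuple[int, int]]:
    """Return (stack index, new height) for a balancing move, or None if the
    position is already balanced (i.e. a loss for the player to move)."""
    s = nim_sum(stacks)
    if s == 0:
        return None
    for i, height in enumerate(stacks):
        target = height ^ s
        if target < height:          # we may only remove coins, never add them
            return i, target
    return None                      # unreachable when s != 0

if __name__ == "__main__":
    position = [3, 5, 7]
    print(nim_sum(position))         # 1, so the player to move can win
    print(winning_move(position))    # (0, 2): reduce the stack of 3 to 2
```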
KK: Right. Yeah. So the other thing we like to do on this podcast is invite our guests to pair their theorem with something. So what pairs well, with the fundamental theorem of finite games?
JDH: Well, there's only one possible answer to this, and I worry that maybe I'm cheating by saying, of course, I have to pair it with the game of chess.
KK: Sure.
JDH: Because Zermelo’s theorem was really focused on chess, and he proved that, look, in chess, either White or Black has a winning strategy, or else both of them have drawing strategies. I never played a lot of chess when I was a child, but when I had kids, they got involved in the scholastic chess scene in New York, which is quite hyperactive, and fascinating. So my kids were playing a lot of chess, and I went to hundreds of chess tournaments and so on, and I started playing chess myself and learned a huge amount of it, because they had such great coaches at their schools. But actually, I'm a pretty mediocre chess player even after having played now for so many years. One of my coauthors on the infinite chess papers is a quite talented chess player. He's a national master, Cory Evans. He was a philosophy graduate student when I met him at the City University of New York, which is where my appointment was at the time. And so I got to meet a lot of really talented chess players, and it was really great working with him on that infinite chess stuff, because I realized that actual chess knowledge is really focused on the 8-by-8 board, and that once you go to these much bigger boards, even the chess grandmasters become a little bit at sea. So I would know what I was trying to do mathematically to create these positions with high game values, and I would show him this crazy position with, you know, 20 bishops and hundreds of rooks and so on. And I would talk a little about it, and he would say, hang on, this pawn is hanging here, it's totally unprotected. And it would completely ruin my position. So their chess reading ability was such that they could look at these crazy infinite positions and point out flaws in them, and that was really important for our collaboration. These chess positions are so finicky, these huge, infinite ones, and so many details are riding on whether the pieces are protected properly, because oftentimes you have to argue that the play has to proceed according to a certain main line. If you want to prove the theorem, you have to really prove that, and if there's some little upset that means the flow of play isn't exactly what you thought, then the whole argument basically falls apart. So it really was depending on all of that. I really had a great time interacting with a lot of these talented chess players. It was really fantastic.
KK: I’m a lousy chess player.
EL: Is there a lot of interest among chess players in the mathematical study of the game of chess? Even leaving aside the infinite versions, you know, just the finite version. I assume there are theorems being proved about regular chess. Do players care about them much?
JDH: Well, some of them definitely do. And actually, there's a huge overlap, of course, between chess players and mathematicians.
EL: Oh, yeah, that's true.
JDH: And so maybe a lot of the interest is coming from that overlap. But, for example, there was a problem that I asked, I think on Mathoverflow. Take chess pieces. On an empty board, take a full set of chess pieces and just throw them at the board, so you get some position. What's the chance that it's a legal position? In other words, a random assignment of pieces. And you can make some calculations and prove some interesting things about the likelihood that it's a legal position, meaning one that could in principle arise in a legal game, right.
EL: Do you happen to recall any ballpark, you know, is this, like a 1% chance?
JDH: It’s way less than 1%. It’s exceedingly unlikely.
EL: Okay.
JDH: If you insist on all the pieces being on the board, then there haven't been any captures, so the pawn structure has to be essentially perfect: each file has to contain one white pawn and one black pawn opposing each other. And already just because of that, it's extremely unlikely to happen if you have all the pieces. Then some other people answered on Mathoverflow, I think giving better bounds for when you don't have all the pieces and so on. It wasn't exactly an open problem, but I think the general conclusion was that it's extremely unlikely that you get a legal position.
KK: Well, that makes sense. Given the complexity of the moves, it would be pretty remarkable if a random placement actually worked.
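[Editor's note: to give a rough feel for why a random placement is so unlikely to be legal, here is a small Monte Carlo sketch of our own; it is not the Mathoverflow calculation itself. It tests only one necessary condition drawn from the discussion above: with all thirty-two pieces on the board there have been no captures, so each file must carry exactly one white pawn and one black pawn, with the white pawn on a lower rank, and no pawn on the first or last rank. Passing this test is far from full legality, so the estimate only bounds the true probability from above.]

```python
# Monte Carlo check of one *necessary* condition for legality when a full set of
# 32 chess pieces is dropped uniformly at random onto distinct squares: with no
# captures, each file must hold exactly one white pawn and one black pawn, the
# white pawn on a lower rank than the black pawn, and no pawn on rank 1 or rank 8.
# This is far from a full legality check.
import random

PIECES = (["wP"] * 8 + ["bP"] * 8 +
          ["wO"] * 8 + ["bO"] * 8)   # "O" stands in for any non-pawn piece

def pawn_condition_holds(rng: random.Random) -> bool:
    squares = rng.sample(range(64), len(PIECES))   # 32 distinct random squares
    white_pawn_rank, black_pawn_rank = {}, {}      # file -> rank of the pawn there
    for piece, sq in zip(PIECES, squares):
        file_, rank = sq % 8, sq // 8              # rank 0 is the first rank
        if piece == "wP":
            if rank in (0, 7) or file_ in white_pawn_rank:
                return False
            white_pawn_rank[file_] = rank
        elif piece == "bP":
            if rank in (0, 7) or file_ in black_pawn_rank:
                return False
            black_pawn_rank[file_] = rank
    # every file must carry one pawn of each color, white strictly below black
    return all(f in white_pawn_rank and f in black_pawn_rank
               and white_pawn_rank[f] < black_pawn_rank[f]
               for f in range(8))

if __name__ == "__main__":
    rng = random.Random(0)
    trials = 200_000
    hits = sum(pawn_condition_holds(rng) for _ in range(trials))
    print(f"{hits} of {trials} random placements satisfy even the pawn condition")
```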
JDH: There are some amazing examples of this kind of thing. There's a book by Raymond Smullyan about the “chess detective,” and he has many instances of it. He gives you a chess position, and you have to deduce what the previous move was, because these positions are often extremely strange; you think, “How could that possibly arise?” So there's a sort of logic to it. I mean, he's a logician, and so these are chess logic puzzles for figuring out what the previous move was. And there's often a story associated with the game: it must have been Black who was the murderer, because… It's really some fascinating work. I really like that.
KK: Well, this has been informative. I've certainly learned a lot today.
EL: Yeah.
KK: So we like to give our guests a chance to advertise. Where can we find you online? And if there's anything you want to promote, we're happy to let you do it.
JDH: Oh, I see. Well, you can find me online: I have a blog, jdh.hamkins.org, and I'm also on Twitter and on Mathoverflow. And I just published a number of books. One of them I mentioned already; it's called Proof and the Art of Mathematics, with MIT Press, and it's a book for aspiring mathematicians to learn how to write proofs. And I have another book, a philosophy book called Lectures on the Philosophy of Mathematics, also with MIT Press. That is the book I use for my lectures on the philosophy of mathematics here in Oxford, and it's, I would say, a kind of grounded-in-mathematics perspective on issues in the philosophy of mathematics.
EL: Yeah, and if I can praise you a little bit, something I have enjoyed ever since I started following you is that some of the things you write are about very technical, deep mathematical topics, but you've also shared some really neat puzzles with children and things like that. I remember I was working on a Math Circle project one time about paper folding and cutting, and you had a fun one: I think you show someone a configuration of holes in a piece of paper and ask, can you fold the paper so that you just have to punch one hole in the folded paper to get the holes looking like this? Or something like that. So your work spans a big range of mathematical sophistication and of what level you want to jump in at. I think that's something fun, and other people who are looking for activities like that might enjoy it.
JDH: Thank you so much. I'm so glad to hear you mention that project. Those projects are all available on my blog if you click on the math for kids link, which is one of the buttons there. They all arose because I was going into my daughter's school every year, or a couple of times a year, with these different projects, including that one and a number of others. So I have about a dozen or more math for kids projects on my blog.
KK: Very cool.
EL: Well, thanks for joining us. I enjoyed talking about chess, a game that I have probably played, you know, 10 times in my life.
JDH: Well, it's a pleasure to be here. Thank you so much for having me.
KK: Yeah. Thanks.
[outro]
In this episode of the podcast, we were happy to talk with Joel David Hamkins, a mathematician and philosopher (or is that philosopher and mathematician?) at the University of Oxford, about the fundamental theorem of finite games. Here are some links you might enjoy perusing after you listen to the episode.

His website, Twitter, and Mathoverflow pages
On his website, check out Math for Kids for some fun activities for all ages
His books Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics
The Wikipedia page about the fundamental theorem of finite games
The PBS Infinite Series episode on infinite chess
The Mathoverflow question and answers about legal chess board positions
