Unlocking the Brain with David Eagleman
New Episode of The Decision Education Podcast just dropped!
What if when you are imagining the future you are actually remembering the past? That is just one of the fascinating topics I explore in the latest episode of The Decision Education Podcast with my guest, David Eagleman, neuroscientist and host of the Inner Cosmos podcast.
We had a wide-ranging chat exploring how the surprising ways our brains process and interpret information influence the decisions we make, along with sensory hacks, decision quirks, and why “lashing yourself to the mast” could be your best bet against temptation.
Key takeaways from this episode include the way simulating the future helps us combat temporal discounting and make better decisions, the power of Ulysses contracts, and the social dynamics of decision-making.
Thanks to First Round Capital for supporting The Decision Education Podcast—empowering leaders to make choices that shape our future.
Transcript
Annie: I’m so excited to welcome my guest today, David Eagleman. David Eagleman is a neuroscientist, bestselling author, and entrepreneur who has made it his life’s work to unravel the mysteries of the human brain. As a professor at Stanford University, he leads groundbreaking research into time perception, sensory substitution, and the intersection of neuroscience with law and technology. David’s insatiable curiosity has led him to write several acclaimed books, including Livewired: The Inside Story of the Ever-Changing Brain and Incognito: The Secret Lives of the Brain, which explore the brain’s adaptability and hidden workings. He’s also the creator and host of the Emmy-nominated PBS series, The Brain, and the podcast, Inner Cosmos.
David is a graduate of Rice University, where he earned a B.A. in British and American literature, and the Baylor College of Medicine, where he earned a Ph.D. in neuroscience. Beyond academia, David serves on the World Economic Forum’s Global Agenda Council on Neuroscience and Behavior and has founded several neurotech companies. His ability to translate complex scientific concepts into engaging narratives has made him a sought-after speaker and science communicator, inspiring millions to think differently about their brains and decision-making processes.
David, I am so excited to have you on this podcast—I can’t even. We met, actually, just a few months ago. My husband had stalked you, I think—he’s also the co-founder of the Alliance for Decision Education. And you have been incredibly kind in supporting the cause, putting together two great events out in California in your area. And just so appreciative of the support that you give the organization. So let me just start off there with a huge thank you.
David: Yeah. It’s such a pleasure, Annie.
Annie: So I want to understand from the horse’s mouth about what you do and what your path was to getting here. So I think I would be remiss in not pointing out that you started off as an English major.
David: That’s true.
Annie: Which seems like a very odd path to neuroscience. So, you know, I would love to hear you kind of talk about, you know, what was your path to getting where you are now and why and how did this happen?
David: Well, I always grew up with two loves—literature and science. And in college, I was studying a lot of science. I was studying electrical engineering, and space physics, and so on. But I couldn’t quite find the thing that I loved in science. And it wasn’t until my last semester of my senior year that I took a neuroscience class. And then I was hooked. I was all in from that point on. But yeah, so that’s why my major was literature, which was really my first love. And I write, you know, fiction as well as nonfiction. I have eight books and I’m about to finish my ninth and tenth. So that’s a big part of my life still. I think it’s because literature and science are trying to do the same thing. They’re just different ways of trying to get at the truth. They have slightly different ways of verifying knowledge and, you know, going out and running experiments in the science world. But otherwise, they’re both trying to figure out what the heck we’re doing here.
Annie: That’s true. Actually, I hadn’t really thought about it through that frame, but that’s a great frame. So having taken that neuroscience class in your senior year and discovering that this was your love, obviously you go on to become a neuroscientist. When we think about a neuroscientist, you’re traditional in some ways, obviously, but in many ways you’re not because you have a sort of a very big and broad way of interacting with that discipline. Can you just talk a little bit about what you do, how you interact with neuroscience, how you interact with the public?
David: Yeah, I’m—my interest in neuroscience has always been the big picture of how do you take all these pieces and parts, these 86 billion neurons, and all the activity and chemicals and so on, and get something out of it like, you know, your life, and your technicolor perception of the world, and so on. So everything that I do somehow falls under that question of how does the brain construct reality? And there are many different channels within that that I take. So, for example, one thing that I do is called sensory substitution, which is can we push information into the brain via unusual channels?
I also study ways that people see reality differently. For example, in synesthesia, about three percent of the population has a mixture of the senses. So when they, let’s say, look at letters of the alphabet, that triggers a color experience for them, you know? It’s not a disease or a disorder, it’s just an alternative perceptual reality. But we study lots of these things, like how people have very different memories, how some people have a loud internal voice, and some people have no internal voice, how some people have very clear visual imagery, like a movie, and other people have none at all. So I’m very interested in those sorts of topics too.
Also, I’m interested in how this cashes out for society. So I run a national nonprofit called the Center for Science and Law, which is about where neuroscience intersects with the legal system. So these are all the kinds of things that I’m very interested in.
And then I guess the other part of your question is, you know, many years ago I started writing books. I grew up watching Carl Sagan on television and it was, you know, I was so inspired by him. And my parents said something to me at some point, like, “Oh, you’re going to be, you know, president of the United States someday.” And I said, “No, no, I want to be Carl Sagan someday.” So that’s what I enjoy doing is public communication of science as well.
Annie: Can you talk about one of the frameworks that you sort of developed for one of your books, maybe the one that you’re kind of the most proud of?
David: Yeah. A couple examples when I was writing my book, Incognito, about what’s going on under the hood, you know, in the unconscious brain. You know, I realized that the right way to think about the brain is as, you know, a machine that’s built on conflict.
You’ve got this—what I ended up talking about as a team of rivals in Abraham Lincoln’s terminology. So you’ve got all these different neural networks that have different desires, different wants, and the way to think about what’s happening in the brain is it’s like a neural parliament where you’ve got all these political parties, all of whom love their country and want the right thing for their country. But they all have very different desires in terms of what they think that is, whether to eat the cookies or not eat the cookies or go to the gym or make a promise to somebody that you’ll go to the gym. You know, they’ve all got different ways of going about things, some short-term, some long-term—these are all areas of interest in decision-making that perhaps we’ll get to momentarily. Anyway, so what that means is that the way the brain makes decisions is nothing like a computer. And you, with the exact same brain, will make very different decisions at different moments, depending on what the temptations are that surround you, what the circumstances are, the social context, all of that.
Annie: I want to go back to the team of rivals because I think this is something so core to decision-making. You know, we can think about, you know, when we’re making decisions, we have lots and lots of different competing goals, you know, that we’re trying to satisfy in some way. And you mentioned one of them, which is I’m trying to make myself happy now, and I’m trying to make some future version of myself happy also. And those two things are in competition. You know, should I eat the cookie now, as you said. Because me, now, thinks the cookie would be very yummy. But, obviously, future me would prefer that I don’t eat that, right? I mean, and this is actually a huge tension. And we know that in decision-making there’s something called temporal discounting, which is we tend to overweight the present and discount the future, which is why we eat lots of cookies and chips and things like that despite our long-term goals.
But it sounds like what you’re talking about with a team of rivals is really, kind of, this is the neurological basis for that type of tension, as we’re trying to balance out those different goals. You know, I think this idea of like, how are we thinking about balancing the different—in some ways, the different versions of ourselves that have different goals, like balancing different time horizons, things like that. I want to kind of bring this team of rivals thing back to you know, Daniel Kahneman, Amos Tversky, particularly Kahneman, as he wrote about in Thinking, Fast and Slow, and this idea of System 1 and System 2. Very popular, right? An extremely popular concept. Lots of people will throw it out.
David: Very popular. A little oversimplified, but yeah.
Annie: I prefer the term deliberative and reflexive systems, but System 1 and System 2 would generally map onto that, System 1 being reflexive, System 2 being deliberative.
I think that, and, you know, I have a neuroscientist here that I get to talk to, so excited to say this, because I think that most people assume that that maps onto the brain pretty cleanly. And the way that they generally talk about it is that System 2 is situated in the prefrontal cortex. I’m guilty of this myself. And System 1 would be in the temporal lobe.
Can you talk a little bit about first of all, how do you think about System 1 and System 2? How does it relate to this gang of rivals? And then sort of is there a neurological mapping to that at all? Or is that just a silly way that we oversimplify that concept?
David: Okay. So I’ll take those in reverse order. So the general story with everything in the brain is that it’s generally impossible to pin down a spot in the brain or an area. It would be like if I gave you a map of your city and I said, okay, just put a pin where the economy is. You’d say, but it’s all the interaction of all the pieces and parts going on here. I say, yeah, yeah. But tell me where it is. So it’s the same idea with pretty much everything in the brain. That said, the reason I don’t love the System 1, System 2 thing—although Kahneman was awesome, and Tversky was awesome, and I love their stuff generally—but it’s because it’s more complicated than that. That actually hides a lot of the complexities. It’s much more interesting than just, oh, you’ve got the automatic and then the conscious part. So that’s what I think about that: you know, it’s a cool—it’s a good start, but it doesn’t actually expose all of the stuff going on.
And there’s one more thing. This is, you know, I think you and I might’ve talked about this once, but just for interest, I want to maybe play devil’s advocate against ourselves on one point here, which is the idea that we can put probabilities on anything in life. You know, there’s—mathematically, we can do this for all kinds of things, like which card will come up next and so on. But when it comes to, you know, what I should do in terms of, you know, the cookie, or the job, or whatever, it’s very difficult to put probabilities on this or determine expected values on these things. And this is this concept I think we talked about. It’s called Knightian uncertainty, introduced by an economist named Knight, who said, you know, there are lots of questions for which you just can’t put a p-value on it. So just as an example, you know, what’s the probability that there’s life in the cosmos besides humans? You know, who the heck knows? Like, it’s really hard to put a p-value on it and say, oh, I got it. It’s 0.375 percent. So that’s the thing that I think we deal with a lot in life is when we’re making decisions about things. It’s not like we know exactly how to assign those p-values.
Annie: Yeah. So I would push back. I’m going to push back on that for a second. And what I’m going to say is, yes, there’s Knightian uncertainty. I think that for anything that we’re deciding that has consequences to our life, what I would argue is that somewhere under the hood, we’re making some guess at what the probability of certain things occurring is. And the only reason that I say that is because it kind of has to be so, in my opinion. If I’m trying to decide, not having Waze or a mapping system, what route I’m going to take to work and when I should leave—all of that’s probably happening pretty quickly. But really what it has to do with is, if I think about the chances I’m going to get there in under 30 minutes, or 30 minutes to 45 minutes, or 45 minutes to an hour, which then has to be like, what’s the probability there’s really bad traffic at this time of day, sort of bringing my experience into this. Right? And then I have the other thing of like, how bad is it if I’m late? So the way that I actually manage my risk there, for example, is going to be different if I’m trying to catch an international flight than if, you know, I’m just trying to get to the grocery store or something like that, right?
Because I can tolerate getting it wrong, quote unquote, like having a bad outcome. So we may not explicitly be saying, here’s the probability of different things, and there’s certainly no right answer, in the sense that we can’t know for sure. Like, we’re not omniscient. And so we can’t objectively know that the answer is that 67 percent of the time I’m going to encounter traffic that’s going to cause me to have a 15-minute delay, right? So that would be very difficult to do, but we are obviously forecasting that because that’s how we decide when to set our alarm. It’s how we decide when to leave our house. It’s, in fact, how we decide who we’re going to marry. We think about those goals that you’re talking about, right, and we’re trying to figure out, does this person in comparison to other people have a higher probability of actually getting me the things that I believe that I want—though I might be wrong about what I want, that’s okay—as compared to other people that I believe that I might meet in the future or people who I have already met that I could go and try to get to marry me.
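Annie's argument that these forecasts are happening under the hood anyway, so you might as well make them explicit, can be sketched as a toy expected-cost calculation. This is only an illustration: the travel-time buckets, probabilities, and cost figures below are all hypothetical numbers, not anything from the episode.

```python
# A toy version of Annie's commute example: making the implicit
# probability forecast explicit. Every number here is hypothetical.

# Forecast from experience: probability the trip fits each time bucket
travel_time_probs = {
    30: 0.60,  # done within ~30 minutes, 60% of the time
    45: 0.30,  # takes 30-45 minutes, 30% of the time
    60: 0.10,  # takes 45-60 minutes, 10% of the time
}

def best_buffer(cost_of_being_late: float) -> int:
    """Choose how many minutes of travel time to budget.

    cost_of_being_late is in arbitrary 'pain' units; each minute spent
    waiting after an early arrival costs 1 unit.
    """
    best, best_cost = None, float("inf")
    for buffer in travel_time_probs:
        # Probability the trip takes longer than the budgeted buffer
        p_late = sum(p for t, p in travel_time_probs.items() if t > buffer)
        # Expected minutes wasted by arriving early
        wait = sum(p * (buffer - t) for t, p in travel_time_probs.items() if t <= buffer)
        cost = p_late * cost_of_being_late + wait
        if cost < best_cost:
            best, best_cost = buffer, cost
    return best

print(best_buffer(cost_of_being_late=1000))  # international flight: 60
print(best_buffer(cost_of_being_late=5))     # grocery run: 30
```

The same probability forecast produces different departure decisions once the cost of a bad outcome changes, which is Annie's point about tolerating a bad outcome at the grocery store but not at the airport.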
David: Well, yeah. Multiplied by the probability that you’ll get somebody else, that you could get one of those other people. Yeah.
Annie: Right. Of course. Right. So in that sense, like, these calculations have to be occurring under the hood. My point about sort of trying to forecast these probabilities is you might as well make that explicit. Because if you make it explicit, then other people can look and say, you know, I think you’re probably pretty far off on that. Let’s have a discussion about it. And you’re actually going to, sort of, maybe in terms of this team of rivals, you’re going to be better at understanding, like, how much you value different pieces of the equation. You can actually check your accuracy. And there’s all sorts of good things that come out of that. But I certainly would never think that when you’re forecasting some probability that you can know for sure, like these are educated guesses, they’re going to have pretty wide bands. I just think they’re happening anyway, so you ought to try to make them explicit.
David: Fair enough. What do you think about when we have pretty bad illusions about the probabilities of something and that ends up being useful? Just as an example, when I wrote my first book, Sum: Forty Tales from the Afterlives, I thought, oh, this will be straightforward. Everyone’s going to love this book. And I submitted—I got rejection after rejection after rejection. But it didn’t slow me down at all. I just kept submitting. I kept going like a maniac. I got a stack of rejection letters as high as the book. And it finally got published after like 200 rejections. And then it did well. But I’m so glad that I went in completely naive, because I feel like it could have been a very different thing had I gotten 10 rejections and said, okay, I give up. The p-value is very low here.
Annie: You know, this is one of those problems of survivorship bias. So, you know, the question that you want to ask is, so we have David Eagleman who has a pile of 200 rejections, finally got his book published. It was actually, you know, a success and then he got to publish all sorts of other books and ended up doing a TV show, ended up, you know, doing a podcast. And are we supposed to actually make decisions based on David Eagleman’s experience? Or are we supposed to look at what’s very hard to find because we don’t know about them? The hundreds and hundreds and hundreds of David Eaglemans who got the 201st rejection or the 202nd rejection and actually never ended up getting their book published. And there was a huge opportunity cost to that because they weren’t focusing on other things that would have made themselves more successful. Or they did end up getting it published after 200 rejections and three people read it, and it didn’t actually help them and they didn’t get tenure. And again, they spent so much time on that when they could have been doing it on other things that would have made them more successful. And I think that’s a really hard thing, right?
So that stick-to-itiveness that you showed—you know, clearly anybody who’s successful has been gritty. That is absolutely true. But it doesn’t mean that we’re supposed to advise people that being irrationally gritty would be good. Right? And it’s that question of where is it irrational versus not? And it depends a little bit on what your goals are, right? What the risk is that you’re willing to take—how much positive expected value you need in order to continue, how much you care if you fail, what your other alternatives are and all those things. That gets very complicated. But I think survivorship bias is just a really, really powerful bias that can lead us to cling to these stories. And then get sort of turned on its head into a rationalization for continuing things that you ought not to continue anymore. But sometimes it’s good, because you can’t succeed if you’re not gritty.
David: Yeah.
Annie: So that’s kind of the tension between the two.
David: Yeah. And I totally take that point about survivorship bias. I do wonder though what you had said about sitting down with a friend, and if I said, “Look, you know, here’s my probability that I’ll get this book published.” And if somebody sat with me, they’d say, “Look, Eagleman, you’re wrong. You—the p-value is very tiny, so you should give up now.” I just think it’s hard.
Annie: You need a really good friend because a really good friend will say, “What are your values?”
David: Yeah, that’s right.
Annie: Do you care if you want to just keep trying? Do you care? What are you sacrificing? What would you be doing with the time? You know, you have to actually walk them through that. And what you have to do is find a friend who’s not going to impose their own values on your decision.
David: We need probability advisors, right?
Annie: Exactly. I want to go back to this idea of mental time travel and I want to understand sort of from a neurological standpoint, like, how would you say that ability that humans have to be able to imagine beyond their lifetimes—like, I know that you obviously can’t go back in evolution, but first of all, do you believe that that’s unique to humans? When you look at other species, how far ahead in time do you think they can get? And then I guess what I would add to that is sort of what’s the purpose of that, right? Is that helpful to decision-making? For example, this ability that we have to be able to do that and how can we then harness that maybe to help make our decisions better?
David: Yeah. So this is one thing that human brains do. They do it way better than anyone. It’s actually very difficult to answer the question of how much can other animals do it? There’s not a lot of evidence that they do do it. For example, you know, squirrels will bury their acorns for the winter, but it’s not clear if they’re doing that just as a reflexive thing, rather than thinking about the threat of the winter coming.
But humans, we actually spend most of our time, not in the here and now, but in the there and then, either thinking about our past or simulating possible futures. And we’re extraordinarily good at it. Now that said, the caveat here is that, you know, we often misremember. Our memories are actually quite poor in many ways. And whenever we’re remembering an event that’s already happened, that gets polluted with new data that we’ve had since then, and we simulate possible futures, but of course that is limited by our experience. And if we haven’t seen—this is in a sense close to the availability bias—you know, if we’ve never seen some kind of situation then we imagine the situation that we are able to, and we take that to be more probable. But that said, that’s what allows us to make good decisions or better decisions than we would otherwise, because we can simulate things out, simulate all the players. We have extraordinarily social brains. Well, if I say this, then he’ll say that, and then blah, blah, blah. And you’re able to just say, hey, that might actually work. And so we do things that way. We spend most of our time in those futures.
Annie: So if I understand what you just said: when we’re imagining the future, what we could say is that we’re actually remembering. Is that fair?
David: That’s exactly right.
Annie: So when we imagine the future, we’re actually remembering and we’re taking past experiences and recombining them in a novel way. Is that—
David: That’s exactly right. And actually I would say that’s sort of a new framework in neuroscience, but it actually goes back to Galen, the Greek physician, who said essentially this—that the reason that we remember is so that we can generate possible futures. So yes, but the modern scientific evidence for that has to do with people who get damage to a part of the brain called the hippocampus and they can’t remember, and therefore they’re unable to simulate the future. So you say to them, “Hey, think about the vacation that you want to take next month, you know, let’s say you go to the beach, what’s that going to be like?” And they say, “I can’t picture anything. I’m just, I don’t see it. I’m just blanking here.” They’re unable to simulate possibilities. So clearly the systems in your brain involving memory are the same systems involved in simulating possible futures. But of course we’re creative. It’s not simply a reproduction of the past. It’s a remix of it. And we’re bending and breaking and blending ideas to generate these new possible stories.
Annie: So when you put people in an fMRI machine, and you ask them to imagine the future, it’s the hippocampus—the same areas that are associated with memory are being recruited.
David: Yeah, exactly. It’s a whole network of areas, but there’s a—it’s greatly overlapping. You see the same general networks. Yep.
Annie: So there was recently an interesting study that I saw. I wonder if you saw it—and hopefully I won’t butcher it—which was about theory of mind. So, you know, theory of mind is the ability for me to imagine how you might be viewing or how you might be feeling about a situation that I’m viewing as well. In developmental psychology, you know, a very classic study would be you have some sort of 360 scene where there’s like a mountain in between, so that me, the viewer, looking at one view of that can’t actually see in that moment what somebody from the other side is doing, is seeing. You know, you have a child walk around it, and the question is when they’re looking from one direction and you put a doll over in the other direction and you say, what can the doll see right now? What is the age at which they can actually tell you what the doll would be looking at or another child would be looking at? So really being able to get into somebody else’s head. So this is another one of these problems, right, which is when you’re looking at animal models, do they have theory of mind? Hard to say.
So there was an ingenious study with crows who, as you know, are very smart. And they had food that they could hide, and they had places that they could hide it, and the room that they were in when they were hiding it had a peephole. And so they wanted to know if the crows would act differently in terms of hiding the food if the peephole were open or closed. And then they also had the crows experience looking through the peephole so that they could see into the room. And it turned out that if the crows had experience looking through the peephole, that they actually acted very differently when the peephole was open versus when it was closed, depending, you know, in terms of how they were hiding their food. So it seemed to be pretty strong, you know, at least in my view, I don’t know, you could pick it apart I’m sure—pretty strong evidence that crows may actually have theory of mind.
David: Cool.
Annie: There’s different ways that we can take perspective, right? So one of the things that I need to do in order to be a really good decision maker is to be able to take the perspective of my future self, right? And understand how my future self might feel or experience, you know, the outcomes of decisions that I’m making today. But obviously there’s also—how are other people going to view me? How are other people going to view the decision? Can I put myself in somebody else’s shoes and understand their perception of what I’m doing? So can you just kind of talk generally about theory of mind? Is it unique to humans? Why is it so important for decision-making and also magic, by the way? Why is it so important for magic?
David: Yeah. So as far as we know, theory of mind is mostly a human thing. Although this study you said about crows might prove otherwise. It might be that some other really smart animals develop it, possibly independently. Corvid brains have a pretty different evolutionary history than we do, but they might have ended up at the same place. But yeah, this ability to simulate what it’s like to be someone else is something that we use every day.
Obviously magicians use it in the sense that they really need to know what the audience member is thinking and where their spotlight of attention is, and therefore what things they have seen or not seen the magician do. The magician totally depends on this. But so do con artists. So do psychiatrists. So do novelists. Everybody needs to know, okay, what is the other person thinking? So do poker players, by the way. They, you know, can do this at many levels. Like what is she thinking that I’m thinking that she’s thinking? So we are—we’re quite good at doing this. It’s a developmental skill that children don’t start off with and eventually develop.
And my interest lately has been in whether AI has theory of mind, because several people have written little papers saying they think AI has spontaneously emerged—I should say theory of mind has spontaneously emerged within these large language models. I totally disagree. So this has actually led to something that I’ve recently defined called the intelligence echo illusion, which is that, you know, what these large language models are trained on is everything written by all the humans over all this time. And so often what happens is, you type in a prompt, it gives you an answer that exists in a hundred places where people have written about this. But you don’t know that possibly. And so you say, oh my God, this thing, it’s sentient. It’s got theory of mind, whatever. But in fact, it’s just an echo of the intelligence that’s already there among the humans. And to you, you mistake it for a voice. But anyway, all this to say, I don’t think AI can do it—at least large language models, the way they’re structured now, they are statistical parrots that are, you know, knowing very deep things about the probability of which word comes next. But they don’t know what it is to be inside somebody else’s head.
Annie: So when a large language model—you know, there’s that famous story of the person from Google who was very upset, right? Because they felt that their LLM was sentient. But you know, when we see something, we don’t like to just say it can be explained just sort of statistically, right? Like this is just—somebody said it and it was in the training set and that’s what it just parroted back at you. And no, it doesn’t have any consciousness, and it doesn’t know what it’s saying, just in the same way that, you know, we didn’t develop the traits we have for a purpose. They got selected for, because of random variation in the population, right?
And it feels like that all kind of goes back to the way that human beings really try to impute meaning. And maybe we can talk about that even more broadly because it’s not just meaning, right? Our brains really like to create patterns. So maybe you could just speak to, you know, what are the benefits of that pattern recognition that our brains are so good at? Like, why is that happening? But then also kind of, where does it go wrong? Not just in terms of, like, my LLM actually loves me, right? Which might happen. But also when the pattern recognition kind of goes awry and we end up in conspiracy theory thinking.
David: Yeah, the general story is the world is far too complex for us to be able to take that in. And so we’re always imposing our patterns. There are a lot of ways of talking about this. I always talk about this in terms of the internal model—we’re looking for things that match our model. Other people talk about it as screens that we’re looking through. And, you know, we only see what we’re able to see through the screen and so on. But yes, this is what we’re trying to do is make patterns that allow us to interpret what the heck is going on out there. And often this leads us to mistaken conclusions. And this happens in a thousand ways.
You know, neuroscientists study, for example, pareidolia, which is where you see faces everywhere. So like, you know, your little electrical plug—it looks like a little face, or you see burn marks on a piece of toast—it looks like a little face. You see a cloud—you think it’s a face or the man on the moon, stuff like that. What that illustrates, of course, is simply that we are really preprogrammed to find faces. That’s massively important to us. And so therefore we over-detect them. And this happens all the time when you look at stock market investors and the patterns that they impose when they’re looking at random fluctuations of a stock going up and down. You know, people run studies on this and they have all kinds of patterns that they’ll impose on this. So yep, it’s something that we do and often it leads us astray. Although often it’s something that, yeah, that’s important.
Annie: Well, you know, obviously developmentally to humans, finding faces is really important. You know, but when we start to overly pattern match, obviously it’s going to really affect our decisions, right? I mean, in the extreme case, when you start to believe in conspiracy theories, the kind of decisions that you then make are obviously highly compromised because we’re matching everything to a particular scheme or a model that we have about the way that the world works, which isn’t actually representative of the world, right?
David: Yeah, exactly right. By the way, if anyone’s interested, I did a special episode of Inner Cosmos on conspiracy theories and why brains, you know, at the extreme will go for those. And it has to do with, you know, as you’re saying, it’s doing the right thing, which is saying, hey, is this connected to this? Maybe that’s connected to that and so on. But I think there are lots of reasons why people go for conspiracy theories. And there’s a whole social aspect, I think, that gets overlooked when we talk about it sometimes, which is just that, you know, if you’ve got some conspiracy theory, you get to be the one at the party who says, “Oh, I think blah, blah, blah, blah, blah.” And you get attention for it. And maybe people think you’re really smart for detecting that. Or you think that people think you’re smart for detecting that or whatever. Or you just like being the contrarian or whatever it is. But there are all kinds of reasons why people will generate or repeat conspiracy theories.
Annie: And I assume some of them have to do with, if you go back to the gang of rivals, that the other way that you can derive social capital from that is that there’s other people who believe the conspiracy theory and now you have group belongingness as well, right?
David: That’s exactly right. I mean, people will bond over the strangest things. And so yeah, exactly right. If I believe in a flat earth and you do too, then we can be pals and link arms and talk about how terrible everyone else is that they don’t believe in that.
Annie: I think the general view is that conspiracy theories are for the less intelligent among us and that more intelligent people obviously don’t fall for conspiracy theories. True? Not true?
David: Here’s what I think it all comes down to—and you would love this, this is your language—it just comes down to probability. So, for example, the idea that we faked the moon landing. You might say, hey, that seems plausible. But you have to think through the probability. Okay, how many people at NASA would have had to be involved in that? How many journalists, how many cameramen, and so on? Okay—a lot of people. What are the probabilities that not a single person there is going to defect over the course of 70 years? And then, of course, what’s the benefit of defecting? Well, if you’re the one who defects, you get to write the New York Times bestselling book on how you did the whole moon fake thing—that’s a pretty good incentive to defect. And even if there’s such military rigor that nobody defects, what are the chances that the grandson doesn’t find something in the attic of the guy who did it? Anyway, you start putting together the probabilities. And that’s, I think, the way a good, thinking brain makes decisions about these. It’s not to say that there can’t be real conspiracies, or that improbable things can’t happen. It’s just that some stories are more plausible than others, and I think we just have to look at the probabilities on these things.
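David’s back-of-envelope argument can be made concrete. As a rough sketch (the numbers below are purely hypothetical illustrations, not estimates from the episode): if n people are in on a secret and each has a small, independent chance p of defecting in any given year, the probability the secret survives y years is (1 − p) raised to the n·y power—a quantity that collapses toward zero very quickly.

```python
# A minimal sketch of the "no defectors" probability argument.
# All numbers here are hypothetical, chosen only to illustrate the math.

def prob_secret_holds(n_people: int, p_defect_per_year: float, years: int) -> float:
    """Probability that nobody defects, assuming each person has an
    independent chance p_defect_per_year of defecting in any given year."""
    return (1 - p_defect_per_year) ** (n_people * years)

# Even with a tiny 0.1% annual defection chance per person, a secret
# shared by 1,000 people for 70 years is astronomically unlikely to hold.
p = prob_secret_holds(n_people=1000, p_defect_per_year=0.001, years=70)
print(f"{p:.2e}")  # a vanishingly small probability
```

This is of course a toy model—real defections are neither independent nor constant-rate—but it captures why the plausibility of a large, long-lived conspiracy drops multiplicatively with every extra person and every extra year involved.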
Annie: Yeah, I mean, I agree. And I think the evidence actually pretty strongly shows that smart people fall for conspiracy theories all the time as well, that it’s not like a smart/dumb thing at all. It’s just the way our brains work, and people are more or less, independent of intelligence, prone to overmatching basically.
David: You know, that’s right. Ted Kaczynski, the Unabomber, for example, was a very high-IQ guy and also held lots of these beliefs. Yeah, exactly right. There are all sorts of reasons why you might believe it. And by the way, this is something that comes up in psychoses. People with schizophrenia, for example, will link things that aren’t related—they’ll think those things have a meaningful relationship between them. And so it’s very easy for somebody with schizophrenia to generate conspiracy theories. The interesting part is which conspiracy theories actually catch on—somebody with schizophrenia will link lots of things, but most other people won’t say, “Oh, that sounds right to me.”
Annie: You know, in my world, when I think about cognitive biases and how they affect decisions, I think there is this interaction between the way we perceive time and cognitive biases. So, yeah, I don’t know if you have an example of how our perception of time might lead to cognitive bias or any, you know, personal experience where understanding that kind of helped you make a decision.
David: Oh, that’s interesting—the relationship between them. I mean, here’s what I would say. What all my research showed was that the amount of time we think something took has to do with the density of memory we have of it. If we lay down a lot of memory about something, it seems to have taken longer, because we’re making that judgment retrospectively. We say, “Oh, well, that happened, then that happened. Okay, that must’ve been blah.” So when you’re eight years old and you’re having a summer, everything’s new. There are all these new experiences, and you’re writing it all down. As an adult, a summer is pretty much the same as other summers. You’ve already gotten good at pattern matching, so you’re writing down very little. And so when you ask, “What happened this summer?” it’s hard to draw any footage up. And then you say, “My God, it just disappeared.” So that’s the general story, but I—
Annie: So I guess what that means is that you should just try to seek out novel experiences and then maybe time won’t seem like it’s going by so quickly.
David: Yeah, that’s exactly right—within the constraints of good decision-making.
Annie: So if you had to think about, like, a decision-making tool or idea or strategy that would be, like, the most powerful—generally for people and also sort of for the next generation, what’s the strategy right now that you might be obsessed with?
David: The one that I’m obsessed with is the Ulysses Contract. And this is the idea of making a contract with your future self, who you know is going to behave badly when they’re faced with some temptation. So, for example, Ulysses, the Greek hero, is coming home from the Trojan War. He knows he’s going to pass the island of the sirens. They sing so beautifully. He can’t wait to hear that, but he knows that like any mortal man, he’ll go toward the island and crash into the rocks and die. So you remember what he does? He lashes himself to the mast. He has his men fill their ears with beeswax, and that way he’s able to hear the siren song and they’re able to sail past the island.
What’s going on there is that the Ulysses distant from the island—the Ulysses of present mind—is contracting with the future Ulysses, who he knows is going to behave badly, to make sure he does the right thing. And I find this actually the most powerful tool. I’m writing one of my next books about it, because I just find it so powerful.
So there are a million ways that people can set up Ulysses Contracts. For example, when people go to Alcoholics Anonymous meetings, the first thing they’re told is to clear all the alcohol out of your house. Because you might decide, hey, I’m not going to drink anymore. I’m making this decision. But, you know, on a festive Saturday night or a lonely Sunday night, you might, you know, if it’s there, you’re going to give into that temptation. Or with drug addiction programs, they say, don’t ever carry more than twenty dollars of cash in your pocket because you might think, I’m definitely not going to do any more drugs. But then you run into someone who says, “Hey, I’ve got some drugs to sell you.” And if you’ve got the cash in your pocket, it will burn a hole there.
So there are many, many ways that we can do this in our lives. You know, going back to the chocolate chip cookie example, I just make sure we never have chocolate chip cookies and that kind of stuff sitting around the house. So anyway, that’s a way that we can use our long-term decision-making, when we have a sense of the kind of person we would like to be, to constrain our behavior when we know we’re going to act badly.
Annie: So we’ve talked about this before. I’m a huge proponent of Ulysses Contracts. The broad term for them is precommitment contracts, where you’re pre-committing to some action you want to take in the future. For example, say you really want to be going to the gym, but you know that when the time comes, you’re going to say, “Oh, I’m too busy.” A simple way to implement a precommitment contract would be for me to say to you, “Let’s meet at the gym.” Because then I’m kind of screwing you over if I don’t actually show up, right? So that’s a way for me to contract my future self to actually follow through, you know?
And going back to this idea of a team of rivals, how do you think about the interaction between Ulysses Contracts and the team of rivals? Like when, as I hear you talk about it, it sounds like what it’s doing is throwing the balance for who’s going to kind of win that argument or what the different weighting of the different rivalries is going to be in a way that’s going to help you to get to the things that you want in the long run. Is that kind of how you think about it?
David: That’s exactly right. Your long-term decision-making says, look, I know I don’t want to drink that alcohol or eat that cake or whatever the thing is. And the thing that requires self-understanding is knowing that, no matter how strongly you want that to be true, this other parliamentary party in your brain is going to make the wrong choice when faced with the temptation. And so it’s actually setting up the situation so it can’t do the wrong thing.
Yeah, this is to my mind one of the major lessons for children, adults, anybody: really understanding who you’re going to be in a particular situation. Because generally, we all have the misperception of, look, I know who I am. I know that if I’ve decided I’m not going to eat that chocolate cake tonight, I won’t do it, because I’m that kind of strong person. And it’s just an illusion to think we know, in this moment, who we are going to be in the future. So the way you have to do it is actually figure out a way to lash yourself to the mast.
Annie: Does it actually improve your decision-making just to make a commitment to somebody—for me to say to you, “I don’t want to eat chocolate chip cookies anymore”—even if I still have them in my house? Is creating that accountability with another person also helpful, or do you need to go to the extreme?
David: Oh, I think it’s very helpful. But it needs to actually be a contract and not just a promise. Here’s an example. There was a woman who was very active in civil rights from the 1960s onward—she marched, she did all kinds of things. But she had a problem, which is that she smoked, couldn’t stop, and really wanted to quit. So what she finally did is she wrote a $10,000 check and said to her friend, “If you catch me smoking, I want you to donate this check to the Ku Klux Klan”—the thing she hated most in the world.
And this is what’s known as an anti-charity move. And the idea with this is—that really makes her accountable. Because that would be the worst thing in the world for her to see her money go to the KKK. So getting social, you know, pressure in there really matters. But with all these things, there’s different levels of what’s going to make a difference.
I’ll just give you another example. You know, there are these fitness bootcamps you can sign up for, where a bunch of people run around at seven in the morning doing pushups and jumping jacks and so on. But the way to do this right: there are camps where, if you don’t show up, everybody jogs to your house, does jumping jacks on your front lawn, and screams your name until you come out. So it’s socially embarrassing. It’s useful to leverage social embarrassment to get yourself to do things you know you don’t want to do. And of course nobody even takes that risk. They just show up, tie their shoes, and get out there, even though at seven they don’t want to.
Annie: Oh, that’s really interesting. So you’re making a distinction—and I think this is really important for listeners—between a promise, or even like a commitment, and a contract.
David: Yeah, exactly. Think about it this way. Ulysses didn’t tie himself to the mast and tell his men, “Hey, put a little loop here so I can pull this if I really need to get out.” He was actually lashed to the mast and could not get out. And fundamentally, that’s what Ulysses Contracts need to be.
Annie: Yeah. I love the jumping jacks on the lawn. Because again, going back to the team of rivals, we have this social identity—how are other people going to view us? And that’s going to be really embarrassing, you know? And going back to the KKK, it’s the same thing. It’s recruiting: well, I like to smoke and I like the feeling of smoking, but I know what my future self wants, which is not to get lung cancer, so I’d like to stop. Man, if I donated to the KKK, that would be so inconsistent with my identity. So it sounds like the best contracts don’t just resolve some of the rivalry between present and future—weighting it toward making sure your future self behaves in line with your goals—but also recruit the other rivalries to align on the right decision. One of them is: what is my social capital?
David: Yeah, that’s exactly right. And by the way, when people are trying to lose weight on a long-term diet, social media is actually super helpful, because they end up posting, “Hey! I’m down three more pounds this month,” and everyone pipes in and says, “Hey, congratulations!”
So now you don’t want to give up your diet, because you’ve got all these people giving you a thumbs up. That stuff really works. It matters.
Annie: That’s a great example of social media use for good, for a good Ulysses Contract. I love that.
What do you think the impact on society will be when the Alliance succeeds in its mission to ensure Decision Education is part of every K-12 student’s learning experience?
David: I mean, I think it will improve everybody’s behavior, not in terms of people not doing crazy, fun stuff, but in terms of just having a better sense of—is doing the short-term thing worth it right now? Just being able to understand, okay, I’m a different person in the short term than I am in the long term. What do I want long term? Just, you know, one example—since we just mentioned social media a moment ago—is, you know, kids have to deal with something that you and I never had to deal with, which is stuff they post now is going to be with them their entire life. And when they’re running for President of the United States, their dumb tweet is going to be front and center. So just being more aware of thinking through time, I think, is one of the advantages. And hopefully being able to understand themselves as a complicated, multifaceted creature through time.
Annie: Thank you for that answer. If listeners want to go online and learn more about your work or follow you on social media, where would you send them to start?
David: Eagleman.com is my website, and I’m on all the social media channels. And my podcast is Inner Cosmos, which I’m happy to say has hit a high-water mark as the number one science podcast in the nation. So I’m thrilled to get to unpack meaty issues every week on that.
Annie: That’s amazing. I highly recommend that podcast as a listen. Also, for any books or articles we mentioned today, you can check out the show notes on the Alliance site. And David, this has been so fun. I really could have spoken to you for hours and hours. This was a blast for me, and I hope it was fun for you as well. Thank you so much for joining us and for all your support of the work we do at the Alliance.
David Eagleman: You bet. Keep it up, Annie. Onward and upward.
Guest bio
David Eagleman is a neuroscientist at Stanford University, an internationally best-selling author, and a Guggenheim Fellow. Dr. Eagleman’s areas of research include sensory substitution, time perception, vision, and synesthesia; he also studies the intersection of neuroscience with the legal system, and in that capacity he directs the Center for Science and Law. Eagleman is the author of many books, including Livewired, The Runaway Species, The Brain, Incognito, and Wednesday is Indigo Blue. He is also the author of a widely adopted textbook on cognitive neuroscience, Brain and Behavior, as well as a best-selling book of literary fiction, Sum, which has been translated into 32 languages, turned into two operas, and named a Best Book of the Year by Barnes and Noble. Dr. Eagleman writes for the Atlantic, New York Times, Economist, Time, Discover, Slate, Wired, and New Scientist, and appears regularly on National Public Radio and the BBC to discuss both science and literature. He has been a TED speaker, a guest on the Colbert Report, and profiled in The New Yorker magazine. He has spun several companies out of his lab, including Neosensory, a company which uses haptics for sensory substitution and addition. He runs the science podcast Inner Cosmos and is the writer and presenter of The Brain, an Emmy-nominated television series on PBS and the BBC.
Show notes
Books
Livewired: The Inside Story of the Ever-Changing Brain – David Eagleman (2020)
Incognito: The Secret Lives of the Brain – David Eagleman (2011)
Sum: Forty Tales from the Afterlives – David Eagleman (2009)
My favourite part of the podcast was when David challenged the idea that we can know probabilities of certain events occurring when we make decisions & your response that these expected value / probabilistic calculations have to occur "under the hood" and we can improve our decisions by making the probabilities transparent.
This made me realise I fundamentally disagree with you on how minds work. I am very much in the camp of David Deutsch (& Karl Popper) who argues that knowledge (& decisions based on it) does not grow incrementally in probabilistic terms based on updated priors (Bayes) but by conjecture and criticism in which we take guesses and refute them or accept them as the best explanation available (Deutsch/Popper).
I would love to hear you debate this with someone like David Deutsch or Brett Hall, a guy who is promoting David's ideas & has talked and written extensively on the problems with Bayesian knowledge.