The Science of a Good Explanation with Tania Lombrozo
Episode 041 of The Decision Education Podcast just dropped!
Episode description
Why do we crave explanations—and what happens when they lead us astray? In this episode, I sit down with cognitive scientist Tania Lombrozo to explore the psychology of figuring out “why.”
From puzzling over everyday baking mishaps to big-picture questions about conspiracy theories, Tania unpacks how our drive to explain things shapes what we believe, how we learn, and the choices we make. Together, they dive into strategies for better reasoning—like probabilistic thinking, generating alternative explanations, and shifting from “could it be true?” to “must it be true?”
Key takeaways include why teaching is one of the most powerful learning tools, how to avoid the trap of satisfying explanations trumping correct ones, and what conspiracy thinking reveals about our deep discomfort with randomness.
Guest bio
Tania Lombrozo is a professor of psychology at Princeton University, as well as an associate of the Department of Philosophy and the University Center for Human Values. She directs the Program in Cognitive Science and co-directs the Natural Artificial Minds (NAM) research initiative within the Princeton Laboratory for Artificial Intelligence.
Tania Lombrozo received a B.S. in symbolic systems and a B.A. in philosophy from Stanford University in 2002, followed by a Ph.D. in psychology from Harvard University in 2006. She was a professor of psychology at the University of California, Berkeley, from 2006–2018, before joining the faculty at Princeton.
Professor Lombrozo’s research aims to address foundational questions about cognition using the empirical tools of cognitive psychology and the conceptual tools of analytic philosophy.
She lives in Princeton, New Jersey, with her husband and two daughters.
Transcript
Producer’s Note: This transcript was created using AI. Please excuse any errors.
Annie: Tania, thank you so much for joining me. I’m so excited for this conversation.
Tania: Thank you. I’m happy to be here.
Annie: I just wanted to kind of start with this very big concept, right, which is humans’ drive toward explanations. So I think that in some ways it’s kind of obvious to people that like, oh, we kind of want to understand the world. But I think it goes much deeper than that in terms of this drive. Can you kind of give us a general idea of like, what do we mean by that? What are some examples of that drive to explain things? Let’s just start there.
Tania: Yeah, absolutely. So I think one way to think about the human drive for explanation is in terms of the motivations and emotions that drive it, and I think we’ve all had these experiences. So one side of it, we have curiosity. We really want to know why certain things happen the way they do, why they happen at all. And that’s part of what really initiates the process of looking for explanations. But then there’s also this feeling we get when we get a good explanation, which I call explanatory satisfaction. When you get a really good explanation, there’s a kind of like, oh, I get it now. I understand. Or on the flip side of that, when you get a bad explanation, there’s like an itch that hasn’t been scratched. It feels like you’re missing something, right? So I think we can all relate to those experiences. I think those are very human emotions that drive a lot of our learning. And so from my perspective, part of the reason why we’re such explanation-seeking creatures is because explanation is one of the main ways that we learn about the world and communicate about what we learn. And I think you see that at all scales. So you see that in cases like science, for example, and in children’s learning, but you also see it in really mundane everyday cases.
So two that I thought listeners would be able to relate to are cases like cooking and clickbait, right? So in the case of cooking, at least I’ve had the experience of having virtually the same recipe in one case turn out to be spectacular. And then I try to reproduce that and it doesn’t work out the same way. And that makes me wonder why, right? Like, what’s the difference? Why were the cookies so delicious the first time and underbaked the second time? I did everything, I think, exactly the same. And so we’re driven to explain, and that’s a good example of one of the reasons why we might care.
If we can come up with the explanation, that’s going to then guide our future actions, right? If I could figure out what it was that made the difference between the delicious batch of cookies and the mediocre batch, presumably that would then put me in a better position to replicate the better cookies every time, to prevent the bad batch from happening again.
So even though often when we’re driven to explain, it’s just for the sake of understanding, I think the kind of knowledge it gives us about the world is precisely the kind of knowledge that we can use to guide our future actions in more productive ways.
Annie: I like the cooking example. I actually made brownies once the night before Thanksgiving, and this recipe has worked every single time. And this one time it was just, like dry and dense and I had to go back and figure out what had happened because I make them all the time and I figured out that I had accidentally doubled the flour, which by the way, will make your brownies completely disgusting.
I want to dig in on one part of what you said. Well, actually kind of two parts. One is sort of thinking about this satisfaction we feel when we think we’ve got the explanation like, and how that’s driving things. But then to sort of roll back to the core of something you said about it’s changing your behavior going forward, like when you can find a really good explanation, it changes the types of decisions that you make, the types of choices you make going forward.
So one thing that I’ve been thinking really deeply about is how we often jump from a description to an explanation, like we get there too fast. So I would love to just sort of hear from you about what you think some of the upsides are to this desire for explanation, which I think has to do with like a drive to learn, right? But what are the downsides that come along with it? You don’t just get that for free, right?
Tania: Right. I think there’s a few downsides. So one of the things that makes explanation seeking so powerful is that we’re very picky about the kinds of explanations that we find satisfying. We tend to like explanations that fit in with the way that we already think the world works. We tend to like explanations that are simple, that are broad, that account for a lot of phenomena, and so on. And so when we’re looking for explanations out in the world, we have preferences about what we want to find there. And some of the times those preferences align with the true structure of the world, and some of the time they don’t. And so I think what you’re pointing to are those dangerous cases where we might have preferences for what would be a good or satisfying or intelligible explanation to us, but where that mismatches the actual structure of the world, and that is a case where it might lead us to make the wrong kind of inference.
Annie: I think this is really important for people to understand, right? That when you get to an explanation that feels really natural, that feels really good to you, or when we come up with an easy explanation, it happens so naturally. It’s just sort of this part of, like, our mind where I think we don’t even notice that we skipped something, or that we haven’t actually interrogated that explanation in a way that would tell us whether it’s true. But if we’re given the right kind of examples, it’s obvious that you shouldn’t be drawing those conclusions. You have to get away from those prior beliefs, or those things that feel good because you can relate them in some really good narrative.
So yeah, I would love for you to give sort of examples where people don’t fall for it and examples where people do fall for it so that we can really kind of ground that for listeners.
Tania: Yeah, absolutely. So first, I’ll give you a classic example about correlation not being causation. We’re familiar with there being a correlation between smoking and lung cancer. And that’s a case where people are very familiar with the possibility of there being a causal link between the two, one that has been interrogated scientifically. There’s very good evidence, and so people will very naturally look at statistics or data reporting a correlation and infer a causal relationship. But smokers might also be more likely to, for example, have yellow fingers, and people will quickly realize that having yellow fingers doesn’t cause lung cancer. There might be a correlation between those factors because of the common cause of smoking, but there’s not a causal relationship. And so that’s a case that’s often used to illustrate why a correlation is insufficient for causation. And I think people get it because they have the background causal beliefs about how it is that smoking might be related to lung cancer, but having yellow fingers is not.
I’ll give you an example from one of my studies where we tried to isolate the role of these sorts of causal mechanism beliefs in this sort of a process. So what we wanted to do was to give people evidence of a correlation between two variables where we had control over whether or not people could come up with some sort of a causal mechanism linking those two variables. And we had several different cover stories. I’m going to share my favorite with you. So if you were a participant in our study, you would be told that you are hired as the assistant to the director of a museum, and one of your first jobs is to collect all of this data that’s been gathered about the museum and to sort of like figure out if there’s any relationships in these data about museum visitors and so on. And one of the things that you notice is that there’s a correlation between which museum patrons visited the portrait gallery and which ones chose to make an optional donation as they exited the museum. And the relationship is such that those who visited the portrait gallery are significantly more likely to make a donation as they exit the museum.
Okay, now we deliberately chose these so that it hopefully isn’t obvious to you or to listeners why those variables would be related. And so we could vary the strength of the correlation. There could be sort of a very weak correlation, a modest one, a nearly perfect one. And when you give people a strong enough correlation and ask them, do you think there’s a causal relationship here? They’ll say, yeah, I guess there’s a causal relationship because you gave me a really strong relationship. But if you ask them, you know, why did this museum patron make an optional donation? Because they visited the portrait gallery? They don’t find that satisfying. They feel like there’s something they’re not getting about how these are connected.
So for half of our participants, we fill that in for them. We give them a causal story. And in this particular scenario, the causal story is that research has shown that when people are surrounded by faces and eyes, or watchful others, that makes them think more about their reputation and what they look like to others. And that seems to trigger more pro-social or generous behavior. And this idea came from the fact that there actually are psychology studies that have found that when you put people in a situation where they feel like they’re being watched, they are more likely to behave generously. So we tell people that story, and, you know, the data haven’t changed really.
Annie: Right, no.
Tania: The strength of the correlation between visiting the portrait gallery and making an optional donation is whatever it was. But now you have this sort of causal mechanism that allows you to link visiting the portrait gallery to making the optional donation. Now people are much more likely not only to say that there is a real causal relationship there, but also to think that there’s an explanation. Now if you see somebody give an optional donation and you say, why did that person make an optional donation? Well, because they visited the portrait gallery. People now find that much more satisfying.
And so that experiment, I think isolates part of what’s going on in these real world cases. In our experiment, we control whether or not people have this background information that would allow you to tell a causal story connecting the two variables. But in the real world, people have lots of information they come to data with. And they use that information to rightly or wrongly connect these dots between—
Annie: Right.
Tania: —various variables, and that makes it very easy for them to go straight from a description or a correlation to something like an explanation, in some cases where it might not be warranted.
Annie: How do you get people to take that extra step and say, okay, that feels really good, but like, this is going to drive my decision-making going forward, so I should actually check on it. I should test it.
Tania: Part of the way I think about this is that people do think about this through the lens of explanation, and that means that if you want them to scrutinize one particular explanation, you have to have made an alternative explanation salient. And so if you point out alternative ways to account for the same data—you know, for example, maybe the portrait gallery is just the best part of the museum, and so people are more motivated to make an optional donation because they’ve had a better museum experience. Or maybe the portrait gallery really needs basic maintenance. Right? The floor is really scuffed up and so on, and so people make an optional donation because, having visited the portrait gallery, they realize that the museum needs money. There’s a dozen more of these we could come up with, right?
And so I think when those are made salient to people, that gives them a representation in a form that they’re able to interrogate. They can now think to themselves: oh, okay, well, here’s one explanation, that it’s about the eyes. Another explanation is that it’s about the fact that the floor was really scuffed up, and so now I think the museum needs money. How could I differentiate those? Well, now you realize you need more data.
Now, how can you use this to get people to be better reasoners? I haven’t done studies where I try to get people to generate or interrogate these alternative explanations, but there are some studies that systematically have people consider alternatives, or consider the opposite of their original perspective. And just by prompting people to do that, they are often able to come up with alternatives themselves. Then once they do that, you can often do some amount of, like, de-biasing, moving them away from whatever their initial position was. And part of what I think is so interesting about that strategy is that all the experimenters had to do was tell people to consider the opposite. They didn’t have to actually give them the alternatives to consider. And that means that this is a kind of strategy that all of us can get better at using spontaneously.
Annie: That’s amazing. I particularly love when we know that there’s some sort of issue, right? Because we have the upside to explanation, but then there are these strong downsides that have to do particularly with bias or what feels good or what we want to be true about the world.
Tania: Mm-hmm.
Annie: And to actually have a concrete strategy to try to counteract that, I find very satisfying. So now let’s take it one step further and I want to just kind of ask you about conspiracies and sort of conspiratorial thinking as explanations, because I think that there’s a lot of things in life that do have an explanation, that do have a causal explanation. We like to know why things are happening, but sometimes there’s this whole category of things where the explanation is: that was random. Like if you have enough things happening in the world at once, sometimes things are going to happen at the same time. And there’s going to be no causal relationship between the two things at all. It’s literally random.
And it feels like, given what you’re talking about and this drive toward explanation, that when you talk about explanatory satisfaction, random is not satisfying for anybody. So first of all, I just want to check that intuition on my part, and then secondly sort of ask you, well, what happens when this drive to an explanation goes wild, and now all of a sudden you end up in, you know, conspiracy thinking? Right?
Tania: Right. So yes, I share your expectation that randomness is just very unsatisfying, and in fact, people often have a hard time even understanding what it would mean for events to be random and accepting them as random. I think one way to make sense of that is that if having an explanation helps you identify the patterns or structure in the world that would allow you to make predictions about the future, to control the world and so on, we’re much better off erring on the side of thinking that there might be some structure in randomness than failing to appreciate some structure that really is there, right?
So if there really is some structure in the world and we say, ah, it’s just random, we can ignore it, we’re missing out on a real opportunity to identify structure in the world that we could exploit for predictions and to better inform our decisions and so on. If we make the other error, where something is in fact random but we impose some sort of interpretation on it, then at least in a lot of cases, the worst case is that we’ll have wasted a little bit of time thinking about it.
I think the case of conspiracy theories is really interesting and also kind of subtle and complicated because there’s one way in which conspiracy theories push a lot of our explanatory buttons, but there’s other ways in which they don’t, right? So the ways in which they do is that they take a lot of things that might be seemingly unrelated or seemingly random, and they unify them under some grand conspiracy, and they also take all of those events and they typically explain them by appeal to one person or one entity trying to accomplish some goal.
Annie: Right.
Tania: It pushes away from randomness and towards an explanation that is simple in the sense that it basically fits the schema of an agent, or agents, satisfying a goal. But in other ways, conspiracy theories can be really convoluted, really complicated, really ad hoc.
Annie: I was just thinking about like a line that I use a lot, which is that difference between could it be true and must it be true.
Tania: Mm-hmm.
Annie: And the different standards that we apply to explanations that conform to our beliefs versus explanations that don’t conform to our beliefs. If there’s an explanation that doesn’t feel good to me, say politically, for example, then I am going to say, well, does that have to be true? Right. I’m going to all of a sudden become a scientist and be like, well, what would be the alternative explanations and how could I disprove that? And like all of a sudden you’re skeptical. Whereas if it does fit with your prior beliefs, you just are like, could that be true? Sure. And you’re kind of done. And I’m just thinking about that as kind of a distinction.
Tania: Yeah. I think that’s really interesting. It reminds me of a phrase I love that comes from Alfred North Whitehead, the philosopher and mathematician. He’s talking about philosophy of science and the search for simple explanations in science, and he says that you should seek simplicity, but distrust it.
Annie: Mmm.
Tania: And I think there’s a more general idea there. There’s some real value to seeking the explanations that we find satisfying. That does a lot of work for us in terms of our learning. And often it’s what leads to discovery and allows us to create the kinds of mental representations that are very effective for navigating the world.
But there’s also a way in which we have to recognize that the explanations we come up with don’t match reality exactly, either because we have these preferences and motivations or because of our cognitive limitations. Our explanations are ultimately not identical with reality; they’re sort of our models of reality. And so I’m tempted to recast your point in a variant of Whitehead and say: seek satisfying explanations, but distrust them. You kind of have to always have this second layer of scrutiny, so that it shifts us from, you know, could it be true to must it be true, even in the cases where we want it to be true.
Annie: Yes. No matter what, you always have to think, must it be true? Like—
Tania: Yeah.
Annie: Well what if this disagreed with me?
Tania: Mm-hmm.
Annie: Right? What if this didn’t confirm my prior beliefs? What if—then how would I be treating this? Like I’d be writing a dissertation about why it’s wrong.
Tania: Yeah. Yeah. What would my extremely intelligent enemy say? Right?
Annie: That’s a good way to think about it. My super smart enemy, even if I think all my enemies are dumb.
Tania: Yeah.
Annie: But what if one of them were really smart?
Tania: Yeah.
Annie: All right, so now I want to shift a little bit to talk about, like the benefits, right?
Tania: Mm-hmm.
Annie: Like we’ve been talking about, okay, there’s this danger and it’s really bad and all this, but let’s think about the benefits. And one of the things that I’ve heard you talk about that I think is really fascinating is, you know, explanation is the way toward learning.
Tania: Mm-hmm.
Annie: And I think that feels obvious, but you know, I think if you ask most people like, what does learning mean? Right? They would be like getting new information.
Tania: Mm-hmm.
Annie: That you learn new stuff. And I think they generally would think facts, right? You learn what the circumference of the earth is or how the tides work or whatnot. But you actually talk about something that I think is really interesting, which is that even just the act of attempting to explain something can cause learning even when you haven’t learned anything new.
Tania: Yeah, that’s right. So let me start with the version that I think everyone probably has experience with, and that’s a phenomenon that’s called the self-explanation effect. And what this is, is that often when you explain something to yourself or to somebody else, you come to understand it better, even when you don’t get new information.
A lot of parents have this experience where their children will ask very everyday factual questions. You know, what causes lightning and so on. Why are there storms? And the parent will start to explain and, in the course of trying to explain it to a five-year-old, realize that there are these huge gaps in their own understanding. Another striking example of this is thought experiments.
Annie: Mm-hmm.
Tania: Right? So normally we think about science progressing by going out and doing real experiments. But at least some of the time in the history of science, there have been these cases where it seems like there’s been real advance from a thought experiment. You know, Einstein’s famous for having several very influential thought experiments that seem to advance the field.
So here’s an example that sort of illustrates the phenomenon. So if I asked you right now to close your eyes and tell me how many chairs are in the room that you’re in—
Annie: Well that’s easy because I’m in a little tiny room.
Tania: Alright, we’ll make it a little bit harder. How many chairs are in your dining room?
Annie: Oh, that’s an interesting question. Let me think how many people sit on each side. There’s 1, 2, 3, 4, 5 … 12.
Tania: 12. Okay. And, and how did you get to that answer?
Annie: I looked at my dining room.
Tania: All right, perfect. Okay, so you didn’t already have the pre-compiled answer 12.
Annie: No.
Tania: But you had a different mental representation that you could do something with in order to extract the number 12, right? You created something like a mental image. You went through a process of, like, counting involving that mental image. And so that’s a process of what is sometimes called rerepresentation, right? You had a representation in one format, and you applied a process to it, in this case counting, in order to extract a representation that was in the right format to answer my question.
I think when we look at concrete cases like that, we realize there’s nothing mysterious. There’s no magic. But it’s an example of learning by thinking. Before I asked you, in some sense, you knew how many chairs you had in your dining room and in some sense you didn’t know. And as a result of going through that process, you were able to rerepresent the information you had in a way that made it available to this new cognitive process. Now you can give me a number when I ask you.
Annie: You know, I teach effective decision-making, judgment and decision-making. I also used to teach people poker. And what I found in both of those processes, but I’ll take poker as the example, is that when I would, say, teach a poker seminar over the weekend and then go and play poker the next week, it would always be, like, the best poker that I played. And part of it was because I would sort of figure out, through that teaching process, that some of the things I thought were wrong.
Like, I wonder, is there an extra boost? Do you get something extra in terms of being able to spot where your explanations might be lacking when you’re having to teach something to somebody else, or explain it to another person, as opposed to self-explanation?
Tania: I think so, yes. So setting aside whether there are studies that have explicitly compared explaining to another person versus not, there are a lot of reasons to think that you would see those differences. And I think there are three in particular.
One of them is just something like accountability. When you’re just explaining to yourself, you know, you can get away with operating at 70% capacity, and when you’re doing it in this higher-stakes setting, maybe you’re really operating at a hundred percent capacity. So it’s not a fundamentally different process. You’re just kind of holding yourself to higher standards. That’s one.
The second is that you’re engaged in a different kind of perspective taking, or theory of mind. You know, I think this is very salient if you explain something to children, but it happens really in all cases: you have to do more cognitive work because you’re not just explaining it to someone who has the exact same background knowledge and background beliefs that you do. You have to explain it to somebody who might not understand certain things. And so you have to unpack that. You might have to think about what’s a good analogy for this particular audience. And that extra cognitive work that comes from the perspective taking can itself be really valuable.
And the third reason is that you get feedback, right? So in the case where you’re explaining to other people, you are getting new information.
Annie: They’re looking confused.
Tania: Exactly, exactly. And I was going to say, sometimes it’s explicit verbal feedback, but sometimes it’s just that. Sometimes it’s that they look confused. Sometimes it’s that they nod, you know, they got that part and they didn’t get the other part. And so I think the feedback is the third part, the new information that you’re getting by virtue of interacting with other people.
And one thing that I think happens very naturally when you’re explaining to other people is this phenomenon that I find super fascinating, which is that sometimes you explain things in more than one way. And I think we’re less likely to do that when we’re explaining just to ourselves. But there’s something really powerful about having more than one explanation for why something is the case. And even better if you can understand how they relate to each other.
One of my favorite examples of this is if you think about something like the Pythagorean Theorem, I think the first time I learned a proof for it, it was like an algebraic proof. It was just sort of—
Annie: Yeah.
Tania: —you know, moving around numbers. But then there’s all these nice geometric proofs that will show you kind of visually in a way that gives you a visual intuition about what’s going on. And if you get a few of those, it’s really cool. It seems like even though you’re not getting any more evidence that the Pythagorean Theorem is true, you’re probably already convinced after the first proof that it’s true. It seems like you’re getting some value out of these additional proofs, and I think part of what you’re getting is an additional kind of understanding.
Annie: You can like, know something is true, but not know it know it.
Tania: Mm-hmm. Mm-hmm.
Annie: Right? Like it’s kind of like, yeah, I did the math, but I can’t see it. And I think maybe that’s what’s happening when you’re teaching, where intuitively you’re giving different explanations because you recognize some people might resonate with one versus another versus another one. And you don’t want them to know it because you told them so.
Tania: Right.
Annie: You want them to know it because they understand. You’ve actually done a lot of work in social cognition, exploring kind of how we reason about other people’s minds and behavior. So can you talk a little bit for me about your work on understanding these sort of social cognitive processes and how it might relate to the types of decisions that people might make or not make?
Tania: Yeah. Yeah. Let me tell you about some of our most recent work, which I think you’ll appreciate in particular because it has a link to probabilistic thinking that I think you’re going to love.
Annie: You’re speaking my language.
Tania: So one of the things that many people have worked on in psychology is what’s sometimes called theory of mind or folk psychology, which is basically the intuitive theory we carry around with us about how other people’s minds work, right? So if I want to explain your behavior, I’m probably going to do it by positing that you believe certain things and have certain desires, and that your behavior follows from those beliefs and desires and so on.
And so what psychology tells us about beliefs in particular is that we use beliefs to do lots of things. One thing our beliefs do for us is serve as our best guesses about what the world is like, to guide our actions, right? So if I really want to get the bar of 90% dark chocolate that I hid in the cupboard, whether I go to cupboard A or cupboard B is going to depend on my belief about where the bar of chocolate is.
But our beliefs are also really important social signals to other people. They sort of tell you what club I’m a member of. They tell us whether or not you and I are on the same side or different sides, whether or not you can trust me. And so there’s kind of this interesting tension where, on the one hand, beliefs play all of these roles which philosophers might call epistemic, having to do with tracking the state of the world, but they also play all of these non-epistemic or social roles for us.
And so we were interested in whether or not in people’s intuitive theory of mind we’re tracking this about other people. Right? So for example, you tell me that you believe that the earth is flat. How am I representing that in terms of my theory of mind thinking about you? Do I think that that’s your best guess about what the state of the world is like? Or do I think that that is a belief that you hold because it plays a social role in terms of your identity?
Annie: Gotcha.
Tania: So are people tracking this sort of difference in the kinds of roles, beliefs, play? And so our hypothesis was that people do track this, not not very explicitly, but kind in kind of subtle ways. Research typically finds that the majority of people think they’re better than average drivers.
Annie: Yes.
Tania: Right?
Annie: Although I do not, I just, I’m one of the few that’s like, no, I’m definitely not better.
Tania: All right. That’s a peculiar belief, right? Um, especially the two of us who know these data should have good reason to think, you know, be good reason to be suspicious of our beliefs. But, you know, suppose somebody had this belief. You could think that that belief is just their best guess about reality. Or you could think that belief makes them feel good about themselves and that the reason, you know, my neighbor, who’s a terrible driver, say, you know, thinks that he’s a great driver is because it makes him feel better about himself. And in that case, I’m attributing a belief to him, but I’m attributing it in a way that sort of keeps track of the fact that I’m attributing to him a belief that he holds kind of this motivational reason that’s not really about tracking the truth.
Annie: Yeah.
Tania: Right? So I think the big distinction is psychology tells us we hold beliefs for lots of reasons. Is it true that our folk psychology, our intuitive theory of mind that’s tracking other people’s way, do we keep track of that so that when I attribute a belief to you, I’m also kind of thinking, and I think it’s one of the truth tracking ones or, and I think actually there’s something else going on there.
And we don’t expect people to be very explicitly conscious of the fact that they might be doing this kind of tracking. And so what we did is try to identify, like what are some kind of indirect signatures of what kind of a belief people are attributing to others? And can we find that they’re systematically differentiating the truth-tracking ones from these others that play these social or motivational or emotional kinds of roles? And the short answer is yes, but I’ll tell you about two of the signatures. And we’ll get to probabilistic thinking, I promise in a minute.
Annie: Okay. I’m so excited.
Tania: So one of the signatures is whether people think it’s more natural to express the belief using the word think or believe. And what we find is that believe goes more with this social-motivational kind of commitments and think is a little bit more just like, no, it’s just your best guess about what the world is like. Okay. So suppose that somebody’s been accused of a crime.
Annie: Okay.
Tania: And Alex thinks that the person who’s been accused is innocent. All right. Alex believes that this person is innocent based on the balance of evidence. They think the person is innocent.
Annie: Okay.
Tania: Okay. That seems very natural to me.
Okay. Somebody’s been accused of a crime. Alex is really great friends with this person. Feels a sense of loyalty.
Annie: Mm.
Tania: Out of loyalty this person thinks or believes that the person’s innocent? Do you feel the pull of believe there? A little bit? Like maybe when the belief is based on loyalty?
Annie: Yeah.
Tania: Yeah. So when the belief is based on evidence, think seems quite natural. And when the belief is based on loyalty, and across our vignettes we have other kinds of social emotional kinds of considerations, believe seems more natural. Now, this is not categorical, right? I mean, either English word is okay in either case.
Annie: Yeah. I mean they’re somewhat interchangeable, but there’s, like, a subtle difference. The semantics are different.
Tania: Yeah. The second signature that we identified that differentiates these types of beliefs is whether people think it’s natural to recast the belief in probabilistic terms.
So let’s go back to this person who’s trying to decide based on the balance of evidence, whether or not somebody’s guilty. They come to think the person’s innocent. In fact, they would say that there is an 85% chance that the person is innocent. Now to my ear that sounds very reasonable to sort of, like, recast the belief in probabilistic terms.
Okay, now let’s look at our second case. Out of loyalty, someone comes to believe that the person is innocent. In fact, they’d say there’s an 85% chance that the person is innocent. Does that seem a little weird?
Annie: Yeah, that seems way too high.
Tania: Okay. But even if you change the probability, I mean, to me, part of what seems weird is that if you have a belief based on loyalty, to then give any probability . . .
Annie: No, any probability is really weird. I’m saying if they—
Tania: Yeah.
Annie: So it’s different. If they said, I’m 85%, I would be like, no, you’re not.
Tania: You wouldn’t believe them.
Annie: No, I wouldn’t believe them.
Tania: Yeah. Yeah.
Annie: Whereas if the person was doing it based on evidence and they said 85—so I’m thinking of a case where you forced them to put a probability on it.
Tania: Yeah, yeah, yeah.
Annie: Right. And if they both say 85%, I don’t believe the second person.
Tania: Yeah. Yeah. Yeah. That’s interesting. Which is, I think in part, responding to this mismatch when a belief is about your best guess about what the world is like, it should be graded, it should reflect the strength of the evidence. It’s very natural to express that in terms of a probability. On the other hand, if you have a belief that’s maybe serving a social function, maybe a motivational function.
Annie: Or based on faith.
Tania: Yeah, that’s based on faith.
Annie: Yes.
Tania: It seems much more categorical. Right? It seems much less natural to express it in this probabilistic way.
Annie: Yeah.
Tania: And so that’s the second signature that we identify that seems to differentiate the way people are thinking about these kinds of beliefs.
Annie: How did you actually test that?
Tania: Yeah, so we’ve done a few different studies, but I think the ones that are kind of the best controlled is we have a vignette where we describe a character who comes to believe some proposition. So, you know, it’s a little bit like the case I gave you, but more detail. Somebody’s been accused of a crime and so on. And then we will, the character will ultimately believe that the person is innocent. We’ll ask people, we’ll have a sentence who will say, you know, Alex blanks that Bob is innocent and they have to fill in thinks or believes. And then we give that sentence back to them and you say, you know, would it be natural for Alex’s belief to be expressed in terms of a probability, like for example, Alex believes it’s 97% likely? Or do you think Alex just believes this as opposed to not believing it, but it’s not natural to express in terms of probability and they do it like that. We—for the probabilistic measure, we’ve also done a version where people classify their own beliefs
Annie: Mm-hmm.
Tania: as what we call binary or probabilistic. So they first tell us whether they agree or disagree with a bunch of statements. We then give them back the statements that they agree with and we say, how would you describe your own belief? You just think it’s true, as opposed to not thinking it’s true? Or you think it’s true with some probability where you think it’s natural to sort of, like describe it in those terms. And you find variation even within and across people in how they do this classification.
Annie: Does this go so far as I think that there are things that I have evidence for that I think are just a hundred percent to be true.
Tania: Mm-hmm.
Annie: Right. But I have evidence for them. Right, like the earth is round-ish. You have to put ish, because it’s not perfectly round. If you say, do you believe that the earth is not flat? I would say yes. And I would also say it’s a hundred percent.
Tania: Mm-hmm.
Annie: That it’s not flat. Okay? So does it go so far as people are willing to say, like for some things, yes, and I’m willing to say it’s a hundred percent. And for other things, even though like if you’re saying binary, that implies a hundred or zero.
Tania: Yeah.
Annie: So can they make that distinction that it’s not like about whether you know for sure it’s true?
Tania: So we do find a relationship between how extreme people’s beliefs are and whether they classify their beliefs is just like all or not what we, you know, what we call binary. So it does seem like in some way those things are kind of confounded. But we can also statistically take out the variation in our data that’s accounted for by strength of belief, and then see is there anything left over that this binary, probabilistic distinction correlates with? And the answer is yes.
So it doesn’t seem to just be capturing something like belief extremity. It seems like there’s something above and beyond that, which something more like, is it natural to have a probabilistic construal in the way that you think about this?
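The statistical move Tania describes, taking out the variation accounted for by strength of belief and checking what is left over, can be sketched with synthetic data. Everything below is illustrative only: the numbers, effect sizes, and variable names are made up and are not the study’s actual data or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: each participant has a belief-extremity score and a
# separate "social/motivational role" score; both nudge them toward
# classifying the belief as binary (1) rather than probabilistic (0).
extremity = rng.uniform(0, 1, n)
social_role = rng.uniform(0, 1, n)
binary = ((0.6 * extremity + 0.5 * social_role
           + rng.normal(0, 0.2, n)) > 0.6).astype(float)

# Partial out extremity: regress the binary/probabilistic classification
# on extremity and keep the residuals (the variation extremity can't explain).
X = np.column_stack([np.ones(n), extremity])
beta, *_ = np.linalg.lstsq(X, binary, rcond=None)
residuals = binary - X @ beta

# If the distinction were nothing but belief extremity, the residuals would
# be noise; here they still correlate with the social/motivational factor.
r = np.corrcoef(residuals, social_role)[0, 1]
print(f"residual correlation: {r:.2f}")
```

In a real analysis a regression or mixed model would do this in one step; the residual approach above is just the most transparent way to show “something left over” after controlling for extremity.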
Annie: Yeah, that’s what I was getting at because like my intuition was that it would be. Like if you asked me the earth not being flat. Can you put a probability on that? I would be perfectly happy to. I would consider that a probabilistic thing.
Tania: Yeah. And that is what we’re trying to get at. And I mean, and just to give you another, like kind of intuition here, I suspect you’re somebody who probably is happy to put a probability on just about everything. But some of the things—
Annie: True.
Tania: Some of the things that we find people are less inclined to think about probabilistically are moral claims. So, for example, take the claim that eating meat is immoral. That’s not a claim that all of our participants agree with. In fact, the majority don’t agree with it. But a lot of people think there’s something a little bit weird about saying that there’s a 70% chance that eating animals is immoral, right? So when you get to claims that are moral, for many people, not everybody, it just sort of seems like probability’s not the right way to express what you might think of as your uncertainty. And that’s something that we see in our data.
Annie: Yeah. You know, so when my book Quit came out, which is about getting people to think about, like just this distinction between like, should you stick to something or should you not, and then maybe make a plan for that. Right? I had a lot of people telling me, well, obviously this doesn’t apply to a decision about marriage, who to marry.
Tania: Mm.
Annie: And I was like, what? Why not? Right? “Well, that isn’t probabilistic.”
Tania: Yeah.
Annie: What, why not? Right? Like, so I feel like, you know, you’re thinking about morality, which I think for people sort of feels like the truth, right? But if you’re thinking about, like choosing a partner, that the person that you choose to marry is really a bet. I could describe that bet, right? That that person compared to other people that you might meet in the future, given your time constraints, is going to help you to achieve the goals that you’re trying to achieve through marriage, more than other options that might be available to you. But that that’s probabilistic, that they’re actually going to do that. That’s—people get divorced, right? Like, I mean, it would be probabilistic even if people didn’t. But the people really were very resistant to thinking probabilistically in that realm.
Tania: Yeah.
Annie: You know, and I sort of used to say there’s some things, and maybe this is kind of getting into what you’re getting at, that people just prefer to be magic.
Tania: Oh, that’s interesting. I think there’s an explanation that doesn’t require going all the way to magic.
Annie: Yeah, because I would love to hear it. I don’t, I don’t, I don’t know what it is.
Tania: So we have a handful of studies. This is from a paper that was published a few years ago where we looked at something that’s very similar to what you’re describing. We looked at people’s preferences for decisions that are made on the basis of deliberation or intuition. And so the decisions that were made on the basis of deliberation, it’s basically a process of weighing pros and cons, thinking about the reasons. We didn’t explicitly have probability or forecasts built in there. But I, I feel like that kind of goes with that bucket.
Annie: It is built in even if it’s not.
Tania: That’s right. That’s right. Yeah. And so we compared that to decisions that were made by people’s sort of gut reactions or intuitions. And for lots of things people think you should decide on the basis of deliberation. So for example, if you’re deciding on a retirement plan or which stock to invest in, or which medical treatment to follow, which computer to buy and so on. But for romantic decisions . . .
Annie: Ooh.
Tania: As well as some others, various aesthetic decisions, and so on, people think you should go with your gut. And in our studies, part of what we identified that seems to predict the judgment that you should go with with your gut for certain decisions is whether or not it’s a decision for which something like someone’s authenticity matters.
So when people want a decision that is made authentically, whatever that means, I should say, because unpacking that is complicated, it seems like they care about it coming from something like intuition or your gut, not magic, I think, as opposed to it being something like a calculated decision. And one way to maybe think about it is whether or not it would seem wrong to outsource your decision making to a very informed advisor. So if I have to decide on a retirement plan or a medical plan, and I tell you, you know, the way I decided is that I found this expert and I gave them all of the information that I think is relevant, and then they thought about it and they told me what to do, that seems very reasonable, I think, for a retirement plan or an investment portfolio and so on.
But what if I told you my boyfriend proposed and I wasn’t sure whether to say yes, so I shared everything I know about the relationship with this expert, and they told me what to do and that’s why I’m accepting this marriage proposal. That seems a little off to most people.
Annie: Yeah.
Tania: And I think part of the reason it feels off is because in some way the decision should come from me. Like that’s not the kind of decision you can just outsource. And so I think people have the sense that intuition reflects something deeper about you and who you really are that for some decisions is important.
Annie: So I don’t want to be remiss in not asking you about AI, because it’s all the rage. And I know that at Princeton there’s some really cool, like, interdisciplinary work that’s happening. So I’d just love to get a flavor and hear sort of how you’re involved and kind of what’s happening at Princeton in terms of AI.
Tania: Yeah, absolutely. So one of the newer developments at Princeton is that the university has started the Princeton Laboratory for Artificial Intelligence. In some ways it’s intended to be something like an incubator or facilitator for lots of interesting interdisciplinary AI connections across campus.
I mean, maybe I’ll just give you two examples that I know of because I’m involved in them, but really there are many, too many to name. So one has been an incredibly fun interdisciplinary project that involves somebody from a human-computer interaction background, somebody from a computer science background, somebody from a philosophy background, and then me as a cognitive scientist slash psychologist.
And the question we’ve been tackling is what it means for an AI system to possess understanding. So you’ll hear all these claims made, right? Like, do large language models understand language? And people will argue about that, yes and no. Or, you know, if you’re interacting with ChatGPT, does ChatGPT understand what you’re asking? Does ChatGPT understand you as a person? And so lots of these claims are being made, but, you know, really in order to assess these claims and be able to say yes or no to whether the system understands, you need to be able to say what it means for a system to understand.
And this is incredibly complicated in the human case as I’m sure you can appreciate, right? What does it mean for a human to understand something? And that’s a question that in various ways, I think people in psychology and education have been thinking about for ages.
Annie: Yeah. Yeah.
Tania: And that philosophers of science and epistemologists have been thinking about too, like, for example, what is scientific understanding comprised of? And so what we do in this paper is basically draw on all of these existing ideas from philosophy and from the cognitive sciences about understanding to think about what it would mean for an AI system to possess understanding. And then depending on what account of understanding you have, that has implications for how you assess understanding.
Annie: Would you say that kind of the default for humans is to believe that the LLMs possess understanding? Or not?
Tania: That’s interesting. I think I’d probably say yes. There haven’t been very many studies that have explicitly asked laypeople about understanding. I think the main place where you see these claims is in headlines and news articles, and to some extent in the academic literature. But there have been studies that have looked at the extent to which people are willing to attribute consciousness, emotions, and other sorts of human-like characteristics to LLMs.
Annie: Which would imply understanding.
Tania: Which would suggest understanding. That’s right. And a decent number of people are willing to attribute conscious experience to large language models. And in fact, people are slightly more willing to do so when they’ve had more experience interacting with them. And so that suggests to me that when most people interact with a system and it, say, gets a bunch of addition problems correct, and you ask, does it understand addition? I think most people will say yes.
Annie: I don’t know if you do yet or not, but do you have just like a TL;DR, like what have you figured out?
Tania: I think it depends enormously on the kind of system, and I think the people who work most closely with those systems and with assessing them tend to be much more hesitant to attribute understanding than what you see in the news headlines and than what I think laypeople are willing to do. So, you know, insofar as there is some generalization to be made, I think it’s that they have quite limited understanding.
And part of the reason why I think we end up with that judgment is because the kinds of mistakes that they make are often very unhumanlike. You know, so even systems that are extremely good in the sense that they get what we consider to be the right answer, let’s say 95% of the time. If you look at the mistakes they make for the 5% that they get wrong, those mistakes will tend to look quite different from a lot of the mistakes that humans would make, at least for many cases. And that suggests that whatever they’re doing, it’s at least not what we think of as human-like understanding. Yeah.
Annie: Right. So do you have an example of a mistake where a human would be like, whoa?
Tania: Yeah. Yeah, so this is from a few years ago, but you get contemporary equivalents. You can have a system that’s extremely good at classifying pictures of objects and saying what they are, you know, that’s a school bus, that’s a duck, that’s a crow, and so on, until you change the image in some slightly arbitrary way: you change the perspective it’s looking at it from a little bit, you add a little bit of random noise in a particular way, you rotate the image slightly. Things that for a human would not really make a difference to their classification, but all of a sudden the system is saying that the school bus is a duck, something very different. You know, the kind of mistake that a human wouldn’t make.
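The brittleness Tania describes, where a tiny, humanly irrelevant change to the input flips the output, can be shown with a toy stand-in. This is a 2-D linear classifier with made-up weights, not an actual vision model; the class names are just labels for the two sides of the boundary.

```python
import numpy as np

# Hypothetical "learned" decision boundary in a 2-D feature space.
w = np.array([1.0, 1.0])
b = -1.0

def classify(x):
    """Label a feature vector by which side of the boundary it falls on."""
    return "school bus" if w @ x + b > 0 else "duck"

x = np.array([0.51, 0.51])            # an input the model gets right
print(classify(x))                    # -> school bus

# Nudge the input a tiny amount in the worst-case direction (straight
# toward the boundary). A human looking at the image wouldn't notice.
eps = 0.03
x_adv = x - eps * w / np.linalg.norm(w)
print(classify(x_adv))                # -> duck
```

Real networks have vastly higher-dimensional input spaces, which is part of what makes such imperceptible worst-case directions easy to find; the toy boundary here only illustrates the geometry.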
Annie: It feels like, as an example, it would be like pretty much every human on earth can tell the difference between a cat and a dog.
Tania: Mm-hmm.
Annie: If you show them pictures. But when you actually try to list out and have them write down, like, what are the differences between a cat and a dog? It’s actually really hard, right? It doesn’t have to do with size. They’re both furry. One meows and one barks. But what is it really? It’s actually quite hard to list it out.
So it sounds like what you’re saying is these things of categorization that are relatively easy for humans, even if they can’t explicitly explain how that’s occurring, like, you know, if it’s an amphibious vehicle, we understand that it’s not a duck.
Tania: Yeah. Right.
Annie: Like it doesn’t matter.
Tania: Yeah.
Annie: It can be yellow, it can be in the water, it can be all sorts of stuff. But like we’re always going to understand that’s not a duck. And it sounds like it’s getting into that thing of like there’s something going on that we may not be able to explicitly write down what the differences between them are, but we can identify one thing versus another.
Tania: That’s right. And if you think about the kinds of disagreements you have with other humans. Often there are disagreements where you have a different judgment, but I don’t think we are fundamentally different. Right. So, you know, I can imagine us finding some picture on the internet where you think it’s a cat and I think it’s a dog.
Annie: Or I think the dress is gold and you think it’s blue.
Tania: Yeah, sure. Fair enough. But if we show that cat-dog image to a particular classification system and it tells us that it is love.
Annie: Yeah. Then you’re like, what?
Tania: Then we’re like, whoa, something else is going on here. Right. So part of what’s interesting is that sometimes, with these systems that have extremely good performance, I mean really impressive performance, there’s still some way in which what they’re doing is very unhuman-like.
Annie: Yeah.
Tania: And I think those cases are extremely interesting because they tell us not just something about how the systems are working, which can be hard to figure out and, you know, people are trying to figure out, but also something about how human cognition works because whatever we’re doing is not quite the same as what we see in some of these systems.
Annie: Yeah. You know, it’s interesting, like, it makes me think about like memory storage. Right. So like a computer, it’s like date time.
Tania: Mm-hmm.
Annie: Stamped. And for humans, storage is much more contextual, right? So asking about presents might bring up Christmases, which then might make me think about Thanksgivings, which might make me think about my mother, which might make me think about whatever. It’s like everything’s kind of connected in this contextual way, and retrieving things by date and time is not the same. Even though a computer might retrieve something and I might retrieve something, the underlying processes for, like, how that retrieval is occurring are really different, and you could expose that difference. And it sounds like you are saying something similar here: that you may overlap in kind of what the end result is, but whatever those underlying processes are, one thing is sort of firmly human.
Tania: Mm-hmm.
Annie: Right. And the other thing is, something else is going on that we can sort of feel is not human.
Tania: Yeah. Yeah. I mean, with the extra caveat, and I know you appreciate this, but I think it’s worth saying: there are lots of different kinds of AI systems, right? So AI is not one category of systems. Even if we restrict ourselves to large language models like ChatGPT, there are still lots of versions and variations and so on. And so I think it’s hard to make generalizations about all of these, and across AI systems we’re going to see very different kinds of systems. So it’s not that, in principle, you couldn’t create an AI system that’s going to have the same kind of associative retrieval structure that you’re describing for humans.
Annie: Right.
Tania: But I think it’s extremely informative, both in understanding AI and understanding ourselves, to think about the different ways that those systems could be structured, to figure out which corresponds to the way that humans are working. And then I think the broader question is to figure out under what conditions each of those is best. Right? There’s going to be some things for which the human form of memory retrieval is fantastic, and then some things for which it’s not great, right? Like if I wanted to know what you had for breakfast on, you know, July 3rd, 2006, your memory is probably not great for that. Right? But this—
Annie: No, that’d be terrible.
Tania: But the system you described would be. Right. So the point is to have a characterization, not just a description of the, you know, similarities and differences: can we really understand the relationship between those kinds of, like, architectural or structural features of these systems, and the kinds of intelligence or kinds of problems they’re going to be really good at, versus the cases where they’re going to make systematic errors?
Annie: Gotcha. Gotcha. That’s so fascinating. So I just want to ask you just super fast, like lightning questions. The first one is just what decision-making tool or idea or strategy would you want to pass down to the next generation of decision makers?
Tania: Gosh, that’s a great question. I’ll tell you one that I find myself using all the time. Even though it’s not one that comes from my own research and that is that whenever I say yes to anything, I realize I shouldn’t be thinking about it as yes versus no. It’s, yes, I’m going to do this, versus what’s the alternative that I would be doing if I said no? Right. There’s always an opportunity cost when you say yes to something, and I have found it enormously valuable to reframe all of my decisions in those terms. It is never yes/no, it’s always this versus an alternative and being explicit about—
Annie: Versus what I could be using that time for.
Tania: That’s right. That’s right. And usually multiple alternatives. And I have found that to be really helpful in actually making it easier to say no as well as prioritizing my own time.
Annie: Yeah. You know, I love that as an answer. I think you’re the only person so far who’s given that as an answer because opportunity cost neglect is like such a huge thing. Is there any book that you would recommend for listeners who really want to improve their decision-making?
Tania: I would definitely recommend Algorithms to Live By. For listeners who aren’t familiar with it, it’s a book written by Brian Christian and Tom Griffiths that looks at tools from computer science and statistics for how to think about various kinds of decision problems, and then applies them to everyday cases. And it’s a book that’s quite accessible even for people who don’t come at it from a computer science background.
Annie: One of my favorites. What do you think the impact on society will be when the Alliance for Decision Education succeeds in its mission to ensure that decision education is part of every K-12 student’s learning experience?
Tania: I think that would be phenomenal. I mean, I think we’re at a historical moment where we are seeing so acutely why it is so important for people to have basic critical thinking skills, basic evidence-based decision-making skills. I think it’s proven to be a really difficult empirical problem to figure out the best way to teach those skills. And that’s part of why it’s so important for people to be thinking about it, doing research on it, trying out different approaches to education so that we can figure out what really works and helps people make the best decisions, both locally in terms of their own lives, but all the decisions that we make that affect society and that have global consequences.
Annie: Thank you. So I love that answer. Just like, huge effect. So if listeners want to go online and learn more about your work or follow you on social media, where would you send them?
Tania: The best place to go first is my personal website, which is my full name, tanialombrozo.com, and from there you can go straight to my social media. You can join a mailing list to find out when my book comes out and you can find out more about my research.
Annie: I hope, and I assume that after this podcast, everybody’s going to be running to buy your book because this has been an amazing conversation. For listeners who want to check out any of the things that we’ve mentioned in this conversation, books, whatever, those are going to be linked in the show notes, and I am just so excited that you agreed to come and have this chat with me. This has been an incredible learning experience for me. I really appreciate it and I know that the listeners feel the same way. Thank you.
Tania: Thank you. This has been a lot of fun.
Show notes
Books
Quit: The Power of Knowing When to Walk Away – Annie Duke (2022)
Algorithms to Live By: The Computer Science of Human Decisions – Brian Christian and Tom Griffiths (2016)