6 Comments

My favourite part of the podcast was when David challenged the idea that we can know probabilities of certain events occurring when we make decisions & your response that these expected value / probabilistic calculations have to occur "under the hood" and we can improve our decisions by making the probabilities transparent.

This made me realise I fundamentally disagree with you on how minds work. I am very much in the camp of David Deutsch (& Karl Popper), who argue that knowledge (& decisions based on it) does not grow incrementally in probabilistic terms based on updated priors (Bayes), but by conjecture and criticism, in which we take guesses and refute them or accept them as the best explanation available (Deutsch/Popper).

I would love to hear you debate this with someone like David Deutsch or Brett Hall, who promotes David's ideas & has talked and written extensively on the problems with Bayesian epistemology.


I'm not sure I really have a strong disagreement. I think a few things can be true:

1) Under the hood, we certainly think probabilistically.

2) Any probability that we conjecture will be just that: our best educated guess given what we know combined with our experience (and hopefully anchored to a base rate to start).

3) Those probabilities (and rationales) ought to be made explicit so that we can critique and refute them.

So I think this is a blend. Any decision process that I run is specifically meant to increase the dispersion of opinion so one can see clashing models and hash them out. This does not exclude the core premise that all decisions must be probabilistic forecasts, though, does it?
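
To make #2 and #3 concrete, here is a minimal sketch of what writing the probabilities and rationales down might look like. The options and numbers are invented for illustration, not anything from the podcast:

```python
# Toy example: two options with explicit probabilities, payoffs, and
# rationales (all numbers invented), so the expected values and the
# assumptions behind them are open to critique.

options = [
    {"name": "keep querying editors", "p_success": 0.05, "payoff": 100.0,
     "rationale": "base rate for cold submissions is low"},
    {"name": "self-publish now", "p_success": 0.60, "payoff": 20.0,
     "rationale": "near-certain, but a smaller upside"},
]

for o in options:
    o["ev"] = o["p_success"] * o["payoff"]
    print(f"{o['name']}: EV = {o['ev']:.1f} ({o['rationale']})")

best = max(options, key=lambda o: o["ev"])
print("Highest expected value:", best["name"])
```

Once the probability, payoff, and rationale sit side by side, there is something concrete for the clashing models to critique and refute.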

Let me know if I am understanding your point. Thanks!!


Thanks Annie. I think my intuition on how minds work is different from the one you express in #1, "under the hood, we certainly think probabilistically". #2 and #3 flow from that statement and are logically consistent with #1, but they only work if #1 is correct.

Two nuances:

1. I think we agree that under the hood we create options about the world and about what to do, and that we then criticise those options in our heads based on their explanatory power. Teams of rivals battle it out, and the winning idea informs our decision. However, the way we adjudicate which option to choose is not by running a max function of numerical likelihood x value of consequence over each option, which I think is what you suggest. What I think we actually do is use a (sub)conscious lens to pick the option that looks best given our values, the circumstances at hand, and so on, without any numerical / expected-value calculus. The reason David kept going after 200 tries is who he is as a person, the confidence he had in the quality of his book, the fact that he was willing to live with the opportunity cost of chasing editors rather than focusing on synaesthesia, perhaps a desire to prove to his parents that he could be an accomplished author, etc. None of this can be captured neatly, or at all, in numerical likelihoods; it is captured in explanatory theories.

2. Numerical probability estimates like "Trump has a 55% chance of being elected", "there is a 20% chance of rain", "there is a 10% chance of the war ending this year" or "there is a 0.1% chance of an asteroid hitting the earth" have limited value. Firstly, because the base rates are made up, typically by "experts" and typically using Gaussian distributions in complex, fat-tailed domains; incremental credence based on new data stacked on top of made-up numbers is of limited value. Secondly, because epistemologically they are empty statements about the real world: things don't happen with X% chance in reality, they happen or they don't.
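
To illustrate that first nuance with a toy calculation (all numbers invented): updating the same evidence onto two different made-up base rates gives very different posteriors, so the Bayesian update is only as trustworthy as the prior underneath it.

```python
# Toy calculation (numbers invented): the same likelihood ratio (0.8 vs 0.2)
# updated onto two different "expert" base rates yields very different
# posteriors, so the update inherits whatever arbitrariness the prior had.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

for prior in (0.01, 0.30):  # two made-up base rates for the same event
    posterior = bayes_update(prior, 0.8, 0.2)
    print(f"prior {prior:.2f} -> posterior {posterior:.2f}")
# prior 0.01 -> posterior 0.04
# prior 0.30 -> posterior 0.63
```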

I'm riffing here, but I think how this view cashes out in a decision-making model for me is:

The starting point is how much I care about the outcome of my decision, and why. If I don't care that much, I allow for a lot of randomness. If I care a lot, then the next thing I assess is whether I can live with the worst-case scenario occurring (as far as I can imagine it) and what I can do to protect against that outcome. Once I've done that, I again allow for a decent amount of experimentation/randomness. There is a massive amount of "I don't know that I don't know & I cannot possibly know" in front of me, which will invariably throw off any carefully curated plan based on expected-value calculations. "Avoid catastrophe and experiment" is a good summary, I think.
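
If I had to sketch that heuristic, it would be something like the following (structure and names invented for illustration; note that no expected-value term appears anywhere):

```python
import random

# Sketch of "avoid catastrophe and experiment" (structure invented for
# illustration). No numerical expected-value calculus appears anywhere.

def decide(options, care_a_lot, can_survive_worst_case, protect_downside):
    if not care_a_lot:
        return random.choice(options)      # low stakes: allow randomness
    if not can_survive_worst_case:
        return None                        # possible ruin: walk away
    protect_downside()                     # hedge the worst case first...
    return random.choice(options)          # ...then experiment freely

# e.g. decide(["path A", "path B"], care_a_lot=True,
#             can_survive_worst_case=True, protect_downside=lambda: None)
```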

(Btw, podcast transcripts posted on Substack are a great idea. They made it easy to revisit some of the points made in live conversations and understand them better.)


This requires a much longer answer but, in short: however we experience it, our neurons fire probabilistically, and there is no way to evaluate an option without assigning some probability to it. I think this is partly why, from a natural-language perspective, our explanatory narratives are rife with words like probably and likely and possibly and maybe. These are all probabilistic terms. How can we choose an option without considering whether some outcome is low or high probability, or somewhere in between?

So perhaps we do have a fundamental disagreement. I highly recommend the book Everything Is Obvious by Duncan Watts. I think his thoughts on common sense and intuition are fantastic.


I have now read Duncan's book "Everything Is Obvious". Decent stuff on the issues with common sense/intuition, historical determinism, and the problems with using the scientific method to understand history.

Not sure, however, how it bridges our disagreement about using specific numeric probabilities to guide more rational decision-making.

As far as I understand it, he argues against any forecasting in complex social systems. It is true that he also argues against individual storytelling, since we are excellent justification machines after the fact, but I didn't take away from his work that adding things like "a 53% chance that it will rain" solves that problem.

What am I missing?


We can choose an option based on its explanatory power. If it's cloudy outside and you are in London and it's winter, it's wise to take your umbrella. Not because there is an 80% chance of rain, but because you know that clouds + London + winter could mean rain, and in case it rains you don't want to be caught without an umbrella. Or perhaps you love walking in the rain without an umbrella and you don't care about getting wet. It's the explanation that matters. In addition, we would be guessing when assessing both the likelihood of something happening and the value of that something; hence, I can't see why we would consider the resulting expected value reliable in any way.
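
To make the contrast concrete, here is a toy sketch (conditions invented): the choice runs off the conjecture "clouds + London + winter could mean rain" plus what I value, with no 80% anywhere.

```python
# Toy sketch (conditions invented): the choice runs off an explanatory
# conjecture plus what I value, with no numeric probability anywhere.

def take_umbrella(cloudy, city, season, mind_getting_wet):
    could_rain = cloudy and city == "London" and season == "winter"
    return could_rain and mind_getting_wet

print(take_umbrella(True, "London", "winter", mind_getting_wet=True))   # True
print(take_umbrella(True, "London", "winter", mind_getting_wet=False))  # False
```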

In any case, it would be great to explore this point with someone on your podcast, as I think it gets to the core of a lot of consequential decision-making: the precautionary principle, the pitfalls of expected value in complex domains, etc. I think David Deutsch would be a good interlocutor for that. Or Brian Klaas, a political scientist also focused on complexity science, with a fantastic book called Fluke (he also has a Substack). Or, of course, Taleb.

I will check out Duncan Watts' work, as I'm not familiar with it. Thanks for the recommendation.
