You’re Wrong About Nearly Everything — Here’s What To Do About It

Why you should embrace a philosophy of radical uncertainty

Matthew Born
9 min read · Sep 4, 2022
Photo by Justin Luebke on Unsplash

We live in a time of certainty. The public square is no longer a place for debate, but rather an arena for screaming into the void. Nuance has been excised from the conversation; from all conversations.

A dozen converging trends have created an information landscape that incentivises extremity and certainty. Information has become performative, opinions a way of demonstrating loyalty and enforcing conformity.

Of course, demagoguery isn’t exactly a new phenomenon. The flowering of Athenian democracy was a tumultuous time of popular mobilization that easily lapsed into tyranny. The Founding Fathers deliberately constructed American political institutions to mitigate the power of the populace.

“It has been observed by an honorable gentleman, that a pure democracy, if it were practicable, would be the most perfect government. Experience has proved, that no position in politics is more false than this.” — Alexander Hamilton

They recognized implicitly that humans have a tendency to conflate certainty with the truth. This quirk of our psychology can be both mitigated and exploited, and history sketches a pendulum that swings from one to the other. With information as uncontrolled and accessible as it is — the internet provides certainty on demand — the pendulum has decisively swung back towards exploitation.

Certainty is Philosophically Untenable

Certainty is dangerous because it tricks us. It seems “truthy”. We don’t have the time or, often, the knowledge to assess claims on their content, so we default to examining their delivery. The mind loves a heuristic. We think we’ve shown great insight in intuiting the truth of something, but our intuitions stem from how confident the deliverer sounds, how authoritative they seem, and how good-looking they are.

We should be wary of certainty in others because it exploits our biases. But beyond that, it’s philosophically untenable. We can’t ever be certain a belief or a theory is right. This is the Problem of Induction, or as Nassim Taleb coined it: The Turkey Problem. As Taleb explained it in The Black Swan:

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race ‘looking out for its best interests,’ as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

This beautifully demonstrates the fundamental asymmetry lurking at the core of knowledge. It only takes one observation to the contrary to disprove a belief, but no matter how many confirmatory pieces of evidence you gather, you can never prove it.

Each piece of evidence the turkey gathered confirmed its belief in human benevolence…until it didn’t.
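The asymmetry can even be sketched in a few lines of code. This is a toy illustration, not anything from Taleb or Popper — the function and the turkey’s “observations” are entirely hypothetical:

```python
# Falsification asymmetry: a universal claim ("I am fed every day")
# can be refuted by a single counterexample, but no number of
# confirming observations ever proves it.
def evaluate(claim_holds, observations):
    """Return 'refuted' if any observation contradicts the claim;
    otherwise 'unproven' -- never 'proven', however long the list."""
    for obs in observations:
        if not claim_holds(obs):
            return "refuted"
    return "unproven"

# A thousand days of feeding leave the turkey's belief merely unproven...
feedings = ["fed"] * 1000
print(evaluate(lambda o: o == "fed", feedings))              # unproven

# ...and one Thanksgiving eve refutes it outright.
print(evaluate(lambda o: o == "fed", feedings + ["axe"]))    # refuted
```

No matter how long the list of confirmations grows, the best the function can ever return is “unproven”.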

This philosophical framework — first elucidated by the Austrian-born philosopher Karl Popper, popularized by Taleb, and built on by the physicist David Deutsch — forms a Theory of Knowledge that delineates the limits of knowledge itself. It constrains the boundaries of our certainty to a few, limited tenets. The entire edifice of human knowledge is made of misconceptions resting on misconceptions. Despite being our best current explanations of the world, they contain flaws and misunderstandings that will one day be improved upon. And those improvements will also be wrong, “as we advance from misconception to even better misconception.”¹

Human Fallibility and the Arbitrariness of Culture

Popper’s philosophy of knowledge is based on the fallibility of human perception. Over the last few decades, as the field of Psychology was released from the intellectual straitjacket of Behaviourism, our picture of the brain has evolved from that of an observer to that of a predictor. The predictive coding framework for understanding the brain tells us we don’t experience reality, but rather a sophisticated virtual reality model based on our past experiences. This is why confirmation bias is ubiquitous and so pernicious. We literally see what we expect to see.

This begins to explain the baffling diversity of human opinion. All of us are unique, so we all generate different models of the world depending on the particular sequence of events played out in our lives. Because we’re all living in distinct virtual realities, we interpret identical events differently. We actually experience them differently.

But the story gets more complicated. Culture shrouds the chaotic luck of our genetics and the things that have happened to us, homogenising within cultures and evoking remarkable variance between cultures². It profoundly influences the way we think and is an interpretive frame for everything that happens to us. We can only see the world through the lens our culture provides to us, but that culture is completely arbitrary. The vagaries of chance — where and when we happen to be born — are the only things that determine the culture we internalise. It is perhaps the primary mediator for what we think, and it is not in our control.

The upshot of all this is that we should harbour a distrust of certainty. Our perception is deeply biased and our thoughts are channelled by the culture we’re accidentally born into. Our knowledge has fundamental, philosophical limits: we can never be sure that we’re right.

In fact, we can only be sure that we’re wrong.

This necessitates intellectual humility both at the individual and societal level. The only tenable philosophy is one of uncertainty.

A Philosophy of Uncertainty

Imagine how the world would change if such uncertainty were internalised.

It would upend not only how we make decisions, but the very way we see the world. The implications cascade from the way we should arrange society down to the most intimately personal choices.

At a societal level, the fundamental insight is that we should only change society in a way that’s reversible. This recognises our inherent fallibility and allows future generations to correct our inevitable mistakes. I’ve explored that in more detail here.

This essay maps the philosophy onto the personal domain, where things are a bit messier.

There are fundamental similarities in how this philosophy of uncertainty manifests at different levels. It’s not a profound insight to say you should probably avoid doing things that ruin your life. It’s wise to avoid bankruptcy and going to jail. But we can’t live making only reversible decisions.

We signal commitment by making decisions that are difficult to renege on. We get married and have kids. We choose degrees and career paths. We sign business deals and invest. In certain contexts making a decision difficult to reverse is a feature, not a bug. Introducing friction means we don’t default to the easy options when things get hard.

Instead of being something to be avoided, in the personal domain irreversibility can be used as a decision-making tool. The more irreversible a decision, the more care we should take before making it. Jeff Bezos explained a similar framework in his 2015 letter to shareholders:

Some decisions are consequential and irreversible or nearly irreversible — one-way doors — and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that — they are changeable, reversible — they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.
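The heuristic lends itself to a small sketch. Everything here — the function, the parameters, the numeric scales — is an illustrative invention, not anything from Bezos’s letter; the point is simply that deliberation should scale with irreversibility:

```python
# A toy sketch of the one-way/two-way door heuristic: deliberation
# scales with how hard a decision is to undo. All names and numbers
# are hypothetical choices for illustration.
def decision_type(reversal_cost, stakes, threshold=0.7):
    """Classify a decision given its cost to reverse and its stakes,
    both scored in [0, 1]. Above the threshold it's a one-way door."""
    irreversibility = reversal_cost * stakes
    if irreversibility >= threshold:
        return "Type 1: decide slowly, with deliberation"
    return "Type 2: decide quickly, it's reversible"

# Selling the company: expensive to undo, high stakes.
print(decision_type(reversal_cost=0.9, stakes=0.9))
# Trying a new pricing page: cheap to undo, modest stakes.
print(decision_type(reversal_cost=0.2, stakes=0.5))
```

The numbers are arbitrary; the useful habit is asking the two questions — how costly to reverse, how much at stake — before deciding how much deliberation a choice deserves.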

Beyond decision-making, this philosophy can be applied most impactfully to our beliefs. What we believe is core to who we are because we often identify with our beliefs. We see ourselves as a person who believes X, and are proud of that. Over time, we fortify that identity in conversation with the world.

Remember predictive coding: our beliefs are a key input into our brains’ predictive model and are strengthened when we see what we expect to see. This makes us more likely to see things that reaffirm those beliefs in the future, which strengthens them further. This self-reinforcing cycle accelerates us towards certainty as our identities and beliefs are solidified, and it becomes ever more difficult to see the world through fresh eyes.
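This self-reinforcing loop can be caricatured in a few lines. The model below is my own toy construction, not a claim from the predictive coding literature: perception is biased towards confirmation in proportion to belief strength, and each confirmation nudges the belief up:

```python
import random

# Toy model of the belief-confirmation feedback loop: a stronger
# belief makes us likelier to "see" confirming evidence, and each
# confirmation strengthens the belief further. Parameters are
# illustrative assumptions, not empirical values.
def run_loop(belief=0.6, steps=50, step_size=0.02, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        # Biased perception: confirmation probability equals belief.
        confirmed = rng.random() < belief
        if confirmed:
            belief = min(1.0, belief + step_size)
        else:
            belief = max(0.0, belief - step_size)
    return belief

# Any belief already above 0.5 drifts upward on average, because the
# expected change per step is step_size * (2 * belief - 1) > 0.
print(run_loop())
```

Once a belief crosses the halfway mark, the expected drift is towards certainty — a crude picture of why identities solidify rather than soften over time.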

That’s the danger of identities. They trick us into grasping a certain set of misconceptions tighter and tighter. They manoeuvre us into a position where a challenge to these misconceptions seems like an existential threat that must be combated. They make us forget we’re all blind, groping in the darkness at a universe too complex for us to comprehend.

The Path From Certainty

I think there’s a narrow path out of this conundrum. It isn’t possible to be ‘identityless’.³ After all, our identities are simply our dialogue with the world, in constant construction and destruction. I think we must use identity to overcome identity, by cultivating an identity of uncertainty.

To make that concrete, an identity of uncertainty puts intellectual humility at the core of who you are. Your foundational belief is that you are uncertain about your beliefs. This is clung to, sanctified, amongst all the ideas that threaten to dislodge it. It is the eternal caveat to everything you say, a prosaic reminder of your imperfection. If fully internalised, it provides checks and balances on the human tendency towards unjustified certainty — the power of identity, often an ambivalent force, used for good.

The fact we can consciously construct our identity runs counter to a lot of the current cultural narrative. Indeed, it isn’t a trivial task. Identity is complicated and unpredictable, its winding path often only clear with hindsight. But it can be given direction. By deliberately exposing ourselves to certain ideas and environments, our identity naturally evolves in dialogue with what it’s exposed to.

Confirmation bias is the enemy of this effort. Guiding your identity towards uncertainty requires a war against it. There’s a simple but deceptively difficult way to carry that out: embracing the Socratic dialogue of opposing ideas. For every belief you hold, seek out both sides of the argument.

Most contested opinions have smart, reasonable people arguing persuasively for both sides. As Scott Alexander said in his insightful essay Searching for One-Sided Tradeoffs:

Political debates are pre-selected for “if it were a stupider idea no one would support it, if it were a better idea everyone would unanimously agree to do it.” We never debate legalizing murder, and we never debate banning glasses. The things we debate are pre-selected to be in a certain range of policy quality.

However, we generally shelter ourselves from conflicting opinions or only see the most exaggerated, strawman versions of their arguments. Deliberate, systematic exposure to all reasonable sides of an issue disrupts the foundations of our certainty⁴.

Search out the most persuasive, sensible mouthpieces for views you disagree with and consume their content. Look to truly understand their beliefs, and pinpoint areas of agreement and disagreement. Steelman them. Do this consistently, live in the grey rather than thinking in black and white, and only heroic cognitive dissonance will allow certainty to endure.

However, this isn’t a panacea. Certain issues, which touch on deeply held, often unarticulated values, will still evoke emotion and sometimes blind dismissal of any contrary views. As Jonathan Haidt adroitly explored in The Righteous Mind, our moral intuitions precede logic. The feeling of rightness or wrongness comes first, the post-hoc explanations later. These reactions are hard to eliminate, but they can be reframed and used as an indicator of the lapse back to certainty. Instead of encouraging us to double down, they can be used as a trigger for detachment and reflection.

Perhaps this is a lightweight solution to a heavyweight problem. Fighting against our biases and biological foibles is a worthy endeavour, even if it is doomed to fail.

The pull towards certainty is powerful, but each person who pushes against it makes the world a better, more balanced place.

[1] David Deutsch, The Beginning of Infinity

[2] The power of culture explains both how entire cultures can demonstrate surprising levels of conformity despite diverse backgrounds, and how different cultures can hold fundamentally irreconcilable worldviews.

[3] Paul Graham has a classic essay, Keep Your Identity Small, which argues closer to the ‘identityless’ approach (or at least keeping your identity small). I’m skeptical that this can be a successful, or even desirable, approach. I think having a coherent identity is important for a strong sense of self, and helps us interpret the messiness of our day-to-day experience.

[4] Obviously, there are time constraints here. This can’t be done for every single issue because there’s only so much time in the day. But we can use it as a heuristic to identify what things we shouldn’t have strong opinions on. If we haven’t taken the time to expose ourselves to every side of an argument, we’re probably biased and shouldn’t die on a hill for that belief.

