Why We Think Poorly: Cognitive Biases That Cost You
Episode 1

7:26 · Philosophy
Explore the cognitive biases that cloud our judgement and decision-making processes. Learn to identify these biases to avoid the pitfalls they create in your professional and personal life.

📝 Transcript

Right now, your brain is quietly editing reality. In a job interview, on a dating app, deciding whether to click “buy now”—the same hidden shortcuts shape every choice. In this episode, we’ll pull those mental strings into the light and ask: how much is bias really costing you?

You’re not just biased in big, dramatic moments—you’re biased while scrolling reviews, replying to emails, and choosing what to work on first. Think about how you judge a restaurant after reading three glowing comments, or how one bad meeting can tint your mood for the whole week. Those same invisible tilts show up when you estimate deadlines, evaluate colleagues, or react to news headlines. In this episode, we’ll zoom in on a few especially expensive thinking traps: the ones that nudge you to stick with bad plans, trust first impressions too much, and misread risk. We’ll connect them to money, careers, health, and even public policy—then look at practical ways researchers, doctors, and companies have learned to fight back. Not perfectly. But enough to save real time, cash, and regret.

But here’s the twist: the most dangerous errors don’t come from rare, dramatic mistakes—they come from tiny, repeated misjudgments that compound quietly. A slight tilt in how you weigh feedback can stall your career; a habitual overconfidence in deadlines can sink projects; a biased read of health advice can nudge you toward chronic risks. Researchers talk about “System 1” and “System 2” thinking, but in real life they blur together: you skim a report, feel a snap judgment, then dress it up with reasons after the fact. This episode is about that blurry zone—and how to catch yourself in it.

You can feel these distortions most clearly when something important is on the line. Think about three specific moments: deciding whether to stick with a floundering project, getting locked onto a first idea, and misjudging how likely something really is.

First, sticking with bad plans. Sunk cost bias quietly whispers, “You’ve already invested so much—don’t quit now.” That’s how people keep pouring money into a renovation that’s already over budget, or defend a failing product quarter after quarter. Airlines, for example, once clung to unprofitable routes for years because “we’ve always flown this city pair.” When leaders finally ran the numbers as if they were starting from zero, whole route maps changed—and profits rose.

Second, getting stuck on first ideas. Anchoring means your initial number or story clings to your thinking like wet paint. A manager hears “this feature should take two weeks” and everything afterward orbits that guess, even as obstacles appear. In medicine, that anchoring can turn deadly: a patient comes in with “anxiety” in their chart, and subtle cardiac symptoms get misread through that lens. The Harvard data on emergency-room mistakes isn’t about stupid doctors; it’s about smart people locking onto the wrong starting point.

Third, misreading likelihoods in noisy situations. Availability bias makes whatever is vivid feel common. A sensational plane crash gets days of coverage; the quiet safety record of air travel gets none. The result? People fear flying yet text while driving. Governments do this too—funding spikes after a headline-grabbing disaster, while slow-burn threats like antibiotic resistance stay under-resourced.

Here’s where the dual-process dance matters. You won’t—and shouldn’t—turn every choice into a spreadsheet. The real skill is learning to notice “red flag” contexts: high stakes, unfamiliar territory, strong emotion, or group pressure. That’s when professionals shift gears. Some teams run a premortem: “It’s a year from now and this project failed—why?” Others force themselves to check base rates before committing: How often do similar projects actually hit their deadlines? How many start-ups in this niche survive five years?

Your goal isn’t to become a cold calculator. It’s to build a few simple habits that gently pull your snap judgments back toward reality when it matters most.

Consider three quick snapshots.

First: a product team debating whether to launch a feature. Half the room falls in love with the first mockup; the rest quietly sense problems but don’t speak up. That silence isn’t just shyness—it’s social proof and status quo bias nudging smart people to “go with the flow.” Six months later, they’re stuck supporting something customers never really wanted.

Second: a hiring committee reviewing candidates. They linger on one person who “just feels like a good fit.” Instead of asking, “Fit for what outcomes?” they mentally smooth over gaps in the résumé. Now halo effect and similarity bias are quietly steering an expensive, long-term choice.

Third: a public-health team choosing campaigns. They overinvest in flashy one-off events and underinvest in boring, proven nudges—like reminder texts—that the World Bank and others have shown can save millions. The lesson: bias often hides inside reasonable-sounding stories, especially when everyone in the room shares the same blind spot.

Soon your calendar might include “bias maintenance” the way cars get tune‑ups. Teams could run quick “judgment drills” before big calls, just as hospitals rehearse rare procedures. Personal finance apps might flag, “This looks like loss aversion—still want to proceed?” while AI tools highlight where a group converged too fast on one story. The frontier isn’t removing your gut reactions; it’s teaching them to share the wheel with quieter, slower checks.

You won’t catch every bias, but you don’t need to. Progress looks more like learning to “tap the brakes” at key moments than rewiring your whole mind. Think of each small check—asking for one more perspective, pausing before a big yes or no—as adding a new instrument to your cockpit. The view won’t be perfect, but it will be clearer than flying by instinct alone.

Try this experiment: for the next 48 hours, every time you’re about to make a money-related decision over $20 (buying something online, choosing a subscription, deciding to “wait for a better deal”), quickly say out loud your first instinct *and* the opposite choice. Then, before you act, ask yourself two questions: “If I weren’t already doing/buying this, would I start today?” (status quo bias) and “If this weren’t *my* idea/purchase, would I still think it’s smart?” (confirmation bias/endowment effect). Track which choice you actually make and, at the end of the 48 hours, total up how many times your “opposite choice” would have saved you money or future hassle. Notice where your gut was right—and where your brain’s shortcuts almost cost you.
