A government spends billions, not on roads or weapons, but on “making people happier.” A charity claims a few thousand dollars can likely save a life, and compares that to buying a new laptop. Are these cold calculations… or a radically compassionate way to think about morality?
Economists rank policies by “quality-adjusted life years.” Tech companies A/B test tiny design tweaks to see which version keeps users more “engaged.” Psychologists run massive surveys to track how satisfied people feel in different countries. Slowly, a picture emerges: powerful institutions already behave as if happiness can be counted, compared, and optimized.
Utilitarianism steps in and says: don’t just use those numbers for profit or efficiency—use them to decide what’s right. Jeremy Bentham tried to turn pleasure and pain into something like a ledger. John Stuart Mill then argued that reading a great novel and eating a donut both give pleasure, but not the same *kind*.
In this episode, we’ll ask: if happiness can guide moral choices, how far should we take that—and what, or who, might get left out of the calculation?
So far we’ve met a world where happiness can be tallied—policies scored, charities ranked, even lives “valued” in spreadsheets. But step closer and the picture blurs. Not all happiness is loud or easily surveyed: the relief of breathing clean air, the quiet stability of knowing you won’t go bankrupt if you get sick. These don’t always show up in headlines, yet they shape entire lifetimes. When governments and philanthropists act like moral accountants, they face hard questions: whose joy counts, how far into the future, and what about those who never got a seat at the table?
Bentham’s bold move was to say: if morality is about producing good outcomes, then in principle we should be able to **measure** those outcomes. He even drew up a menu of what to consider when weighing pleasures and pains. Not just how *intense* they are, but how long they last, how likely they are to occur, how soon they arrive, how pure they are (free from later suffering), and how widespread their effects might be. The details are dusty, but the ambition is striking: turn ethical hunches into something like a scorecard.
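The scorecard idea can be made concrete in a toy sketch. Everything below is an assumption for illustration: Bentham listed dimensions like intensity, duration, certainty, and extent, but never fixed an exact formula, so the multiplication used here is just one simple way to combine them.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    """One pleasure or pain, scored on a simplified subset of Bentham's dimensions."""
    intensity: float   # how strong; positive for pleasure, negative for pain
    duration: float    # how long it lasts (say, in hours)
    certainty: float   # probability it actually occurs, between 0 and 1
    extent: int        # how many people it affects

def hedonic_score(e: Experience) -> float:
    # A naive aggregation: weight intensity by duration, probability, and reach.
    # Multiplying the dimensions together is an assumption, not Bentham's rule.
    return e.intensity * e.duration * e.certainty * e.extent

# Two hypothetical options: many mild, likely benefits vs. a few intense, risky ones.
option_a = [Experience(intensity=2.0, duration=1.0, certainty=0.9, extent=100)]
option_b = [Experience(intensity=8.0, duration=0.5, certainty=0.5, extent=10)]

total_a = sum(hedonic_score(e) for e in option_a)  # 2.0 * 1.0 * 0.9 * 100 = 180.0
total_b = sum(hedonic_score(e) for e in option_b)  # 8.0 * 0.5 * 0.5 * 10 = 20.0
print(total_a, total_b)
```

Even this crude version shows the ambition and the worry at once: the arithmetic is easy, but every weight in it smuggles in a value judgment.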
Mill thought this was too flat. Some ways of living, he argued, are *qualitatively* better for us—using our minds, forming deep relationships, developing our capacities. His famous claim that it’s “better to be a human being dissatisfied than a pig satisfied” isn’t snobbery for its own sake. It’s a recognition that people who’ve experienced both richer and simpler lives tend to prefer the richer, even when they involve more frustration. That preference, he says, should guide how we weigh different forms of well-being.
This outlook ripples into today’s decisions in quiet but powerful ways. When New Zealand earmarks billions for mental health services rather than short-term tax cuts, it’s acting on the hunch that more fulfilled, less distressed lives are worth the upfront cost. When analysts estimate that certain environmental rules will deliver trillions in health benefits compared to their expense, they’re effectively saying: cleaner lungs, longer lives, clearer skies—all of that counts, even if no one ever knows which particular cough or cancer was prevented.
Critics push back: can you really add up one person’s flourishing and another’s? Is it fair to treat one stranger’s heartbreak and another’s joy as tokens in the same currency? And what happens when boosting “total happiness” seems to justify harming a smaller group?
Modern utilitarians respond in different ways. Some emphasize **rules**—arguing that systems which protect free speech, due process, and privacy tend, in the long run, to make almost everyone better off, even when breaking the rule might look tempting in a single case. Others widen the circle across borders and generations: your decision to donate to an efficient health charity, or to curb emissions today, may benefit people you will never meet, decades from now.
A tech team debates a new notification feature. One version nudges users to take breaks; another keeps them scrolling longer. Both boost “engagement,” but the first might support sleep, focus, and offline friendships. A strict numbers-only view could favor the stickier design, yet a more Mill-style approach asks: which design supports richer lives, even if it trims short-term metrics?
In global health, the same tension appears. You might fund cheap deworming pills that quietly improve school performance for millions, or splashy hospital projects that save visible lives in dramatic ways. Utilitarian-minded evaluators often favor the low-cost, low-drama option, because tiny gains multiplied by huge numbers of people can outweigh a few headline-grabbing successes.
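The "tiny gains times huge numbers" logic can be shown with a back-of-the-envelope comparison. The figures and program names below are entirely hypothetical, not real charity data; the point is only the shape of the arithmetic evaluators use.

```python
# Hypothetical numbers for illustration only -- not real cost-effectiveness data.
# "benefit_per_person" is in arbitrary well-being units.
deworming = {"cost": 1_000_000, "people_helped": 2_000_000, "benefit_per_person": 0.01}
hospital  = {"cost": 1_000_000, "people_helped": 50,        "benefit_per_person": 100.0}

def total_benefit(program: dict) -> float:
    # Small per-person gains can sum to a large total when reach is huge.
    return program["people_helped"] * program["benefit_per_person"]

def benefit_per_dollar(program: dict) -> float:
    return total_benefit(program) / program["cost"]

print(benefit_per_dollar(deworming))  # 20,000 units / $1M = 0.02 per dollar
print(benefit_per_dollar(hospital))   # 5,000 units / $1M  = 0.005 per dollar
```

On these made-up numbers the quiet option wins fourfold, which is exactly why the utilitarian evaluator shrugs at the headlines, and exactly what critics question: whether a "well-being unit" for a cured child and a slightly better school year really belong on the same scale.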
But edge cases bite back. Suppose banning a protest today seems to spare unrest and fear for thousands. A rule-focused utilitarian might still defend the protesters, arguing that a culture of open dissent prevents deeper, slower harms that don’t fit neatly into a single spreadsheet.
“Utility” may soon be updated in real time. Wearables, social data, and behavior patterns could feed models that predict which laws, apps, or city designs boost well-being, like a navigation app constantly rerouting to avoid traffic. That opens space for more humane policy—but also surveillance, manipulation, and majority-biased systems. The frontier question shifts from “can we measure?” to “who sets the target, and what must never be traded away for a marginal gain?”
Utilitarianism leaves us with a live experiment, not a finished recipe. As technology tracks mood like a stock index and policies tweak life at population scale, we’re nudged to ask: whose feelings get logged, whose don’t, and what—if anything—should stay off the bargaining table, even when the numbers insist the “trade” makes sense?
Before next week, ask yourself:

1. “If I honestly ranked the *actual* consequences of my last three big decisions, would they maximize overall happiness, or mostly my own comfort—and what would I change if I used ‘greatest good for the greatest number’ as my real decision rule?”
2. “Looking at my time and money this week, which concrete choice (e.g., donating $20 to an effective charity instead of a treat, helping a colleague even if it costs me an hour) would most increase *overall* well-being, and what stops me from doing it right now?”
3. “When my gut moral intuition clashes with a utilitarian calculation (like telling a difficult truth that may hurt one person but help many), which side do I usually follow—and what does that reveal about the values I’m actually living by?”

