A single ethics course from Harvard has drawn over 7 million learners worldwide. Now, drop into three tense meetings: a car company weighing a deadly defect, a tech team debating user data, and an AI lab facing bias. In each room, “right” means something radically different.
In each of those rooms, people are not just disagreeing about “what to do” — they’re quietly using different mental playbooks for deciding what *counts* as a good decision. One engineer is tallying harms and benefits like a careful budget. A lawyer is asking, “Are we allowed to do this at all?” A product lead is wondering, “What would a decent, trustworthy person choose here?” Same case, three invisible scoreboards.
This series is about making those hidden playbooks visible.
We’ll unpack three major approaches that quietly shape boardroom debates, policy memos, and your own everyday choices. Instead of memorizing jargon, you’ll see how each approach guides concrete moves: greenlighting a risky launch, pushing back on data sharing, or slowing an AI rollout. By the end, you won’t just know their names—you’ll know which “lens” you’re reaching for, and when to switch.
Those invisible scoreboards don’t come out of nowhere; they’re rooted in centuries of argument about what makes actions right or wrong. Philosophers like Bentham, Kant, and Aristotle weren’t writing policy manuals, but their ideas quietly shaped the laws, norms, and professional codes we inherit today. That’s why the Ford Pinto memo or a modern data-privacy debate can feel oddly familiar to students of ethics: they replay old disagreements in new settings. As we go, we’ll pair each framework with real cases, so you can see how changing your “rulebook” can flip your verdict on the very same choice.
Start with the loudest voice in many modern workplaces: outcomes. Utilitarian thinking asks, “If we add everything up, which option leaves the world better off overall?” That “everything” is broader than most instinctive gut checks. It tries to count not just obvious wins and losses, but subtle ripples: long-term trust, quiet stress on a team, downstream effects on people you’ll never meet.
Jeremy Bentham took this so seriously he tried to *measure* it. He proposed a kind of moral spreadsheet, a “hedonic calculus” that scores experiences along several dimensions: how intense they are, how long they last, how likely they are to repeat, how soon they arrive, how free they are from side effects, how surely they’ll happen, and how far they spread across people. Add the pleasures, subtract the pains; the highest net score “wins.”
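To make the “moral spreadsheet” idea concrete, here is a toy sketch in Python. Bentham never gave numeric scales; the seven dimensions are his, but the signed scoring and the example numbers below are invented purely for illustration.

```python
# Toy sketch of Bentham's "hedonic calculus" as a scoring function.
# The seven dimensions are Bentham's; the signed scores and example
# numbers are invented assumptions, not anything he specified.

DIMENSIONS = ["intensity", "duration", "certainty",
              "propinquity", "fecundity", "purity", "extent"]

def hedonic_score(experience):
    """Sum one experience's scores across the seven dimensions.

    `experience` maps dimensions to signed numbers:
    positive for pleasure, negative for pain.
    """
    return sum(experience.get(d, 0) for d in DIMENSIONS)

def net_utility(option):
    """Total the scores of every experience an option produces."""
    return sum(hedonic_score(e) for e in option)

# Two hypothetical options, each listed as the experiences it causes.
ship_now = [
    {"intensity": 6, "duration": 2, "certainty": 8, "extent": 9},      # many users gain
    {"intensity": -7, "duration": -5, "certainty": -3, "extent": -2},  # a few are harmed
]
delay_and_fix = [
    {"intensity": 4, "duration": 3, "certainty": 9, "extent": 8},      # smaller, safer gain
]

# The option with the highest net score "wins" under this logic.
for name, option in [("ship_now", ship_now), ("delay_and_fix", delay_and_fix)]:
    print(name, net_utility(option))
```

Notice the load-bearing assumption: every experience must collapse onto one shared scale before the sum means anything at all.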
On paper, that sounds straightforward. In practice, it can twist into something eerie. At Ford in the 1970s, analysts facing a fuel-tank hazard used a government figure of US$200,000 as the “value” of a human life in a cost–benefit table. Repairing the cars cost more than the projected payouts for deaths and injuries, so the fix lost. The logic was utilitarian; the outrage that followed showed how many people felt a boundary had been crossed, even if the numbers “added up.”
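The arithmetic behind that table is chillingly ordinary. The sketch below rebuilds the comparison using the figures widely reported from the 1973 memo (an $11 fix per vehicle across roughly 12.5 million vehicles, set against projected payouts for 180 deaths, 180 serious burn injuries, and 2,100 burned vehicles); the numbers are shown to expose the logic, not to endorse it.

```python
# The cost-benefit table behind the Pinto decision, rebuilt from the
# figures widely reported from the 1973 memo (1970s US dollars).

# Cost of the fix: about $11 per vehicle across the affected fleet.
vehicles = 12_500_000            # ~11M cars + ~1.5M light trucks
fix_cost = vehicles * 11         # $137.5 million

# Projected "benefit" of fixing: payouts the fix would avoid.
deaths   = 180 * 200_000         # $200,000 per death (the government figure)
injuries = 180 * 67_000          # $67,000 per serious burn injury
burnouts = 2_100 * 700           # $700 per burned-out vehicle
payouts_avoided = deaths + injuries + burnouts   # ~$49.5 million

print(f"fix the cars:    ${fix_cost:,}")
print(f"payouts avoided: ${payouts_avoided:,}")
print("verdict by the numbers:", "fix" if fix_cost < payouts_avoided else "don't fix")
```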
That clash reveals both the ambition and the danger of this framework. Its ambition: to treat *everyone’s* well-being impartially, refusing to give extra weight to whoever is richest, loudest, or closest to you. Its danger: once everything becomes a variable in a calculation, some values that feel non-negotiable start to look tradeable.
You already see softer versions of this in safety thresholds, risk assessments, and A/B tests. A product team might tolerate a small error rate to unlock a large benefit for many users, or a hospital might allocate scarce resources to the patients likeliest to gain the most years of life. The core move is the same: zoom out, total up, and ask which option produces the greatest overall balance of good over harm.
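In code, that “zoom out and total up” move often reduces to a plain expected-value comparison. Here is a minimal sketch, with every rate and dollar figure invented for illustration:

```python
# Minimal sketch of the aggregate "total up" move behind an A/B-style
# launch call. Every rate and dollar value here is invented.

def expected_net_benefit(users, benefit_per_user, error_rate, harm_per_error):
    """Aggregate benefit minus aggregate expected harm, summed over all users."""
    return users * benefit_per_user - users * error_rate * harm_per_error

ship = expected_net_benefit(users=1_000_000, benefit_per_user=0.50,
                            error_rate=0.001, harm_per_error=40.0)
hold = 0.0  # status quo: no new benefit, no new harm

# The aggregate view says "ship" -- while staying silent on *who* bears
# that 0.1% of harms, which is exactly where the next lens pushes back.
print("ship" if ship > hold else "hold", f"(net: {ship:,.0f})")
```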
In the next segment, we’ll contrast this with a view that says: some lines shouldn’t be crossed *even if* the totals look tempting.
Think of a product sprint where release dates are tight and a bug pops up late. A utilitarian-minded lead might say, “Yes, a few users might hit this, but shipping now unlocks value for millions—let’s monitor and patch fast.” Contrast that with a teammate who’s uneasy, not because the numbers are wrong, but because *treating* those unlucky users as acceptable collateral feels off. You’ve just bumped into the edge of outcome-based thinking.
Or take content ranking on a platform. One option slightly boosts average engagement but leaves a small group regularly exposed to material that worsens their anxiety. Another option lowers global metrics a bit but protects that vulnerable slice. There’s no formula on the whiteboard that tells you how much one group’s deeper suffering should weigh against a shallow uptick in overall “time on site.”
These are the moments where people quietly switch lenses: from “total impact” to questions about fairness, consent, or what a responsible teammate would refuse to sign off on, even under pressure.
A silent shift is coming: teams will need to justify tough calls not just with “it works” or “it’s legal,” but *which* ethical logic they used. Think roadmap docs that explain trade-offs like post‑match analyses in sports: what we optimized, what we protected, what kind of “player” this choice makes us. As tools record more of our decisions, expect dashboards that surface ethical patterns over time—not to punish, but to nudge cultures toward clearer, more consistent reasoning.
As you start noticing these lenses in your own choices, patterns appear: maybe you sacrifice sleep to help a friend move, but refuse to fudge numbers for a quick win. Reviewing those contrasts, like studying game footage, reveals your default style. Next, we’ll test how deontological “rules” push back when the math looks good but something in you still says no.
Try this experiment: over the next 24 hours, pick one small decision (whether to work late, what to buy, how to handle a disagreement) and run it through three lenses back to back: “What creates the best outcome for everyone involved?” (consequences), “What rule or principle would I want everyone to follow here?” (duties), and “What kind of person do I become if I keep choosing this way?” (character). Then go further: for three similar decisions today, actually commit to a different framework each time, one guided by outcomes, one by rules, one by character. At the end of the day, notice which decision felt most satisfying, which felt most uncomfortable, and where the frameworks pulled you in different directions.