You’re offered a deal: sacrifice one stranger to save a packed stadium. Almost everyone says, “I’d do it.” Yet many of those same people insist torture is always wrong—even to stop an attack. How can we believe both? That tension sits at the heart of today’s episode.
Kant thought our moral life shouldn’t feel like balancing a messy spreadsheet of pleasures and pains. Instead, he argued, some actions are off-limits, full stop, because they fail a deeper test: could you will a world where everyone acted this way, and still respect every person as an end in themselves? Deontologists say morality is more like following the internal “operating system” of rational beings than crunching external outcomes. That’s why Kant’s 1785 Groundwork of the Metaphysics of Morals aims to locate morality in the very structure of reason, not in shifting social opinions. Later, W.D. Ross complicated this picture by claiming we’re bound by several basic duties at once: keeping promises, doing no harm, showing gratitude, and more. These duties can clash, but they’re not just suggestions; they function like moral “default settings” we’re responsible for honoring, even when it’s inconvenient or costly.
Modern law quietly sides with this stricter vision of morality more often than we notice. When Germany’s Constitutional Court rejected shooting down hijacked planes in 2006, it wasn’t maximizing outcomes; it was drawing a hard line around each person’s inviolability. Medical ethics codes and the GDPR do something similar: they don’t just ask, “Will this help overall?” but “Are we honoring this person’s status and choices?” Deontology, in practice, shows up whenever rules protect someone precisely when breaking them looks temptingly useful.
Kant sharpens the stakes with his Categorical Imperative. One key test asks: can you consistently will that everyone act on the same principle you’re about to follow? Not “would the world be nicer,” but “could this rule even function as a universal law without collapsing into contradiction or disrespect?”
Take lying. If your rule is “I may lie whenever it benefits me,” universalizing it empties promises and testimony of content; no one could reasonably trust anyone. The very practice you’re exploiting—being believed—would disintegrate. For Kant, that’s not just impractical; it’s irrational. You’re trying to carve out a private exception while still relying on a shared practice that your own rule would destroy.
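If you like to watch an argument run, here is a toy numerical sketch of that self-undermining structure (purely illustrative: the payoff values, the decay rate, and the function names are invented for this example and are no part of Kant’s own argument):

```python
# Toy model: why "lie whenever it benefits me" undermines itself once
# universalized. A lie pays off only to the degree listeners still trust
# speakers; if everyone adopts the maxim, trust decays and the expected
# gain from lying approaches zero.

def lie_payoff(trust: float, gain: float = 1.0) -> float:
    # Being believed is worth `gain`; with no trust, a lie moves no one.
    return gain * trust

def trust_after_universalizing(rounds: int = 10) -> float:
    trust = 1.0
    for _ in range(rounds):
        trust *= 0.5  # everyone lies, so listeners discount testimony each round
    return trust

print(lie_payoff(trust=1.0))                           # 1.0: a lone liar in a trusting world
print(lie_payoff(trust=trust_after_universalizing()))  # ~0.001: the practice has collapsed
```

The point is the shape, not the numbers: the liar’s payoff is parasitic on a level of trust that the universalized maxim itself destroys.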
A second Kantian test asks whether you’re treating people merely as tools or also as self-governing agents with projects of their own. This is why things like non-consensual medical experiments strike many as not just harmful but fundamentally degrading: the person becomes equipment for others’ goals. That’s the moral line the German court, the AMA, and data regulators are all, in different ways, trying to police.
Ross complicates this tidy picture. Instead of one master rule, he argues we’re pulled by multiple basic obligations, what he called prima facie duties: fidelity, reparation, gratitude, justice, beneficence, self-improvement, non-maleficence. None of these is automatically supreme; context matters. You might break a trivial promise to prevent serious harm, not because promises don’t matter, but because here another duty presses more urgently. Deontologists in Ross’s vein don’t deny consequences; they just refuse to let outcome-maximization erase these standing claims on us.
A helpful way to see the difference from pure outcome-thinking is to look at how good people describe their hardest choices. They often say, “I couldn’t live with myself if I did that,” not, “The numbers didn’t add up.” The “self” they’re referencing isn’t just a bundle of preferences; it’s a sense of the kind of agent they are willing to be, and the lines they will not cross, even for impressive gains.
That’s the heart of this view: moral life is not only about producing good states of the world but about honoring the status of each person—and one’s own integrity—through what one refuses to do.
A courtroom gives deontological thinking a concrete stage. Picture a defense lawyer who knows her client is guilty. She could stay silent while a key witness is discredited on a technicality, virtually guaranteeing acquittal. A pure outcome focus might say: less prison time, less suffering, so go ahead. Yet many lawyers feel bound by a duty not to mislead the court, even indirectly. The point isn’t that the result doesn’t matter; it’s that some tactics cross a line, trading away other people’s right to a fair process for a result she happens to prefer.
Consider a software engineer asked to add a “dark pattern” that quietly tricks users into sharing more data. There’s no dramatic harm, just higher engagement metrics. Still, she might refuse, not because the numbers look catastrophic, but because it treats users’ attention and choices as exploitable levers instead of something to be straightforwardly respected.
Chess offers a parallel: reprogramming the app so your pieces have hidden powers might “win,” but it corrupts the very practice you’re participating in.
In tech and bioethics, deontological “red lines” may function like circuit breakers in a power grid: rarely triggered, but non-negotiable when they are. As AI systems optimize for efficiency, regulators might insist on hard stops—no secret scoring of citizens, no nudging past informed refusal, no lethal decisions without human authorization. The open question is how many such constraints a complex society can carry before they start to clash—or whether those clashes are the real test of our values.
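For listeners who build these systems, here is a minimal sketch of the circuit-breaker idea (hypothetical code: the action names, utility values, and penalty figure are all invented for illustration). It contrasts treating a red line as a mere cost in the objective, which a big enough upside can outbid, with treating it as a hard constraint that filters options before any optimization happens:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float          # expected benefit if taken
    crosses_red_line: bool  # e.g., nudges a user past an informed refusal

def consequentialist_choice(actions, penalty: float = 10.0) -> Action:
    # Red lines enter only as a cost; enough upside can outweigh them.
    return max(actions, key=lambda a: a.utility - (penalty if a.crosses_red_line else 0.0))

def deontological_choice(actions) -> Action:
    # Circuit-breaker behavior: violating options are removed outright,
    # no matter how much utility they promise.
    permissible = [a for a in actions if not a.crosses_red_line]
    if not permissible:
        raise ValueError("no permissible action; escalate to a human")
    return max(permissible, key=lambda a: a.utility)

options = [
    Action("honest prompt", utility=3.0, crosses_red_line=False),
    Action("dark pattern", utility=50.0, crosses_red_line=True),
]

print(consequentialist_choice(options).name)  # "dark pattern" (the penalty is outbid)
print(deontological_choice(options).name)     # "honest prompt"
```

The open question from the episode then becomes a concrete design question: what should happen when two hard constraints empty the option set, and who gets called when they do.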
When duties feel heavy or rigid, that friction can be revealing: it shows where our habits quietly favor convenience or loyalty to our own group. Deontological lines nudge us to ask, “Whose basic standing am I tempted to discount here?” Your challenge this week: watch for every moment you say “it’s just business,” and pause to test whether that’s actually true.

