By the early eighties, the Soviet Union employed about one out of every eight workers just to plan the economy. Not to build, not to sell—just to plan. Yet store shelves were still empty. How does an empire drown in data and still fail to see what people actually need?
So let’s zoom in on the part everyone forgets: the incentives. Gosplan didn’t just set targets; it quietly rewired how millions of people thought about “doing a good job.” Factory bosses learned that survival meant hitting numbers, not serving actual needs. Engineers discovered that it was safer to overuse steel than to risk missing a tonnage quota. Local managers hoarded resources “just in case,” while reporting success on paper.
This wasn’t simple laziness or corruption—it was rational behavior inside a warped system. When your promotion, housing, and sometimes freedom depend on a single metric, you optimize your life around that metric. The result was a culture where creativity, initiative, and honest feedback became liabilities instead of assets.
Over time, this logic seeped into everyday decisions far beyond factories. Teachers inflated grades to meet “excellence” norms. Farm directors delayed harvesting until inspectors arrived, then staged perfect fields while grain spoiled off-camera. Local officials rushed to complete showcase projects before Party visits, leaving leaking roofs and unfinished plumbing behind the freshly painted façades. Metrics became a kind of official fiction everyone had to participate in. Publicly, the numbers sang success; privately, people traded stories of shortages, breakdowns, and workarounds just to keep things moving.
On paper, the solution to all this fiction was always the same: “better control.” More forms, more inspections, more layers. By 1980, planners were tracking so many indicators that whole ministries existed just to reconcile contradictions between them. A factory might answer to its sector ministry, regional committee, supply bureau, and Party organs—each with slightly different numbers and deadlines. No one person saw the full picture, but everyone felt the pressure to certify that things were “normal.”
This is where the system quietly crossed from inefficient to unmanageable. Each new control mechanism generated more data, which demanded more staff to process it, which created more distance from reality. At the top, thick statistical yearbooks showed smooth progress. At the bottom, people queued for hours to buy basics. The gap between dashboards and daily life widened every year.
You could see the strain in how projects unfolded. A new factory might be approved on the basis of optimistic projections, receive machinery that didn’t quite match the blueprint, improvise workarounds to meet a launch date, then operate for decades on that improvised foundation. Nobody wanted to halt production to admit the plan had been wrong. Instead, downstream plants adapted in their own clumsy ways, compounding the original mistake.
The computing shortfall made this brittleness worse. Even as other countries began using networks to monitor logistics, most Soviet plants still reconciled shipments with carbon paper and ledger books. Months could pass between a shortage appearing on the shop floor and its showing up in official statistics. By then, people had already built informal “shadow systems”: barter agreements between directors, unofficial side deals with suppliers, quiet understandings with inspectors.
Those shadow systems kept things from collapsing earlier—but they also undermined the very notion of truthful reporting. When the only way to get parts, fuel, or spare labor was to bend rules, straight talk became dangerous. A manager who accurately described bottlenecks could be accused of “pessimism” or sabotage. A manager who embellished results might get an award. Over time, the entire information stream curdled.
Think about how this plays out in a modern company obsessed with dashboards. A sales team is praised for “pipeline value,” so deals get stuffed with discounts and fragile terms. On paper, projections soar; three quarters later, cancellations and churn quietly erase the victory. Or a hospital leans hard on “average wait time”: patients with complex cases are nudged to reschedule, or parked in categories that don’t count toward the metric, while the official chart shows stunning improvement.
A famous case: when a big social platform pushed “daily active users” above all else, teams learned to design features that were sticky, not necessarily useful. Endless notifications, autoplay, dark patterns—all rational inside that scoreboard. The product still “worked,” but the organization became blind to harms it wasn’t measuring.
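The mechanism behind all these examples can be sketched as a toy simulation. This is purely illustrative—the growth rates, gaming rates, and churn share below are invented, not data from any real company—but it shows the core dynamic: a reported proxy metric climbs while the outcome it was supposed to track quietly declines.

```python
def simulate(quarters=8, gaming_rate=0.15):
    """Toy sketch of Goodhart's law (all numbers are made up).

    Each quarter a team books 'pipeline value' (the proxy metric).
    A growing share of that value is gamed: discounted, fragile deals
    that later cancel. The dashboard climbs; realized revenue doesn't.
    """
    reported, realized = [], []
    gamed_share = 0.0
    for q in range(quarters):
        base = 100 * (1 + 0.02 * q)                        # modest real growth
        gamed_share = min(0.9, gamed_share + gaming_rate)  # gaming compounds
        padding = base * gamed_share                       # deals stuffed to hit the number
        reported.append(base + padding)
        # gamed deals churn later; realized revenue excludes most of them
        realized.append(base * (1 - 0.8 * gamed_share))
    return reported, realized

reported, realized = simulate()
# The dashboard trend and the ground truth diverge over time:
assert reported[-1] > reported[0]
assert realized[-1] < realized[0]
```

The point of the sketch is not the arithmetic but the divergence: nothing in the reported series ever signals that the underlying value is falling, which is exactly why organizations steering by the proxy stay confident until the churn arrives.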
Music offers a similar trap. If a band judges success only by how loud the crowd gets during the chorus, every new song drifts toward the same formula—until audiences stop caring, and no one remembers how to listen for anything subtler.
By the time we add AI to this mix, the danger quietly shifts. Systems that forecast demand or “optimize” schedules can end up amplifying yesterday’s distortions: biased inputs, missing voices, outdated norms. A metric that feels neutral—response time, utilization, “engagement”—starts steering behavior long after anyone remembers why it was chosen. Like a composer writing only for streaming algorithms, we risk forgetting there was ever more than one way to sound “right.”
So the real lesson isn’t “planning is bad” or “more data is good.” It’s that any system, from a product team to a family budget, goes blind when feedback can’t travel upward and assumptions never expire. Like a jazz group that only plays old hits, the score still exists—but the music stops evolving, and soon the room is full, yet nobody’s really listening.
Before next week, ask yourself: How would my daily life, beliefs, or relationships look different if I’d grown up inside the planned economy described in this episode, and what does that reveal about how much my current context shapes me? If I suddenly found myself in the position of a Soviet factory director—facing the same quotas, risks, and limited information—would I *honestly* report the bottlenecks or pad the numbers, and what does that choice say about my real values versus the ones I claim? Looking at one specific pattern from the episode (like the staged harvests, the shadow barter networks, or the managers punished for “pessimism”), what’s a modern situation in my community or workplace that most closely resembles it, and what’s one conversation I could start this week that shows I’ve actually learned from that parallel?

