A Fortune 500 team once spent months building a flawless strategy deck—then copied a rival’s move in a ten‑minute hallway chat. In organizations, the “official” decision is rarely the real one. Today we’re going to explore where choices *actually* get made, and who’s really steering.
McKinsey once reported that tweaking *how* decisions are made—using insights from behavioral science, not just better spreadsheets—can roughly double the returns on those decisions. That suggests something important: in many organizations, the biggest lever isn’t *what* they decide, but *how the decision game is set up*. And that game is changing fast.
AI systems are now scanning data, ranking options, and even proposing actions before humans enter the room. Power no longer sits only with the person at the head of the table; it also sits in dashboards, algorithms, and the way choices are framed on a single slide.
In this episode, we’ll look at how data, incentives, and subtle design choices tilt outcomes—and how individuals at any level can influence decisions without having the loudest voice or the biggest title.
So where do you fit in all this? Think of your organization’s choices as a stream fed by many small channels: meeting agendas, dashboard defaults, risk policies, incentive plans, even who gets invited to comment on a draft. None of these looks like “the big decision,” yet each nudges outcomes in a particular direction. Power shows up less in big speeches and more in who controls timing, information flow, and criteria. As AI tools enter, a new question appears: which choices stay human, which get automated, and who gets to design that boundary? That’s the real battleground.
Listen closely in your next meeting: when someone asks, “Can we just align on the goal first?” the real action is starting. They’re not being polite; they’re quietly editing the *criteria* by which every later option will live or die.
Most corporate choices are shaped in three invisible layers: **framing, filtering, and forcing functions**.
**1. Framing: what problem are we “allowed” to solve?** The way a question is posed tilts every answer:

- “How do we cut costs by 10%?” invites layoffs and budget freezes.
- “How do we hit the same margin with happier customers?” makes different ideas legitimate.

Slides, dashboards, and AI summaries often do this framing before anyone speaks. The default metric shown on page one becomes “what good looks like,” even if no one voted on it.
**2. Filtering: whose information and judgment enters the room?** Decisions feel data-driven, but the upstream filters are human:

- Which metrics the data team chose to instrument three quarters ago
- Which risks Legal insists must be highlighted
- Which customer anecdotes leaders find emotionally sticky

Even AI “recommendations” reflect these filters: train on historic pricing wins and you privilege margin; train on complaints and you privilege satisfaction. Either way, someone set the lens.
**3. Forcing functions: what quietly constrains the choice set?** These are the rules, habits, and incentives that make some paths easy and others exhausting:

- Approval workflows that make cross-functional bets painful
- Quarterly targets that punish long-term experiments
- Bonus schemes that reward local wins over system-wide health

Over time, smart people stop proposing options that can’t survive these gauntlets. The choice set shrinks, and it feels like “we had no alternative.”
Here’s the twist: **small structural tweaks often matter more than big speeches**. Change the meeting cadence and suddenly different people are available. Change the template and suddenly risk sits next to upside, not buried in an appendix. Change who must sign off and a whole category of ideas reappears.
The opportunity for you isn’t to “own the decision,” but to influence these levers: how the question is asked, what gets seen early, and which constraints are treated as fixed versus negotiable.
In one tech company, the “frame” shift was as small as renaming a weekly “Defect Review” to “Reliability Lab.” Same people, same hour—but leaders added one rule: every bug discussed had to generate at least one “future-proofing” idea. Within a quarter, the roadmap tilted toward stability features no one had time for before.
Elsewhere, a consumer brand experimented with **who** filters information. They gave frontline reps a structured, one-page “field brief” that went straight to the exec team before pricing meetings. The surprising pattern: leaders began rejecting profitable changes that would have quietly overloaded support.
Forcing functions can be tweaked just as quietly. A fintech firm rewired its approval tool so that saying “no” to a cross-team proposal required a short written rationale, visible to peers. “Yes” stayed one-click. The political cost of blocking went up, and suddenly “too risky” turned into “let’s pilot it with guardrails” far more often.
Choices will soon feel less like single moments and more like streams you continuously tune. As AI helpers quietly test scenarios in the background, your leverage shifts from “making the call” to **designing the conditions** under which calls get made. Think recipe, not dish: you’ll be judged on which ingredients you standardize (data, guardrails) and where you leave room for local flavor—context, judgment, dissent—especially as ESG pressures redefine what “done right” means.
Treat each meeting like adjusting a recipe: you can’t control every ingredient, but you can quietly change the heat, timing, and order. Tiny shifts—who speaks first, which risk is named, what “good” is called—can snowball into different futures. Your challenge is to notice one such lever this week and deliberately test a new setting.

