Half the managers I coach think they’re setting clear goals—yet their teams quietly admit they’re guessing what “success” actually means. In today’s episode, we’re going to step right into that gap and explore why your “pretty good” goals might be quietly holding your team back.
If your goals are the headline, Key Results are the fine print your team actually works from day to day. This is where “do better on customer experience” quietly turns into “answer 90% of tickets within 2 hours and raise NPS by 8 points.” The shift seems small, but it changes the questions your team asks: from “are we trying hard enough?” to “are we moving the right needles, fast enough?”
We’ll look at how strong KRs blend hard numbers with real-world evidence—think response times, yes, but also what customers are actually saying—and why you need far fewer of them than you think. You’ll see how weekly KR check-ins can turn vague ambition into visible momentum, and how choosing the right mix of “right now” signals versus end-of-quarter results can shape what your team does next Monday morning.
Here’s where many managers quietly slip: they jump straight from a big Objective to listing every metric they can think of. The result is a cluttered scoreboard and a team unsure what to prioritize when the week gets messy. The craft is in deciding what *not* to measure. Your job is to surface the few indicators that, if they move, almost guarantee the Objective follows. Think of a product launch: instead of tracking twelve stats, you might focus on trial-to-paid conversion, activation within 24 hours, and one qualitative signal from user interviews that tells you *why* those numbers are moving.
Here’s the quiet test of whether a KR is any good: could a reasonable person, at the end of the quarter, say “yes, we did it” or “no, we didn’t” without debate, excuses, or a 20‑minute backstory? If not, it’s still fuzzy—no matter how sophisticated the language sounds.
This is where managers often drift into two traps:
- They describe activity: “Hold weekly stakeholder meetings.”
- They describe vague ambition: “Improve collaboration.”
Neither tells you if anything *changed*. A stronger KR forces you to name the visible shift. For example, instead of “hold weekly stakeholder meetings,” you might commit to “reduce requirement rework from 25% to 10%” or “cut decision lead time from 12 days to 5.” The meetings are just one possible way to get there.
A useful test is to temporarily strip out numbers and ask: *what will look or feel unmistakably different if we’re successful?* Only then layer in how you’ll recognize that difference. Sometimes that’s a metric; other times, it’s structured evidence. For instance, “Secure three referenceable customers who agree to be named in case studies and sales calls” is binary, even though it isn’t a percentage or a rate.
Notice how this starts to separate KRs from your standing KPIs. A KPI might be “support CSAT above 4.5.” A KR might be “raise CSAT from 4.1 to 4.5 on the new chat channel by end of Q2, based on at least 300 responses.” Same underlying measure, completely different level of commitment and time box.
Another way to sharpen KRs is to check whether they would *change behavior next week*. If a KR doesn’t meaningfully influence who your team talks to, what they design, what they test, or how they deploy effort, it’s probably ornamental. Managers at companies like Intel learned this the hard way, which is why they capped themselves at a small handful per Objective—once you pass that point, people revert to their old habits and use KRs as status wallpaper.
One practical trick: draft your KRs, then ask your team to each write down—privately—the three moves they’d make *this month* to hit them. If the answers are all over the map, your KRs are still too vague or too many. Alignment shows up in the plans people generate when you’re not in the room.
Think about how different KRs feel in practice. A SaaS renewal team could write, “Increase renewals,” and technically be directionally right—but nobody knows what to do tomorrow. Contrast that with: “Lift renewal rate in accounts over $50k from 82% to 90% by Q3, with at least 10 customers explicitly citing ‘ease of implementation’ in quarterly feedback.” Now your CSMs know *who* to focus on (larger accounts), *what* to influence (implementation experience), and *how* progress will be judged (both numbers and words).
Real teams often discover that the most useful KRs are slightly uncomfortable. A product lead at a growth-stage startup once set: “By end of quarter, 60% of new signups complete three key actions in their first 24 hours, and 8 of 10 usability test participants can find feature X in under 10 seconds.” Those numbers forced engineering, design, and support to collaborate—not because someone said “improve collaboration,” but because hitting those bars was impossible in silos.
As AI makes data cheaper to collect, the real advantage will shift to managers who can *choose* the few signals that matter and ignore the rest. Expect KRs to move from static lists to living systems that adapt mid‑quarter as patterns emerge. Think less “fixed contract,” more “scientific notebook”: hypotheses, experiments, course corrections. Teams that treat KRs this way will spot weak bets faster and reallocate talent before work hardens into sunk cost.
Treat this as an ongoing craft, not a one‑time template. As your team experiments, you’ll notice which KRs spark debate, which quietly gather dust, and which actually change next week’s calendar. Follow that trail. Over time, the signals you choose to track will sketch a self‑portrait of what you truly value as a manager.
Your challenge this week: Take one existing goal and turn it into 3–5 sharper KRs. First, ask your team privately what “done” would look like in concrete terms; then synthesize their answers into testable end‑states. Share the draft, and only keep KRs that clearly change someone’s priorities in the next 14 days.

