About half of startup deaths come from a brutal truth: customers just don’t care enough. A team ships feature after feature, ads are running, investors are patient—and still, growth feels forced. Why do a few products suddenly “catch” while most need constant pushing to move at all?
In one widely cited analysis of startup post-mortems, 42% of failed startups said the core problem wasn’t money, talent, or timing—it was that the market simply didn’t need what they built. Up to now in this series, we’ve looked at how teams work, how products launch, and how founders recover when things break. In this episode, we zoom in on the moment when things *start* to work for real: when the market begins to pull your product forward instead of you pushing it uphill.
Founders often confuse early enthusiasm, press, or a few big customers with that turning point. But product‑market fit isn’t a vibe, a launch day, or a single metric. It’s a pattern in behavior: users who come back unprompted, recommend you without being asked, and stick around even when competitors knock on their door.
We’ll unpack what that pattern actually looks like, how to measure it, and why getting there *before* you scale might be the most important decision you make.
Here’s where it gets tricky: before PMF, everything you see is noisy. A spike in signups after a tweet? Could be novelty. A few power users who love you? Could be hobbyists. To make sense of it, you need to separate *pull* from *push* in your numbers and in your calendar. Look at how much of your week is spent chasing growth—cold outreach, discounts, manual onboarding—versus handling growth that shows up on its own. It’s a bit like checking how much of your “sales” are actually friends doing you favors versus strangers showing up with their wallets out.
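One scrappy way to make that pull-versus-push split concrete is to tag each signup with how it arrived and compute the share that showed up on its own. A minimal sketch, with hypothetical channel names and toy data:

```python
from collections import Counter

# "Push" channels are growth you had to chase; "pull" channels arrive on their own.
# Channel names and signup data here are illustrative, not a standard taxonomy.
PUSH = {"cold_outreach", "paid_ad", "discount_promo"}
PULL = {"organic_search", "referral", "word_of_mouth"}

signups = [
    "paid_ad", "referral", "cold_outreach", "organic_search",
    "referral", "paid_ad", "word_of_mouth", "discount_promo",
]

counts = Counter(signups)
push_total = sum(n for ch, n in counts.items() if ch in PUSH)
pull_total = sum(n for ch, n in counts.items() if ch in PULL)

pull_share = pull_total / (push_total + pull_total)
print(f"pull share: {pull_share:.0%}")  # prints "pull share: 50%"
```

Tracking this ratio weekly tells you whether growth is getting lighter to carry, long before a revenue chart would.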
So how do you *see* that turning point early, before the headline metrics look impressive?
Start with the people who already use you, however few. Not all users are equal: some are casual, some are obsessed. Map three groups:
- **Tourists** – tried it, bounced
- **Residents** – use it regularly
- **Zealots** – complain loudly when anything breaks
PMF lives in that third group. Your job is to understand why they’d be “very disappointed” to lose you—and then turn more residents into zealots.
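Mapping those three groups can be as simple as bucketing users by recent activity. The thresholds below are hypothetical; pick cutoffs that match your product's natural usage rhythm:

```python
# Hypothetical segmentation: bucket users by sessions in the last 30 days.
# The cutoffs (1 and 12) are illustrative, not a standard.
def segment(sessions_30d: int) -> str:
    if sessions_30d <= 1:
        return "tourist"
    if sessions_30d < 12:
        return "resident"
    return "zealot"

users = {"ana": 0, "ben": 5, "caro": 30, "dev": 1, "eli": 18}
groups = {}
for name, sessions in users.items():
    groups.setdefault(segment(sessions), []).append(name)

print(groups)
# prints {'tourist': ['ana', 'dev'], 'resident': ['ben'], 'zealot': ['caro', 'eli']}
```

Once the buckets exist, the interesting question isn't the counts; it's what the zealots did differently on day one.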
This is where tools like Superhuman’s PMF survey help. Ask active users: “How would you feel if you could no longer use this product?” and offer three choices. Don’t obsess over the exact percentage; study the *why* behind their answers. Read every comment from people who say they’d be very disappointed. They’re telling you the job you actually fulfill in their life, which might not match your pitch deck at all.
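Scoring that survey is straightforward. The sketch below uses made-up response counts; the ~40% "very disappointed" threshold is the benchmark from Sean Ellis's test, which Superhuman popularized:

```python
# Hypothetical responses to "How would you feel if you could no longer
# use this product?" The 40% benchmark is Sean Ellis's rule of thumb.
responses = (
    ["very disappointed"] * 34
    + ["somewhat disappointed"] * 41
    + ["not disappointed"] * 25
)

very = responses.count("very disappointed") / len(responses)
print(f"{very:.0%} very disappointed")  # prints "34% very disappointed"
print("at/above 40% benchmark" if very >= 0.40 else "below 40% benchmark")
```

But as the episode stresses: the number tells you roughly where you stand, while the free-text "why" answers tell you what to do next.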
Next, connect those stories to behavior:
- What do your zealots do in their first session that tourists don’t?
- How quickly do they reach that moment?
- What’s the smallest action that predicts they’ll still be around 30 or 90 days later?
For Slack, internal data pointed to teams that sent a certain number of messages in the first week being far more likely to stick. That kind of “activation event” becomes your temporary north star: reduce everything that gets in the way of it.
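Finding a candidate activation event is mostly a matter of comparing retention between teams that hit it and teams that didn't. A minimal sketch with hypothetical data, where the event and the day-30 cutoff are assumptions you'd tune:

```python
# Hypothetical records: did each team hit a candidate activation event
# in week one, and were they still active at day 30?
teams = [
    {"hit_event": True,  "retained_d30": True},
    {"hit_event": True,  "retained_d30": True},
    {"hit_event": True,  "retained_d30": False},
    {"hit_event": False, "retained_d30": True},
    {"hit_event": False, "retained_d30": False},
    {"hit_event": False, "retained_d30": False},
    {"hit_event": False, "retained_d30": False},
]

def retention(rows):
    """Fraction of rows still active at day 30."""
    return sum(r["retained_d30"] for r in rows) / len(rows)

hit = [t for t in teams if t["hit_event"]]
miss = [t for t in teams if not t["hit_event"]]
print(f"hit event: {retention(hit):.0%} retained")   # prints "hit event: 67% retained"
print(f"missed it: {retention(miss):.0%} retained")  # prints "missed it: 25% retained"
```

A wide gap between the two groups suggests the event is worth treating as a temporary north star; a narrow one means keep looking.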
This is also when pricing and positioning quietly decide your fate. Too cheap, and people don’t take you seriously. Too broad, and nobody instantly knows who it’s for. Narrowing the promise—“email for founders,” “chat for internal teams,” “analytics for Shopify stores doing $10k+/mo”—often makes the right users lean in harder.
Think of each iteration like rebalancing an investment portfolio: you shift weight toward the segments, features, and channels that compound, and away from the ones that just look good on paper. You’re not chasing *more* users yet; you’re trying to deepen the conviction of the few who already act like you’re indispensable.
When their behavior becomes predictable—and starts to spread without you forcing it—you’re closing in on the milestone from which you can safely scale.
Watch what happens when a tiny change quietly reshapes the way people use you.
A founder notices that teams who create a second project within 24 hours tend to stick. So they add a nudge: after finishing the first project, a subtle prompt says, “Spin up another in 10 seconds, using the same settings?” Suddenly, more teams cross that line without thinking. A week later, support tickets shift from “how does this work?” to “can it also do X?”—a different kind of problem, and a better one.
Another startup sees power users obsessively exporting CSVs at midnight. Instead of polishing the dashboard, they ship scheduled email reports. Within a month, managers start forwarding those emails to peers, and new accounts sign up referencing “that weekly summary.”
These small, specific bets don’t look glamorous. Yet each one is like re‑routing a path through a forest: you notice where people are already walking, clear the branches, lay a bit of stone. Over time, those improvised trails harden into roads—and eventually, traffic maps itself around them.
Founders may soon watch “fit” like traders watch a stock ticker. Live dashboards could surface tiny shifts in sentiment or usage, long before revenue moves. You might notice a niche in Brazil lighting up, or a specific workflow quietly outpacing the rest—more like discovering an unexpected profit center in a messy spreadsheet. As regulation and climate pressure rise, teams will layer in impact and trust scores, not just growth curves, and treat erosion in those metrics as early warning signs too.
As tools improve, you’ll see earlier signals: tiny shifts in who signs up, which features cluster together, where referrals quietly spike. Treat these like faint footprints after rain—follow, don’t erase them. The next breakthrough rarely arrives as a big launch; it sneaks in through an overlooked workflow, a niche segment, a strange-but-repeatable use case.
Before next week, ask yourself:

1) “If I had to choose just one customer segment from the episode’s examples to obsess over for the next 30 days, who would it be—and what exact problem are they hiring my product to solve today?”
2) “Looking at how those founders in the episode measured ‘must-have’ status, what’s one concrete signal (e.g., % of users who’d be ‘very disappointed’ without us, weekly active teams, repeat usage of a core feature) I can start tracking this week to know if I’m actually moving toward product–market fit?”
3) “If I had to run a single scrappy experiment in the next 72 hours—like a quick landing page test, 10 customer calls, or a stripped-down feature prototype—what would I test, what would success look like in numbers, and what decision will I make based on the result?”

