About half of new business apps at Microsoft now come from people who don’t even call themselves developers. In one corner, a teen is shipping a 3‑D game after a couple weeks of prompts. In another, a product manager is “describing a vibe” and getting a working prototype back.
60% of Copilot users say they feel “in the flow” more often when coding, and that feeling is the real story behind AI‑powered vibe coding. We’re not just auto‑completing lines; we’re reshaping who gets to shape software, and how fast ideas escape your head and hit a screen.
In this episode, we’ll zoom in on that gap between “I kind of know what I want” and “here’s a real thing people can click.” Think less about features, more about intentions: *I want customers to feel guided, not overwhelmed;* *I need my team to stop copying data between tools;* *this workflow should feel as smooth as sketching in a notebook.* Those fuzzy, emotional specs are becoming first‑class inputs the AI can work with, not messy afterthoughts engineers must decode.
Instead of starting with “build me an app,” vibe coding works best when you narrate micro‑moments: “the first 10 seconds after signup,” “the panic when someone loses a password,” “the delight when a dashboard finally makes sense.” Those tiny stories become raw material the AI can restructure into flows, components, and guardrails. You’re not locked into one big prompt, either. Modern tools let you riff: adjust the tone of an error message, reshape a layout, or tighten a workflow the way a photographer nudges light, angle, and focus until the scene matches the mood in their head.
Start by treating your first prompt less like a spec and more like a casting call. You’re not listing tasks; you’re describing the personality and purpose of the thing you’re trying to build. That means leading with statements like: “This tool is obsessed with clarity,” “This workflow should never make someone feel stupid,” or “This dashboard rewards curiosity.” Then you layer in concrete situations where that personality is tested: a confused first‑time user, an expert who wants shortcuts, a manager who just needs one number before a meeting.
From there, think in three passes rather than one perfect prompt. Pass one is *broad intention*: how should this product treat people, what outcomes matter, what behaviors do you want to encourage or discourage? Pass two is *constraints*: which platforms you live on, what data sources are allowed, what must *never* happen (like deleting records without confirmation). Pass three is *edge feelings*: frustration, urgency, boredom—moments where the experience either fractures or earns trust.
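If it helps to see the three passes as something concrete, here's a minimal sketch that assembles them into one prompt you could paste into any chat-based model. This isn't any particular tool's API—just a plain string template, and the pass names and example content are illustrative, not a prescribed format:

```python
# Hypothetical sketch: combine the three passes into a single structured
# prompt string. Section names and example content are illustrative.

def build_vibe_prompt(intention, constraints, edge_feelings):
    """Assemble the three passes into one prompt, one bullet per item."""
    sections = [
        ("Broad intention", intention),
        ("Constraints", constraints),
        ("Edge feelings", edge_feelings),
    ]
    lines = ["You are helping me design a small internal tool."]
    for title, items in sections:
        lines.append(f"\n{title}:")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

prompt = build_vibe_prompt(
    intention=["Treat people as capable; reward curiosity over speed."],
    constraints=["Web only; reads from our CRM; never deletes records without confirmation."],
    edge_feelings=["Errors should feel like a calm coach, not a referee's whistle."],
)
```

The point isn't the code—it's that keeping the passes separate makes each one easy to revise on its own before you re-run the conversation.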
Relatable metaphors help you talk about those edges without slipping into pseudo‑technical language. You might say, “When someone hits an error here, I want it to feel like a calm coach on the sidelines, not a referee blowing a whistle.” The model can turn that into tone guidelines, copy variations, and guardrails, and you can keep tightening with follow‑ups like “more direct” or “less apologetic.”
Notice you’re co‑directing, not delegating. When the AI suggests a flow, interrogate it with the same language you started with: “This step feels bureaucratic,” “This screen is too shouty,” “This path rewards rushing instead of care.” Each critique becomes new training data for this particular conversation, steering future suggestions closer to your intent.
Over time, you’ll build a personal vocabulary the tools learn to respect: words you use for speed versus safety, for exploration versus execution. That shared language becomes a shortcut; instead of re‑explaining, you can say, “Use my ‘calm coach’ pattern here,” and watch the system draft screens, messages, and logic that already understand the mood you’re aiming for.
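One way to make "use my 'calm coach' pattern here" concrete is to keep that vocabulary as plain data you can paste into any conversation. A minimal sketch, with invented pattern names drawn from this episode's examples:

```python
# Hypothetical sketch: a personal "vibe vocabulary" kept as plain data,
# expanded into reusable prompt fragments. Pattern names are invented.

VIBE_PATTERNS = {
    "calm coach": (
        "Errors and warnings sound like a patient coach on the sidelines: "
        "direct, specific, never blaming the user."
    ),
    "coat on at the door": (
        "Every question assumes the user is halfway out the door: "
        "ask only what is essential right now, defer the rest."
    ),
}

def apply_pattern(name, request):
    """Prefix a request with the tone guideline for a named pattern."""
    guideline = VIBE_PATTERNS[name]
    return f"Tone guideline: {guideline}\n\nTask: {request}"

msg = apply_pattern("calm coach", "Rewrite this password-reset error message.")
```

Even a two-entry glossary like this beats re-explaining the mood from scratch every session.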
Think of this phase like sketching three small paintings instead of drafting a single giant mural. In one canvas, you’re working on a support chatbot: you prompt with, “When someone is stuck, respond like a patient senior teammate who has seen this problem a hundred times.” Then you stress‑test it: “Now show me how you’d answer when the user is clearly frustrated and in a hurry,” and compare the two drafts. Does one sound too robotic, the other too chatty? Nudge the language and phrasing until both feel aligned.
Next canvas: a simple onboarding flow for a sales tool. You might say, “Treat every question you ask as if the user is standing at the door with their coat on, halfway out.” The model will likely shorten forms, compress steps, and prioritize only what’s essential right now. You can then ask it for a contrasting “deep dive” mode for power users and see how your original tone stretches—or breaks—when the context shifts.
"Most of the software we need hasn’t been imagined yet," says Bret Victor—and vibe‑driven tools shrink the cost of that imagination. As models stretch context and reasoning, your prompts can span an entire workflow, not just a screen. That makes *story‑level* questions practical: “How should this evolve over a quarter?” or “What breaks at 1,000 users?” Like a coach reviewing game footage, the AI can replay flows, surface brittle spots, and propose alternates tailored to your team’s real constraints, not a generic best practice.
Your challenge this week: pick one tiny, real workflow you own—approvals, weekly reporting, or onboarding a teammate. Describe its *ideal future state* to an LLM in under 10 sentences, focusing on how it should feel at the stressful parts. Then force it to disagree with you: ask the model to argue for a different flow that optimizes for a competing value (speed vs. safety, focus vs. flexibility). Compare both drafts, and choose one concrete change you’ll test with an actual user.
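If you want a starting template for that exercise, here's a sketch of the two prompts as simple string builders—the workflow name, feelings, and value pairs are examples, not requirements:

```python
# Hypothetical sketch of the week's exercise: one prompt for the ideal
# future state, one that forces the model to argue for a competing value.

def ideal_state_prompt(workflow, feelings):
    """Describe the ideal future state, focused on the stressful parts."""
    return (
        f"Here is the ideal future state of my '{workflow}' workflow, "
        f"in under 10 sentences, focused on how the stressful parts "
        f"should feel:\n{feelings}"
    )

def devils_advocate_prompt(competing_value):
    """Ask the model to disagree and optimize for a rival value."""
    return (
        "Now disagree with me: argue for a different flow that "
        f"optimizes for {competing_value} instead, and show exactly "
        "where my version breaks."
    )

first = ideal_state_prompt(
    "weekly reporting",
    "Friday afternoon should feel like closing a tab, not opening ten.",
)
second = devils_advocate_prompt("speed over safety")
```

Sending the two prompts back to back, then diffing the answers, is the whole exercise—the code just keeps you honest about actually asking for the disagreement.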
Treat this as a skill you can keep leveling up. Each time you “talk through” a flow, you’re sharpening both your product sense and the AI’s sense of you. Like a musician rehearsing scales, the repetition looks simple from the outside, but it compounds fast—until sketching a new tool in words feels as natural as jotting an idea in your notes app.
Before next week, ask yourself: 1) “If I treated my ‘vibe’ as real data, what 3 concrete signals (e.g., songs I replay, words I keep using in chats, moments I feel most energized) could I start tracking today so an AI could actually learn my creative patterns?” 2) “If I gave an AI a ‘vibe brief’ for my life or work this week—mood, tempo, aesthetic, constraints—what would I include so its suggestions feel less generic and more eerily ‘me’?” 3) “When I get an AI suggestion that doesn’t feel right, how can I respond (what feedback, examples, or corrections can I give it) so that next time its output moves one step closer to the vibe I actually want to live in?”

