By some counts, about half the people using advanced AI tools are quietly wasting time, because the AI has no idea who they are. In one test, custom setups cut chat length by about a quarter. So here’s the paradox: your “smart” assistant is powerful, but it effectively starts every day with amnesia.
Most people respond to that amnesia problem by overcompensating: they copy‑paste mini‑essays about themselves into every prompt, or they give up and accept bland, one‑size‑fits‑all answers. Both approaches leak time and attention. The real unlock is to teach the model the *right* things once, then reuse that context everywhere: who you are, what “good” looks like in your world, and which mistakes are unacceptable. That’s where custom instructions come in as a practical workflow, not a gimmick. Instead of treating each chat as a blank slate, you define a standing agreement with the AI—your roles, your preferences, and your shared “project.” In this episode, we’ll turn that into something concrete: a reusable instruction set you can refine over time, plus a few guardrails so your AI doesn’t confidently stray into the wrong lane.
Think of this as upgrading from chatting with a stranger to collaborating with a teammate who’s been through onboarding. Instead of relying only on that standing agreement you set once, you’ll add layers of context that flex with each task. That means pulling in the right documents at the right time, surfacing past decisions so you don’t revisit them, and encoding the nuances of your domain language so the model stops asking “what do you mean by X?” every day. We’re not changing how the AI thinks—we’re tightening the feedback loop between your world and its replies.
Most people stop at “tell the AI who I am” and miss the deeper layers that actually move the needle on quality and speed. The real leverage comes when you combine three ingredients: a lean permanent profile, on‑demand reference material, and a tiny bit of structured memory for very specific jargon or workflows.
Start with the persistent layer, but treat it like prime real estate. You’re paying for every token, so this is not the place to paste your CV, company history, and life philosophy. Capture only what the model can’t infer from the current prompt: your role, key audiences, deal‑breakers, and a few high‑impact style rules. In OpenAI’s own tests, people who did this once saw about a 25% reduction in back‑and‑forth just from cutting clarification loops.
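To make “lean profile” concrete, here is a minimal sketch of what a persistent profile might look like as data, rendered into a compact block you could paste into a custom‑instructions field. Every field name and value below is an illustrative assumption, not taken from any specific product.

```python
# A lean persistent profile: only what the model can't infer from a prompt.
# All fields and values here are made-up examples.
PROFILE = {
    "role": "Senior product manager at a fintech startup",
    "audiences": ["engineers", "compliance reviewers", "executives"],
    "deal_breakers": ["no legal advice", "never invent customer data"],
    "style": ["bullet-point summaries first", "plain language, no hype"],
}

def profile_to_instructions(profile: dict) -> str:
    """Render the profile as a short block suitable for a system prompt."""
    lines = ["Role: " + profile["role"]]
    lines.append("Audiences: " + ", ".join(profile["audiences"]))
    lines.append("Hard rules: " + "; ".join(profile["deal_breakers"]))
    lines.append("Style: " + "; ".join(profile["style"]))
    return "\n".join(lines)

print(profile_to_instructions(PROFILE))
```

Keeping the profile as structured data, rather than free prose, makes it easy to audit what you are spending tokens on and to prune fields that stop earning their keep.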
The next layer is retrieval. This is where the big performance gains usually show up. Instead of hoping the model “remembers” how your team writes proposals, you attach or reference the latest template, brand guide, or policy doc at query time. Systems like Morgan Stanley’s internal GPT setups, or GitHub Copilot with organization‑wide rules, work this way: they point the model at a changing library of documents so answers stay tied to current reality. Stanford’s 2024 work reports that this type of setup can lift domain accuracy by as much as 50% compared with one‑off prompts.
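The core move in retrieval is simple: score each document against the query, then attach only the winners to the prompt. Real systems rank with embeddings; the toy version below uses keyword overlap so the sketch stays self‑contained. The document names and contents are invented for illustration.

```python
# A toy retrieval step. Production systems rank with embeddings;
# keyword overlap keeps this sketch dependency-free.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Return the names of the top_k most relevant docs for this query."""
    ranked = sorted(docs, key=lambda name: score(query, docs[name]),
                    reverse=True)
    return ranked[:top_k]

docs = {
    "brand_guide.md": "tone voice banned phrases launch copy style",
    "proposal_template.md": "proposal structure pricing scope timeline",
    "security_policy.md": "API keys data retention encryption review",
}

print(retrieve("draft a proposal with pricing and timeline", docs))
# The selected doc(s) would then be attached to the prompt at query time.
```

The point is the shape of the pipeline, not the scoring function: whatever ranks your library, the winning documents get injected fresh on every request, so the model always sees current reality.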
Then there’s light personalization for niche language: your internal acronyms, product nicknames, or proprietary frameworks. You don’t need a full retrain here. A small glossary, a few high‑quality examples, or an embedding‑backed memory of past decisions is often enough. The goal is that when you say “run the Q4 coverage play,” the system already knows the sequence, the format, and the usual pitfalls—without you rewriting instructions.
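A glossary layer can be as small as a dictionary that gets checked before each request: if the prompt mentions a known term, its definition is appended so the model never has to guess. The terms and expansions below are made up, and the substring match is deliberately naive (it would false‑positive on words containing a short term), which is fine for a sketch.

```python
# A tiny glossary layer. Entries are illustrative, not real internal terms.
GLOSSARY = {
    "Q4 coverage play": (
        "the standard end-of-quarter sequence: account review, "
        "risk summary, then a one-page exec brief"
    ),
}

def expand_jargon(prompt: str, glossary: dict) -> str:
    """Append definitions for any glossary terms the prompt mentions.

    Naive substring matching; real systems would match on token
    boundaries or use embedding lookups over past decisions.
    """
    hits = [term + " = " + meaning for term, meaning in glossary.items()
            if term.lower() in prompt.lower()]
    if not hits:
        return prompt
    return prompt + "\n\nGlossary:\n" + "\n".join(hits)

print(expand_jargon("Run the Q4 coverage play for the Lopez account", GLOSSARY))
```

Because the expansion only fires when a term actually appears, the glossary costs nothing on ordinary prompts and pays off exactly when your shorthand would otherwise trigger a clarifying question.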
One helpful mental check: your base profile should change rarely; your retrieved docs can change weekly; your “micro‑memory” of examples can evolve daily. When those three stay in sync, the model starts to feel less like a generic chatbot and more like a teammate who quietly keeps up as your work changes.
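The three layers above can be sketched as one prompt‑assembly step, with each layer living at its own update cadence. The function and section labels are illustrative assumptions; real tools wire these layers together in their own ways.

```python
# A sketch of assembling the three context layers per request.
# Section headers are illustrative, not a required format.

def build_prompt(profile: str, retrieved_docs: list,
                 examples: list, task: str) -> str:
    """Stack stable profile, fresh docs, and recent examples before the task."""
    parts = ["# Profile (changes rarely)", profile,
             "# Reference docs (change weekly)", *retrieved_docs,
             "# Recent examples (change daily)", *examples,
             "# Task", task]
    return "\n\n".join(parts)

print(build_prompt(
    profile="Role: marketing lead; tone: direct, no hype",
    retrieved_docs=["[brand_guide.md contents]"],
    examples=["[last launch sequence that worked]"],
    task="Draft the launch email for the spring release.",
))
```

Separating the layers in code mirrors the mental check: you can swap the weekly docs or the daily examples without ever touching the stable profile.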
A senior engineer at a fintech company keeps a short profile about his stack, risk tolerance, and review style. On top of that, he links a rotating folder of “current project specs” so the model always sees the latest API contracts and security notes. The result: his code reviews shift from “does this compile?” to “does this leak edge‑case risk for our European customers?”—and he spends more time deciding, less time re‑explaining context.
A marketing lead goes further. She pins a house tone guide, then adds per‑campaign briefs as she works. Over a quarter, she trains a tiny glossary of recurring taglines, product nicknames, and segmentation rules. Now, when she asks for a launch sequence, the AI proposes channel mixes she actually uses, avoids banned phrases, and mirrors the pacing she prefers for short social bursts versus long‑form explainers.
One way to see this stack of context is as a multi‑track recording session: your base profile is the steady rhythm section, retrieval adds dynamic layers, and micro‑memory carries the riffs that make each track recognizably yours.
As models carry richer context across apps, they’ll start quietly coordinating on your behalf: drafting updates that match your calendar, nudging you when a plan conflicts with past decisions, even flagging when a teammate’s “standing agreement” clashes with yours. That power cuts both ways. Misaligned profiles could fork your digital twin into inconsistent personas—useful in silos, risky in regulated work—so expect audits, version control, and “profile diffing” to become normal management tools.
Treat this episode as your draft blueprint, not the final architecture. As you work, notice where your current instructions still force you to “translate” for the model—those are upgrade points. Over time, you’re tuning an instrument: the more precisely you set it up, the more often you get answers that resonate on the first note.
To go deeper, here are three next steps:

1. Open ChatGPT’s **Custom Instructions** panel and, using the episode’s examples, paste in 3–5 concrete details about your work (e.g., “I’m a product manager in fintech,” “I prefer bullet‑point summaries,” “assume I’m familiar with Agile but new to LLMs”) and save that as your default profile.
2. Grab **Ethan Mollick’s “Co-Intelligence”** (book or audiobook) and read or listen to the chapter on prompting alongside the episode’s advice, then update your instructions to include how you want the model to challenge your assumptions or expand your thinking.
3. Install and test one specialized tool mentioned on the show, such as a **browser extension or workspace integration for saving prompt presets** (e.g., ShowGPT, FlowGPT, or Notion templates for custom instructions), and create a “Podcast Learning” preset that automatically applies your new instructions whenever you’re summarizing or brainstorming from new episodes.

