A teenager calmly describes a dream internship at a famous studio—team, projects, even office décor. One problem: none of it exists, and they know it. Here’s the twist: their brain may be wired to lie, even when there’s nothing obvious to gain.
That invented internship isn’t a one‑off tall tale; for some people, it’s the opening act in a whole season of fabricated jobs, illnesses, achievements, even relationships. Friends start to feel like fact‑checkers instead of confidants. Partners describe arguing not about *what* happened, but whether *anything* happened at all. Over time, the stories can get oddly cinematic—recurring side characters, callbacks, dramatic twists—while real life quietly frays in the background. This isn’t just “being bad at honesty”; it can become a stuck pattern where lying feels less like a choice and more like a reflex. In this episode, we’re going to step away from the moral lens for a moment and ask a harder question: when does persistent dishonesty stop being simply wrong, and start looking like a psychological disorder that needs treatment rather than punishment?
For clinicians, that shift from “won’t stop lying” to “maybe *can’t* stop” matters, because it changes both the questions we ask and the tools we use. Instead of only asking, “What are they hiding?” we start asking, “What problem is this story solving for them—emotionally, socially, even neurologically?” Some people report a strange rush right before they fabricate, followed by brief relief, then shame. Others notice their lies cluster around identity—status, talent, belonging—like scaffolding propping up a shaky sense of self. Today, we’ll step inside that experience and examine what’s known so far.
Clinical researchers often start by stripping away the moral language and mapping out patterns. One of the clearest patterns is *function*: ordinary deception tends to serve an obvious, short‑term goal—avoiding a late fee, smoothing over a social misstep, securing an advantage. By contrast, pseudologia fantastica often runs at a loss. People lose jobs, relationships, even legal standing because of stories that don’t deliver any proportional benefit. That mismatch between cost and payoff is one of the first clues clinicians look for.
A second pattern is *scale*. These aren’t just small edits to reality; the narratives can spiral into parallel lives—alternate careers, secret talents, dramatic crises. Yet when pressed, some individuals quietly admit they felt oddly detached from the tale even as they told it, like listening to someone else speak through their mouth. Others, in the moment, feel something closer to conviction than calculation, and only later confront the gaps.
The emerging neuroscience adds another layer. In one structural MRI study, Yang and colleagues found that people with a history of chronic, pathological lying had roughly 22 percent more white matter in prefrontal regions than controls. More “wiring” is not automatically an upgrade; if the circuits that generate ideas, scenarios, and social predictions are overactive relative to the circuits that vet and inhibit them, you can end up with a mind that produces vivid narratives faster than it can responsibly filter them. It’s less mastermind scheming than a kind of mental overflow.
Clinically, this overflow rarely appears in isolation. High rates of borderline personality disorder and substance‑use disorders suggest that story‑spinning often lives alongside volatility in mood, identity, and impulse control. In practice, that means treatment has to operate on several channels at once: reducing overall emotional turbulence, strengthening reality‑testing, and giving the person alternatives to the micro‑“high” they may get from crafting a dramatic version of events.
One analogy from medicine is helpful here: just as an overactive immune system can start attacking the body it’s meant to protect, an overactive narrative system can begin to eat away at the trust that relationships depend on. The goal of therapy isn’t to destroy that narrative talent; it’s to help the person aim it back toward truth, creativity, and connection rather than constant, compulsive fabrication.
A university student insists they’re collaborating with a famous researcher; classmates later discover the “lab” is just a quiet corner of the library where the student spends hours reading papers and taking detailed notes. The fantasy sits on top of a real, hungry effort—just inflated beyond recognition. Another person repeatedly claims to be recovering from rare surgeries, joining online support groups, learning all the medical jargon, and even reminding others to attend their check‑ups. Underneath the fabrication is a wish to matter, to be cared for, and a genuine interest in health.
In therapy rooms, these stories sometimes shrink, not with confrontation, but with small, concrete wins. A client who once boasted about “consulting for start‑ups” might channel that same dramatic flair into building a modest but real freelance profile—less impressive on paper, far more stable in life. The narrative “upgrade” gets replaced by narrative *craft*: using the same imagination to write music, fiction, or comedy, where exaggeration is not a breach of trust but the whole point.
If we start treating compulsive deception more like a treatable condition than a moral verdict, whole systems might shift. Courts could weigh intent and control the way referees distinguish fouls from accidents. Workplaces might move from “zero tolerance” to early intervention, like offering coaching before someone’s CV myths explode. And as digital traces reveal patterns of embellishment, platforms may face a choice: quietly flag risk, or design feeds that don’t reward the most dramatic self‑reinventions.
Pathological lying sits in a murky zone between choice and constraint, blame and diagnosis. That ambiguity can be oddly hopeful: if stories can spiral out of control, they can also be rewritten. The task isn’t to “catch” every false note like a hostile audience, but to help people retune their narrative voice until truth starts to feel playable again.
Before next week, ask yourself: “In the last few days, when have I ‘told a better version’ of the truth (exaggerated, left key parts out, or changed details to avoid discomfort), and what exactly was I hoping to gain or protect in that moment?” Then ask: “If I replay just one of those moments, what is the most honest version of that story I could have told—and what discomfort, rejection, or consequence was I afraid would happen if I said it that way?” Finally, ask: “Who is one safe person I could experiment with telling the full, unpolished truth about a small recent lie to, and what specific sentence could I use to open that conversation (for example, ‘I realized I wasn’t fully honest with you about X, and I’d like to correct that’)?”

