A patient with a surgically split brain sat down for an experiment. A researcher flashed an instruction to the right hemisphere: walk to the door. The patient stood up and walked. Then the researcher asked why.
The left hemisphere hadn't received the instruction. It had no idea why the body was moving. But the patient didn't say "I don't know." He said "I wanted to get a Coke."
Michael Gazzaniga ran variations of this experiment for decades. Show the right hemisphere a snow scene and ask the left hand to pick a matching card. The left hand picks a shovel. Ask the patient why, and the left hemisphere, which saw only a picture of a chicken claw, says "You need a shovel to clean out the chicken shed." The explanation arrives instantly, feels perfectly reasonable, and has nothing to do with the actual cause.
The brain never hesitated. Not once, across hundreds of trials. The left hemisphere is a compulsive storyteller. Give it fragments and it will find a plot. Give it a sequence and it will find a cause.
Nassim Taleb saw this pattern everywhere he looked. In his book The Black Swan, building on Gazzaniga's experiments, he called it the narrative fallacy. The brain takes a sequence of facts and compresses them into a causal story. The compression makes the story easier to remember, easier to feel certain about, and less accurate. You can't tell the difference from the inside.
Taleb borrowed an example from E.M. Forster. "The king died and the queen died" is a sequence. "The king died, and then the queen died of grief" is a plot. We added a cause, lost the randomness, and made the whole thing stickier in memory. That trade is probably happening every time you read a resume.
A resume is a sequence of facts. Degree, Company A, Company B, current role. But nobody reads it that way. A CS degree, eighteen months at Google, an MBA, then a startup becomes "technical foundation, learned to operate at scale, entrepreneurial drive." The story feels like you discovered it on the page. You didn't. You wrote it. Maybe the candidate left Google because their manager was terrible, the MBA was a visa requirement, and the startup failed in four months. The facts support all of these plots equally, but your brain picked one and moved on.
The candidate knows this. They sequenced those resume facts with a reader's brain in mind. Google goes near the top because it signals scale. The startup goes last because it implies initiative. And when they rehearse behavioral answers, they're running the same machine. A messy, ambiguous eighteen-month project becomes "led a cross-functional effort, faced resistance, built consensus through weekly conversations." The story is polished not because the candidate is dishonest but because their brain compresses the same way the evaluator's does. Sequence in, clean arc out.
Two narrators, one conversation, and the facts underneath belong to neither version.
The evaluator keeps narrating long after the interview ends. The story runs hardest in the debrief, when the candidate isn't even in the room.
You've sat in those rooms. Four interviewers, one candidate, forty-five minutes of discussion. Someone speaks first. Maybe they say "strong technically but I'm not sure about the leadership presence." The next interviewer re-reads their own notes through a slightly different lens. The candidate's detailed follow-up questions, which could signal curiosity or thoroughness, start looking like "nitpicking."
Each interviewer walks in carrying fragments. Some strong moments, some weak ones, a lot of ambiguity. The brain wants to bind those fragments into a character. And when the first person speaks, they offer a binding story that the rest of the room adopts, mostly without realizing it. Gazzaniga's patients, distributed across a conference table.
I watched this happen on a debrief last year. The SVP, the hiring manager's boss, spoke first and named his preferred candidate. I knew what was about to happen and I was too late to set up the conversation differently. The rest of the panel didn't suddenly agree, but his candidate became the baseline. Every other candidate got measured against the one the SVP had named. The outcome was probably correct. The panelists were professionals who had done their own assessments. But "probably correct" is the best you can say when the most senior person in the room speaks first.
The problem is that it feels exactly like rigor. It looks like rigor. People cited specific moments. There was disagreement on one dimension and consensus on the rest. But the process underneath was the same: take the fragments, find the plot, deliver it with confidence.
You probably can't eliminate the narrative instinct. It's biological. But you can create friction.
Separate evidence collection from story construction. If every interviewer writes down specific observations before the debrief begins, the fragments stay as fragments a little longer. What the candidate said, what they did, what they produced. No interpretations, no trait language. Most teams skip this step because they trust the conversation to surface the evidence. It does surface evidence, but only the evidence that fits whichever story gets told first.
Observations alone aren't enough, though. If interviewers write notes but score after the discussion, they'll re-read their own observations through the lens of whoever won the room. "I wrote down that she asked a lot of clarifying questions" becomes "Yeah, that supports the nitpicking read." Submitting scores before the conversation creates a fixed anchor that the room's narrative can't easily move.
A smaller change that adds up over time: rotate who speaks first. The opening speaker sets the story, and in most teams that's the most senior person or the hiring manager. If the most junior interviewer goes first, the group has to work harder to build meaning from less confident raw material. The conversations get messier. The decisions probably get better.
These changes work on the evaluator's side. The candidate's narrative arrives more rehearsed and is harder to disrupt. But structured interviews and work samples create moments where rehearsal runs out.
Gary Klein, the psychologist who developed the naturalistic decision-making framework, would push back here. Klein's research suggests that expert intuition is often accurate when the expert has spent years in environments with consistent, timely feedback. An experienced hiring manager who has seen hundreds of candidates and tracked their outcomes might genuinely be recognizing real patterns, not constructing fiction. That's a fair objection.
But Klein's own conditions for reliable intuition are strict. The environment needs to be regular enough to be predictable, and the person needs consistent feedback on their past judgments. Most hiring environments meet neither condition. Feedback arrives months later, gets confounded by onboarding quality and team dynamics, and the sample size for any individual manager is small.
Every interview is two people in a room, both constructing a story, neither aware that the other is doing the same thing. One compresses a messy career into a clean arc, the other compresses a messy interview into a clean character, and the decision happens in the gap between those two fictions. You can't close that gap. Written observations, pre-submitted scores, and structured formats narrow it. And in hiring, a narrower gap between fiction and fact is probably the most honest outcome you can aim for.
Models in this article
Narrative Fallacy: The brain constructs causal stories from sequences of facts, creating false certainty. From Nassim Taleb's The Black Swan (2007), building on Michael Gazzaniga's split-brain research.
Naturalistic Decision-Making: Expert intuition works when the environment is predictable and feedback is consistent. From Gary Klein's Sources of Power (1998). Used here as a counterargument.
The Recruiting Lattice applies mental models from diverse disciplines to the daily work of talent acquisition. Each article introduces one idea and shows where it's already operating in your hiring process.
