In 1996, two researchers published a book about American millionaires. They interviewed hundreds of wealthy people, cataloged their habits, and found a pattern. The millionaires were frugal. They drove used cars, avoided flashy spending, and invested steadily. The book sold millions of copies. The advice felt obvious once you heard it. Live below your means and you'll get rich.
A few years later, in Fooled by Randomness, Nassim Taleb pointed out the flaw. The authors had studied people who were rich now. They never looked at people who lived frugally, invested steadily, drove used cars, and still went broke. Those people exist in large numbers. Some bought the wrong stocks, some started businesses in the wrong decade, and some saved in currencies that collapsed. The traits looked causal because the researchers examined only winners. The losers had the same habits, but nobody interviewed them.
I think something similar happens in talent acquisition all the time. Organizations study their best performers, extract shared traits, and turn those traits into interview scorecards. Most people would call this good practice. The problem is that nobody checks whether people with the exact same traits also fail. You're studying survivors and calling it a formula.
The error feels like rigor, which is what makes it stick. You went to the data, studied your best people, extracted patterns. But you studied one cell of a four-cell table: people who have the trait and succeeded. The other three cells never got checked: people who have the trait and failed, people who lack it and succeeded, people who lack it and failed. In most organizations, you can't check those cells because nobody tracked trait data on people who were managed out or left. The evidence that would disprove the model was never collected. And you have no idea whether a trait that appears in 90% of your top performers also appears in 85% of your worst hires, because you never looked.
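To make the four-cell check concrete, here is a minimal sketch in Python. The counts and labels are entirely made up for illustration; the point is only that the comparison requires all four cells, not just the one most teams look at.

```python
# A minimal sketch of the four-cell check, with made-up counts.
# Replace these with whatever your own hiring records actually show.

# Keys: (trait status, outcome). Values: number of past hires in that cell.
counts = {
    ("has_trait", "succeeded"): 18,    # the only cell most teams ever examine
    ("has_trait", "failed"): 15,
    ("lacks_trait", "succeeded"): 7,
    ("lacks_trait", "failed"): 6,
}

def success_rate(trait_status: str) -> float:
    """Share of hires with this trait status who succeeded."""
    succeeded = counts[(trait_status, "succeeded")]
    failed = counts[(trait_status, "failed")]
    return succeeded / (succeeded + failed)

with_trait = success_rate("has_trait")       # 18 / 33, about 55%
without_trait = success_rate("lacks_trait")  # 7 / 13, about 54%

print(f"Success rate with trait:    {with_trait:.0%}")
print(f"Success rate without trait: {without_trait:.0%}")

# With these numbers the trait shows up in 72% of your successes (18 of 25),
# yet it barely changes the odds of succeeding at all.
```

The numbers are fictional, but they show how a trait can dominate the "top performer" column while predicting almost nothing, which is exactly what studying only that column can never reveal.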
An engineering team notices their strongest performers contribute to open-source projects and build side projects after hours. Those traits become informal screening criteria. Candidates who don't show them get passed over. But among all the engineers the team has ever hired, do open-source contributors actually perform better? Or does the team just remember them because they're visible, active, and still around?
The thing worth doing is adding a failure question to any competency modeling exercise. Before you finalize a trait as a hiring criterion, ask whether anyone has checked it against people who left, underperformed, or were managed out. If nobody has, the trait is descriptive. Treat it as a hypothesis, not a standard. This is a conversation you can have with a hiring manager without needing new tools or a data team. "We know our top five people all share this trait. Do we know if any of the last three people who left also had it?" Most of the time, the honest answer is that nobody checked. That gap is the finding.
And when nobody knows, the next question follows naturally. Could a candidate who lacked this trait still succeed in the role? If the honest answer is yes (and it usually is for traits like "entrepreneurial mindset" or "passion for the industry"), the criterion is filtering candidates without predicting outcomes. Downweight it or replace it with something you can observe directly in the work.
This critique becomes unnecessary when someone has actually tested criteria against people who failed, not just people who succeeded. Few organizations do this, which is the whole problem. It also weakens when you're testing what people can do directly, like watching someone solve a problem or reviewing the work in a portfolio. That kind of evidence is different from guessing whether a personality trait will translate into performance.
Most competency models were built by studying the people who stayed. The question worth asking is how many of your current hiring criteria would survive if you checked them against people who left or underperformed, not just your top performers.
Models in this article
Survivorship Bias: The tendency to study only the people or cases that made it through a selection process while the ones who didn't are invisible. It's what hides the "losers" from your view.
Discipline: Statistics, Decision Science
Affirming the Consequent: The logical leap that assumes the traits shared by survivors must have caused their success, without checking whether people without those traits also succeeded or people with them also failed.
Discipline: Logic, Philosophy of Science
Key research: Stanley & Danko, The Millionaire Next Door (1996) as an illustrative case of the error
Source: Nassim Nicholas Taleb, Fooled by Randomness (2001)
The Recruiting Lattice takes mental models from fields like behavioral science, sociology, and decision theory and turns them into practical tools for talent acquisition.
Author: Joonatan Hongell
