Browse apartment listings in Helsinki and you'll notice the same words showing up everywhere. "Bright." "Spacious." "Central location." "Modern updates." Each word sounds descriptive. Together they describe roughly half the listings on the market. A buyer who reads "bright, spacious apartment in a central location" has learned that the apartment is an apartment. The words create the feeling of information while carrying almost none of it.
Hiring criteria do the same thing. A hiring manager opens an intake meeting and says "I need a strong communicator who's strategic, collaborative, and has good leadership skills." Those words appear on the scorecard and get assigned numerical weights. But they could apply equally to a marketing director, a site reliability engineer, and a supply chain analyst. They describe every professional role in the building.
Anthropologists have a term for categories like these: "fake universals." Clifford Geertz describes the problem in his book The Interpretation of Cultures, and I'm probably flattening his argument here, but it goes something like this. Mid-century anthropologists tried to identify values shared by all human cultures. They kept arriving at statements like "Zuñi and Kwakiutl both prize the distinctive norms of their culture." Which is a tautology. Every group values what it values. The attempt to be universal had stripped out everything that made the observation useful. Geertz called the results "vague tautologies and forceless banalities," which is probably the best description of a generic competency framework I've ever read.
The mechanism is progressive abstraction. You start with something concrete and observable, like "this engineer debugs by isolating variables systematically and writing reproduction steps before touching code." You abstract it to "problem-solving." Abstract further to "all roles require problem-solving." At each step the statement becomes harder to argue with and easier to ignore. The final version is true of every candidate who has ever held a professional job.
This happens in intake meetings even when the recruiter sees it coming. The hiring manager says "strategic thinking." The recruiter asks what that actually looks like. She gets a good answer. Managing the tension between the plant manager and the regional HRBP. Keeping a vendor relationship alive during a contract renegotiation. Specific enough to score against.
But somewhere between that conversation and the scorecard, the specificity dissolves. Three interviewers who weren't in the room need criteria they can all use. The timeline tightens. Every specific edge gets sanded down until only things everyone can nod at remain. The scorecard ends up saying "strategic thinking" anyway.
Some criteria are genuinely meant to be universal. Safety culture in mining, ethical standards in regulated industries. Their universality is the point. The problem starts when the dimensions that should tell candidates apart have that same character.
The scoring stage is where I find the damage most interesting. Two interviewers both rate a candidate 3 out of 5 on "communication," a candidate who happens to be interviewing in her second language. They feel like they agree. But one was evaluating whether the candidate spoke clearly in conversation, while the other was thinking about written documentation skills under deadline pressure. The numbers match. The things being measured don't. Research on structured interviews points to exactly this: without a shared definition of the dimension, numerical agreement is noise. You get a tidy spreadsheet that looks like measurement but is closer to decoration.
When those scores reach the debrief, vague criteria make the conversation worse. "I just didn't feel the strategic thinking was there" is unfalsifiable when "strategic thinking" was never defined in terms anyone could observe and verify. The debrief becomes a contest of confidence. Whoever speaks with the most authority wins, and the scorecard sits in the background providing cover for whatever conclusion the room lands on.
The intake meeting is the last moment where you can prevent this. Once vague criteria reach the scorecard, they spread through every interview, every score, every hiring decision.
Most experienced recruiters already ask what "strong leadership" means in practice. They get a real answer. The hard part is making that answer survive contact with the process. What helps is writing the specific answer onto the scorecard as the actual criterion, not as a note in the intake doc that gets translated back into "strategic thinking" when the panel is assembled. When the hiring manager says "the role needs someone who can rebuild trust with two team leads who are considering leaving and ship Q3 despite inheriting a demoralized team", that sentence goes on the scorecard. The interviewers design their questions around it and score candidates against it.
The other useful move is a filter I keep coming back to. Take each criterion on the scorecard and ask whether it would fit just as well on a scorecard for a marketing director, a plant operations manager, and a software architect. If it would, it's probably a fake universal. "Communication skills" fails this test every time. "Translates technical constraints into business trade-offs for non-technical stakeholders in writing, under time pressure" passes it. The difference in scoring power between those two is enormous, and the only thing separating them is thirty seconds of specificity work during intake.
Most hiring managers who say "strategic, collaborative, strong communicator" know exactly what they need. The picture is in their heads. When the intake meeting doesn't draw it out, the scorecard fills with words that could mean anything, and the debrief becomes an argument about what the words mean. The candidate disappears from the conversation.
Model in this article
Fake Universals (Consensus Gentium Trap): Universal categories become substantively empty through progressive abstraction. The broader you define a trait, the less it distinguishes any particular instance. A category that applies to everything describes nothing.
Discipline: Anthropological theory
Key research: A. L. Kroeber (coined the term); Clifford Geertz, The Interpretation of Cultures (1973)
The Recruiting Lattice takes mental models from fields like behavioral science, sociology, and decision theory and turns them into practical tools for talent acquisition.
