The most useful explanations for what happens in hiring come from fields that have nothing to do with hiring.

I recruit senior and executive-level people globally for a €5.2B industrial company with 18,000 employees across 51 countries. Most of the searches I run have a candidate pool of a few hundred people worldwide. The stakes on each hire are real, the evaluation processes are imperfect, and the decisions get made by groups of smart, busy people under genuine pressure. And the patterns I keep seeing in those rooms are described better by behavioral economics and decision science than by anything the TA profession has produced on its own.

Take what Nassim Taleb calls the narrative fallacy. A hiring manager walks out of an interview and builds a coherent story about the candidate. It sounds convincing in the debrief. It flows. But if you listen closely, you notice the story has quietly filtered out the data points that didn't fit the plot. The candidate's weak answer on the technical question gets reframed as "they were nervous," and a strong answer from a different candidate gets downplayed because it didn't fit the narrative the panel was already building. Before I read Taleb, I didn't have a name for this. I just knew something felt off when a group of smart people walked out of an interview and all agreed on a version of the candidate that didn't quite match what I'd heard in the room.

Anchoring was everywhere too. The first candidate in a search sets the bar for everyone who follows, and nobody notices. So was escalation of commitment. A search that's been open for five months starts bending the requirements because the team has invested too much to question whether the brief was right from the start.

TA has its own internal vocabulary for talking about itself. Pipeline, funnel, employer brand, candidate experience, time-to-fill. Most of the discourse stays inside that vocabulary. When I looked for writing that applied mental models from other disciplines to talent acquisition, there was very little. I don't mean that as criticism. Every profession tends to develop its own language for explaining itself to itself. But the most useful explanations for what I was seeing at work kept coming from books that had nothing to do with recruiting.

A chapter about how ecosystems degrade when you extract from them without rest helped me understand something I'd been watching happen to sourcing channels for years. Why the same InMail approach that worked eighteen months ago now gets silence. A completely unrelated passage about jury deliberation helped me rethink what goes wrong in interview debriefs, because jury dynamics and debrief dynamics turn out to be remarkably similar when you look at how groups make decisions when nobody has the full picture.

Charlie Munger had this idea that the person who only knows one field well doesn't actually understand that field at all. The fix, in his view, was to build what he called a latticework of mental models: big ideas from enough different fields that you have more than one lens for any problem. I came across Munger's idea through Shane Parrish's Farnam Street blog and the Knowledge Project podcast, probably around 2019 or 2020. That led me to Taleb, and then deeper into decision science and behavioral economics, to people who had spent careers studying the gap between what we think we're doing when we decide and what we're actually doing.

The Recruiting Lattice started as a way to do this work in public. Each article takes a mental model that has survived scrutiny in its home discipline, explains how it works on its own terms without smuggling in recruiting or academic jargon, and then shows how it plays out in the real situations that TA professionals face every day. If an article is just a summary of someone else's research, it has failed. If it gives you a way to think differently about a conversation you're going to have with a hiring manager next week, it has worked.

I should be honest about the limits of this kind of project. I'm a recruiter, not an academic. My understanding of behavioral economics or systems theory or evolutionary psychology is limited and comes from reading, not from running experiments. When I write about Kahneman's work or Taleb's arguments, I'm applying them as a practitioner who found them useful, not as someone who can evaluate the statistical methods behind the original studies. Some of the models I'll write about will turn out to be wrong, or at least less robust than I thought. I think that's fine. The goal is to be useful, not to be authoritative. If a model helps you see something in your hiring process that you didn't see before, it has done its job even if the underlying research gets revised later.

This newsletter is not a framework factory. I'm skeptical of anyone who arrives at a new framework every week. Most good ideas are old ideas that have been forgotten or that haven't been connected to the right domain yet. A model from decision science that has held up for forty years is probably more useful to you than a new hiring methodology someone published on LinkedIn last week.

There's a version of talent acquisition work that feels like problem-solving with a rich toolkit, where you're drawing on ideas from many fields to work through situations that are genuinely complex. And there's a version that feels like following a process with the same small set of tools, hoping the tools happen to fit. This newsletter is about the first version.

I'm Joonatan Hongell, a talent sourcer and recruiter at Metso, where I run senior and executive-level searches globally. The Recruiting Lattice is a personal project and has no affiliation with my employer. You can find me on LinkedIn.
