Charlie Munger had this idea that the person who only knows one field well doesn't actually understand that field at all. He said the person with a hammer sees every problem as a nail, and he didn't mean it casually. He meant that if you only have the tools your own profession gave you, you'll keep getting things wrong in ways you can't see. The fix, in his view, was to build what he called a latticework of mental models. A collection of big ideas from different fields (biology, psychology, economics, physics, history) that you hang together in your head like a lattice structure, where each node connects to others and helps you see what's actually in front of you.
I came across Munger's idea through Shane Parrish's blog Farnam Street and his podcast The Knowledge Project, probably around 2019 or 2020. Parrish had built an entire body of work around this concept. He took Munger's latticework idea seriously enough to catalog the models, explain them, and show how they applied to decisions in business, investing, and daily life. His work felt different from most productivity or business content because it was aimed at improving your thinking, not your output. I found that distinction compelling.
Around the same time I started reading Nassim Taleb's Incerto books. Antifragile first, then The Black Swan, then Fooled by Randomness and Skin in the Game. Taleb's writing cracked something open for me. He made me pay attention to how much of what we experience is randomness that we dress up as skill or signal. He introduced me to the Lindy effect, the idea that the longer something has survived, the longer you can expect it to survive. A book that has been read for two thousand years is more likely to be read in fifty years than a book published last month. Ideas that have persisted across centuries probably capture something true about how humans and systems work. Fresh ideas are worth exploring, but old ideas that are still around have passed a harder test.
That stuck with me. I started gravitating toward thinkers and books that dealt with how people actually make decisions under uncertainty, rather than how they should make decisions in theory. Kahneman. Gigerenzer. Tetlock. Thaler. People who had spent careers studying the gap between what we think we're doing when we decide and what we're actually doing.
And at some point I started noticing that my work in talent acquisition was full of the patterns these thinkers were describing.
I recruit senior and executive-level people globally for a €5.4B industrial company with 18,000 employees across 51 countries. Most of the searches I run have a candidate pool of a few hundred people worldwide. The stakes on each hire are real, the evaluation processes are imperfect, and the decisions get made by groups of smart, busy people under genuine pressure. Every mental model I was reading about in behavioral economics or decision science seemed to have a direct parallel in the hiring rooms and debrief conversations I was sitting in.
Take narrative fallacy. A hiring manager walks out of an interview and builds a coherent story about the candidate. It sounds convincing in the debrief. It flows. But if you listen closely, you notice the story has quietly filtered out the data points that didn't fit the plot. The candidate's weak answer on the technical question gets reframed as "they were nervous," and a strong answer from a different candidate gets downplayed because it didn't fit the narrative the panel was already building. I think this happens more often than most people realize, and before I read Taleb, I didn't have a name for it. I just knew something felt off when a group of smart people walked out of an interview and all agreed on a version of the candidate that didn't quite match what I'd heard in the room.
Anchoring and commitment escalation were everywhere too. The first candidate in a search sets the bar for everyone who follows, and nobody notices. A search that's been open for five months starts bending the requirements because the team has invested too much to question whether the brief was right from the start. I'd been watching these patterns for years without knowing what to call them.
But here is what I think is the interesting part. The insight almost never went the other direction. TA literature, TA conferences, TA thought leadership almost never borrowed from behavioral economics or decision science or systems thinking. The field had its own internal vocabulary (pipeline, funnel, employer brand, candidate experience, time-to-fill) and most of the discourse stayed inside that vocabulary. When I looked for writing that applied mental models from other disciplines to talent acquisition, there was very little.
I don't mean that as criticism. Every profession tends to develop its own internal language and its own ways of explaining itself to itself. But I kept finding that the most useful explanations for what I was seeing at work came from books that had nothing to do with recruiting. A chapter about how ecosystems degrade when you extract from them without rest helped me understand something I'd been watching happen to sourcing channels for years. Why the same InMail approach that worked eighteen months ago now gets silence. A completely unrelated passage about jury deliberation helped me rethink what goes wrong in interview debriefs, because jury dynamics and debrief dynamics turn out to be remarkably similar when you look at how groups make decisions when nobody has the full picture.
Munger's latticework is exactly this. You don't need every model from every field. You need models from different enough fields that when you encounter a problem, you have more than one lens to look through. And the models need to connect, because isolated facts don't help you think. Connected frameworks do.
The Recruiting Lattice started as a way to do this work in public. I wanted to take the mental models I was finding in these books, test whether they genuinely transferred to talent acquisition (many don't, and I try to be honest about which ones break), and write about the ones that survived the transfer.
A lattice is a structure where nodes connect to other nodes in multiple directions. That's the newsletter in one image. Each model in the library links to other models, and each article tries to show how an idea from one discipline illuminates something specific about how hiring decisions actually get made.
I should be honest about the limits of this kind of project. I'm a recruiter, not an academic. My understanding of behavioral economics or systems theory or evolutionary psychology is limited and comes from reading, not from running experiments. When I write about Kahneman's work or Taleb's arguments, I'm applying them as a practitioner who found them useful, not as someone who can evaluate the statistical methods behind the original studies. Some of the models I'll write about will turn out to be wrong, or at least less robust than I thought. I think that's fine. The goal is to be useful, not to be authoritative. If a model helps you see something in your hiring process that you didn't see before, it has done its job even if the underlying research gets revised later.
I also want to be clear that this newsletter is not a framework factory. I'm skeptical of anyone who arrives at a new framework every week. Most good ideas are old ideas that have been forgotten or that haven't been connected to the right domain yet. The Lindy effect applies here too. A model from decision science that has held up for forty years is probably more useful to you than a new hiring methodology someone published on LinkedIn last week.
Each article takes a mental model that has survived scrutiny in its home discipline, explains how it works on its own terms without smuggling in recruiting or academic jargon, and then shows how it plays out in the real situations that TA professionals face every day. If an article is just a summary of someone else's research, it has failed. If it gives you a way to think differently about a conversation you're going to have with a hiring manager next week, it has worked.
I think there's a version of talent acquisition work that feels like problem-solving with a rich toolkit, where you're drawing on ideas from many fields to work through situations that are genuinely complex. And there's a version that feels like following a process with the same small set of tools, hoping the tools happen to fit. I'm interested in the first version, and I suspect you are too if you're reading this.
That's why The Recruiting Lattice exists. One recruiter's attempt to borrow better thinking from wherever it can be found and bring it home to the work we do.
I'm Joonatan Hongell, a talent sourcer and recruiter at Metso where I run senior and executive-level searches globally. The Recruiting Lattice is a personal project and has no affiliation with my employer. I’m based in Helsinki, Finland. You can find me on LinkedIn.