Author: Joonatan Hongell
By the early 1990s, the standard approach to reducing industrial pollution was regulation. Inspectors, compliance mandates, penalties. Then several European countries tried something different: instead of hiring more inspectors and threatening more fines, they taxed emissions and used the revenue to lower other taxes. Polluting became more expensive. Cleaner alternatives became cheaper. The scheme was revenue-neutral, and emissions dropped further than they had under decades of regulation, with far fewer inspectors.
Around the same time, the United States had spent over a trillion dollars on the traditional approach. Inspectors, compliance mandates, penalties. Thousands of companies monitored by thousands of regulators. The whole system depended on changing how people behaved, then watching to make sure they kept behaving that way.
Both approaches wanted the same outcome. One changed a single thing in the system. The other tried to change thousands of people.
Paul Hawken described it plainly in The Ecology of Commerce. Good design changes as few things as possible to get the greatest result. And it removes stress rather than adding it. Bad design does the opposite. It targets the most uncontrollable element and adds enforcement to hold the change in place.
Take interview training. You're asking fifty hiring managers spread across sites in Finland, the US, and Brazil to remember a two-hour workshop three months later, in the middle of an interview with a candidate they already have a gut feeling about. The target is hiring manager behavior, the most uncontrollable element in the system. Now there's a technique to recall on top of everything else happening in that room.
The alternative changes the form instead of the person. When a scorecard requires evidence-linked ratings before an overall recommendation, instead of an overall impression followed by reasons invented to match it, the evaluations get better without anyone attending a workshop. The form does the thinking that the training tried to install temporarily. The evaluator's load actually decreases, because the scorecard tells them what to think about and in what order.
The structural fix often feels too small to matter when you propose it. Changing a few fields on a form sounds trivial compared to a company-wide training program. But the training program depends on fifty people sustaining a behavior change that the system does nothing to support. The form works every time someone opens it, regardless of whether they remember anything from the workshop.
If you're evaluating a TA improvement project, two questions probably tell you most of what you need to know. First, count how many people have to change their behavior for this to work. If the answer is more than a handful, look for the one lever underneath. The default, the form, the sequence, the one thing that could change once and produce the same result without anyone having to keep doing something new. Second, count what the initiative adds versus what it removes. New steps, new meetings, new reports on one side. Eliminated approvals, removed redundancies, simplified workflows on the other. If you're adding more than you're removing, you're probably adding stress and the system will push back.
When time-to-fill spikes, the typical response is to add process. Pipeline reviews, escalation procedures, approval gates. Each addition means more meetings and more to track. All of it targets the thing you control least, which is whether busy people prioritize speed on any given Tuesday. Meanwhile, the actual bottleneck is often one approval step where a sign-off duplicates approvals already obtained by other stakeholders, or a scheduling sequence that requires four calendars to align before a panel interview can happen. Remove that one chokepoint and the system speeds up without anyone changing their behavior.
Hiring manager intake is where this principle has the most room to run. In my experience, the first draft of the job description comes from the hiring manager and it's usually raw. I don't send it back with feedback. I run an intake meeting with the right questions, and by the end I have what I need to write a job ad that actually sells the role. What works is redesigning the intake meeting so the questions on the form do the thinking. When you ask "What will success look like at 12 months?" instead of "List the requirements," the output improves because the question forces a different kind of answer. You don't need the training because the structure is doing what the training was supposed to do.
Sourcing a VP when the entire addressable market is a few hundred people globally depends on judgment, timing, and trust that no intake meeting redesign can produce. You're reading the same profiles everyone else is reading, and the difference between landing the hire and losing them is relationships and timing, not process structure. But for most recurring TA work, the filter holds.
Training programs are visible. They have budgets, timelines, slides. A TA leader can point to them and say we're working on it. That's why, when something goes wrong in hiring, the instinct is often to train people and create another round of reviews.
An intake meeting redesign is invisible the moment it's working. Nobody gets credit for the approval step that was quietly removed, the scheduling bottleneck that disappeared, the intake question that started producing better job descriptions without anyone noticing. The behavioral fix looks like investment, and the structural fix, when it works, looks like nothing happened at all.
I once recommended removing the president of a business area from the final interview step, because I knew that sign-off would add 2-3 weeks to the process and the panel had already done the work. The panel accepted it quietly and we moved on. Nobody tracked that time-to-fill didn't stretch by three weeks. That's what a structural fix looks like from the inside: a problem that never became visible, because it was removed before it could.
Looking like you're doing enough and actually doing enough are not the same thing. The catch is that doing it right often looks like nothing happened.
Models in this article
Good Design Principles. Good system design changes the fewest structural elements to get the greatest result and removes stress rather than adding it. Bad design targets human behavior and layers on enforcement.
Discipline: Systems design
Key research: Donella Meadows (leverage points), Thaler & Sunstein (nudge/default effects)
Source: Paul Hawken, The Ecology of Commerce (1993)
The Recruiting Lattice takes mental models from fields like behavioral science, sociology, and decision theory and turns them into practical tools for talent acquisition.
