By the early 1990s, the standard approach to reducing industrial pollution was regulation. Inspectors, compliance mandates, penalties. Then several European countries tried something different. Instead of hiring more inspectors and threatening fines, they introduced a tax on emissions and used the revenue to lower other taxes. Polluting became more expensive. Cleaner alternatives became cheaper. The whole thing was revenue-neutral, and emissions dropped further than decades of regulation had achieved, at a fraction of the enforcement overhead.
Around the same time, the United States had spent over a trillion dollars on the traditional approach. Inspectors, compliance mandates, penalties. Thousands of companies monitored by thousands of regulators. The whole system depended on changing how people behaved and then watching to make sure they kept behaving that way.
Both approaches wanted the same outcome. One changed a single structural variable. The other tried to change thousands of people.
Once you see this distinction, it's hard to look at any talent acquisition (TA) improvement initiative the same way.
Paul Hawken, who spent years writing about why industrial systems resist improvement, put the principle cleanly in his book The Ecology of Commerce. Good design does two things. It changes the fewest possible elements to get the greatest result. And it removes stress from the system rather than adding it. Bad design does the opposite on both counts, targeting the most uncontrollable element and layering on enforcement to hold the change in place. There's a body of work in behavioral economics that backs this up, mostly around how changing defaults moves people more reliably than training or persuasion ever does.
Here's the parallel to talent acquisition.
Take interview training. You're asking fifty hiring managers spread across sites in Finland, the United States, and Brazil to remember a two-hour workshop three months later, in the middle of an interview with a candidate they already have a gut feeling about. The target is hiring manager behavior, which is probably the most uncontrollable element in the entire hiring system. And the training adds another thing to remember. Now there's a technique to recall on top of everything else happening in that room.
Run it through Hawken's two questions. Are we changing the fewest elements? No, we're trying to change every interviewer in the company. Are we removing stress? No, we're adding one more thing for people to hold in their heads.
The alternative that works better almost always involves changing the form instead of the person. When a scorecard requires evidence-linked ratings before an overall recommendation (instead of an overall impression followed by reasons invented to match it), assessment data improves without anyone attending a workshop. The form does the cognitive work that the training tried to install temporarily. One structural element changed. The evaluator's load actually decreases, because the scorecard tells them what to think about and in what order.
This is the part I find most interesting. The structural fix often feels too small to matter when you propose it. Changing a few fields on a form sounds trivial compared to a company-wide training program. But the training program depends on fifty people sustaining a behavior change that the system does nothing to support. The form works every time someone opens it, regardless of whether they remember anything from the workshop.
The same pattern shows up with time-to-fill problems. The typical response is to add process. Pipeline reviews, escalation procedures, approval gates. Each addition means more meetings and more overhead. All of it targets the thing you control least, which is whether busy people prioritize speed on any given Tuesday. Meanwhile, the actual bottleneck is often one approval step where a sign-off duplicates approvals already obtained by other stakeholders, or a scheduling sequence that requires four calendars to align before a panel interview can happen. Remove that one chokepoint and the system speeds up without anyone changing their behavior.
Hiring manager intake is where I think this principle has the most room to run. Every organization tries some version of training managers to write better job descriptions. The resistance is high. Whatever improvement you get fades within weeks. What works is redesigning the intake form so the questions themselves do the thinking. When the form asks "What will success look like at six months?" instead of "List the requirements," the output improves because the question forces a different kind of answer. You don't need the training because the structure is doing what the training was supposed to do.
If you're evaluating a TA improvement project, two questions probably tell you most of what you need to know. First, count how many people have to change their behavior for this to work. If the answer is more than a handful, look for the structural variable underneath. The default, the form, the sequence, the one thing that could change once and produce the same result without anyone having to keep doing something new. Second, count what the initiative adds versus what it removes. New steps, new meetings, new reports on one side. Eliminated approvals, removed redundancies, simplified workflows on the other. If you're adding more than you're removing, you're probably adding stress and the system will push back.

These limits are real, though. Sourcing a VP from a global talent pool of two hundred people depends on judgment, timing, and trust that no form redesign can produce. And crisis hiring sometimes really does need the added pressure of daily standups and compressed timelines. The filter works for recurring organizational patterns, not for one-time emergencies or relationship-dependent work.
What I keep coming back to is the question of why the structural fix is so rarely the first idea. When something goes wrong in hiring, the instinct is almost always behavioral. Train people. Add oversight. Create a new review cadence. The structural redesign, the one that changes a form, redesigns an intake meeting, or removes a step, feels too small. It doesn't look like doing enough. But the question worth sitting with is whether looking like enough and being enough are the same thing. Mostly, I don't think they are.
Models in this article
Good Design Principles: Good system design changes the fewest structural elements to get the greatest result and removes stress rather than adding it; bad design targets human behavior and layers on enforcement.
Discipline: Systems design
Key research: Donella Meadows (leverage points); Thaler & Sunstein (nudge/default effects)
Source: Paul Hawken, The Ecology of Commerce (1993)
The Recruiting Lattice applies mental models from diverse disciplines to the daily craft of talent acquisition.