Allocation As The New Leadership Skill
Daniel Pink recently identified six human skills that AI won’t replace. One of them is allocation: the ability to direct intelligence resources effectively, deciding which tasks belong to humans and which to machines. It’s a clean formulation. I’ve been turning it over because I think it’s pointing at something larger than the AI conversation.
The allocation question has always been the leadership question. AI just makes it harder to ignore.
# The Old Problem
Peter Drucker argued in The Effective Executive that what distinguishes great executives isn’t intelligence, creativity, or work ethic. It’s how they allocate their attention. Where they choose to spend their finite cognitive resources, and equally, what they choose not to spend them on.
His most quoted line captures it: “There is nothing so useless as doing efficiently that which should not be done at all.”
This observation was published in 1967, and AI has made it urgent. Drucker’s line remains the most precise diagnosis of how leader calendars go wrong. They are efficient. They are productive. They are working extremely hard on things that don’t require their particular judgment, experience, or pattern recognition. They are, in Drucker’s terms, doing the wrong things well.
The reason this persists is structural, not intellectual. Most leaders understand the concept: delegate more, protect strategic thinking time, spend fewer hours in the weeds. The problem is that the weeds are where they feel useful. Strategic work is ambiguous, slow to produce visible results, and psychologically uncomfortable in ways that tactical execution isn’t. So the calendar fills with meetings they don’t need to attend, decisions they don’t need to make, and work they don’t need to do. The wrong work just feels more like work.
# Attention Has A Hard Budget
Kahneman established something in his research on attention that most people accept in theory but ignore in practice: attention is a finite resource with a hard capacity limit. You have a limited budget to allocate across activities, and if you try to exceed that budget, you fail. Effortful activities interfere with each other. You cannot do several demanding things at once without degrading all of them.
This is a constraint, like physics. Every hour you spend in a meeting where your presence isn’t actually required is an hour subtracted from work where it is. Every decision you make that someone on your team could make is a decision’s worth of cognitive load consumed. The budget doesn’t expand because the task felt important. It just gets spent.
Allocation is the management skill. The quality of a leader’s thinking depends directly on how much capacity is left after everything else has taken its share. And by the time you get to the work that actually requires your judgment, the budget is usually already spent on things that didn’t.
# The AI Extension
Pink’s framing adds a new dimension to this old problem. When the only question was how to allocate work among humans, you could afford to be somewhat loose about it. If you kept doing work a team member could handle, the cost was your time and their development. Real costs, but manageable.
Now the question extends to machines. And the machines are getting better at a category of work that many leaders use to fill their days: summarizing, drafting, analyzing data, preparing presentations, processing information. If a significant portion of what you do each day can be done by AI, and another portion could be handled by someone on your team, then the honest accounting of what genuinely requires you gets very short.
This is where the two problems compound. If you can’t let go of work that a human on your team could own, you’re almost certainly not going to navigate the harder question of what to hand to a machine. The muscle is the same: assessing where your particular intelligence adds genuine value, and having the discipline to stop spending it everywhere else.
Drucker’s line about doing efficiently what shouldn’t be done at all becomes sharper in an AI context. The machine can do the wrong things far more efficiently than you can. If you haven’t figured out what the right things are, the tool just accelerates the misallocation.
# Why The Honest List Is Hard
The exercise is simple: a list with two columns.
Column one: tasks where your judgment, relationships, or pattern recognition are genuinely irreplaceable. Where you see things nobody else on the team can see. Where your specific experience makes a qualitative difference in the outcome.
Column two: everything else. Tasks that feel important. Tasks you’re good at. Tasks that produce the satisfying feeling of getting things done. But tasks where a capable team member, given clear context, could produce 80% of what you produce. Or where a machine, given the right prompt, could do it faster.
The honest version of this list is uncomfortable, because column one is almost always shorter than leaders expect. The gap between “I’m the best person to do this” and “I’m the only person who can do this” is wide. Most of what fills a leader’s day lives in that gap: work where they add value, but where the value doesn’t justify the cost of their attention.
The reason the list is hard to make honestly is that column two often contains the work that built your career. The deal you could close better than anyone. The product decision you’d make differently. The operational detail you’d catch that others miss. These are real skills. They produce real results. And the role has moved past them. The work that only you can do now is probably the work you find hardest to point at: reading patterns across the business, seeing what’s missing, asking questions nobody else is positioned to ask, making the calls that require integrating information from every function simultaneously.
# The Same Principle, Extended
Pink’s framing of allocation as a human skill in the AI age is useful because it makes visible something that was always true but easier to avoid. The question was never about working harder or being more disciplined with your calendar. The question was always about what deserves your attention and what doesn’t.
The AI era just makes the accounting more precise. When a machine can draft the memo, analyze the data, and summarize the meeting, the thing that’s left is the thing that was always the real job: deciding what matters, reading people and situations, exercising judgment in conditions of uncertainty, and asking the questions that reframe the problem.
These are allocation decisions. They always were. The only difference now is that the cost of getting them wrong is more visible.
Make the two lists. Be honest about how short the first column is. That gap between what you do and what only you can do is where your attention is leaking. And attention, once spent, doesn’t come back.