Delegation Judgment

The Bottleneck Is You

Look at how you've been working: write a user story, hand it to AI, wait for the output, verify, move on. That pipeline works — you proved it in Lifts 1 through 3. But it's serial. One thing at a time. And AI builds fast enough that you spend more time waiting for yourself to move on than waiting for AI to finish.

The temptation is to throw everything at AI in parallel and see what comes back. Don't. Parallelization amplifies whatever you put in. If you delegate well-specced work in parallel, you get multiple verified features back. If you delegate fuzzy work in parallel, you get multiple outputs that don't integrate, don't match your intent, and take longer to fix than they would have taken to build sequentially.

Scaling requires judgment first, speed second.

The Delegation-Ready Test

In Lift 1, you learned: "If you can't write acceptance criteria for it, you're not ready to delegate it." That principle is even more important when running things in parallel, because you won't be watching every step.

Here's the test:

| Question | If Yes | If No |
| --- | --- | --- |
| Can you write 2-4 specific acceptance criteria? | Delegation-ready | Needs more Explore/Plan work first |
| Is the scope bounded — you know where it starts and stops? | Delegation-ready | Decompose further (Lift 2) |
| Can it be built and tested independently of other in-progress work? | Safe to parallelize | Needs to be sequenced, not parallelized |
| Would you know a good result if you saw one? | Delegate with confidence | Spend more time understanding the problem |

This isn't new — it's the same judgment you've been building all day. Right-sizing from Lift 2. Acceptance criteria from Lift 1. Independently shippable pieces from Lift 2's decomposition. The difference is that when you're running three things simultaneously, a fuzzy spec doesn't just produce one bad output — it produces a bad output that conflicts with the other two.
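The test above is mechanical enough to sketch in code. This is an illustrative sketch only; the `Feature` dataclass, the `delegation_status` function, and the status strings are hypothetical names invented for this example, not part of the workshop materials.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    acceptance_criteria: list[str]  # 2-4 specific, testable criteria
    scope_bounded: bool             # you know where it starts and stops
    independent: bool               # buildable and testable apart from other work
    can_judge_result: bool          # you'd recognize a good result if you saw one

def delegation_status(f: Feature) -> str:
    """Apply the delegation-ready test, one question at a time."""
    if not (2 <= len(f.acceptance_criteria) <= 4):
        return "needs more Explore/Plan work first"
    if not f.scope_bounded:
        return "decompose further"
    if not f.can_judge_result:
        return "spend more time understanding the problem"
    if not f.independent:
        return "delegation-ready, but sequence it rather than parallelize"
    return "safe to parallelize"

# Hypothetical example from the Final Sprint feature list:
trip_report = Feature(
    "trip report generator",
    acceptance_criteria=["bundles observations by date", "bundles observations by zone"],
    scope_bounded=True,
    independent=True,
    can_judge_result=True,
)
print(delegation_status(trip_report))  # → safe to parallelize
```

Note the ordering of the checks: spec quality and boundedness gate delegation at all, while independence only decides whether the work runs in parallel or in sequence.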

Team Discussion: What's Ready?

Format: Team Discussion
Time: ~2 minutes

Your Final Sprint challenge includes: integrating live avalanche forecast data, adding weather enrichment, building observation trend analysis, and creating a "trip report" generator that bundles observations by date and zone.

Discuss: Walk through each feature against the delegation-ready test. Which ones could you write clear acceptance criteria for right now? Which ones need more exploration first? If you had to pick two to run in parallel, which two — and why those two?

Key Insight

Scaling delegation doesn't mean delegating everything at once — it means knowing what's ready. The same judgment you've been building since Lift 1 (can I spec it? is it bounded? would I know a good result?) becomes your filter for what to run in parallel. If you can't spec it, AI can't deliver it — and that's doubly true when you're not watching every step.