Run 4: Final Sprint¶
Where You Left Off¶
In Run 3, you put the closed loop into practice — criteria, failing test, implement, passing test — and built features with confidence that your safety net would catch regressions. Your observation network is deployed, tested, and live. The pipeline works: Explore → Plan → Implement → Verify → Ship.
Then in Lift 4, you confronted the remaining bottleneck: you. One story at a time. One conversation at a time. AI builds fast, but you're serial. You learned the delegation-ready test — can you spec it, is it bounded, is it independently buildable, would you know a good result? You practiced background execution, sending work to AI while you moved on to the next task. You planned your final sprint: user stories with acceptance criteria for every feature you want to ship, assessed for parallel safety.
You also saw the full system you've built: acceptance criteria define success (Lift 1), decomposition keeps tasks bounded (Lift 2), automated tests verify results (Lift 3), and background execution lets you move faster (Lift 4). That system is what lets you trust AI to work without watching every step.
The Challenge¶
This is the final sprint. Everything you've learned — delegation contracts, skills, TDD, parallel execution — comes together. Ship as much of your backlog as you can by running multiple workstreams, batching similar work, and trusting your system to verify the results.
The observation network you demo at the end of this run is the culmination of all four runs. Make it something worth showing.
Delegate with judgment. Trust your tests. Ship when they're green.
Baseline Capabilities¶
- Delegation-ready assessment applied — your team has evaluated the sprint backlog against the delegation-ready test, decided what to run in parallel vs. sequence, and is executing the plan you built in Lift 4.
- At least two features built in parallel — separate conversations, background execution, or batched similar work. Each feature has its own user story, acceptance criteria, and tests — independently built and independently verified.
- Live data enrichment — observations are enriched with real data from at least one external API: current danger ratings from the avalanche forecast API, current weather from the NWS API, or snowpack readings from SNOTEL. Your repository has API documentation for all three — ask your AI coding assistant to explore what's available.
- Final deployment — the live URL reflects the complete observation network: four runs of work, tested and shipped. All tests pass. The platform is something you'd demo with confidence.
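To make the enrichment capability concrete, here is a minimal sketch of attaching current weather to an observation. The response shape mirrors the public NWS forecast API (`api.weather.gov`), but the observation schema, the function names, and the exact fields used are assumptions — check them against the API documentation in your repository before building on them.

```python
"""Sketch: enrich an observation with weather from an NWS-style forecast
response. Parsing is kept separate from fetching so it's easy to test."""

def extract_current_conditions(forecast_json):
    # NWS-style payloads nest forecast periods under properties.periods;
    # verify these field names against the API docs in your repo.
    period = forecast_json["properties"]["periods"][0]
    return {
        "temperature": f'{period["temperature"]}°{period["temperatureUnit"]}',
        "summary": period["shortForecast"],
        "wind": period["windSpeed"],
    }

def enrich_observation(observation, forecast_json):
    # Return a new dict rather than mutating the original observation.
    return {**observation, "weather": extract_current_conditions(forecast_json)}

# Sample payload trimmed to only the fields used above (hypothetical values).
sample = {
    "properties": {
        "periods": [
            {"temperature": 28, "temperatureUnit": "F",
             "shortForecast": "Snow Showers", "windSpeed": "15 mph"}
        ]
    }
}

obs = {"zone": "Tetons", "type": "red_flag", "note": "Shooting cracks on NE aspect"}
enriched = enrich_observation(obs, sample)
```

Keeping the parser pure means your test suite can verify it against saved sample responses — no live API call needed, which fits the "trust your tests" workflow of this run.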
Stretch Goals¶
- Observation trend analysis — show how observation volume or types have changed over recent days, giving forecasters a sense of whether reporting activity is increasing or declining
- Trip report generator — bundle observations by date and zone into a summary a backcountry skier could use to plan tomorrow's route — conditions, recent observations, danger level, weather
- Cross-reference observations with forecast data — show how community observations correlate with the official danger rating. Are observers reporting conditions consistent with the current forecast, or are there discrepancies that could inform a forecast update?
- Alert or notification features — surface new red-flag observations or danger level changes prominently, so a forecaster checking the platform sees the most critical updates first
- Demo polish — landing page that explains what the platform is, about section crediting the team, professional styling, mobile responsiveness — make the live URL something you'd share outside the workshop
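As a starting point for the trend-analysis stretch goal, here is one way to frame "is reporting activity increasing or declining": count observations in the most recent window and compare against the window just before it. The `date` field and the three-day window are assumptions — adapt both to your observation schema.

```python
"""Sketch: compare observation volume in the last few days against the
window immediately before it (field names and window size are assumed)."""

from datetime import date, timedelta

def volume_trend(observations, today, window_days=3):
    """Return (recent_count, prior_count) for two back-to-back windows."""
    recent_start = today - timedelta(days=window_days)
    prior_start = recent_start - timedelta(days=window_days)
    recent = sum(1 for o in observations if recent_start < o["date"] <= today)
    prior = sum(1 for o in observations if prior_start < o["date"] <= recent_start)
    return recent, prior

# Hypothetical data: three observations in the recent window, one before it.
today = date(2024, 2, 10)
observations = [
    {"date": date(2024, 2, 10)}, {"date": date(2024, 2, 9)},
    {"date": date(2024, 2, 8)},  {"date": date(2024, 2, 5)},
]
recent, prior = volume_trend(observations, today)
# With window_days=3: recent covers Feb 8-10, prior covers Feb 5-7.
```

Comparing equal-length adjacent windows is deliberately simple — it gives forecasters a direction (up or down) without pretending to be a statistical model, which is about the right level of claim for community-reported data.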
Tips¶
- Start with the sprint plan you made in Lift 4. Your team already identified which features pass the delegation-ready test and assessed parallel safety. Execute the plan — don't re-plan.
- Check the foundation before you build on it. Three runs of features may have left your codebase with inconsistencies or files that have grown too large. Before the sprint, ask AI: "Look at our codebase. Are there any files that are too large, duplicated patterns, or things you'd reorganize before we add more features?" Cleaning up now is faster than untangling it later.
- Pipeline as a team, not four builders at once. Your team has four workspaces, but that doesn't mean four people implementing simultaneously. While one person builds and syncs a feature, teammates can explore the external APIs, write user stories and acceptance criteria for the next batch, or review what was just deployed. Keep the pipeline full — Explore, Plan, Implement, Verify happening in parallel across the team, not four implementations racing to sync.
- Let the tests do the integration check. After syncing a completed feature, run the full test suite. If everything passes, the feature integrates cleanly. If something fails, you know exactly what broke — and you fix that, not everything.
- If syncing fails with a conflict, your AI assistant will usually try to resolve it automatically. If it doesn't, tell it: "I have a merge conflict. Help me resolve it." Conflicts happen when two people changed the same file — they're a normal part of parallel work, not an error on your part. After resolving, run the full test suite to make sure everything still works.
- Redeploy often. Every time a batch of tests goes green, ship it. The live URL should reflect your latest verified work throughout the sprint, not just at the end.
- Take breaks. AI does the building, but you're doing the thinking — evaluating output, making decisions, coordinating with teammates. That's genuinely tiring. If your judgment starts slipping, step away for five minutes. A rested delegator makes better calls than a rushed one.