# Reflection 4
A facilitation guide for the final team debrief. These questions are designed to spark discussion — not every question needs to be covered. Pick the ones that resonate with what you observed during the run and across the full journey.
## What You Shipped
- Pull up your live URL. Walk through the observation network as a team. What's the feature you're most proud of? What would a forecaster checking this before dawn actually find useful?
- How many features did you ship in this sprint compared to Run 1? What made the difference — was it speed, confidence, parallel execution, or something else?
- Did the live data enrichment change the feel of the platform? What's different about an observation network that shows current danger ratings or weather alongside community reports?
- If a forecaster used your platform tomorrow morning, what would they do differently from what they do today? That's the difference between shipping features and shipping an outcome. Which did you ship?
## What You Practiced
- Did parallel execution actually make you faster, or did managing multiple workstreams add its own overhead? What's the right number of parallel tasks for your team?
- Think about the delegation-ready test from Lift 4. Was there a feature you tried to parallelize that should have been sequenced, or one you sequenced that could have run in parallel? What was the signal?
- When parallel features came back, did your automated tests catch integration issues? What would have happened without the safety net?
## The Full Journey
- Trace a single acceptance criterion through the lifts: it started as a delegation contract (Lift 1), became a manual checklist item (Lift 2), became an automated test (Lift 3), and gated parallel deployment (Lift 4). How does each layer make the one before it more powerful?
- Think about the first user story you wrote in Lift 1 versus the delegation you did in this sprint. What changed in how you communicate with AI? What changed in how much you trust it — and why?
- Did the infrastructure you built in earlier runs — the project context file, the skills, the test suite — make Run 4 noticeably faster or better? Each piece was work when you created it. Did that investment compound?
## What Comes Next
- Did different people on your team play different roles by Run 4? Did someone focus on writing stories and criteria while someone else focused on building and testing? Did someone maintain the skills or context file so others could move faster? What does that tell you about how your team at work might organize around AI?
- You came in as someone who uses AI. You're leaving as someone who delegates to it — with acceptance criteria, automated tests, skills, and the judgment to know when a task is ready. What's the first thing you'll delegate when you're back at work?
- The intuition you built today fades without practice. A good target: build three small things in the next 30 days using the workflow you learned here. They don't have to be work projects — personal tools, automations, anything where you practice the full cycle. What would your three be?