Reflection 1

A facilitation guide for the team debrief following Run 1. These questions are designed to spark discussion — not every question needs to be covered. Pick the ones that resonate with what you observed during the run.

What You Built

  • Walk through your Observation Network. What's working? Where did you get further than expected, and where did you get stuck?
  • Did the submission form and the feed end up with consistent data structures — or did you notice the form collecting data one way and the feed displaying it differently? What happened when you tried to connect the two?
  • How well do the pre-seeded observations render in your feed? Did anything about the real data surprise you compared to what you assumed the data would look like?
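If it helps ground the data-structure question during discussion, here is a minimal sketch of what a single shared shape for observations might look like. The field names (`species`, `location`, `observedAt`) are illustrative assumptions, not your actual schema:

```typescript
// Hypothetical shared shape for an observation. If the submission form and
// the feed both import one definition like this, they cannot drift apart.
interface Observation {
  species: string;
  location: string;
  observedAt: string; // ISO 8601 date string
  notes?: string;
}

// A small guard the feed could run on incoming records, including the
// pre-seeded ones, to surface shape mismatches early instead of at render time.
function isObservation(value: unknown): value is Observation {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.species === "string" &&
    typeof v.location === "string" &&
    typeof v.observedAt === "string"
  );
}
```

A shared definition like this is one concrete way to answer "what happened when you tried to connect the two": if the form and the feed were built from separate prompts, each one likely invented its own shape.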

What You Practiced

  • How did the Three Pillars (Scope, Intent, Structure) show up in your delegation? Can you point to a specific prompt where being more specific produced a noticeably better result?
  • Did you write user stories with acceptance criteria before delegating, or did you fall back to plain-English requests? What was the difference in what you got back?
  • When you verified against your acceptance criteria, did you catch something that "looked done" but actually wasn't? What did that tell you about the value of having a definition of "done" before you start?
  • Did you hit a spinning loop — re-prompting two or three times without getting closer to what you wanted? What broke the loop: tighter criteria, a smaller task, or a fresh conversation? What does that tell you about where the problem usually lives?

How You Worked

  • How did your team organize — one driver and three navigators, pairs on different features, something else? What would you keep and what would you change for Run 2?
  • Did your project context file help during the run? Did AI seem to "know" your project, or were you still re-explaining things?
  • When AI produced a working feature, did you stop to understand how it worked — or did you accept it and move on? There's a difference between building with understanding and just accepting output. Which did your team do more of, and what does that mean for your ability to build on it tomorrow?

Looking Ahead

  • You've probably noticed that every time you ask AI to build something similar — a card layout, a form, a filter — it comes out slightly different. The data model drifts. The styling is inconsistent. What would it take to make AI produce consistent results for repeating patterns?
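One common answer to the question above is to stop re-describing the pattern in prose and instead give AI (and yourselves) a single source of truth to reuse. As a sketch, assuming a card pattern like the one mentioned (the `CardData` fields here are hypothetical), one shared render function means every card comes from the same code path:

```typescript
// Hypothetical card data; the field names are illustrative, not your schema.
interface CardData {
  title: string;
  subtitle: string;
}

// One function owns the card markup. New features reuse it instead of asking
// AI to regenerate a "similar" card, so layout and styling stay consistent.
function renderCard(card: CardData): string {
  return [
    `<article class="card">`,
    `  <h3>${card.title}</h3>`,
    `  <p>${card.subtitle}</p>`,
    `</article>`,
  ].join("\n");
}
```

The design choice worth discussing is not this particular markup but the move it represents: once a repeating pattern exists as code, you can point AI at the existing function rather than re-describing the pattern each time.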