# Run 2: Consistency and Control
## Where You Left Off
In Run 1, you built the core of your Avalanche Observation Network — a submission form, an observation feed, filtering, and detail views. It works. Backcountry skiers can submit observations and forecasters can browse them. But you probably noticed: every time you asked AI to build something similar — a card layout, a form, a filter — the result came out a little different. The data model drifted. The styling was inconsistent. And the backlog of things you want to build kept growing.
Then in Lift 2, you got three tools to fix that:
- Decomposition — you broke your backlog into independently shippable story-sized pieces, each with its own acceptance criteria. The pile of work became a managed list.
- Skills — you built your first skill, capturing a repeatable process as reusable instructions that AI follows consistently. No more re-explaining the same conventions every conversation.
- Manual review — you reviewed your Run 1 submission form against specific acceptance criteria and made pass/fail calls. You saw the difference between "looks good" and "AC 3 fails because..."
You also saw the honest tradeoff: manual review works, but it's slow. You'll deal with that tension later. For now, you have the tools to build with consistency and verify with discipline.
## The Challenge
Your observation network has a foundation. Now make it reliable. Use skills to enforce consistency across your platform — same data model, same component patterns, same validation rules — so that adding a new feature doesn't introduce a new way of doing things. Use your decomposed backlog to work systematically through features. And verify every feature against acceptance criteria before moving on.
The goal isn't just more features — it's features that work together as a coherent system.
Delegate one story at a time. Review against your criteria before starting the next.
## Baseline Capabilities
- Observation type-specific forms, built with skills — when a skier selects "avalanche," the form shows fields for aspect, elevation band, and size; "weather" shows temperature and wind; "red flag" shows instability signs. Use a skill (or combination of skills) to enforce consistency across the type-specific forms — same layout pattern, same validation behavior, same data model structure — so that adding a new observation type doesn't reinvent the wheel. The observation type taxonomy in your data defines what each type needs.
- Decomposed backlog driving the work — your team is working from the breakdown you created in Lift 2, picking up pieces in order, not just building whatever comes to mind
- At least one additional feature added, delegated as a user story with acceptance criteria and verified before moving on — the Explore → Plan → Implement → Verify cycle in action
- Manual review applied to every feature — your team can point to specific acceptance criteria that passed or failed during the run, with specific fix requests for failures
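One way to make "same data model structure" concrete is a discriminated union: shared fields live on a base type, and each observation type adds only its own fields. A minimal TypeScript sketch, assuming illustrative field names and values (not your project's canonical taxonomy):

```typescript
// Hypothetical shared data model. Field names (aspect, elevationBand,
// size, etc.) are illustrative stand-ins, not a required schema.
type BaseObservation = {
  id: string;
  date: string;     // ISO date of the observation
  zone: string;     // forecast zone name
  observer: string;
};

type AvalancheObservation = BaseObservation & {
  type: "avalanche";
  aspect: "N" | "NE" | "E" | "SE" | "S" | "SW" | "W" | "NW";
  elevationBand: "below-treeline" | "near-treeline" | "alpine";
  size: 1 | 2 | 3 | 4 | 5;
};

type WeatherObservation = BaseObservation & {
  type: "weather";
  temperatureC: number;
  windSpeedKph: number;
};

type RedFlagObservation = BaseObservation & {
  type: "red-flag";
  signs: string[];  // e.g. ["shooting cracks", "collapsing"]
};

type Observation =
  | AvalancheObservation
  | WeatherObservation
  | RedFlagObservation;

// One function decides which fields each form variant shows.
// Adding a new observation type means adding a union member and a
// case here; the compiler flags any switch that forgets the new type.
function typeSpecificFields(obs: Observation): string[] {
  switch (obs.type) {
    case "avalanche":
      return ["aspect", "elevationBand", "size"];
    case "weather":
      return ["temperatureC", "windSpeedKph"];
    case "red-flag":
      return ["signs"];
  }
}
```

Because `Observation` is a closed union, extending the taxonomy is a single, checkable change rather than a new one-off form — which is exactly the guarantee your form skill should enforce.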
## Stretch Goals
- Multiple skills for different concerns — beyond the form skill, build skills for other repeating patterns: a card/component layout skill that standardizes how observations display in the feed, a validation skill that ensures required fields are checked consistently, a documentation skill that keeps your context file and data model in sync
- Map view — display observations on a map by location, giving forecasters a spatial view of where reports are coming from
- Statistics panel — show observation counts by type, zone, or date range, giving forecasters a quick pulse on reporting activity
- Red flag dashboard — surface the highest-signal observations (shooting cracks, collapsing, recent avalanche activity) in a dedicated view so forecasters see the most critical reports first
- Zone-based filtering — forecasters can filter the feed by UAC forecast zone to focus on the area they're responsible for
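The statistics panel is, at its core, a group-and-count over the feed. A rough sketch, assuming a simplified observation shape rather than your actual data model:

```typescript
// Simplified stand-in for the real observation record.
type Obs = { type: string; zone: string; date: string };

// Count observations by any key -- type, zone, or date.
function countBy(observations: Obs[], key: keyof Obs): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const obs of observations) {
    const value = obs[key];
    counts[value] = (counts[value] ?? 0) + 1;
  }
  return counts;
}

const sample: Obs[] = [
  { type: "avalanche", zone: "Salt Lake", date: "2024-02-01" },
  { type: "weather", zone: "Salt Lake", date: "2024-02-01" },
  { type: "avalanche", zone: "Provo", date: "2024-02-02" },
];

// countBy(sample, "type") -> { avalanche: 2, weather: 1 }
// countBy(sample, "zone") -> { "Salt Lake": 2, Provo: 1 }
```

Keeping the aggregation generic over the key means the type, zone, and date-range views share one code path instead of three near-duplicates — the kind of consistency a skill can lock in.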
## Tips
- Create a skill before building new features. The consistency payoff compounds — every feature you build after the skill is in place benefits from it. Even one well-crafted skill (data model or component layout) changes the quality of everything that follows.
- Watch for "it works" vs. "it's organized." AI optimizes for making things work right now. It won't tell you when it's duplicating logic, mixing concerns, or taking shortcuts that will make the next feature harder. After each feature, ask: "Is there any duplicated logic or inconsistency you'd clean up before adding more features?" Catching this now is easier than untangling it later.
- Use the "We Do, You Do" test. Don't write skills from scratch. Do the process with AI first, refine until the output matches what you want, then say: "Capture what we just did as a skill so you can reproduce it next time." AI was there for every correction, so it knows what to capture. Then start a fresh conversation and try the skill — if AI can't reproduce the quality without your guidance, the skill is missing something. Update it.
- Review is not optional — it's the quality gate. When you finish a feature, pull up the acceptance criteria and walk through them. Pass or fail. If something fails, write a specific fix request: "AC 2 fails — expected [X], got [Y]." That specificity eliminates the spinning loop.
- Watch for side effects. When you add a new feature, spot-check that existing features still work. If adding a map view breaks the feed, that's the tension Lift 2 warned you about — and it's a real signal about what comes next.
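A skill captured this way is just a written procedure the AI can reload in a fresh conversation. As a rough illustration only — the file layout, section names, and rules below are assumptions, not a required format:

```markdown
# Skill: Observation Form

Use this skill whenever building or modifying an observation form.

## Layout
- Shared fields (date, zone, observer) appear first; type-specific
  fields appear below them, in the order the data model lists them.

## Validation
- Required fields show inline errors on blur, not only on submit.
- Reuse the shared validators; never write a new inline check.

## Data model
- Every observation type extends the base observation shape.
- Any new field must be added to the data model doc in the same change.
```

The test is the one described above: start a fresh conversation, invoke the skill, and check whether the output matches what you refined together. Gaps in the output are gaps in the skill.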