Setting the Scene

The Observation Pipeline

Every morning before dawn, avalanche forecasters at centers like the Utah Avalanche Center (UAC) sit down and do something that directly affects whether people live or die in the mountains. They review hundreds of field observations — from ski patrollers, mountain guides, volunteer observers, and everyday backcountry skiers. They cross-reference automated weather stations, SNOTEL snowpack readings, and weather forecasts. Then they synthesize everything into a daily danger rating: a 1-to-5 scale that 88% of backcountry skiers check every single time before heading out (per a 2022 onX Backcountry survey).

The forecast is the most visible product. But it's built on something less visible: the observation pipeline. Community observations — what people actually see and experience in the field — are a critical input to the forecast. The UAC has only one to two staff in the field each day. Their 124 professional observers (per the UAC 2023-24 Annual Report) and the general public fill the gap.

That pipeline — observations in, forecasts out — is what you're building.

Your platform serves two distinct user types with fundamentally different needs:

  • Backcountry skiers submit observations. They're on the mountain, they see something — an avalanche, cracking in the snowpack, wind loading on a ridge, unusual weather — and they report it. Their observations are raw data: specific, timestamped, location-tagged.
  • Avalanche forecasters consume observations. They review dozens of reports each morning, filtering for relevance, looking for patterns, and using what they find to inform the daily danger rating. They need to move through observations quickly and focus on what matters for their zone.

The same backcountry skier who submits an observation in the afternoon checks the danger rating the next morning before heading out. That circular flow — contribute data, consume the processed result — is the engine that makes avalanche safety work.

How the Runs Work

Over the next two days, your team will go through four runs — nothing gets wiped between them. Here's the format:

  • Your team of four builds one platform together. One observation network, one codebase, one demo at the end. How you organize — mobbing, pairing, dividing features — is up to you. Experimenting with collaboration patterns is part of the learning.
  • Each run builds on the last. Run 1 is the core platform. Run 2 adds consistency and reliability. Run 3 adds tests and deployment. Run 4 scales your ability to ship.
  • Baseline capabilities and stretch goals. Every run has a set of baseline capabilities to aim for, plus stretch goals for teams that push further.
  • The goal is learning, not just finishing. Understanding what you built and how your delegation produced it matters more than checking every box.

Delegate, Don't Dump

You know how to use AI. You may have been using it for months. But there's a difference between using AI and delegating to it effectively.

The temptation in a build sprint is to dump a wall of requirements into one prompt and accept whatever comes back. That produces output, but it doesn't produce understanding — and it doesn't build the delegation judgment that matters when the problem is harder than today's.

The value is in the cycle: write a story, delegate it, evaluate what comes back, refine. That's the Explore → Plan → Implement → Verify workflow from Lift 1. Each cycle sharpens your judgment about what makes a good delegation contract and what makes a vague one. That judgment is the skill that transfers.

Delegate one story at a time. Verify against your acceptance criteria before moving on.

When the cycle stalls: If you've re-prompted two or three times and AI keeps missing the mark, stop and diagnose. Either your acceptance criteria are too vague (AI doesn't know what "done" looks like), the task is too big (decompose it), or the conversation has drifted (start fresh with /clear and a cleaner prompt). The fix is almost always in your specification, not in asking AI to try harder.

Your Data

Your repository includes pre-seeded data for the Observation Network:

  • Sample observations — 20 realistic field observations spanning four types: avalanche sightings, snowpack conditions, weather reports, and red flags. These use real Wasatch locations and language modeled after real UAC public observations. Use them as seed data so your feed works from day one.
  • Observation type taxonomy — defines the four observation types with their required and optional fields, valid dropdown options, and field schemas. This is your data model reference.
  • Shared reference data — the North American Avalanche Danger Scale, avalanche problem types, UAC forecast zones, and SNOTEL station locations.
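The taxonomy file in your repo is the authoritative data model; before you delegate the feed or submission form, it helps to have a mental picture of what one observation record looks like. Here's a minimal sketch of that shape — the field names, zone names, and validation logic below are illustrative assumptions, not the repo's actual schema:

```python
# Illustrative sketch only: the real field schemas live in the repo's
# observation type taxonomy file. All names here are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

# The four observation types described above
OBSERVATION_TYPES = {"avalanche", "snowpack", "weather", "red_flag"}

@dataclass
class Observation:
    obs_type: str          # one of the four types above
    zone: str              # e.g. a UAC forecast zone (assumed name)
    location: str          # specific place name or coordinates
    observed_at: datetime  # timestamp of the field observation
    notes: str = ""        # free-text description
    details: dict = field(default_factory=dict)  # type-specific fields

    def __post_init__(self):
        # Reject types outside the taxonomy at construction time
        if self.obs_type not in OBSERVATION_TYPES:
            raise ValueError(f"unknown observation type: {self.obs_type}")

# Example: a red-flag report a forecaster might filter by zone
obs = Observation(
    obs_type="red_flag",
    zone="Salt Lake",
    location="Ridge above Days Fork",
    observed_at=datetime(2024, 1, 15, 9, 30),
    notes="Shooting cracks and a collapse on a NE aspect",
)
print(obs.obs_type)  # → red_flag
```

Comparing a sketch like this against the actual taxonomy file is a quick way to verify your AI assistant's exploration of the repo before you start delegating stories.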

Ask your AI coding assistant to explore what's available: "What observation data and schemas do we have in this project? Walk me through the structure."