Setting the Scene

Where Software Fits

Every morning before skiers hit the slopes, avalanche forecasters at the Utah Avalanche Center publish danger ratings for 9 zones stretching from Logan to Moab. Each zone has different terrain, different snowpack history, different weather exposure. The forecasters synthesize field observations, weather data, and snowpack measurements into a single authoritative product — a danger rating and detailed discussion for each zone that hundreds of thousands of backcountry travelers will rely on to make life-or-death decisions. That synthesis is fundamentally judgment-based: the North American Avalanche Danger Scale defines no quantitative thresholds, so the published forecast rests entirely on expert judgment. And it is authoritative.

But the published forecast is a dense, technical document written by scientists for scientists. The people who need to act on this information — operations staff managing resources across zones, backcountry planners deciding where to ski, program managers tracking conditions over time — need that information organized, synthesized, and made accessible. That's where software fits.

You're building that system. Your platform has two layers:

  • A deterministic pipeline that ingests published forecasts from the UAC API, parses danger ratings, routes alerts through configurable thresholds, and displays conditions on a dashboard. The forecaster's expert judgment — the danger rating, the avalanche problems, the travel advice — is the input, not something your system derives.
  • An AI-powered analysis layer that calls the Anthropic API at runtime to generate enriched analysis: plain-language briefings that make the technical forecast accessible to backcountry planners, cross-zone synthesis that surfaces patterns for operations staff managing the entire program, and contextual alert messages that explain why conditions matter. This is where your system adds value beyond raw display — and it's the layer that needs evaluation to verify correctness.
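To make the deterministic layer concrete, here is a minimal sketch of threshold-based alert routing. The numeric danger codes, the `AlertRule` shape, and the channel names are all assumptions for illustration, not the repository's actual config format.

```python
from dataclasses import dataclass

# North American danger scale, 1=Low .. 5=Extreme.
# (The scale is real; representing it as integer codes is an assumption.)
DANGER_LEVELS = {1: "low", 2: "moderate", 3: "considerable", 4: "high", 5: "extreme"}

@dataclass
class AlertRule:
    name: str
    min_danger: int        # fire when a zone meets or exceeds this level
    channels: list         # where to route the alert, e.g. ["dashboard", "email"]

def route_alerts(zone_ratings, rules):
    """Match parsed per-zone danger ratings against configurable threshold rules.

    zone_ratings: {zone_id: danger level parsed from the published forecast}
    Returns a list of (zone_id, rule_name, channel) routing decisions.
    """
    decisions = []
    for zone_id, level in zone_ratings.items():
        for rule in rules:
            if level >= rule.min_danger:
                decisions.extend((zone_id, rule.name, ch) for ch in rule.channels)
    return decisions

rules = [
    AlertRule("elevated", min_danger=3, channels=["dashboard"]),
    AlertRule("critical", min_danger=4, channels=["dashboard", "email"]),
]
ratings = {"salt-lake": 4, "moab": 2}
print(route_alerts(ratings, rules))
# salt-lake (level 4) matches both rules; moab (level 2) matches neither
```

Note that the danger level itself is never computed here — it arrives already decided by the forecaster, which is exactly the division of labor the two-layer design enforces.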

The end users are operations staff who manage the forecasting program across zones and need a cross-zone overview, and backcountry recreationists who need the technical forecast translated into trip-planning decisions. The forecasters are the domain experts whose published assessments are your system's ground truth — not the primary users of your platform.
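The AI analysis layer described above might assemble its briefing requests along these lines. The field names, prompt wording, and model name are assumptions, not the UAC schema or your final design; the actual SDK call is shown commented out because it requires credentials.

```python
def build_briefing_prompt(zone_name, danger_level, problems, travel_advice):
    """Assemble a prompt asking the model to translate the technical forecast
    into a plain-language briefing. Field names are assumptions about the
    parsed forecast, not the real UAC payload."""
    return (
        "You are summarizing an avalanche forecast for backcountry trip planners.\n"
        f"Zone: {zone_name}\n"
        f"Danger level: {danger_level}\n"
        f"Avalanche problems: {', '.join(problems)}\n"
        f"Travel advice: {travel_advice}\n\n"
        "Write a short plain-language briefing. Do not alter the danger level; "
        "the forecaster's published rating is ground truth."
    )

prompt = build_briefing_prompt(
    "Salt Lake", "Considerable",
    ["persistent weak layer", "wind slab"],
    "Avoid steep, wind-loaded slopes near ridgelines.",
)

# With the Anthropic Python SDK (requires ANTHROPIC_API_KEY):
# import anthropic
# client = anthropic.Anthropic()
# message = client.messages.create(
#     model="claude-sonnet-4-5",   # model name is an assumption
#     max_tokens=500,
#     messages=[{"role": "user", "content": prompt}],
# )
# briefing = message.content[0].text
```

Keeping prompt construction in a plain function like this makes the AI layer easy to evaluate later: you can unit-test the prompt and golden-test the output separately.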

How the Runs Work

Teams of four, one deliverable. Each run builds on the last — nothing gets wiped. Baseline capabilities set the bar; stretch goals are for teams that want to push further. How you organize internally is your call.

By the end of Run 4, your team ships a single demoable system — the culmination of four runs of progressive building.

Velocity Without Visibility

You have the tools to move fast. Context engineering keeps every conversation focused. Skills encode your team's conventions. Delegation contracts and parallel execution let you spin up multiple workstreams simultaneously. You'll build more in a single sprint than most teams build in a week.

The risk isn't speed — it's confidence. When your AI analysis layer generates a briefing highlighting persistent weak layers as the primary concern across three zones, is that accurate? When a contextual alert message summarizes conditions for the Salt Lake zone, did it capture the right risk factors? When three parallel workstreams deliver results, do they integrate correctly?

You can deploy fast and often — the pipeline is already configured. The discipline is knowing whether what you shipped is correct. Build incrementally. Verify each component against real data before layering on the next. The velocity compounds, but only if the foundation holds.
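One cheap verification step is checking that a generated briefing agrees with the forecaster's published rating, which is the kind of check the golden datasets support. This is a minimal sketch; the scenario shape and the string-matching heuristic are assumptions, not the Run 2 harness.

```python
DANGER_WORDS = ("low", "moderate", "considerable", "high", "extreme")

def mentioned_levels(briefing_text):
    """Naive check: which danger-scale words does a generated briefing mention?"""
    text = briefing_text.lower()
    return [lvl for lvl in DANGER_WORDS if lvl in text]

def verify_briefing(briefing_text, expected_level):
    """A briefing passes only if it mentions the published level and no
    conflicting one. expected_level comes from an expert-verified golden case."""
    return mentioned_levels(briefing_text) == [expected_level]

# Hypothetical golden scenario: the expert-verified expected rating is "high".
assert verify_briefing("Danger is HIGH today on upper-elevation terrain.", "high")
assert not verify_briefing("Conditions range from moderate to high.", "high")
```

A string heuristic like this is deliberately crude; the point is that even a crude automated check, run against real golden data after every change, turns "it looks right" into a pass/fail signal.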

Your Data

Your repository includes pre-seeded data designed for multi-zone analysis:

  • Zone configurations for all 9 UAC forecast zones — IDs, coordinates, and associated SNOTEL stations
  • A multi-zone snapshot combining forecast data, NWS weather, and SNOTEL snowpack readings across all zones — this is your primary data source for Run 1
  • Alert threshold configurations with starting rules for danger-level-based alerting and escalation
  • Golden datasets — 18 real forecast scenarios from 4 US avalanche centers with expert-verified expected outputs, covering all 5 danger levels (primarily used in Run 2 for evaluation harnesses)
  • API documentation for live data sources — avalanche.org, NWS, SNOTEL, UAC native, and CAIC endpoints

Build against the pre-seeded snapshot first. The live APIs are available when you need real-time data, but the snapshot gives you everything you need to get the engine working.
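Working against the snapshot might start like this. The inline dict stands in for the real file, and the path, key names, and units are assumptions; check the repository's data directory for the actual layout.

```python
import json  # used once you swap the inline sample for the real snapshot file

# Assumed shape for the pre-seeded multi-zone snapshot (forecast + NWS weather
# + SNOTEL readings per zone). Field names here are illustrative guesses.
SAMPLE_SNAPSHOT = {
    "zones": [
        {"id": "salt-lake", "forecast": {"danger_level": 3},
         "weather": {"new_snow_in": 8}, "snotel": {"swe_in": 14.2}},
        {"id": "logan", "forecast": {"danger_level": 2},
         "weather": {"new_snow_in": 2}, "snotel": {"swe_in": 11.0}},
    ]
}

def ratings_by_zone(snapshot):
    """Pull the forecaster-published danger level for each zone out of the
    combined forecast/weather/snowpack snapshot."""
    return {z["id"]: z["forecast"]["danger_level"] for z in snapshot["zones"]}

# Against the real repo you would load the file instead of the inline dict:
# with open("data/multi_zone_snapshot.json") as f:   # path is an assumption
#     snapshot = json.load(f)
print(ratings_by_zone(SAMPLE_SNAPSHOT))
```

Getting this extraction right against the snapshot first means the live API integration later only has to produce the same shape, not new logic.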

Ask your AI coding assistant to explore the data directory — it can walk you through what's available and how the pieces fit together.