Setting the Scene

Backcountry Safety: A Problem Worth Solving

Every winter morning before dawn, avalanche forecasters across the western United States sit down to do something remarkable. They review hundreds of field observations — from ski patrollers, mountain guides, volunteer observers, and everyday backcountry skiers. They check automated weather station data, snowpack readings, and weather forecasts. Then they synthesize all of it into a single product: a daily danger rating that tells backcountry travelers whether it's safe to go out.

It works. According to a 2022 onX Backcountry survey, 88% of backcountry skiers check the avalanche forecast every time before heading into the mountains. The Utah Avalanche Center's website alone received 3 million page views during the 2023–24 season.

But the forecast packs a lot of information into a small space — and it has to. Avalanche forecasters are synthesizing complex science into guidance that serves everyone from first-time tourers to veteran mountain guides. The danger scale runs from 1 (Low) to 5 (Extreme), with nine types of avalanche problems, three elevation bands, and eight compass aspects. That depth is what makes it trustworthy. But a backcountry skier standing at a trailhead still needs to turn all of that into one decision: go or don't go.
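Those dimensions map naturally onto a small data model, which is a good way to see how much structure sits behind a single danger rating. The sketch below is purely illustrative: the names (`DANGER_LEVELS`, `Forecast`, `overall_danger`) are made up for this example and do not reflect the UAC's actual data feed or any official decision rule.

```python
from dataclasses import dataclass

# Illustrative constants, not the UAC's real data format.
DANGER_LEVELS = {1: "Low", 2: "Moderate", 3: "Considerable", 4: "High", 5: "Extreme"}
ELEVATION_BANDS = ("below treeline", "near treeline", "above treeline")
ASPECTS = ("N", "NE", "E", "SE", "S", "SW", "W", "NW")

@dataclass
class Forecast:
    # One danger rating (1-5) per elevation band.
    danger_by_band: dict

def overall_danger(forecast: Forecast) -> str:
    """Summarize a forecast as the highest rating across all bands."""
    level = max(forecast.danger_by_band.values())
    return DANGER_LEVELS[level]

example = Forecast(danger_by_band={
    "below treeline": 2, "near treeline": 3, "above treeline": 3,
})
print(overall_danger(example))  # Considerable
```

Even this toy version shows the core tension: the forecast carries per-band, per-aspect detail, but a skier at the trailhead wants one summary. How your field guide collapses (or preserves) that detail is a design decision your team will make, not something the code above settles.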

That's where you come in.

Your team is going to build a digital backcountry field guide — the kind of thing a skier would bookmark on their phone before heading into the mountains. Interactive danger scale, terrain tips, decision-making checklists, safety information. Something that takes the avalanche forecast and makes it usable.

You're building for the Park City and Wasatch Range area in Utah — one of the most popular backcountry skiing destinations in the country, and the backyard of the Utah Avalanche Center (UAC).

How the Runs Work

Over the next two days, your team will go through four runs. Each run builds on the last — you never start over. Here's how they work:

  • Your team of four builds one thing together. One field guide, one project, one demo at the end. How you organize — everyone on one screen, splitting into pairs, taking turns — is entirely up to you. Experimenting with how you work together is part of the learning.
  • Each run builds on the last. What you create in Run 1 carries forward into Run 2, 3, and 4. You'll add real data, new features, and eventually deploy it live.
  • Baseline capabilities and stretch goals. Every run has a set of baseline capabilities everyone should aim for, plus stretch goals for teams that get there and want to push further.
  • The goal is learning, not finishing. Understanding what you built and why it works matters more than checking every box. Help your teammates. Talk through decisions. Celebrate the wins together.

The Most Important Thing

You're about to use AI to build something real. Here's the one thing that will make the biggest difference in what you learn today:

Build one piece at a time.

The temptation will be to write out everything you want in one massive prompt and let AI handle it all at once. Don't. That's not how you learn this skill.

If you paste a wall of requirements and accept whatever comes back, you'll have output — but you won't understand it. You won't know what worked, what didn't, or how to fix it. You won't develop the judgment for when AI nails it and when it needs a nudge. And that judgment is the whole point.

The value is in the back-and-forth. Write a user story. Send it. Look at what comes back. Does it match your acceptance criteria? If not, tell your AI tool exactly what to change. That cycle — prompt, evaluate, refine — is the skill that transfers to everything you'll do with AI after today.

Build incrementally. Verify as you go. Discuss as a team.