Shared Skills Library¶
From Personal Skill to Organizational Asset¶
In Lift 1, you recapped the director's toolkit: skills encode YOUR judgment into reusable, version-controlled files. The "We Do, You Do" pattern captures that judgment through refinement: you work through a task with your AI coding assistant, then distill what you did into instructions it can follow independently.
That pattern works for one practitioner. The organizational question is different: what happens when 10 contributors each build their own skills for the same workflows?
You get 10 slightly different approaches to the same problem. Pair A's ingestion skill normalizes dates to ISO 8601. Pair B's normalizes to Unix timestamps. Both skills work. Both pass their local tests. The integration fails.
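The divergence is easy to picture in code. A hypothetical sketch of the two pairs' conventions (the input format and field choices are assumed for illustration):

```python
from datetime import datetime, timezone

# Pair A's convention: normalize dates to ISO 8601 strings.
def normalize_date_a(raw: str) -> str:
    dt = datetime.strptime(raw, "%m/%d/%Y").replace(tzinfo=timezone.utc)
    return dt.isoformat()

# Pair B's convention: normalize dates to Unix timestamps.
def normalize_date_b(raw: str) -> int:
    dt = datetime.strptime(raw, "%m/%d/%Y").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

# Both "work" and both pass local tests -- but the merged pipeline
# now receives two incompatible types for the same field.
print(normalize_date_a("01/15/2025"))  # '2025-01-15T00:00:00+00:00'
print(normalize_date_b("01/15/2025"))  # 1736899200
```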
A shared skills library solves this by treating skills as organizational infrastructure rather than personal tools. Skills live in version control, go through PR review, and follow the same governance as production code. When Pair A refines the ingestion skill, Pair B gets the improvement on their next pull.
The shift: tribal knowledge becomes a version-controlled, distributed organizational asset. One person's refinement becomes the entire team's capability.
The Architecture of Participation¶
Tim O'Reilly coined the "architecture of participation" to describe what makes open source work; the same properties determine whether AI skills are shareable:
| Property | What It Means | What Breaks Without It |
|---|---|---|
| Legibility | Any contributor can read and understand the skill | Skills become black boxes that only the author can maintain |
| Modifiability | You can change the skill without rewriting it | Every modification requires the original author |
| Composability | Skills can reference other skills through simple interfaces | Monolithic skills that can't be recombined for new workflows |
| Shareability | The skill benefits others without requiring your whole stack | Skills that only work in one person's environment |
The design implication: skills that will be shared need to be written differently than personal skills. A personal skill can assume your context, your naming conventions, your mental model. A shared skill must make those assumptions explicit. The acceptance criteria format from the "We Do, You Do" pattern — As a / I want / So that, with Given/When/Then — becomes the interface contract for the skill itself.
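A shared skill that makes its assumptions explicit might look like this hypothetical sketch; the file name, schema reference, and field conventions are all illustrative, not prescribed here:

```markdown
<!-- Hypothetical: skills/normalize-forecast.md -->
# Normalize Forecast Data

As a pipeline contributor, I want raw center feeds converted to the
unified schema, so that downstream skills can assume one format.

Given a raw forecast record from any center feed,
When this skill is applied,
Then dates are ISO 8601 UTC strings and field names are snake_case.

Assumptions made explicit:
- Input may use camelCase keys (e.g. CAIC's AVID format).
- The output schema is defined in skills/unified-schema.md.
```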
Composable Skills and Role Gaps¶
Skills compose. A deployment skill can reference a test generation skill. A normalization skill can reference a validation skill. The organizational power: you build a pipeline from modular pieces, and any piece can be improved independently.
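Skills are instruction files, not code, but the composition principle can be sketched as code: each piece exposes a narrow interface and can be swapped or improved independently. A minimal Python analogy (the field names are hypothetical):

```python
# Each "skill" is a small step with one simple interface: dict in, dict out.
def normalize(record: dict) -> dict:
    """Lowercase all keys into a shared convention."""
    return {k.lower(): v for k, v in record.items()}

def validate(record: dict) -> dict:
    """Reject records missing required fields."""
    assert "danger" in record, "missing required field: danger"
    return record

def pipeline(record: dict, steps) -> dict:
    """Compose independently replaceable steps into one workflow."""
    for step in steps:
        record = step(record)
    return record

# Improving normalize() later doesn't require touching validate().
result = pipeline({"Danger": "considerable"}, [normalize, validate])
print(result)  # {'danger': 'considerable'}
```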
In Lift 1, you discussed role gaps — missing team functions (PM, QA, designer, tech lead) that can be filled by specialized AI roles. At organizational scale, this pattern compounds:
- A decomposition skill acts as a PM across all workstreams, enforcing consistent story format and acceptance criteria
- A test generation skill acts as QA, applying the same quality bar regardless of who delegates the work
- A normalization skill acts as a data architect, ensuring both center pipelines produce the unified schema
Each contributor inherits these roles automatically through the shared library. The organizational vocabulary becomes self-enforcing: when every AI coding assistant follows the same normalization skill, "unified format" means the same thing everywhere.
Governance: Who Edits, Who Reviews, How Skills Evolve¶
Shared skills need governance — but the governance model already exists. Skills are files. Files live in version control. Version control has pull requests. The same review process you use for production code applies to shared skills:
- Propose a change — submit a PR modifying the skill
- Review against the interface — does the change maintain composability? Does it break downstream skills that reference this one?
- Test with fresh context — the /clear test from the "We Do, You Do" pattern becomes a CI check: does the skill produce correct output without the author's conversation context?
- Merge and propagate — every contributor gets the updated skill on their next pull
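One way the fresh-context check could run in CI, sketched in Python. The file layout and the assumption that a skill run produces a JSON artifact compared against a golden file are illustrative, not specified here:

```python
import json
from pathlib import Path

def check_skill_output(output_path: str, golden_path: str) -> bool:
    """Compare a skill's fresh-context output to a reviewed golden file.

    The CI job would first run the skill in a clean session (no author
    conversation context), write its result to output_path, then call this.
    """
    produced = json.loads(Path(output_path).read_text())
    expected = json.loads(Path(golden_path).read_text())
    return produced == expected
```

A mismatch fails the PR, so a skill refinement can't merge unless it still works without its author in the loop.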
The ratchet effect applies here too. Every skill refinement that passes review becomes a permanent improvement to the organizational capability. The shared library gets strictly better over time.
Try It: Audit a Skill for Team Readiness¶
Think about a skill you would create for the multi-center platform — perhaps a normalization skill that maps CAIC's camelCase AVID format into a unified schema.
Ask your AI coding assistant to draft that skill, then evaluate it against the architecture of participation:
Draft a skill that normalizes CAIC AVID-format forecast data into our unified schema. Then evaluate: is it legible to someone who didn't write it? Could another contributor modify it without rewriting? Does it compose with other skills? Would it work in a teammate's environment?
Same prompt, two assistants: Codex and pi each draft the skill and evaluate its shareability.
Note what the skill assumes vs. what it makes explicit. The gap between those two is where integration failures hide.
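A minimal sketch of what such a normalization step might do, assuming camelCase input keys; the field names below are illustrative, not actual AVID fields:

```python
import re

def camel_to_snake(name: str) -> str:
    """Insert an underscore before each interior capital, then lowercase."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def normalize_record(raw: dict) -> dict:
    """Map camelCase keys into the snake_case unified schema."""
    return {camel_to_snake(k): v for k, v in raw.items()}

raw = {"dangerRating": "considerable", "issueDateTime": "2025-01-15T00:00:00Z"}
print(normalize_record(raw))
# {'danger_rating': 'considerable', 'issue_date_time': '2025-01-15T00:00:00Z'}
```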
Team Discussion: Whose Conventions Win?¶
Format: Team Discussion. Time: ~3 minutes
Your team has two pairs, each building a pipeline for a different avalanche center. Each pair will naturally develop their own conventions — naming patterns, data structures, error handling approaches.
Discuss: When you merge into a shared skills library, whose conventions win? What's the governance model — does one pair set the standard and the other adopts? Do you negotiate a shared convention before building? Do you build independently and reconcile after? Each approach has a tradeoff: upfront alignment is slower to start but cleaner to integrate; independent building is faster to start but creates reconciliation debt. For your platform, which tradeoff is the right one — and does the answer change for different types of skills (normalization vs. alerting vs. dashboard)?
Key Insight¶
A shared skills library is not a nice-to-have — it is the coordination layer that prevents N developers from independently teaching N AI assistants different things. The same skill, version-controlled and PR-reviewed, ensures that "unified format" means the same thing for every contributor. The governance model isn't new infrastructure — it's the same PR review process you already use for code, applied to organizational knowledge. When the skill library compounds through the ratchet effect, the organization gets strictly better at every workflow the library encodes.