Architecting Parallel Workstreams

Decomposition as the Prerequisite

In Lift 1, you recapped the delegation-ready test: Can you write 2-4 specific acceptance criteria? Is the scope bounded? Can it be built and tested independently? Would you know a good result if you saw one?

That test determines whether a single task can be delegated. At multi-contributor scale, decomposition determines whether workstreams can proceed in parallel at all. The question isn't "can this task be delegated?" but "can these workstreams be built and integrated without stepping on each other?"

The answer depends entirely on where you draw the boundaries.

Interface Contracts at Team Boundaries

When two pairs work on the same system simultaneously, the integration surface is where failures concentrate. Pair A builds the CAIC pipeline. Pair B builds the UAC pipeline. Both feed into a unified dashboard. The interface contract defines what the dashboard expects from each pipeline — and what each pipeline promises to deliver.

An interface contract specifies:

  • Data shape — the exact schema each pipeline produces (field names, types, formats)
  • Behavioral guarantees — what happens on error, what the pipeline does when data is missing, how it signals failure
  • Boundary ownership — which side is responsible for transformation (does the pipeline normalize to the unified schema, or does the dashboard accept multiple formats?)

The principle: define the interfaces before building the implementations. When both pairs build against the same interface contract, integration becomes composition rather than reconciliation. The unified schema in data/double-black-diamond/ serves exactly this role — it's the interface contract for the normalization layer.

Without explicit interface contracts, each pair makes reasonable but incompatible assumptions. CAIC pipeline returns dangerLevel: "considerable". UAC pipeline returns danger_rating: 3. Both are correct within their own context. The dashboard breaks.
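A contract like this can be pinned down in code rather than prose. The sketch below is illustrative, not the actual unified schema (which lives in data/double-black-diamond/): the type names, field names, and danger-level mapping are assumptions, but the shape shows all three parts of the contract — data shape, behavioral guarantees, and boundary ownership (each pipeline normalizes; the dashboard accepts exactly one format).

```typescript
// Hypothetical interface contract for the unified dashboard.
// Data shape: the exact schema both pipelines promise to produce.
interface UnifiedForecast {
  center: "CAIC" | "UAC";
  dangerLevel: 1 | 2 | 3 | 4 | 5; // unified numeric scale
  issuedAt: string;               // ISO 8601 timestamp
}

// CAIC publishes string levels; the pipeline owns the mapping.
const CAIC_LEVELS: Record<string, 1 | 2 | 3 | 4 | 5> = {
  low: 1, moderate: 2, considerable: 3, high: 4, extreme: 5,
};

function normalizeCaic(raw: { dangerLevel: string; issuedAt: string }): UnifiedForecast {
  const level: (1 | 2 | 3 | 4 | 5) | undefined =
    CAIC_LEVELS[raw.dangerLevel.toLowerCase()];
  // Behavioral guarantee: fail loudly on out-of-contract input
  // instead of passing undefined downstream to the dashboard.
  if (level === undefined) throw new Error(`unknown danger level: ${raw.dangerLevel}`);
  return { center: "CAIC", dangerLevel: level, issuedAt: raw.issuedAt };
}

// UAC publishes numeric ratings under different field names;
// the pipeline, not the dashboard, translates them.
function normalizeUac(raw: { danger_rating: number; issued: string }): UnifiedForecast {
  if (!Number.isInteger(raw.danger_rating) || raw.danger_rating < 1 || raw.danger_rating > 5) {
    throw new Error(`danger rating out of range: ${raw.danger_rating}`);
  }
  return {
    center: "UAC",
    dangerLevel: raw.danger_rating as 1 | 2 | 3 | 4 | 5,
    issuedAt: raw.issued,
  };
}
```

With this in place, `dangerLevel: "considerable"` and `danger_rating: 3` both arrive at the dashboard as the same unified value, and out-of-contract input fails at the pipeline boundary rather than inside the dashboard.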

Worktree Isolation for Multi-Contributor Work

Git worktrees are the standard isolation mechanism for parallel AI-assisted workstreams. Each contributor (or pair) works in their own worktree — a separate working directory on a separate branch, sharing the same repository. Each AI coding assistant operates in its own isolated context without file-level interference from other workstreams.

The practical pattern:

  1. Define the workstreams — identify the independently buildable units (CAIC pipeline, UAC pipeline, unified dashboard, shared normalization layer)
  2. Create a worktree per workstream — each pair gets their own branch and working directory
  3. Build against the interface contract — each pair implements their side of the contract
  4. Merge sequentially — integrate one workstream first, then rebase remaining workstreams on the updated main

How you create the worktrees depends on your tooling:

  • Claude Code: launch with claude --worktree caic-pipeline to create an isolated worktree. The shared skills library and context architecture load automatically, and each worktree gets its own conversation and branch.
  • Codex: create worktrees manually (git worktree add ../caic-pipeline -b caic-pipeline) and run a separate Codex instance in the worktree directory.
  • pi: create worktrees manually with git worktree add, then launch a separate pi instance in each worktree directory.

The sequential merge strategy is deliberate. Merging all workstreams simultaneously creates multi-way conflicts that are difficult to resolve. Merging one at a time — first the normalization layer (the shared dependency), then each pipeline, then the dashboard — means each subsequent merge has full context of what came before.
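The steps above can be sketched end to end in shell. This is a self-contained demo in a throwaway repository so the merge order is visible; the workstream names match the example, and everything else (commit contents, directory layout) is illustrative.

```shell
set -eu

# Throwaway repo so the sketch runs anywhere.
repo=$(mktemp -d) && cd "$repo"
git -c init.defaultBranch=main init -q
git config user.email "pair@example.com" && git config user.name "Pair"
git commit -q --allow-empty -m "init"

# 1-2. One worktree (separate directory, separate branch) per workstream.
for ws in normalization caic-pipeline uac-pipeline dashboard; do
  git worktree add -q "wt/$ws" -b "$ws"
done

# 3. Each pair builds in isolation (simulated here with one commit each).
for ws in normalization caic-pipeline uac-pipeline dashboard; do
  (cd "wt/$ws" && echo "$ws" > "$ws.txt" && git add "$ws.txt" && git commit -q -m "build $ws")
done

# 4. Merge sequentially: the shared dependency first,
#    then each pipeline, then the dashboard.
for ws in normalization caic-pipeline uac-pipeline dashboard; do
  git merge -q --no-ff -m "integrate $ws" "$ws"
done
```

In a real project each merge would be followed by rebasing the remaining branches on the updated main, and by running the quality gates before the next merge.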

The Coordination Tax

Parallel execution is not always faster. The coordination overhead — defining interfaces, creating worktrees, managing branches, resolving merges, reviewing parallel outputs — is real.

Hard-won lessons from teams running parallel AI workstreams at scale:

  • Agents don't coordinate spontaneously. If you want coordination, you must build it into the interface contracts and context architecture.
  • For small tasks, coordination overhead exceeds the time saved. Parallelization wins for genuinely independent, substantial workstreams — not for quick fixes or tightly coupled features.
  • Human review becomes the bottleneck. Five completed workstreams waiting for review creates pressure, not productivity — the autonomy slider from Lift 1 applies here.
  • Semantic conflicts survive clean merges. Two workstreams can merge without git conflicts while encoding incompatible assumptions — this is agentic drift.

Agentic drift is the gradual, invisible divergence that happens when parallel autonomous workstreams operate on related parts of a codebase without coordination. Files merge cleanly, but the code contains semantic conflicts — different workstreams encode different assumptions about how things should work.
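A minimal illustration of a semantic conflict that survives a clean merge. The two snippets below could come from different worktrees and would merge without a single git conflict; the names and the meters/feet assumption are invented for illustration.

```typescript
// Workstream A (pipeline worktree) stores elevations in meters.
const caicZone = { name: "Front Range", elevationCutoff: 3400 };

// Workstream B (dashboard worktree) filters assuming feet.
// Each file is internally consistent; the bug only exists at the seam.
function isAlpine(zone: { elevationCutoff: number }): boolean {
  return zone.elevationCutoff > 11000; // silently expects feet
}

// 3400 (meters, well above treeline) fails a feet-based threshold.
isAlpine(caicZone); // false
```

Neither workstream is wrong in isolation, which is exactly why manual review tends to miss this class of bug.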

Prevention strategies:

  1. Shorter integration cycles — merge every few hours, not every few days. The integration tax compounds.
  2. File-level task ownership — structure tasks so each workstream owns different files. Shared files go in the shared dependency layer and get built first.
  3. Shared context documents — the shared context architecture from the previous section reduces drift by giving every contributor's AI coding assistant the same foundational assumptions.
  4. Deterministic guardrails — quality gates (linting, type checking, tests, eval harnesses) catch drift mechanically rather than relying on manual review.
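Strategy 4 can be as small as a contract test that every workstream runs in CI. A sketch, using the same hypothetical field names as the dangerLevel example earlier in this section — the validator encodes the dashboard's expectations, so drift fails mechanically at merge time:

```typescript
// Deterministic guardrail: reject anything that is not a valid
// unified forecast. Field names are illustrative assumptions.
function assertUnified(o: unknown): void {
  const f = o as { center?: unknown; dangerLevel?: unknown; issuedAt?: unknown };
  if (f.center !== "CAIC" && f.center !== "UAC")
    throw new Error("center must be CAIC or UAC");
  if (typeof f.dangerLevel !== "number" || f.dangerLevel < 1 || f.dangerLevel > 5)
    throw new Error("dangerLevel must be a number from 1 to 5");
  if (typeof f.issuedAt !== "string" || Number.isNaN(Date.parse(f.issuedAt)))
    throw new Error("issuedAt must be an ISO 8601 timestamp");
}
```

Run against sample output from both pipelines, a check like this catches the dangerLevel/danger_rating divergence before it ever reaches the dashboard.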

Team Discussion: Drawing the Boundaries

Format: Team Discussion
Time: ~3 minutes

Your team is about to build the multi-center platform in Run 2. You have two pairs and a system that needs: a CAIC ingestion pipeline, a UAC ingestion pipeline, a normalization layer mapping both formats to a unified schema, and a dashboard consuming the normalized data.

Discuss:

  • How do you partition this into parallel workstreams?
  • The normalization layer is the shared dependency — does one pair build it first while the other starts their pipeline, or do both pairs build their pipelines first and then collaborate on normalization?
  • What's the interface contract between the pipelines and the dashboard — and who defines it?
  • Where does the delegation-ready test tell you to sequence rather than parallelize?
  • Given the coordination tax: is there any part of this that's faster to build together as a mob than to split and merge?

Key Insight

Parallel workstreams are an architectural decision, not a default. The value of parallelization depends on the quality of the decomposition — specifically, whether the interface contracts are explicit enough that workstreams can proceed independently. Define the interfaces before building the implementations. Merge sequentially, not simultaneously. Watch for agentic drift — the semantic conflicts that survive clean merges. And always apply the coordination tax test: if the overhead of splitting, isolating, and merging exceeds the time saved by parallel execution, the faster path is working together.