Going Parallel¶
Background Execution¶
Until now, you've been working synchronously — you give AI a task, watch it work, and review the output before moving on. But once you trust the pattern (story → tests → implementation → verification), you can let AI work in the background while you focus on something else.
Background execution means sending a task to AI and continuing your own work while AI builds. When it finishes, you review the output — just like checking on a colleague who's had time to complete an assignment.
Two ways to send work to the background:
- Ask in your prompt. Include "in the background" in your message — e.g., "Run the tests in the background" or "Build this feature in the background while I work on something else." Claude Code will run it as a background process automatically.
- Press Ctrl+B while a task is already running to push it to the background on the fly.
Either way, Claude Code keeps working in a separate process. You can check on background tasks anytime with /tasks.
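Under the hood, backgrounding a task is the ordinary operating-system pattern: spawn a process, keep working, poll it later. This is not Claude Code's implementation, just a minimal Python sketch of the general mechanism:

```python
import subprocess
import sys

# Spawn a long-running job without blocking: Popen returns immediately,
# so the "foreground" work below continues while the child process runs.
job = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.2); print('tests passed')"],
    stdout=subprocess.PIPE,
    text=True,
)

foreground_note = "drafting the next story"  # you keep working meanwhile

# Later, check on the background task: the moral equivalent of /tasks.
stdout, _ = job.communicate()  # waits for completion and collects output
print(foreground_note)
print(stdout.strip())
```

The key property is the same one you rely on with a backgrounded AI task: launching doesn't block, and review happens when you choose to check in.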
Open additional terminal windows to run Codex instances in parallel. Each instance works independently with its own context.
Launch separate pi instances for parallel work. Each instance maintains its own conversation and context.
Sub-Agents¶
When AI encounters a complex task, it can recruit sub-agents — focused AI instances that handle specific parts of the work. Think of it like a team lead who brings in specialists: one handles research, another writes tests, another focuses on implementation.
You don't need to manage sub-agents directly. When you give your AI coding assistant a complex task, it may spin up sub-agents behind the scenes — each one getting a focused slice of context rather than the entire conversation. This often produces better results because each sub-agent gets a full oxygen tank (from Lift 1) dedicated to its specific task, rather than sharing one tank across everything.
Claude Code uses sub-agents automatically for complex tasks. You can also configure custom agents in .claude/agents/ for specialized roles — a research agent, a test-writing agent, an implementation agent.
Codex handles task decomposition internally, delegating to sub-processes as needed.
Use separate pi instances for specialized roles — one for research, one for implementation.
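The sub-agent idea, independent workers each holding only its own slice of the task, mirrors a plain fan-out pattern. A hedged Python sketch (the roles and handler functions here are invented for illustration, not any tool's real API):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical specialist roles; each sees only its own slice of context.
def research(topic):
    return f"notes on {topic}"

def write_tests(feature):
    return f"failing tests for {feature}"

def implement(feature):
    return f"implementation of {feature}"

# A "team lead" fans the work out and collects results when each finishes.
with ThreadPoolExecutor() as pool:
    futures = {
        "research": pool.submit(research, "rate limiting"),
        "tests": pool.submit(write_tests, "rate limiting"),
        "impl": pool.submit(implement, "rate limiting"),
    }
    results = {name: f.result() for name, f in futures.items()}

print(results["tests"])
```

Each worker's narrow input is the point: none of them carries the whole conversation, so each gets a full tank for its own job.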
Keeping Parallel Work Focused¶
When you're running more than one task at a time, context management matters more. The oxygen tank metaphor from Lift 1 still applies — but now you're managing more than one tank.
The pattern:
- One conversation per task. Don't build three features in one conversation. Each feature gets its own fresh context.
- Start fresh for each new workstream. Your project context file (from Lift 1) means every new conversation starts with the right baseline — AI already knows your project. You don't re-explain anything; you just hand over the next story.
- Reset when context gets stale. If AI starts repeating itself or missing earlier instructions, the tank is running low. Start a new conversation rather than fighting through it.
- Keep workstreams independent. If Feature A depends on Feature B's output, don't run them in parallel — sequence them. Parallel workstreams must be independently buildable (the same "independently shippable" test from Lift 2's decomposition).
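The independence test can be made mechanical: given which features depend on which, anything with no unmet dependency can run in parallel, and the rest waits for an earlier wave. A small sketch (the feature names are made up):

```python
# Map each feature to the features it depends on (hypothetical backlog).
deps = {
    "login": set(),
    "profile": {"login"},  # needs login's output, so it must wait
    "search": set(),       # independent, so it can run alongside login
}

def parallel_batches(deps):
    """Group features into waves; everything in a wave is independently buildable."""
    remaining, done, batches = dict(deps), set(), []
    while remaining:
        ready = [f for f, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("circular dependency")
        batches.append(sorted(ready))
        done.update(ready)
        for f in ready:
            del remaining[f]
    return batches

print(parallel_batches(deps))  # -> [['login', 'search'], ['profile']]
```

Anything that lands in the same wave passes the "independently shippable" test; anything in a later wave should be sequenced, not parallelized.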
In Claude Code, type /clear to start a fresh conversation, resetting your session without leaving the terminal. Your project context file loads automatically in the new conversation.
Close and reopen Codex from the terminal, or open a new Codex instance in a separate terminal.
Start a new pi instance. Your project context carries over to the new session.
Your project context file and skills from earlier lifts pay off here. Every new conversation starts with AI knowing your project, your conventions, and your processes — including the TDD workflow from Lift 3.
Team Activity: Launch a Background Task¶
Format: Mob Session
Time: ~3 minutes
Setup: One person drives, everyone else navigates. Use the same workspace where you've been building.
Pick one feature from your Final Sprint backlog that passed the delegation-ready test. Write a quick user story with 2-3 acceptance criteria.
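For instance, a story of roughly the right size (purely illustrative, substitute your own feature):

```
As a user, I can reset my password from the login screen.

Acceptance criteria:
- Submitting a registered email sends a reset link.
- Submitting an unknown email shows the same confirmation message.
- Reset links expire after 60 minutes.
```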
Ask your AI coding assistant:
Here's a user story. Build it following TDD — write failing tests from the acceptance criteria first, then implement until they pass. [Paste your story and AC here].
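To make the TDD loop concrete, suppose the story were password reset with the criterion "unknown emails get the same confirmation message." A sketch of what red then green looks like (the request_reset function, its message, and the data are all invented for this example):

```python
# Step 1 (red): a test derived straight from an acceptance criterion.
# It is written before any implementation exists and fails until one does.
def test_unknown_email_gets_same_confirmation():
    msg = "If that account exists, a link was sent."
    assert request_reset("nobody@example.com") == msg
    assert request_reset("alice@example.com") == msg

# Step 2 (green): the implementation AI builds until the test passes.
REGISTERED = {"alice@example.com"}

def request_reset(email):
    if email in REGISTERED:
        pass  # in a real app: send the reset link here
    # Same reply either way, so the response never leaks which accounts exist.
    return "If that account exists, a link was sent."

test_unknown_email_gets_same_confirmation()
print("test passed")
```

When the background task finishes, these passing tests are what you review first; they are the evidence that the acceptance criteria were actually met.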
In Claude Code, after sending the prompt, press Ctrl+B to move it to the background. Then type /clear to start a fresh conversation for your next task.
Alternatively, include "in the background" right in your prompt: "Build this in the background following TDD..." — then start your next story immediately.
Send the prompt in one terminal, then open a new Codex instance in another terminal for your next task.
Send the prompt in one pi instance, then open a second instance for parallel work.
While that runs, start your next task in a fresh conversation. Your project context file means you don't need to re-explain anything — AI already knows your project, your conventions, and your TDD workflow.
Team Discussion: Trust and Oversight¶
Format: Team Discussion
Time: ~2 minutes
Discuss: What's different about working this way compared to watching AI build step by step? What do you trust at this point — and what would you still want to check when the background task finishes? How do your automated tests (Lift 3) change your comfort level with letting AI work unsupervised?
Key Insight¶
Going parallel means shifting from watching AI work to reviewing AI's output. Background execution lets AI build while you move on. Sub-agents let AI recruit specialists behind the scenes. Context management — one conversation per task, fresh starts, independent workstreams — keeps each parallel effort focused. Your project context file and skills make every new conversation productive from the first prompt.