How Ace Manages 10 Projects in Parallel with Autonomous AI Agents
A deep dive into the multi-sprout architecture that lets Ace run 10+ Claude Code sessions simultaneously, each working on a different project with full context isolation.
The Parallel Desktop Model
The first thing you notice when Ace is running at full capacity is the desktops. macOS has a feature called Spaces — virtual desktops you can switch between with a three-finger swipe. Most people use two or three. Ace uses up to twelve.
Each Space belongs to a single sprout. Desktop 2 is the landing page project. Desktop 3 is the API integration. Desktop 4 is the Chrome extension. Each iTerm2 window fills its entire screen, and each window contains exactly one Claude Code session working on exactly one project. The isolation is total. Nothing bleeds between them.
This isn't just aesthetically clean — it's architecturally essential. When you run multiple AI coding agents on the same project or in the same terminal, you get conflicts, confused state, and agents that can't tell what the current version of a file is. By giving each sprout its own desktop, its own terminal, and its own git checkout, you eliminate an entire class of problems before they can start.
The TASK.md Briefing System
Every sprout begins with a TASK.md file. This is not a vague description. It's a full specification: what to build, what libraries to use, how to structure the code, what quality gates to pass before marking anything complete. A typical TASK.md for a landing page project is 400-600 words long and covers everything from the color palette to the Firebase deploy command.
Why does this matter? Because AI agents drift. Give a model a two-sentence description and check back in four hours — you'll find something that technically works but bears little resemblance to what you wanted. Give it a 500-word spec with clear acceptance criteria, design standards, and a checklist of required deliverables, and it stays on track.
The TASK.md is the contract between you and the sprout. It's written once, at the start of the project, and it doesn't change without explicit human intervention. The sprout reads it at the start of every session and uses it as the source of truth for every decision it makes.
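A sketch of what such a briefing might look like; every section name, color value, and command here is illustrative, not Ace's actual template:

```markdown
# TASK: Marketing landing page for <product>

## Deliverables
- Single-page site: hero, features, pricing, contact form
- Responsive down to 375px wide

## Stack
- Plain HTML/CSS/JS, no framework
- Deploy: firebase deploy --only hosting

## Design
- Palette: #0F172A background, #38BDF8 accent
- System font stack, no webfonts

## Quality gates (all must pass before marking complete)
- [ ] Lighthouse performance score >= 90
- [ ] No console errors on page load
- [ ] Deployed URL verified after deploy
```

The point is not the specific contents but the shape: concrete deliverables, an explicit stack, and acceptance criteria the agent can check itself against.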
Heartbeat Monitoring and the AppleScript Check
One of the hardest problems in autonomous agent management is distinguishing between a sprout that's working and a sprout that's crashed. Both look the same from the outside: the terminal is open, nothing is happening.
Ace solves this with a two-layer check. Every sprout updates a last_heartbeat timestamp in the SQLite database every 60 seconds as it works. If the heartbeat is more than 5 minutes stale, Ace runs a secondary check, using AppleScript to inspect the actual terminal process: Is the process running? Is there active output? Did it receive an interrupt signal?
Only after this secondary check confirms the session is truly dead does Ace attempt recovery. The two-layer approach eliminates false positives: Claude Code sometimes spends several minutes thinking through a complex problem, producing no output, and a naive heartbeat-only check would kill and restart sessions that were working fine.
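A minimal sketch of the two-layer check, assuming a `sprouts` table whose `last_heartbeat` column holds Unix timestamps. The schema, function names, and the AppleScript body are illustrative, not Ace's actual code:

```python
import sqlite3
import subprocess
import time

STALE_AFTER = 300  # seconds; the 5-minute threshold described above

def heartbeat_stale(conn, sprout_id, now=None):
    """Layer 1: is the sprout's last_heartbeat older than the threshold?"""
    now = time.time() if now is None else now
    row = conn.execute(
        "SELECT last_heartbeat FROM sprouts WHERE id = ?", (sprout_id,)
    ).fetchone()
    # An unknown sprout is treated as stale so it gets investigated.
    return row is None or (now - row[0]) > STALE_AFTER

def terminal_looks_dead():
    """Layer 2 (macOS only): ask iTerm2 via AppleScript whether the current
    session has returned to the shell prompt. Script body is illustrative."""
    script = (
        'tell application "iTerm2" to tell current session of current window '
        'to get is at shell prompt'
    )
    out = subprocess.run(["osascript", "-e", script],
                         capture_output=True, text=True)
    return out.returncode != 0 or out.stdout.strip() == "true"

def needs_recovery(conn, sprout_id, now=None):
    """Recover only when BOTH layers agree the session is dead."""
    return heartbeat_stale(conn, sprout_id, now) and terminal_looks_dead()
```

The key property is the `and` in `needs_recovery`: a stale heartbeat alone never triggers recovery, only a stale heartbeat plus independent evidence from the terminal itself.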
Git Worktrees: Two Sprouts, One Repo
Sometimes you need two agents working on the same codebase simultaneously. Maybe one is building the authentication flow while another is building the dashboard. They can't both work on the main branch without trampling each other's changes.
Git worktrees solve this. A worktree is a separate checkout of the same repository, on a different branch, in a different directory. Sprout A works on feature/auth in ~/worktrees/myapp-auth/. Sprout B works on feature/dashboard in ~/worktrees/myapp-dashboard/. Both commit independently. When they're done, a merge coordinator agent reviews both branches and integrates them into main.
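The setup reduces to one git command per sprout. A small helper might look like this, with function names and paths invented for illustration:

```python
import subprocess

def worktree_add_cmd(repo, branch, workdir):
    """Build the git command that gives a sprout its own checkout:
    a new branch in a separate directory, sharing one object store."""
    return ["git", "-C", repo, "worktree", "add", "-b", branch, workdir]

def create_worktree(repo, branch, workdir):
    # e.g. create_worktree("~/code/myapp", "feature/auth",
    #                      "~/worktrees/myapp-auth")
    subprocess.run(worktree_add_cmd(repo, branch, workdir), check=True)
```

Because every worktree shares the same `.git` object store, creating one is nearly free in disk and time; the cost is the merge coordination at the end, not the setup.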
In practice, worktrees add about 10 minutes of coordination overhead per project. But they unlock a capability that single-branch workflows can't provide: true parallel development on a single codebase without conflicts.
The 12-Sprout Limit and Why It Exists
Ace currently caps at 12 parallel sessions. This isn't a software constraint — it's a hardware one. Each Claude Code session consumes roughly 1.5-2GB of RAM and a significant share of CPU during active builds. On a MacBook Pro with 32GB RAM, 12 sessions approach the limit of what the machine can handle without swapping to disk, which causes cascading slowdowns.
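The arithmetic behind the cap can be sketched directly. The 8 GB OS reserve below is an assumed figure, not one stated for Ace:

```python
def max_sessions(total_ram_gb, per_session_gb=2.0, os_reserve_gb=8.0):
    """Rough capacity estimate: RAM left after OS overhead, divided by
    the per-session footprint at its upper bound (2 GB per session)."""
    return int((total_ram_gb - os_reserve_gb) // per_session_gb)
```

With 32 GB total, an assumed 8 GB reserved for the OS, and 2 GB per session, the estimate lands on 12, which is consistent with the cap described above.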
There's also a cognitive limit. The Telegram bridge surfaces human requests from all active sessions. At 12 sessions, you might receive 3-4 requests per hour requiring brief responses. At 20 sessions, the volume becomes overwhelming and the benefit of automation starts to erode.
What Breaks at Scale
Running 5+ concurrent sessions reveals failure modes you don't encounter with one or two. Firebase API rate limits become relevant when six projects deploy simultaneously. npm installs slow down as registry requests queue behind each other. SQLite's WAL mode lets readers proceed alongside a single writer, but write-heavy bursts from multiple sessions can still cause brief lock contention.
The biggest lesson: isolation is not enough. You also need sequencing. Ace now staggers deploys — no more than two projects deploy to Firebase in any five-minute window. Build operations are staggered by 30 seconds to prevent CPU spikes from crashing multiple sessions simultaneously.
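The deploy stagger is a classic sliding-window rate limit. A sketch, with the class and method names invented for illustration:

```python
import time
from collections import deque

class DeployThrottle:
    """Allow at most `limit` deploys in any rolling `window`-second span,
    matching the rule above: two Firebase deploys per five minutes."""

    def __init__(self, limit=2, window=300.0):
        self.limit = limit
        self.window = window
        self.stamps = deque()  # timestamps of recent deploys

    def try_acquire(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop deploys that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        return False  # caller should wait and retry later
```

A sprout asks the throttle before running its deploy command; a `False` answer means sleep and retry, which is what turns twelve independent agents into a queue at the one shared choke point.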
Managing ten projects in parallel is not just about having ten agents. It's about having a system that keeps them from tripping over each other at the infrastructure level.