Concurrent programming in the age of AI
Almost two years ago, I wrote about AI as a pairing partner — a tireless collaborator ready to help me debug, explore, and ship faster. That metaphor has aged, but not in the way I expected. The tools have gotten dramatically better. The models are sharper. But the biggest shift in my workflow hasn't been about the AI getting smarter. It's been about me changing how I think about using it.
I used to sit in a single conversation, working through one problem at a time. Ask a question, get an answer, write some code, ask another question. It worked great. But at some point, I realized I was the bottleneck. Not because I couldn't keep up with the AI, but because I was using it the same way I'd always written code: one thing at a time, start to finish, serially. And if there's one thing I know from years of building software, it's that serial execution doesn't scale.
The single-threaded developer
Here's the default loop most of us fall into with AI coding tools: you open a chat, describe your problem, get some help, implement the solution, and move on to the next thing. Maybe you're using an inline assistant in your editor, or maybe you're working through a terminal-based agent. Either way, the pattern is the same: one task, one context, one thread of execution.
This is already a massive improvement over the old way of working. Having an AI that understands your codebase and can reason about your problems is genuinely transformative.
But think about it from a systems perspective. You're running a single-threaded event loop. You pick up a task, process it until it's done (or until you're blocked), then pick up the next one. If the AI needs to churn through a large refactor, you wait. If you realize mid-task that you need something from a different part of the codebase, you either context-switch (losing your current state) or you make a mental note and come back to it later.
Sound familiar? It should. It's the same problem we've been solving in software for decades. When one thread isn't enough, you don't make the thread faster. You add more threads.
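The analogy maps straight onto code. Here's a minimal Python sketch of the two schedules, with `handle_task` as a stand-in for a full ask-implement-review cycle (the task names are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_task(task: str) -> str:
    # Stand-in for a full ask-implement-review cycle with an agent.
    return f"done: {task}"

tasks = ["refactor auth", "add telemetry", "write docs"]

# The single-threaded loop: one task, one context, start to finish.
serial_results = [handle_task(t) for t in tasks]

# The concurrent version: same tasks, all in flight at once.
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel_results = list(pool.map(handle_task, tasks))

assert serial_results == parallel_results  # same work, different schedule
```

The output is identical either way; what changes is the wall-clock time when each task involves real waiting.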
Thinking in concurrency
Once I started seeing my AI workflow through the lens of concurrency, the parallels became hard to ignore.
Task decomposition is the starting point. Before you can parallelize anything, you need to break the work into independent units. In traditional programming, this means identifying functions that can run simultaneously without stepping on each other. In AI-assisted development, it means looking at a feature and asking: which parts of this can an agent work on without needing to know about the others? The API endpoint doesn't need to wait for the UI component. The tests don't need to wait for the implementation to be merged. The documentation doesn't need to wait for anything.
Isolation is what makes parallelism safe. In concurrent programming, processes get their own memory space to avoid corrupting shared state. The equivalent here is giving each AI agent its own workspace — its own branch, its own copy of the relevant files, its own context. When agents work in isolation, they can't create the AI equivalent of a race condition: two agents editing the same file in conflicting ways.
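The isolation idea is easy to show in miniature. This toy sketch treats a shared document as the codebase; each "agent" edits a deep copy rather than the shared object, so conflicting edits can't corrupt it:

```python
import copy

shared_doc = {"title": "draft", "sections": []}

def isolated_edit(doc: dict, section: str) -> dict:
    # Each agent gets its own copy, its own "workspace".
    local = copy.deepcopy(doc)
    local["sections"].append(section)
    return local  # merged later, at an explicit sync point

branch_a = isolated_edit(shared_doc, "api")
branch_b = isolated_edit(shared_doc, "ui")

assert shared_doc["sections"] == []  # the shared state was never touched
```

Had both calls mutated `shared_doc` directly, the result would depend on ordering: the race condition the workspace model is designed out of.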
Synchronization is where the developer comes back in. At some point, the parallel work has to converge. You review the diffs, resolve any conflicts, and merge. You're the synchronization primitive — the mutex, the join point, the barrier that ensures everything comes together coherently.
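In code terms, the developer sits where `as_completed` and the review step sit in this sketch. The `agent` and `review` functions are stubs, not any real tool's API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def agent(brief: str) -> dict:
    # Stand-in for an agent producing a reviewable diff in its workspace.
    return {"brief": brief, "diff": f"patch for {brief}"}

def review(result: dict) -> bool:
    # The human in the loop; here, trivially approve everything.
    return True

briefs = ["api endpoint", "ui component", "test suite"]
merged = []
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(agent, b) for b in briefs]
    # The developer as join point: work lands only after it is reviewed.
    for fut in as_completed(futures):
        result = fut.result()
        if review(result):
            merged.append(result)

assert len(merged) == 3
```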
Workspaces as threads
This isn't just a thought experiment for me. It's how I actually work now.
I've been using a tool called Conductor that lets me spin up multiple AI coding agents, each in their own workspace: an isolated environment with its own branch and file state. When I have a day's worth of work ahead of me, I don't sit down and work through it linearly anymore. I decompose it.
The right level of decomposition isn't as granular as you might think. I wouldn't spin up separate workspaces for a data model, an API route, a UI component, and their tests. That's one coherent feature; it belongs in one workspace. The parallelism happens at a higher level. On a typical day, I might have one agent working on UI updates, another implementing telemetry changes, and a third building out new agent tooling. These are genuinely independent workstreams that touch different parts of the codebase and can run without knowing about each other.
Each workspace gets a focused brief describing the goal and any relevant context. Then they all start running at the same time. While one agent is refactoring a component hierarchy, another is wiring up event tracking, and a third is adding a new tool integration. I check in on each one, steer where needed, and review the results as they come in.
What surprises people when I describe this workflow is how it changes your role. You stop being the person who writes code with AI assistance and start being the person who plans the day's work, breaks it into independent streams, and then reviews and integrates the results. Less typing, more thinking. Less execution, more orchestration.
And honestly? The orchestration is the interesting part. Deciding where to draw the boundaries between workspaces forces you to think about which parts of your system are truly independent. The review process gives you a bird's-eye view across multiple workstreams at once. You end up with a better understanding of the system than if you'd written every line yourself, because you're constantly evaluating how the pieces fit together.
The gotchas
I'd be lying if I said this was all smooth sailing. Like any concurrency model, there are failure modes you learn to watch for.
The biggest one is coordination overhead. Sometimes tasks that seem independent aren't. Agent A adds a new field to a shared type. Agent B writes code that assumes the old shape. When you go to merge, you've got conflicts that wouldn't have existed if you'd done the work serially. The fix is the same as it is in concurrent systems: be explicit about your interfaces up front and communicate shared contracts to each agent.
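One way to make that contract explicit is to pin the shared shape down in code before any agent starts, and reference it in every brief. A hypothetical example (the `UserRecord` type and field names are made up):

```python
from dataclasses import dataclass

# The shared contract, agreed before any agent starts. Both briefs
# reference this exact shape; neither agent may change it unilaterally.
@dataclass(frozen=True)
class UserRecord:
    id: int
    email: str
    # If Agent A wants to add `display_name`, that change goes through
    # the contract first, so Agent B's brief is updated before B codes.

def agent_b_render(user: UserRecord) -> str:
    # Agent B's code, written against the agreed shape.
    return f"{user.email} (#{user.id})"

print(agent_b_render(UserRecord(id=1, email="a@example.com")))
```

Freezing the dataclass is a small extra nudge: nothing downstream can quietly mutate the shared shape at runtime either.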
Context fragmentation is another real issue. Each agent only sees its slice of the problem. A human developer holds the whole system in their head, or at least a working model of it. Parallel agents don't have that luxury. Sometimes the results are locally correct but globally inconsistent. An API returns data in a shape the UI wasn't expecting. A test assumes behavior that the implementation doesn't actually provide. You catch these in review, but they add friction.
Then there's the shared database problem. Each workspace gets its own branch and its own file state, but spinning up a whole new database per workspace is impractical. So you share one. This works fine until it doesn't. Agent A runs a migration that adds a column. Agent B is still running against the old schema. Agent C drops a table that Agent D was seeding test data into. It's shared mutable state — the exact thing isolation was supposed to prevent — and it sneaks back in through the infrastructure layer. The workarounds are the same ones we've always used for shared resources: be deliberate about migration order, keep schema changes additive when possible, and treat the database as a coordination point that requires extra care.
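The additive-migration rule can be demonstrated with an in-memory SQLite database standing in for the one database all workspaces share (the table and column names are illustrative):

```python
import sqlite3

# Stand-in for the single database every workspace points at.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
db.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# Agent A's migration, kept additive: a new nullable column.
db.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Agent B's code, still written against the old schema, keeps working
# because nothing it relied on was removed or renamed.
row = db.execute("SELECT id, email FROM users").fetchone()
assert row == (1, "a@example.com")
```

Had Agent A dropped or renamed `email` instead, Agent B's query would have failed the moment the migration ran, with no merge conflict to warn anyone.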
There's also the temptation to over-parallelize. Not every problem benefits from being split up. If a task is inherently sequential, where each step depends on the output of the previous one, forcing it into a concurrent model just adds complexity. Premature parallelization is every bit as real as premature optimization.
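A toy compiler pipeline makes the sequential case concrete: each stage consumes the previous stage's output, so no amount of decomposition helps.

```python
# An inherently sequential task: each step needs the previous step's output.
def parse(src: str) -> list[str]:
    return src.split()

def typecheck(ast: list[str]) -> list[tuple[str, str]]:
    return [(tok, "ok") for tok in ast]

def codegen(typed: list[tuple[str, str]]) -> str:
    return ";".join(tok for tok, _ in typed)

# codegen can't start before typecheck finishes, which can't start
# before parse finishes. Splitting this across agents adds only overhead.
out = codegen(typecheck(parse("a b c")))
assert out == "a;b;c"
```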
And the throughline from my 2024 post still holds: you need to understand the code. You can orchestrate ten agents in parallel, but if you can't evaluate whether their output is correct, you're just generating bugs faster.
Where this is heading
The developer's role is shifting. Not disappearing — shifting. We're moving from being single-threaded executors to concurrent schedulers. The most important skills in this new model are the same ones that have always made someone a good tech lead or architect: the ability to decompose problems cleanly, specify interfaces precisely, and review work critically.
This is still early. The tools are maturing fast. The patterns are emerging. But the fundamental insight feels durable to me: AI-assisted development is a concurrency problem, not just a chat problem. And the developers who learn to think in parallel, to decompose, delegate, and integrate, are going to build things the rest of us can't quite believe.
It's been almost two years since I last wrote here. In that time, I went from having a pairing partner to managing a thread pool. I'm curious where the next two years will take us.