Conway’s law and coding assistants

In the coming years, we’re likely to see coding assistants like Copilot become more and more autonomous. Instead of completing your code, the new systems will write new functions given a specification. Instead of writing new functions, they’ll make small changes to a handful of files. Instead of making small changes, they’ll build new functionality of increasing complexity. They’ll do complex refactors, then start codebases from scratch, then manage entire projects. It’s uncertain what point we’ll hit and when, but I’d put money on all of these capabilities arriving in the near future.

This changes the bottleneck of programming: it’s no longer the programmer’s time, but the programmer’s ability to manage several autonomous processes.

With autocomplete, the programmer only has to wait around for a second or two for the completion to appear, and then we’re back to waiting on the programmer.

But when the coding assistant is getting 100x more done while taking 100x the time, something changes. While the programmer waits minutes for the assistant, they’re free to do anything. Surely there’s something productive they can do in this time? After all, at this point, human time might not be the bottleneck - but human cognition surely is. Let’s not waste it sitting idle.

So, the programmer starts another instance of the coding assistant to tackle a different problem. That spins off doing its own thing. Perhaps the programmer does this two or three more times before the first assistant is finished.

Now, instead of 100x, let’s say the coding assistant is getting 10,000x more done in 10,000x the time. It can work for hours before needing a human to intervene. Is the programmer really going to manage hundreds of autonomous processes themselves? No! It’s unreasonable to expect the programmer to context switch between hundreds of tasks. And perhaps more importantly, these hundreds of autonomous systems are operating in the same environment and will surely clash with each other.

What we need is an organizational structure for coding assistants that reflects the codebase they’re building.


Instead of having the programmer manage hundreds of individual assistants, it’s easy to imagine forming this into a hierarchy. The programmer now manages the “middle-management assistants” who take tasks, distribute them amongst a subset of the coding assistants, and report progress to the programmer.
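The hierarchy can be made concrete with a small sketch. Everything here is hypothetical - the `Assistant` and `Manager` classes stand in for whatever real coding-assistant processes would look like - but it shows the shape: the programmer hands tasks to a manager, which distributes them across its team and collects progress reports.

```python
from dataclasses import dataclass

@dataclass
class Assistant:
    """A hypothetical autonomous coding assistant."""
    name: str

    def work_on(self, task: str) -> str:
        # A real assistant would work autonomously for minutes or hours;
        # here we just record the outcome as a progress report.
        return f"{self.name} completed: {task}"

@dataclass
class Manager:
    """A hypothetical middle-management assistant overseeing a team."""
    name: str
    team: list[Assistant]

    def distribute(self, tasks: list[str]) -> list[str]:
        # Round-robin the tasks across the team, then gather the
        # reports to pass back up to the programmer.
        reports = []
        for i, task in enumerate(tasks):
            worker = self.team[i % len(self.team)]
            reports.append(worker.work_on(task))
        return reports

manager = Manager("ui-team", [Assistant("a1"), Assistant("a2")])
reports = manager.distribute(["build login form", "fix button styling"])
```

The programmer now talks only to `manager`, not to `a1` and `a2` directly - the same indirection that makes human organizations scale.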

A hierarchy is a natural choice, perhaps the only viable one, but we still need to grapple with the fact that these are nearly decomposable systems: while most communication happens within teams, cross-team communication is vital to efficiency. You don’t want teams to duplicate work or to make plans that conflict with each other.

A simple specification for this organization could be:

  1. Responsibilities for the high-level goal are allocated to subteams.
  2. A high-bandwidth communication method between team members.
  3. A low-bandwidth communication method between teams.
  4. A high-bandwidth communication method between managers & subordinates.

Conway’s law, when applied to organizations that write software, states that the structure of the codebase will reflect the structure of the organization. This is plainly true if you’ve ever worked in such an organization; anecdotally Google has half a dozen Bluetooth implementations owned by half a dozen different product areas. Communication comes at a cost, and sometimes it’s easier to write your own implementation than pay that cost.

The typical causality of Conway’s law is that you first set up an organizational structure, then the codebase produced mirrors this structure. But what do we get if we take Conway’s law, and flip it on its head? What happens if we design a codebase first, then base the organizational structure around it?

This doesn’t work in human organizations because humans are a lot less flexible than code, so it makes sense to design the organization first. But exactly what constitutes a “coding assistant” is primarily conceptual. Spinning up new coding assistants & turning down old ones is a lot cheaper than hiring & firing. We can afford massive amounts of flexibility in the organization.

If we take this idea seriously, what we end up with is a “coding assistant” per file, and “middle-management assistant” per directory. (Once again everything is a file.)

A coding assistant (i.e. a file) is responsible for making changes to its file. The file should be able to fit into the context window. It communicates with other coding assistants (i.e. files) by seeing which other files reference its code. This happens more commonly with files in the same directory than with files in other directories. Assistants negotiate with each other to coordinate changes to shared interfaces. Progress is reported to middle-management (i.e. directories), who ensure cohesion and in turn communicate higher-level goals down to the coding assistants.
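Flipping Conway’s law means the org chart can be derived mechanically from the directory tree. A minimal sketch, assuming one assistant per file and one manager per directory (the toy codebase below is purely illustrative):

```python
import tempfile
from pathlib import Path

def build_org(root: Path) -> dict:
    """Mirror the codebase as an org chart: directories become managers,
    files become coding assistants."""
    org = {"manager": root.name, "assistants": [], "subteams": []}
    for entry in sorted(root.iterdir()):
        if entry.is_dir():
            # Subdirectory -> subteam with its own middle-management assistant.
            org["subteams"].append(build_org(entry))
        else:
            # File -> one coding assistant responsible for that file.
            org["assistants"].append(entry.name)
    return org

# Build a toy codebase, then derive the organization from it.
root = Path(tempfile.mkdtemp())
(root / "api").mkdir()
(root / "api" / "routes.py").write_text("")
(root / "main.py").write_text("")
org = build_org(root)
```

Because the organization is a pure function of the tree, restructuring the codebase *is* restructuring the organization - no re-hiring required.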

Revisiting our specification:

  1. Responsibilities for the high-level goal are allocated to subteams: The programmer defines the subtasks as different directories. The code is the interface.
  2. A high-bandwidth communication method between team members: Interfaces are commonly shared within a directory, so communication happens more often within a team (i.e. directory).
  3. A low-bandwidth communication method between teams: Code interfaces are sparsely connected between directories (assuming a well decoupled codebase).
  4. A high-bandwidth communication method between managers & subordinates: Middle-management (i.e. directories) are responsible for taking progress updates from their subordinates (i.e. files and subdirectories) and propagating high-level goals to their subordinates.
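Points 2 and 3 of the specification can be checked mechanically: a file “talks to” another file when it references that file’s module, and the reference is team-internal when both files share a directory. A rough sketch - the substring match below is deliberately naive, standing in for real static analysis, and the file contents are invented for illustration:

```python
from pathlib import Path

def channels(files: dict[str, str]) -> list[tuple[str, str, str]]:
    """files maps a path like 'api/routes.py' to its source text.
    Returns (referrer, referent, 'intra'|'inter') communication edges."""
    edges = []
    for path, source in files.items():
        for other in files:
            if other == path:
                continue
            module = Path(other).stem  # e.g. 'routes' for 'api/routes.py'
            if module in source:
                # Same directory -> within-team (high bandwidth);
                # different directory -> between-team (should stay sparse).
                kind = ("intra" if Path(path).parent == Path(other).parent
                        else "inter")
                edges.append((path, other, kind))
    return edges

files = {
    "api/routes.py": "import handlers",
    "api/handlers.py": "",
    "db/models.py": "used by routes",
}
edges = channels(files)
```

In a well-decoupled codebase, the `inter` edges are few - exactly the low-bandwidth cross-team channel the specification calls for.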

I’d like to stress one point above all: in order for the programmer to intervene in this system, all that needs to happen is for the code to change. The programmer can do this themselves, or communicate the change to the system. Instead of reasoning about two levels of decomposition - the system and the codebase - this structure unifies the two. To best use human cognition while it still exceeds that of the assistants, we need to be able to change how the assistants organize themselves. We edit the code, and the organization changes. The code is the interface.