---
title: Separating code reasoning and editing
excerpt: An Architect model describes how to solve the coding problem, and an Editor model translates that into file edits. This Architect/Editor approach produces SOTA benchmark results.
highlight_image: /assets/architect.jpg
draft: true
nav_exclude: true
---
{% if page.date %}
{{ page.date | date: "%B %d, %Y" }}
{% endif %}

# Separating code reasoning and editing
Aider now has experimental support for using two models to complete each coding task:
- An Architect model is asked to describe how to solve the coding problem.
- An Editor model is given the Architect's solution and asked to produce specific code editing instructions to apply those changes to source files.
Splitting up "code reasoning" and "code editing" has produced SOTA results on aider's code editing benchmark. It also significantly improved the benchmark scores of four of the top coding models, as compared to their previous "solo" scores (striped bars).
{% assign sorted_data = site.data.architect | sort: "pass_rate_2" | reverse %}
## Motivation
This approach was motivated by OpenAI's o1 models. They are strong at reasoning, but often fail to output properly formatted code editing instructions. It helps to instead let them describe the solution however they prefer and then pass that output to a more traditional LLM. This Editor LLM can then interpret the solution description and produce the code editing instructions needed to update the existing source code file.
Traditional frontier models like GPT-4o and Sonnet also seem to benefit from separating code reasoning and editing like this: a pair of GPT-4o models or a pair of Sonnet models in an Architect/Editor configuration outperforms each model's previous solo benchmark result.
Another reason why this approach is newly viable is that the speed and costs of frontier models have been rapidly improving. In particular, chaining older LLMs would have been quite slow and contrary to aider's goal of providing a rapid, interactive, pair programming AI coding experience.
## Code reasoning and code editing
Aider uses a variety of edit formats to allow LLMs to specify edits to local source files. All of aider's editing formats require the LLM to return source code edits in a specific text format, so that aider can process the edits and apply them to the local source files.
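For example, with the "diff" edit format the LLM replies with search/replace blocks, roughly like the sketch below. The exact markers and rules are set by aider's prompts, and the file and change shown here are made up for illustration:

```
greeting.py
<<<<<<< SEARCH
def greet(name):
    print("Hello " + name)
=======
def greet(name):
    print(f"Hello, {name}!")
>>>>>>> REPLACE
```

The "whole" format is simpler but more verbose: the LLM returns the complete updated contents of each edited file.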
Normally aider handles a coding request in one prompt, asking the LLM to explain the solution and return a well formatted series of file edits. Because this all happens in a single prompt/response round trip to the LLM, the model has to split its attention between solving the coding problem and conforming to the edit format.
The Architect/Editor approach splits this into two inference steps, possibly using two different LLMs:
- Ask how to solve the coding problem (Architect).
- Turn the proposed solution into a series of well formed code edits (Editor).
The Architect/Editor approach allows the Architect to focus on solving the coding problem and describe the solution however comes naturally to it. This gives the Architect more reasoning capacity to focus just on solving the coding task. We can also assign the Architect task to a strong reasoning model like o1-preview, and give the editing task to an appropriate model based on cost, editing skill, etc. Similarly, the Editor can focus all of its attention on properly formatting the edits without needing to reason much about how to solve the coding problem.
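To make the two steps concrete, here is a minimal sketch of an Architect/Editor round trip written against the OpenAI Python SDK. The model names, prompts, and the `cli.py` file are illustrative assumptions, not aider's actual implementation, which also manages repo context, edit formats, and applying the edits safely:

```python
# Minimal sketch of an Architect/Editor round trip (illustrative, not aider's code).
from openai import OpenAI

client = OpenAI()

task = "Add a --verbose flag that enables debug logging."  # hypothetical request
source = open("cli.py").read()                             # hypothetical file

# Step 1: the Architect describes the solution in whatever form it prefers.
plan = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": f"How should I make this change?\n\nTask: {task}\n\ncli.py:\n{source}",
    }],
).choices[0].message.content

# Step 2: the Editor turns that free-form plan into a well formed edit.
# Here we use the simple "whole file" style: return the complete updated file.
updated_file = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Reply with the complete updated cli.py and nothing else."},
        {"role": "user", "content": f"Plan:\n{plan}\n\nOriginal cli.py:\n{source}"},
    ],
).choices[0].message.content

open("cli.py", "w").write(updated_file)
```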
## Results
The graph above and the table below show aider's code editing benchmark scores for various combinations of Architect and Editor models.
Some noteworthy observations:
- Pairing o1-preview as Architect with Deepseek as Editor sets a SOTA significantly above the previous best score. This result is obtained with Deepseek using the "whole" editing format, requiring it to output a full, updated copy of each edited source file. Both of these steps are therefore quite slow, so this combination is probably not practical for interactive use with aider.
- Pairing OpenAI's o1-preview with Anthropic's Sonnet as the Editor produces the second best result. This is an entirely practical configuration for users able to work with both providers.
- Pairing Sonnet/Sonnet and GPT-4o/GPT-4o provides significant lift for both models compared to their solo results, especially for GPT-4o.
- Deepseek is surprisingly effective as an Editor model. It seems remarkably capable at turning proposed coding solutions into new, updated versions of the source files. Using the efficient "diff" editing format, Deepseek helps all the Architect models except for Sonnet.
## Try it!
The development version of aider has built-in defaults to support Architect/Editor coding with o1-preview, o1-mini, GPT-4o and Claude 3.5 Sonnet. Run aider with `--architect` or get started quickly like this:
```bash
pip install -U git+https://github.com/paul-gauthier/aider.git

# Change directory into a git repo
cd /to/your/git/repo

# Work with Claude 3.5 Sonnet as the Architect and Editor
export ANTHROPIC_API_KEY=your-key-goes-here
aider --sonnet --architect

# Work with OpenAI models, using gpt-4o as the Editor
export OPENAI_API_KEY=your-key-goes-here
aider --4o --architect
aider --o1-mini --architect
aider --o1-preview --architect
```
## More info
Aider has a number of "chat modes", and "architect" is available as a new chat mode.
The `--architect` switch is a shortcut for `--chat-mode architect`.
For more details, see the documentation on aider's chat modes.
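To choose the Architect and Editor models yourself, rather than relying on the built-in defaults, something like the following should work. This assumes the `--model` and `--editor-model` options in the development version of aider, so check `aider --help` for the options your installed version supports:

```bash
# Hypothetical pairing: o1-preview as the Architect, Claude 3.5 Sonnet as the Editor
# (assumes the --editor-model option in the development version of aider)
export OPENAI_API_KEY=your-openai-key
export ANTHROPIC_API_KEY=your-anthropic-key
aider --architect --model o1-preview --editor-model claude-3-5-sonnet-20240620
```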
## Full results
| Architect | Editor | Edit Format | Pass Rate |
|-----------|--------|-------------|-----------|
{% for group in grouped_data %}{% assign group_class = forloop.index | modulo: 2 | plus: 1 %}{% for item in group.items %}| {{ item.model }} | {{ item.editor_model }} | {{ item.editor_edit_format | default: item.edit_format }} | {{ item.pass_rate_2 }}% |
{% endfor %}{% endfor %}