Merge branch 'main' into mixpanel

Paul Gauthier 2024-10-30 09:40:01 -07:00
commit 068fb38a5d
181 changed files with 141428 additions and 1961 deletions

.github/workflows/close-stale.yml vendored Normal file

@@ -0,0 +1,24 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '30 1 * * *'
workflow_dispatch:
permissions:
issues: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9
with:
stale-issue-message: 'This issue has been labelled stale because it has been open for 2 weeks with no activity. Remove stale label or add a comment to keep this issue open. Otherwise, it will be closed in 7 days.'
close-issue-message: 'This issue was closed because it has been stalled for 3 weeks with no activity. Feel free to add a comment here and we can re-open it. Or feel free to file a new issue any time.'
days-before-stale: 14
days-before-close: 7
stale-issue-label: 'stale'
stale-pr-label: 'stale'
only-labels: 'question'
days-before-pr-stale: -1
days-before-pr-close: -1


@@ -5,6 +5,7 @@ on:
paths-ignore:
- 'aider/website/**'
- README.md
- HISTORY.md
branches:
- main
pull_request:
@@ -26,22 +27,24 @@ jobs:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
env:
dockerhub_username: ${{ secrets.DOCKERHUB_USERNAME }}
dockerhub_password: ${{ secrets.DOCKERHUB_PASSWORD }}
if: ${{ env.dockerhub_username }} && ${{ env.dockerhub_password }}
- name: Build Docker image
- name: Build Docker standard image
uses: docker/build-push-action@v5
with:
context: .
file: ./docker/Dockerfile
platforms: linux/amd64,linux/arm64
push: false
target: aider
- name: Build Docker full image
uses: docker/build-push-action@v5
with:
context: .
file: ./docker/Dockerfile
platforms: linux/amd64,linux/arm64
push: false
target: aider-full


@@ -70,15 +70,15 @@ jobs:
id: deployment
uses: actions/deploy-pages@v2
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python 3.12
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
python-version: '3.12'
- name: Install linkchecker
run: |
python -m pip install --upgrade pip
pip install linkchecker
python -m pip install linkchecker
- name: Run linkchecker
run: |


@@ -5,6 +5,7 @@ on:
paths-ignore:
- 'aider/website/**'
- README.md
- HISTORY.md
branches:
- main
pull_request:


@@ -5,6 +5,7 @@ on:
paths-ignore:
- 'aider/website/**'
- README.md
- HISTORY.md
branches:
- main
pull_request:

.gitignore vendored

@@ -10,3 +10,5 @@ Gemfile.lock
_site
.jekyll-cache/
.jekyll-metadata
aider/__version__.py
.venv/


@@ -14,3 +14,9 @@ repos:
hooks:
- id: flake8
args: ["--show-source"]
- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
additional_dependencies:
- tomli


@@ -17,10 +17,10 @@ Contributions of
[LLM benchmark results](https://aider.chat/docs/leaderboards/)
are welcome!
See the
[benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
[benchmark README](https://github.com/Aider-AI/aider/blob/main/benchmark/README.md)
for information on running aider's code editing benchmarks.
Submit results by opening a PR with edits to the
[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/_data/).
[benchmark results data files](https://github.com/Aider-AI/aider/blob/main/aider/website/_data/).
## Pull Requests
@@ -33,19 +33,16 @@ ensure that your contributions can be integrated smoothly.
## Licensing
By contributing to this project, you agree that your contributions
will be licensed under the Apache License 2.0. Additionally, you
understand and agree that contributions may be subject to a different
license, should the project maintainers decide to change the licensing
terms.
Before contributing a PR, please review our
[Individual Contributor License Agreement](https://aider.chat/docs/legal/contributor-agreement.html).
All contributors will be asked to complete the agreement as part of the PR process.
## Setting up a Development Environment
### Clone the Repository
```
git clone https://github.com/paul-gauthier/aider.git
git clone https://github.com/Aider-AI/aider.git
cd aider
```
@@ -154,6 +151,10 @@ The project's documentation is built using Jekyll and hosted on GitHub Pages. To
```
bundle exec jekyll build
```
5. Preview the website while editing (optional):
```
bundle exec jekyll serve
```
The built documentation will be available in the `aider/website/_site` directory.


@@ -3,8 +3,263 @@
### main branch
- Load and save aider slash-commands to files:
- `/save <fname>` command will make a file of `/add` and `/read-only` commands that recreate the current file context in the chat.
- `/load <fname>` will replay the commands in the file.
- You can use `/load` to run any arbitrary set of slash-commands, not just `/add` and `/read-only`.
- Use `--load <fname>` to run a list of commands on launch, before the interactive chat begins.
- Aider follows litellm's `supports_vision` attribute to enable image support for models.
- Bugfix for when diff mode flexibly handles the model using the wrong filename.
- Displays filenames in sorted order for `/add` and `/read-only`.
- New `--no-fancy-input` switch disables prompt toolkit input; fancy input now remains available even with `--no-pretty`.
- Properly support all o1 models, regardless of provider.
- Improved handling of API errors, especially when accessing the weak model.
- Aider wrote 70% of the code in this release.
### Aider v0.60.1
- Enable image support for Sonnet 10/22.
- Display filenames in sorted order.
### Aider v0.60.0
- Full support for Sonnet 10/22, the new SOTA model on aider's code editing benchmark.
- Aider uses Sonnet 10/22 by default.
- Improved formatting of added and read-only files above chat prompt, by @jbellis.
- Improved support for o1 models by more flexibly parsing their nonconforming code edit replies.
- Corrected the diff edit format prompt to state that only the first match is replaced.
- Stronger whole edit format prompt asking for clean file names.
- Now offers to add `.env` to the `.gitignore` file.
- Ships with a small model metadata json file to handle models not yet updated in litellm.
- Model settings for o1 models on azure.
- Bugfix to properly include URLs in `/help` RAG results.
- Aider wrote 49% of the code in this release.
### Aider v0.59.1
- Check for obsolete `yes: true` in yaml config, show helpful error.
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
### Aider v0.59.0
- Improvements to `/read-only`:
- Now supports shell-style auto-complete of the full file system.
- Still auto-completes the full paths of the repo files like `/add`.
- Now supports globs like `src/**/*.py`
- Renamed `--yes` to `--yes-always`.
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
- Existing YAML and .env files will need to be updated.
- Can still abbreviate to `--yes` on the command line.
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
- `/settings` now includes the same announcement lines that would print at launch.
- Sanity checks the `--editor-model` on launch now, same as main and weak models.
- Added `--skip-sanity-check-repo` switch to speed up launch in large repos.
- Bugfix so architect mode handles Control-C properly.
- Repo-map is deterministic now, with improved caching logic.
- Improved commit message prompt.
- Aider wrote 77% of the code in this release.
### Aider v0.58.1
- Fixed bug where cache warming pings caused subsequent user messages to trigger a tight loop of LLM requests.
### Aider v0.58.0
- [Use a pair of Architect/Editor models for improved coding](https://aider.chat/2024/09/26/architect.html)
- Use a strong reasoning model like o1-preview as your Architect.
- Use a cheaper, faster model like gpt-4o as your Editor.
- New `--o1-preview` and `--o1-mini` shortcuts.
- Support for new Gemini 002 models.
- Better support for Qwen 2.5 models.
- Many confirmation questions can be skipped for the rest of the session with "(D)on't ask again" response.
- Autocomplete for `/read-only` supports the entire filesystem.
- New settings for completion menu colors.
- New `/copy` command to copy the last LLM response to the clipboard.
- Renamed `/clipboard` to `/paste`.
- Will now follow HTTP redirects when scraping urls.
- New `--voice-format` switch to send voice audio as wav/mp3/webm, by @mbailey.
- ModelSettings takes `extra_params` dict to specify any extras to pass to `litellm.completion()`.
- Support for cursor shapes when in vim mode.
- Numerous bug fixes.
- Aider wrote 53% of the code in this release.
### Aider v0.57.1
- Fixed dependency conflict between aider-chat[help] and [playwright].
### Aider v0.57.0
- Support for OpenAI o1 models:
- o1-preview now works well with diff edit format.
- o1-preview with diff now matches SOTA leaderboard result with whole edit format.
- `aider --model o1-mini`
- `aider --model o1-preview`
- On Windows, `/run` correctly uses PowerShell or cmd.exe.
- Support for new 08-2024 Cohere models, by @jalammar.
- Can now recursively add directories with `/read-only`.
- User input prompts now fall back to simple `input()` when `--no-pretty` is set or a Windows console is not available.
- Improved sanity check of git repo on startup.
- Improvements to prompt cache chunking strategy.
- Removed "No changes made to git tracked files".
- Numerous bug fixes for corner case crashes.
- Updated all dependency versions.
- Aider wrote 70% of the code in this release.
### Aider v0.56.0
- Enables prompt caching for Sonnet via OpenRouter by @fry69
- Enables 8k output tokens for Sonnet via VertexAI and DeepSeek V2.5.
- New `/report` command to open your browser with a pre-populated GitHub Issue.
- New `--chat-language` switch to set the spoken language.
- Now `--[no-]suggest-shell-commands` controls both prompting for and offering to execute shell commands.
- Check key imports on launch, provide helpful error message if dependencies aren't available.
- Renamed `--models` to `--list-models` by @fry69.
- Numerous bug fixes for corner case crashes.
- Aider wrote 56% of the code in this release.
### Aider v0.55.0
- Only print the pip command when self updating on Windows, without running it.
- Converted many error messages to warning messages.
- Added `--tool-warning-color` setting.
- Blanket catch and handle git errors in any `/command`.
- Catch and handle glob errors in `/add`, errors writing files.
- Disabled the built-in linter for TypeScript.
- Catch and handle terminals which don't support pretty output.
- Catch and handle playwright and pandoc errors.
- Catch `/voice` transcription exceptions, show the WAV file so the user can recover it.
- Aider wrote 53% of the code in this release.
### Aider v0.54.12
- Switched to `vX.Y.Z.dev` version naming.
### Aider v0.54.11
- Improved printed pip command output on Windows.
### Aider v0.54.10
- Bugfix to test command in platform info.
### Aider v0.54.9
- Include important devops files in the repomap.
- Print quoted pip install commands to the user.
- Adopt setuptools_scm to provide dev versions with git hashes.
- Share active test and lint commands with the LLM.
- Catch and handle most errors creating new files, reading existing files.
- Catch and handle most git errors.
- Added --verbose debug output for shell commands.
### Aider v0.54.8
- Startup QOL improvements:
- Sanity check the git repo and exit gracefully on problems.
- Pause for confirmation after model sanity check to allow user to review warnings.
- Bug fix for shell commands on Windows.
- Do not fuzzy match filenames when LLM is creating a new file, by @ozapinq
- Numerous corner case bug fixes submitted via new crash report -> GitHub Issue feature.
- Crash reports now include python version, OS, etc.
### Aider v0.54.7
- Offer to submit a GitHub issue pre-filled with uncaught exception info.
- Bugfix for infinite output.
### Aider v0.54.6
- New `/settings` command to show active settings.
- Only show cache warming status update if `--verbose`.
### Aider v0.54.5
- Bugfix for shell commands on Windows.
- Refuse to make git repo in $HOME, warn user.
- Don't ask again in current session about a file the user has said not to add to the chat.
- Added `--update` as an alias for `--upgrade`.
### Aider v0.54.4
- Bugfix to completions for `/model` command.
- Bugfix: revert home dir special case.
### Aider v0.54.3
- Dependency `watchdog<5` for docker image.
### Aider v0.54.2
- When users launch aider in their home dir, help them find/create a repo in a subdir.
- Added missing `pexpect` dependency.
### Aider v0.54.0
- Added model settings for `gemini/gemini-1.5-pro-exp-0827` and `gemini/gemini-1.5-flash-exp-0827`.
- Shell and `/run` commands can now be interactive in environments where a pty is available.
- Optionally share output of suggested shell commands back to the LLM.
- New `--[no-]suggest-shell-commands` switch to configure shell commands.
- Performance improvements for autocomplete in large/mono repos.
- New `--upgrade` switch to install latest version of aider from pypi.
- Bugfix to `--show-prompt`.
- Disabled automatic reply to the LLM on `/undo` for all models.
- Removed pager from `/web` output.
- Aider wrote 64% of the code in this release.
### Aider v0.53.0
- [Keep your prompt cache from expiring](https://aider.chat/docs/usage/caching.html#preventing-cache-expiration) with `--cache-keepalive-pings`.
- Pings the API every 5min to keep the cache warm.
- You can now bulk accept/reject a series of add-url and run-shell confirmations.
- Improved matching of filenames from S/R blocks with files in chat.
- Stronger prompting for Sonnet to make edits in code chat mode.
- Stronger prompting for the LLM to specify full file paths.
- Improved shell command prompting.
- Weak model now uses `extra_headers`, to support Anthropic beta features.
- New `--install-main-branch` to update to the latest dev version of aider.
- Improved error messages when attempting to add a non-git subdir to the chat.
- Show model metadata info on `--verbose`.
- Improved warnings when LLMs env variables aren't set.
- Bugfix for Windows filenames which contain `\_`.
- Aider wrote 59% of the code in this release.
### Aider v0.52.1
- Bugfix for NameError when applying edits.
### Aider v0.52.0
- Aider now offers to run shell commands:
- Launch a browser to view updated html/css/js.
- Install new dependencies.
- Run DB migrations.
- Run the program to exercise changes.
- Run new test cases.
- `/read` and `/drop` now expand `~` to the home dir.
- Show the active chat mode at aider prompt.
- New `/reset` command to `/drop` files and `/clear` chat history.
- New `--map-multiplier-no-files` to control repo map size multiplier when no files are in the chat.
- Reduced default multiplier to 2.
- Bugfixes and improvements to auto commit sequencing.
- Improved formatting of token reports and confirmation dialogs.
- Default OpenAI model is now `gpt-4o-2024-08-06`.
- Bumped dependencies to pickup litellm bugfixes.
- Aider wrote 68% of the code in this release.
### Aider v0.51.0
- Prompt caching for Anthropic models with `--cache-prompts`.
- Caches the system prompt, repo map and `/read-only` files.
- Repo map recomputes less often in large/mono repos or when caching enabled.
- Use `--map-refresh <always|files|manual|auto>` to configure.
- Improved cost estimate logic for caching.
- Improved editing performance on Jupyter Notebook `.ipynb` files.
- Work around litellm tokenizer bug for images.
- Show which config yaml file is loaded with `--verbose`.
- Bumped dependency versions.
- Bugfix: properly load `.aider.models.metadata.json` data.
- Bugfix: Using `--msg /ask ...` caused an exception.
- Bugfix: litellm tokenizer bug for images.
- Aider wrote 56% of the code in this release.
### Aider v0.50.1
@@ -492,7 +747,7 @@
### Aider v0.14.0
- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Documentation for [running the aider benchmarking suite](https://github.com/Aider-AI/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9
@@ -537,7 +792,7 @@
- Added `/git` command to run git from inside aider chats.
- Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
- Create a `.gitignore` with `.aider*` to prevent users from accidentaly adding aider files to git.
- Create a `.gitignore` with `.aider*` to prevent users from accidentally adding aider files to git.
- Check pypi for newer versions and notify user.
- Updated keyboard interrupt logic so that 2 ^C in 2 seconds always forces aider to exit.
- Provide GPT with detailed error if it makes a bad edit block, ask for a retry.


@@ -9,12 +9,23 @@ Start a new project or work with an existing git repo.
Aider works best with GPT-4o & Claude 3.5 Sonnet and can
[connect to almost any LLM](https://aider.chat/docs/llms.html).
<!-- SCREENCAST START -->
<p align="center">
<img
src="https://aider.chat/assets/screencast.svg"
alt="aider screencast"
>
</p>
<!-- SCREENCAST END -->
<!-- VIDEO START
<p align="center">
<video style="max-width: 100%; height: auto;" autoplay loop muted playsinline>
<source src="/assets/shell-cmds-small.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</p>
VIDEO END -->
<p align="center">
<a href="https://discord.gg/Tv2uQnR88V">
@@ -35,7 +46,7 @@ cog.out(open("aider/website/_includes/get-started.md").read())
You can get started quickly like this:
```
python -m pip install aider-chat
python -m pip install -U aider-chat
# Change directory into a git repo
cd /to/your/git/repo
@@ -96,7 +107,7 @@ projects like django, scikitlearn, matplotlib, etc.
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [GitHub](https://github.com/paul-gauthier/aider)
- [GitHub](https://github.com/Aider-AI/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)
@@ -107,14 +118,14 @@ projects like django, scikitlearn, matplotlib, etc.
- *The best AI coding assistant so far.* -- [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
- *Aider ... has easily quadrupled my coding productivity.* -- [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
- *It's a cool workflow... Aider's ergonomics are perfect for me.* -- [qup](https://news.ycombinator.com/item?id=38185326)
- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/paul-gauthier/aider/issues/124)
- *What an amazing tool. It's incredible.* -- [valyagolev](https://github.com/paul-gauthier/aider/issues/6#issue-1722897858)
- *Aider is such an astounding thing!* -- [cgrothaus](https://github.com/paul-gauthier/aider/issues/82#issuecomment-1631876700)
- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/Aider-AI/aider/issues/124)
- *What an amazing tool. It's incredible.* -- [valyagolev](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *Aider is such an astounding thing!* -- [cgrothaus](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *It was WAY faster than I would be getting off the ground and making the first few working versions.* -- [Daniel Feldman](https://twitter.com/d_feldman/status/1662295077387923456)
- *THANK YOU for Aider! It really feels like a glimpse into the future of coding.* -- [derwiki](https://news.ycombinator.com/item?id=38205643)
- *It's just amazing. It is freeing me to do things I felt were out my comfort zone before.* -- [Dougie](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *This project is stellar.* -- [funkytaco](https://github.com/paul-gauthier/aider/issues/112#issuecomment-1637429008)
- *Amazing project, definitely the best AI coding assistant I've used.* -- [joshuavial](https://github.com/paul-gauthier/aider/issues/84)
- *This project is stellar.* -- [funkytaco](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *Amazing project, definitely the best AI coding assistant I've used.* -- [joshuavial](https://github.com/Aider-AI/aider/issues/84)
- *I absolutely love using Aider ... It makes software development feel so much lighter as an experience.* -- [principalideal0](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.* -- [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)


@@ -1 +1,6 @@
__version__ = "0.50.2-dev"
try:
from aider.__version__ import __version__
except Exception:
__version__ = "0.60.2.dev"
__all__ = [__version__]
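
For context: `aider/__version__.py` is generated at build time by setuptools_scm (adopted in v0.54.9, per the changelog above) and is git-ignored (see the `.gitignore` change above), so in a plain checkout the import fails and the hardcoded dev string is used. A rough sketch of what such a generated file might contain; the exact value is an assumption:

```
# aider/__version__.py -- hypothetical generated contents, never checked in
__version__ = "0.60.2.dev25+g068fb38a"  # example value derived from git describe at build time
```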


@@ -22,9 +22,10 @@ def default_env_file(git_root):
def get_parser(default_config_files, git_root):
parser = configargparse.ArgumentParser(
description="aider is GPT powered coding in your terminal",
description="aider is AI pair programming in your terminal",
add_config_file_help=True,
default_config_files=default_config_files,
config_file_parser_class=configargparse.YAMLConfigFileParser,
auto_env_var_prefix="AIDER_",
)
group = parser.add_argument_group("Main")
@@ -57,7 +58,7 @@ def get_parser(default_config_files, git_root):
const=opus_model,
help=f"Use {opus_model} model for the main chat",
)
sonnet_model = "claude-3-5-sonnet-20240620"
sonnet_model = "claude-3-5-sonnet-20241022"
group.add_argument(
"--sonnet",
action="store_const",
@@ -74,7 +75,7 @@ def get_parser(default_config_files, git_root):
const=gpt_4_model,
help=f"Use {gpt_4_model} model for the main chat",
)
gpt_4o_model = "gpt-4o"
gpt_4o_model = "gpt-4o-2024-08-06"
group.add_argument(
"--4o",
action="store_const",
@@ -117,10 +118,27 @@ def get_parser(default_config_files, git_root):
const=deepseek_model,
help=f"Use {deepseek_model} model for the main chat",
)
o1_mini_model = "o1-mini"
group.add_argument(
"--o1-mini",
action="store_const",
dest="model",
const=o1_mini_model,
help=f"Use {o1_mini_model} model for the main chat",
)
o1_preview_model = "o1-preview"
group.add_argument(
"--o1-preview",
action="store_const",
dest="model",
const=o1_preview_model,
help=f"Use {o1_preview_model} model for the main chat",
)
##########
group = parser.add_argument_group("Model Settings")
group.add_argument(
"--list-models",
"--models",
metavar="MODEL",
help="List known models which match the (partial) MODEL name",
@@ -180,6 +198,13 @@ def get_parser(default_config_files, git_root):
default=None,
help="Specify what edit format the LLM should use (default depends on model)",
)
group.add_argument(
"--architect",
action="store_const",
dest="edit_format",
const="architect",
help="Use architect edit format for the main chat",
)
group.add_argument(
"--weak-model",
metavar="WEAK_MODEL",
@@ -189,25 +214,31 @@ def get_parser(default_config_files, git_root):
" depends on --model)"
),
)
group.add_argument(
"--editor-model",
metavar="EDITOR_MODEL",
default=None,
help="Specify the model to use for editor tasks (default depends on --model)",
)
group.add_argument(
"--editor-edit-format",
metavar="EDITOR_EDIT_FORMAT",
default=None,
help="Specify the edit format for the editor model (default: depends on editor model)",
)
group.add_argument(
"--show-model-warnings",
action=argparse.BooleanOptionalAction,
default=True,
help="Only work with models that have meta-data available (default: True)",
)
group.add_argument(
"--map-tokens",
type=int,
default=None,
help="Max number of tokens to use for repo map, use 0 to disable (default: 1024)",
)
group.add_argument(
"--max-chat-history-tokens",
type=int,
default=None,
help=(
"Maximum number of tokens to use for chat history. If not specified, uses the model's"
" max_chat_history_tokens."
"Soft limit on tokens for chat history, after which summarization begins."
" If unspecified, defaults to the model's max_chat_history_tokens."
),
)
# This is a duplicate of the argument in the preparser and is a no-op by this time of
@@ -219,6 +250,45 @@ def get_parser(default_config_files, git_root):
help="Specify the .env file to load (default: .env in git root)",
)
##########
group = parser.add_argument_group("Cache Settings")
group.add_argument(
"--cache-prompts",
action=argparse.BooleanOptionalAction,
default=False,
help="Enable caching of prompts (default: False)",
)
group.add_argument(
"--cache-keepalive-pings",
type=int,
default=0,
help="Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)",
)
##########
group = parser.add_argument_group("Repomap Settings")
group.add_argument(
"--map-tokens",
type=int,
default=None,
help="Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)",
)
group.add_argument(
"--map-refresh",
choices=["auto", "always", "files", "manual"],
default="auto",
help=(
"Control how often the repo map is refreshed. Options: auto, always, files, manual"
" (default: auto)"
),
)
group.add_argument(
"--map-multiplier-no-files",
type=float,
default=2,
help="Multiplier for map tokens when no files are specified (default: 2)",
)
##########
group = parser.add_argument_group("History Files")
default_input_history_file = (
@@ -291,13 +361,51 @@ def get_parser(default_config_files, git_root):
group.add_argument(
"--tool-error-color",
default="#FF2222",
help="Set the color for tool error messages (default: red)",
help="Set the color for tool error messages (default: #FF2222)",
)
group.add_argument(
"--tool-warning-color",
default="#FFA500",
help="Set the color for tool warning messages (default: #FFA500)",
)
group.add_argument(
"--assistant-output-color",
default="#0088ff",
help="Set the color for assistant output (default: #0088ff)",
)
group.add_argument(
"--completion-menu-color",
metavar="COLOR",
default=None,
help="Set the color for the completion menu (default: terminal's default text color)",
)
group.add_argument(
"--completion-menu-bg-color",
metavar="COLOR",
default=None,
help=(
"Set the background color for the completion menu (default: terminal's default"
" background color)"
),
)
group.add_argument(
"--completion-menu-current-color",
metavar="COLOR",
default=None,
help=(
"Set the color for the current item in the completion menu (default: terminal's default"
" background color)"
),
)
group.add_argument(
"--completion-menu-current-bg-color",
metavar="COLOR",
default=None,
help=(
"Set the background color for the current item in the completion menu (default:"
" terminal's default text color)"
),
)
group.add_argument(
"--code-theme",
default="default",
@@ -395,6 +503,12 @@ def get_parser(default_config_files, git_root):
default=False,
help="Perform a dry run without modifying files (default: False)",
)
group.add_argument(
"--skip-sanity-check-repo",
action="store_true",
help="Skip the sanity check for the git repository (default: False)",
default=False,
)
group = parser.add_argument_group("Fixing and committing")
group.add_argument(
"--lint",
@@ -475,10 +589,10 @@ def get_parser(default_config_files, git_root):
default=False,
)
group.add_argument(
"--voice-language",
metavar="VOICE_LANGUAGE",
default="en",
help="Specify the language for voice using ISO 639-1 code (default: auto)",
"--chat-language",
metavar="CHAT_LANGUAGE",
default=None,
help="Specify the language to use in the chat (default: None, uses system settings)",
)
group.add_argument(
"--version",
@@ -498,13 +612,26 @@ def get_parser(default_config_files, git_root):
help="Check for new aider versions on launch",
default=True,
)
group.add_argument(
"--install-main-branch",
action="store_true",
help="Install the latest version from the main branch",
default=False,
)
group.add_argument(
"--upgrade",
"--update",
action="store_true",
help="Upgrade aider to the latest version from PyPI",
default=False,
)
group.add_argument(
"--apply",
metavar="FILE",
help="Apply the changes from the given file instead of running the chat (debug)",
)
group.add_argument(
"--yes",
"--yes-always",
action="store_true",
help="Always say yes to every confirmation",
default=None,
@@ -552,6 +679,11 @@ def get_parser(default_config_files, git_root):
" (disables chat mode)"
),
)
group.add_argument(
"--load",
metavar="LOAD_FILE",
help="Load and execute /commands from a file on launch",
)
group.add_argument(
"--encoding",
default="utf-8",
@@ -574,6 +706,34 @@ def get_parser(default_config_files, git_root):
help="Run aider in your browser",
default=False,
)
group.add_argument(
"--suggest-shell-commands",
action=argparse.BooleanOptionalAction,
default=True,
help="Enable/disable suggesting shell commands (default: True)",
)
group.add_argument(
"--fancy-input",
action=argparse.BooleanOptionalAction,
default=True,
help="Enable/disable fancy input with history and completion (default: True)",
)
##########
group = parser.add_argument_group("Voice Settings")
group.add_argument(
"--voice-format",
metavar="VOICE_FORMAT",
default="wav",
choices=["wav", "mp3", "webm"],
help="Audio format for voice recording (default: wav). webm and mp3 require ffmpeg",
)
group.add_argument(
"--voice-language",
metavar="VOICE_LANGUAGE",
default="en",
help="Specify the language for voice using ISO 639-1 code (default: auto)",
)
return parser
@@ -589,7 +749,6 @@ def get_md_help():
parser.formatter_class = MarkdownHelpFormatter
return argparse.ArgumentParser.format_help(parser)
return parser.format_help()
def get_sample_yaml():
@@ -603,7 +762,6 @@ def get_sample_yaml():
parser.formatter_class = YamlHelpFormatter
return argparse.ArgumentParser.format_help(parser)
return parser.format_help()
def get_sample_dotenv():
@@ -617,7 +775,6 @@ def get_sample_dotenv():
parser.formatter_class = DotEnvFormatter
return argparse.ArgumentParser.format_help(parser)
return parser.format_help()
def main():


@@ -144,8 +144,15 @@ class YamlHelpFormatter(argparse.HelpFormatter):
if default:
parts.append(f"#{switch}: {default}\n")
elif action.nargs in ("*", "+") or isinstance(action, argparse._AppendAction):
parts.append(f"#{switch}: xxx")
parts.append("## Specify multiple values like this:")
parts.append(f"#{switch}:")
parts.append(f"# - xxx")
parts.append(f"# - yyy")
parts.append(f"# - zzz")
else:
parts.append(f"#{switch}:\n")
parts.append(f"#{switch}: xxx\n")
###
# parts.append(str(action))


@@ -1,10 +1,15 @@
from .architect_coder import ArchitectCoder
from .ask_coder import AskCoder
from .base_coder import Coder
from .editblock_coder import EditBlockCoder
from .editblock_fenced_coder import EditBlockFencedCoder
from .editor_editblock_coder import EditorEditBlockCoder
from .editor_whole_coder import EditorWholeFileCoder
from .help_coder import HelpCoder
from .udiff_coder import UnifiedDiffCoder
from .wholefile_coder import WholeFileCoder
from .ask_coder import AskCoder
# from .single_wholefile_func_coder import SingleWholeFileFunctionCoder
__all__ = [
HelpCoder,
@@ -14,4 +19,8 @@ __all__ = [
EditBlockFencedCoder,
WholeFileCoder,
UnifiedDiffCoder,
# SingleWholeFileFunctionCoder,
ArchitectCoder,
EditorEditBlockCoder,
EditorWholeFileCoder,
]
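
The `edit_format` attribute on each coder class (such as `architect`, `editor-diff` and `editor-whole` in the new coders below) is resolved against this `__all__` list when a coder is created. A minimal sketch of that kind of dispatch, using a hypothetical helper name:

```
def resolve_coder_class(edit_format, coder_classes):
    """Hypothetical helper: map an edit_format string to a coder class."""
    for coder_class in coder_classes:
        if getattr(coder_class, "edit_format", None) == edit_format:
            return coder_class
    raise ValueError(f"Unknown edit format: {edit_format}")

# e.g. resolve_coder_class("editor-diff", __all__) -> EditorEditBlockCoder
```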


@@ -0,0 +1,44 @@
from .architect_prompts import ArchitectPrompts
from .ask_coder import AskCoder
from .base_coder import Coder
class ArchitectCoder(AskCoder):
edit_format = "architect"
gpt_prompts = ArchitectPrompts()
def reply_completed(self):
content = self.partial_response_content
if not self.io.confirm_ask("Edit the files?"):
return
kwargs = dict()
# Use the editor_model from the main_model if it exists, otherwise use the main_model itself
editor_model = self.main_model.editor_model or self.main_model
kwargs["main_model"] = editor_model
kwargs["edit_format"] = self.main_model.editor_edit_format
kwargs["suggest_shell_commands"] = False
kwargs["map_tokens"] = 0
kwargs["total_cost"] = self.total_cost
kwargs["cache_prompts"] = False
kwargs["num_cache_warming_pings"] = 0
kwargs["summarize_from_coder"] = False
new_kwargs = dict(io=self.io, from_coder=self)
new_kwargs.update(kwargs)
editor_coder = Coder.create(**new_kwargs)
editor_coder.cur_messages = []
editor_coder.done_messages = []
if self.verbose:
editor_coder.show_announcements()
editor_coder.run(with_message=content, preproc=False)
self.move_back_cur_messages("I made those changes to the files.")
self.total_cost = editor_coder.total_cost
self.aider_commit_hashes = editor_coder.aider_commit_hashes


@@ -0,0 +1,40 @@
# flake8: noqa: E501
from .base_prompts import CoderPrompts
class ArchitectPrompts(CoderPrompts):
main_system = """Act as an expert architect engineer and provide direction to your editor engineer.
Study the change request and the current code.
Describe how to modify the code to complete the request.
The editor engineer will rely solely on your instructions, so make them unambiguous and complete.
Explain all needed code changes clearly and completely, but concisely.
Just show the changes needed.
DO NOT show the entire updated function/file/etc!
Always reply in the same language as the change request.
"""
example_messages = []
files_content_prefix = """I have *added these files to the chat* so you see all of their contents.
*Trust this message as the true contents of the files!*
Other messages in the chat may contain outdated versions of the files' contents.
""" # noqa: E501
files_content_assistant_reply = (
"Ok, I will use that as the true, current contents of the files."
)
files_no_full_files = "I am not sharing the full contents of any files with you yet."
files_no_full_files_with_repo_map = ""
files_no_full_files_with_repo_map_reply = ""
repo_content_prefix = """I am working with you on code in a git repository.
Here are summaries of some files present in my git repo.
If you need to see the full contents of any files to answer my questions, ask me to *add them to the chat*.
"""
system_reminder = ""


@@ -6,7 +6,6 @@ from .base_prompts import CoderPrompts
class AskPrompts(CoderPrompts):
main_system = """Act as an expert code analyst.
Answer questions about the supplied code.
Always reply to the user in the same language they are using.
"""
@@ -17,6 +16,10 @@ Always reply to the user in the same language they are using.
Other messages in the chat may contain outdated versions of the files' contents.
""" # noqa: E501
files_content_assistant_reply = (
"Ok, I will use that as the true, current contents of the files."
)
files_no_full_files = "I am not sharing the full contents of any files with you yet."
files_no_full_files_with_repo_map = ""

File diff suppressed because it is too large


@@ -22,6 +22,8 @@ You always COMPLETELY IMPLEMENT the needed code!
Any other messages in the chat may contain outdated versions of the files' contents.
""" # noqa: E501
files_content_assistant_reply = "Ok, any changes I propose will be to those files."
files_no_full_files = "I am not sharing any files that you can edit yet."
files_no_full_files_with_repo_map = """Don't try and edit any existing code without asking me to add the files to the chat!
@@ -43,3 +45,8 @@ If you need to edit any of these files, ask me to *add them to the chat* first.
read_only_files_prefix = """Here are some READ ONLY files, provided for your reference.
Do not edit these files!
"""
shell_cmd_prompt = ""
shell_cmd_reminder = ""
no_shell_cmd_prompt = ""
no_shell_cmd_reminder = ""


@@ -0,0 +1,64 @@
from dataclasses import dataclass, field
from typing import List
@dataclass
class ChatChunks:
system: List = field(default_factory=list)
examples: List = field(default_factory=list)
done: List = field(default_factory=list)
repo: List = field(default_factory=list)
readonly_files: List = field(default_factory=list)
chat_files: List = field(default_factory=list)
cur: List = field(default_factory=list)
reminder: List = field(default_factory=list)
def all_messages(self):
return (
self.system
+ self.examples
+ self.readonly_files
+ self.repo
+ self.done
+ self.chat_files
+ self.cur
+ self.reminder
)
def add_cache_control_headers(self):
if self.examples:
self.add_cache_control(self.examples)
else:
self.add_cache_control(self.system)
if self.repo:
# this will mark both the readonly_files and repomap chunk as cacheable
self.add_cache_control(self.repo)
else:
# otherwise, just cache readonly_files if there are any
self.add_cache_control(self.readonly_files)
self.add_cache_control(self.chat_files)
def add_cache_control(self, messages):
if not messages:
return
content = messages[-1]["content"]
if type(content) is str:
content = dict(
type="text",
text=content,
)
content["cache_control"] = {"type": "ephemeral"}
messages[-1]["content"] = [content]
def cacheable_messages(self):
messages = self.all_messages()
for i, message in enumerate(reversed(messages)):
if isinstance(message.get("content"), list) and message["content"][0].get(
"cache_control"
):
return messages[: len(messages) - i]
return messages
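
A minimal sketch of how these chunks might be assembled and marked cacheable; the message contents here are invented for illustration:

```
# Hypothetical usage of ChatChunks (message text invented):
chunks = ChatChunks()
chunks.system = [dict(role="system", content="You are a careful coding assistant.")]
chunks.repo = [dict(role="user", content="Here is my repo map...")]
chunks.cur = [dict(role="user", content="Please add type hints.")]

chunks.add_cache_control_headers()

# With no examples, the system chunk is marked; the repo chunk is marked too.
# Each marked message's content becomes a list holding a cache_control dict:
assert chunks.system[-1]["content"][0]["cache_control"] == {"type": "ephemeral"}
assert chunks.repo[-1]["content"][0]["cache_control"] == {"type": "ephemeral"}

# all_messages() returns them in order: system, examples, readonly_files,
# repo, done, chat_files, cur, reminder.
messages = chunks.all_messages()
```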


@@ -14,6 +14,7 @@ from .editblock_prompts import EditBlockPrompts
class EditBlockCoder(Coder):
"""A coder that uses search/replace blocks for code modifications."""
edit_format = "diff"
gpt_prompts = EditBlockPrompts()
@@ -21,13 +22,27 @@ class EditBlockCoder(Coder):
content = self.partial_response_content
# might raise ValueError for malformed ORIG/UPD blocks
edits = list(find_original_update_blocks(content, self.fence))
edits = list(
find_original_update_blocks(
content,
self.fence,
self.get_inchat_relative_files(),
)
)
self.shell_commands += [edit[1] for edit in edits if edit[0] is None]
edits = [edit for edit in edits if edit[0] is not None]
return edits
def apply_edits(self, edits):
def apply_edits_dry_run(self, edits):
return self.apply_edits(edits, dry_run=True)
def apply_edits(self, edits, dry_run=False):
failed = []
passed = []
updated_edits = []
for edit in edits:
path, original, updated = edit
full_path = self.abs_root_path(path)
@@ -39,14 +54,21 @@ class EditBlockCoder(Coder):
content = self.io.read_text(full_path)
new_content = do_replace(full_path, content, original, updated, self.fence)
if new_content:
path = self.get_rel_fname(full_path)
break
updated_edits.append((path, original, updated))
if new_content:
self.io.write_text(full_path, new_content)
if not dry_run:
self.io.write_text(full_path, new_content)
passed.append(edit)
else:
failed.append(edit)
if dry_run:
return updated_edits
if not failed:
return
@@ -354,9 +376,13 @@ def do_replace(fname, content, before_text, after_text, fence=None):
return new_content
HEAD = "<<<<<<< SEARCH"
DIVIDER = "======="
UPDATED = ">>>>>>> REPLACE"
HEAD = r"^<{5,9} SEARCH\s*$"
DIVIDER = r"^={5,9}\s*$"
UPDATED = r"^>{5,9} REPLACE\s*$"
HEAD_ERR = "<<<<<<< SEARCH"
DIVIDER_ERR = "======="
UPDATED_ERR = ">>>>>>> REPLACE"
separators = "|".join([HEAD, DIVIDER, UPDATED])
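
The marker constants change from exact strings to regexes that accept five to nine marker characters and trailing whitespace, so slightly malformed blocks from the LLM still parse; the `*_ERR` strings keep the canonical form for error messages. A standalone sketch of the matching behavior, using the same pattern as above:

```
import re

HEAD = r"^<{5,9} SEARCH\s*$"

assert re.match(HEAD, "<<<<<<< SEARCH")           # canonical 7-char marker
assert re.match(HEAD, "<<<<< SEARCH  ")           # 5 chars plus trailing spaces: ok
assert not re.match(HEAD, "<<<< SEARCH")          # 4 chars: too few, rejected
assert not re.match(HEAD, "<" * 12 + " SEARCH")   # more than 9: rejected
```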
@@ -384,77 +410,106 @@ def strip_filename(filename, fence):
filename = filename.strip()
filename = filename.strip("`")
filename = filename.strip("*")
filename = filename.replace("\\_", "_")
# https://github.com/Aider-AI/aider/issues/1158
# filename = filename.replace("\\_", "_")
return filename
def find_original_update_blocks(content, fence=DEFAULT_FENCE):
# make sure we end with a newline, otherwise the regex will miss <<UPD on the last line
if not content.endswith("\n"):
content = content + "\n"
pieces = re.split(split_re, content)
pieces.reverse()
processed = []
# Keep using the same filename in cases where GPT produces an edit block
# without a filename.
def find_original_update_blocks(content, fence=DEFAULT_FENCE, valid_fnames=None):
lines = content.splitlines(keepends=True)
i = 0
current_filename = None
try:
while pieces:
cur = pieces.pop()
if cur in (DIVIDER, UPDATED):
processed.append(cur)
raise ValueError(f"Unexpected {cur}")
head_pattern = re.compile(HEAD)
divider_pattern = re.compile(DIVIDER)
updated_pattern = re.compile(UPDATED)
if cur.strip() != HEAD:
processed.append(cur)
continue
while i < len(lines):
line = lines[i]
processed.append(cur) # original_marker
# Check for shell code blocks
shell_starts = [
"```bash",
"```sh",
"```shell",
"```cmd",
"```batch",
"```powershell",
"```ps1",
"```zsh",
"```fish",
"```ksh",
"```csh",
"```tcsh",
]
next_is_editblock = i + 1 < len(lines) and head_pattern.match(lines[i + 1].strip())
filename = find_filename(processed[-2].splitlines(), fence)
if not filename:
if current_filename:
filename = current_filename
if any(line.strip().startswith(start) for start in shell_starts) and not next_is_editblock:
shell_content = []
i += 1
while i < len(lines) and not lines[i].strip().startswith("```"):
shell_content.append(lines[i])
i += 1
if i < len(lines) and lines[i].strip().startswith("```"):
i += 1 # Skip the closing ```
yield None, "".join(shell_content)
continue
# Check for SEARCH/REPLACE blocks
if head_pattern.match(line.strip()):
try:
# if next line after HEAD exists and is DIVIDER, it's a new file
if i + 1 < len(lines) and divider_pattern.match(lines[i + 1].strip()):
filename = find_filename(lines[max(0, i - 3) : i], fence, None)
else:
raise ValueError(missing_filename_err.format(fence=fence))
filename = find_filename(lines[max(0, i - 3) : i], fence, valid_fnames)
current_filename = filename
if not filename:
if current_filename:
filename = current_filename
else:
raise ValueError(missing_filename_err.format(fence=fence))
original_text = pieces.pop()
processed.append(original_text)
current_filename = filename
divider_marker = pieces.pop()
processed.append(divider_marker)
if divider_marker.strip() != DIVIDER:
raise ValueError(f"Expected `{DIVIDER}` not {divider_marker.strip()}")
original_text = []
i += 1
while i < len(lines) and not divider_pattern.match(lines[i].strip()):
original_text.append(lines[i])
i += 1
updated_text = pieces.pop()
processed.append(updated_text)
if i >= len(lines) or not divider_pattern.match(lines[i].strip()):
raise ValueError(f"Expected `{DIVIDER_ERR}`")
updated_marker = pieces.pop()
processed.append(updated_marker)
if updated_marker.strip() != UPDATED:
raise ValueError(f"Expected `{UPDATED}` not `{updated_marker.strip()}")
updated_text = []
i += 1
while i < len(lines) and not (
updated_pattern.match(lines[i].strip())
or divider_pattern.match(lines[i].strip())
):
updated_text.append(lines[i])
i += 1
yield filename, original_text, updated_text
except ValueError as e:
processed = "".join(processed)
err = e.args[0]
raise ValueError(f"{processed}\n^^^ {err}")
except IndexError:
processed = "".join(processed)
raise ValueError(f"{processed}\n^^^ Incomplete SEARCH/REPLACE block.")
except Exception:
processed = "".join(processed)
raise ValueError(f"{processed}\n^^^ Error parsing SEARCH/REPLACE block.")
if i >= len(lines) or not (
updated_pattern.match(lines[i].strip())
or divider_pattern.match(lines[i].strip())
):
raise ValueError(f"Expected `{UPDATED_ERR}` or `{DIVIDER_ERR}`")
yield filename, "".join(original_text), "".join(updated_text)
except ValueError as e:
processed = "".join(lines[: i + 1])
err = e.args[0]
raise ValueError(f"{processed}\n^^^ {err}")
i += 1
def find_filename(lines, fence):
def find_filename(lines, fence, valid_fnames):
"""
Deepseek Coder v2 has been doing this:
@@ -468,19 +523,54 @@ def find_filename(lines, fence):
This is a more flexible search back for filenames.
"""
if valid_fnames is None:
valid_fnames = []
# Go back through the 3 preceding lines
lines.reverse()
lines = lines[:3]
filenames = []
for line in lines:
# If we find a filename, done
filename = strip_filename(line, fence)
if filename:
return filename
filenames.append(filename)
# Only continue as long as we keep seeing fences
if not line.startswith(fence[0]):
return
break
if not filenames:
return
# pick the *best* filename found
# Check for exact match first
for fname in filenames:
if fname in valid_fnames:
return fname
# Check for partial match (basename match)
for fname in filenames:
for vfn in valid_fnames:
if fname == Path(vfn).name:
return vfn
# Perform fuzzy matching with valid_fnames
for fname in filenames:
close_matches = difflib.get_close_matches(fname, valid_fnames, n=1, cutoff=0.8)
if len(close_matches) == 1:
return close_matches[0]
# If no fuzzy match, look for a file w/extension
for fname in filenames:
if "." in fname:
return fname
if filenames:
return filenames[0]
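
The rewritten `find_filename` collects candidates from up to three lines above the block, then picks the best one: an exact match against files already in the chat, then a basename match, then a difflib fuzzy match, and only then a bare guess. A small illustration of the fuzzy step, with invented names:

```
import difflib

valid_fnames = ["src/app/main.py", "src/app/utils.py"]

# A misspelled path from the LLM is still recovered:
difflib.get_close_matches("src/app/mian.py", valid_fnames, n=1, cutoff=0.8)
# -> ["src/app/main.py"]

# An unrelated name returns no match and falls through to later heuristics:
difflib.get_close_matches("readme.txt", valid_fnames, n=1, cutoff=0.8)
# -> []
```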
def find_similar_lines(search_lines, content_lines, threshold=0.6):


@@ -111,9 +111,9 @@ class EditBlockFunctionCoder(Coder):
updated = get_arg(edit, "updated_lines")
# gpt-3.5 returns lists even when instructed to return a string!
if self.code_format == "list" or type(original) == list:
if self.code_format == "list" or type(original) is list:
original = "\n".join(original)
if self.code_format == "list" or type(updated) == list:
if self.code_format == "list" or type(updated) is list:
updated = "\n".join(updated)
if original and not original.endswith("\n"):


@@ -14,16 +14,45 @@ If the request is ambiguous, ask questions.
Always reply to the user in the same language they are using.
Once you understand the request you MUST:
1. Decide if you need to propose *SEARCH/REPLACE* edits to any files that haven't been added to the chat. You can create new files without asking. But if you need to propose edits to existing files not already added to the chat, you *MUST* tell the user their full path names and ask them to *add the files to the chat*. End your reply and wait for their approval. You can keep asking if you then decide you need to edit more files.
2. Think step-by-step and explain the needed changes with a numbered list of short sentences.
3. Describe each change with a *SEARCH/REPLACE block* per the examples below. All changes to files must use this *SEARCH/REPLACE block* format. ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
All changes to files must use the *SEARCH/REPLACE block* format.
1. Decide if you need to propose *SEARCH/REPLACE* edits to any files that haven't been added to the chat. You can create new files without asking!
Keep this info about the user's system in mind:
{platform}
But if you need to propose edits to existing files not already added to the chat, you *MUST* tell the user their full path names and ask them to *add the files to the chat*.
End your reply and wait for their approval.
You can keep asking if you then decide you need to edit more files.
2. Think step-by-step and explain the needed changes in a few short sentences.
3. Describe each change with a *SEARCH/REPLACE block* per the examples below.
All changes to files must use this *SEARCH/REPLACE block* format.
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
{shell_cmd_prompt}
"""
shell_cmd_prompt = """
4. *Concisely* suggest any shell commands the user might want to run in ```bash blocks.
Just suggest shell commands this way, not example code.
Only suggest complete shell commands that are ready to execute, without placeholders.
Only suggest at most a few shell commands at a time, not more than 1-3.
Use the appropriate shell based on the user's system info:
{platform}
Examples of when to suggest shell commands:
- If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
- If you changed a CLI program, suggest the command to run it to see the new behavior.
- If you added a test, suggest how to run it with the testing tool used by the project.
- Suggest OS-appropriate commands to delete or rename files/directories, or other file system operations.
- If your code changes add new dependencies, suggest the command to install them.
- Etc.
"""
no_shell_cmd_prompt = """
Keep in mind these details about the user's platform and environment:
{platform}
"""
example_messages = [
dict(
role="user",
@@ -116,7 +145,7 @@ from hello import hello
system_reminder = """# *SEARCH/REPLACE block* Rules:
Every *SEARCH/REPLACE block* must use this format:
1. The file path alone on a line, verbatim. No bold asterisks, no quotes around it, no escaping of characters, etc.
1. The *FULL* file path alone on a line, verbatim. No bold asterisks, no quotes around it, no escaping of characters, etc.
2. The opening fence and code language, eg: {fence[0]}python
3. The start of search block: <<<<<<< SEARCH
4. A contiguous chunk of lines to search for in the existing source code
@@ -125,11 +154,14 @@ Every *SEARCH/REPLACE block* must use this format:
7. The end of the replace block: >>>>>>> REPLACE
8. The closing fence: {fence[1]}
Use the *FULL* file path, as shown to you by the user.
Every *SEARCH* section must *EXACTLY MATCH* the existing file content, character for character, including all comments, docstrings, etc.
If the file contains code or other data wrapped/escaped in json/xml/quotes or other containers, you need to propose edits to the literal contents of the file, including the container markup.
*SEARCH/REPLACE* blocks will replace *all* matching occurrences.
Include enough lines to make the SEARCH blocks uniquely match the lines to change.
*SEARCH/REPLACE* blocks will *only* replace the first match occurrence.
Include multiple unique *SEARCH/REPLACE* blocks if needed.
Include enough lines in each SEARCH section to uniquely match each set of lines that need to change.
Keep *SEARCH/REPLACE* blocks concise.
Break large *SEARCH/REPLACE* blocks into a series of smaller blocks that each change a small portion of the file.
@@ -140,11 +172,27 @@ Only create *SEARCH/REPLACE* blocks for files that the user has added to the chat!
To move code within a file, use 2 *SEARCH/REPLACE* blocks: 1 to delete it from its current location, 1 to insert it in the new location.
Pay attention to which filenames the user wants you to edit, especially if they are asking you to create a new file.
If you want to put code in a new file, use a *SEARCH/REPLACE block* with:
- A new file path, including dir name if needed
- An empty `SEARCH` section
- The new file's contents in the `REPLACE` section
To rename files which have been added to the chat, use shell commands at the end of your response.
{lazy_prompt}
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
{shell_cmd_reminder}
"""
shell_cmd_reminder = """
Examples of when to suggest shell commands:
- If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
- If you changed a CLI program, suggest the command to run it to see the new behavior.
- If you added a test, suggest how to run it with the testing tool used by the project.
- Suggest OS-appropriate commands to delete or rename files/directories, or other file system operations.
- If your code changes add new dependencies, suggest the command to install them.
- Etc.
"""


@@ -0,0 +1,7 @@
from .editblock_coder import EditBlockCoder
from .editor_editblock_prompts import EditorEditBlockPrompts
class EditorEditBlockCoder(EditBlockCoder):
edit_format = "editor-diff"
gpt_prompts = EditorEditBlockPrompts()


@@ -0,0 +1,16 @@
# flake8: noqa: E501
from .editblock_prompts import EditBlockPrompts
class EditorEditBlockPrompts(EditBlockPrompts):
main_system = """Act as an expert software developer who edits source code.
{lazy_prompt}
Describe each change with a *SEARCH/REPLACE block* per the examples below.
All changes to files must use this *SEARCH/REPLACE block* format.
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
"""
shell_cmd_prompt = ""
no_shell_cmd_prompt = ""
shell_cmd_reminder = ""


@@ -0,0 +1,7 @@
from .editor_whole_prompts import EditorWholeFilePrompts
from .wholefile_coder import WholeFileCoder
class EditorWholeFileCoder(WholeFileCoder):
edit_format = "editor-whole"
gpt_prompts = EditorWholeFilePrompts()


@@ -0,0 +1,10 @@
# flake8: noqa: E501
from .wholefile_prompts import WholeFilePrompts
class EditorWholeFilePrompts(WholeFilePrompts):
main_system = """Act as an expert software developer and make changes to source code.
{lazy_prompt}
Output a copy of each file that needs changes.
"""


@@ -484,7 +484,7 @@ def git_cherry_pick_osr_onto_o(texts):
# cherry pick R onto original
try:
repo.git.cherry_pick(replace_hash, "--minimal")
except git.exc.GitCommandError:
except (git.exc.ODBError, git.exc.GitError):
# merge conflicts!
return
@@ -522,7 +522,7 @@ def git_cherry_pick_sr_onto_so(texts):
# cherry pick replace onto original
try:
repo.git.cherry_pick(replace_hash, "--minimal")
except git.exc.GitCommandError:
except (git.exc.ODBError, git.exc.GitError):
# merge conflicts!
return


@@ -6,13 +6,15 @@ from .single_wholefile_func_prompts import SingleWholeFileFunctionPrompts
class SingleWholeFileFunctionCoder(Coder):
edit_format = "func"
functions = [
dict(
name="write_file",
description="write new content into the file",
# strict=True,
parameters=dict(
type="object",
required=["explanation", "content"],
properties=dict(
explanation=dict(
type="string",
@@ -26,12 +28,13 @@ class SingleWholeFileFunctionCoder(Coder):
description="Content to write to the file",
),
),
required=["explanation", "content"],
additionalProperties=False,
),
),
]
def __init__(self, *args, **kwargs):
raise RuntimeError("Deprecated, needs to be refactored to support get_edits/apply_edits")
self.gpt_prompts = SingleWholeFileFunctionPrompts()
super().__init__(*args, **kwargs)
@@ -44,33 +47,19 @@ class SingleWholeFileFunctionCoder(Coder):
self.cur_messages += [dict(role="assistant", content=self.partial_response_content)]
def render_incremental_response(self, final=False):
res = ""
if self.partial_response_content:
return self.partial_response_content
res += self.partial_response_content
args = self.parse_partial_args()
return str(args)
if not args:
return
return ""
explanation = args.get("explanation")
files = args.get("files", [])
res = ""
if explanation:
res += f"{explanation}\n\n"
for i, file_upd in enumerate(files):
path = file_upd.get("path")
if not path:
continue
content = file_upd.get("content")
if not content:
continue
this_final = (i < len(files) - 1) or final
res += self.live_diffs(path, content, this_final)
for k, v in args.items():
res += "\n"
res += f"{k}:\n"
res += v
return res
@@ -95,18 +84,19 @@ class SingleWholeFileFunctionCoder(Coder):
return "\n".join(show_diff)
def _update_files(self):
name = self.partial_response_function_call.get("name")
if name and name != "write_file":
raise ValueError(f'Unknown function_call name="{name}", use name="write_file"')
def get_edits(self):
chat_files = self.get_inchat_relative_files()
assert len(chat_files) == 1, chat_files
args = self.parse_partial_args()
if not args:
return
return []
content = args["content"]
path = self.get_inchat_relative_files()[0]
if self.allowed_to_edit(path, content):
return set([path])
res = chat_files[0], args["content"]
dump(res)
return [res]
return set()
def apply_edits(self, edits):
for path, content in edits:
full_path = self.abs_root_path(path)
self.io.write_text(full_path, content)


@@ -9,17 +9,10 @@ from .wholefile_prompts import WholeFilePrompts
class WholeFileCoder(Coder):
"""A coder that operates on entire files for code modifications."""
edit_format = "whole"
gpt_prompts = WholeFilePrompts()
def update_cur_messages(self, edited):
if edited:
self.cur_messages += [
dict(role="assistant", content=self.gpt_prompts.redacted_edit_message)
]
else:
self.cur_messages += [dict(role="assistant", content=self.partial_response_content)]
def render_incremental_response(self, final):
try:
return self.get_edits(mode="diff")
@@ -65,6 +58,12 @@ class WholeFileCoder(Coder):
fname = fname.strip("*") # handle **filename.py**
fname = fname.rstrip(":")
fname = fname.strip("`")
fname = fname.lstrip("#")
fname = fname.strip()
# Issue #1232
if len(fname) > 250:
fname = ""
# Did gpt prepend a bogus dir? It especially likes to
# include the path/to prefix from the one-shot example in
@@ -130,15 +129,16 @@ class WholeFileCoder(Coder):
def do_live_diff(self, full_path, new_lines, final):
if Path(full_path).exists():
orig_lines = self.io.read_text(full_path).splitlines(keepends=True)
orig_lines = self.io.read_text(full_path)
if orig_lines is not None:
orig_lines = orig_lines.splitlines(keepends=True)
show_diff = diffs.diff_partial_update(
orig_lines,
new_lines,
final=final,
).splitlines()
output = show_diff
else:
output = ["```"] + new_lines + ["```"]
show_diff = diffs.diff_partial_update(
orig_lines,
new_lines,
final=final,
).splitlines()
return show_diff
output = ["```"] + new_lines + ["```"]
return output


@@ -52,7 +52,7 @@ path/to/filename.js
{fence[1]}
Every *file listing* MUST use this format:
- First line: the filename with any originally provided path
- First line: the filename with any originally provided path; no extra markup, punctuation, comments, etc. **JUST** the filename with path.
- Second line: opening {fence[0]}
- ... entire content of the file ...
- Final line: closing {fence[1]}
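
Concretely, a listing that satisfies these rules looks like the following, assuming the default triple-backtick fence for {fence[0]} and {fence[1]}:

path/to/filename.py
```
def main():
    print("entire file content, verbatim")
```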

View file

@ -1,19 +1,24 @@
import glob
import os
import re
import subprocess
import sys
import tempfile
from collections import OrderedDict
from os.path import expanduser
from pathlib import Path
import git
import pyperclip
from PIL import Image, ImageGrab
from rich.text import Text
from prompt_toolkit.completion import Completion, PathCompleter
from prompt_toolkit.document import Document
from aider import models, prompts, voice
from aider.format_settings import format_settings
from aider.help import Help, install_help_extra
from aider.llm import litellm
from aider.repo import ANY_GIT_ERROR
from aider.run_cmd import run_cmd
from aider.scrape import Scraper, install_playwright
from aider.utils import is_image_file
@ -29,9 +34,24 @@ class Commands:
voice = None
scraper = None
def __init__(self, io, coder, voice_language=None, verify_ssl=True):
def clone(self):
return Commands(
self.io,
None,
voice_language=self.voice_language,
verify_ssl=self.verify_ssl,
args=self.args,
parser=self.parser,
)
def __init__(
self, io, coder, voice_language=None, verify_ssl=True, args=None, parser=None, verbose=False
):
self.io = io
self.coder = coder
self.parser = parser
self.args = args
self.verbose = verbose
self.verify_ssl = verify_ssl
if voice_language == "auto":
@ -119,8 +139,8 @@ class Commands:
else:
self.io.tool_output("Please provide a partial model name to search for.")
def cmd_web(self, args, paginate=True):
"Scrape a webpage, convert to markdown and add to the chat"
def cmd_web(self, args):
"Scrape a webpage, convert to markdown and send in a message"
url = args.strip()
if not url:
@ -131,7 +151,7 @@ class Commands:
if not self.scraper:
res = install_playwright(self.io)
if not res:
self.io.tool_error("Unable to initialize playwright.")
self.io.tool_warning("Unable to initialize playwright.")
self.scraper = Scraper(
print_error=self.io.tool_error, playwright_available=res, verify_ssl=self.verify_ssl
@ -142,19 +162,24 @@ class Commands:
self.io.tool_output("... done.")
if paginate:
with self.io.console.pager():
self.io.console.print(Text(content))
return content
def is_command(self, inp):
return inp[0] in "/!"
def get_raw_completions(self, cmd):
assert cmd.startswith("/")
cmd = cmd[1:]
cmd = cmd.replace("-", "_")
raw_completer = getattr(self, f"completions_raw_{cmd}", None)
return raw_completer
def get_completions(self, cmd):
assert cmd.startswith("/")
cmd = cmd[1:]
cmd = cmd.replace("-", "_")
fun = getattr(self, f"completions_{cmd}", None)
if not fun:
return
@ -175,10 +200,14 @@ class Commands:
cmd_name = cmd_name.replace("-", "_")
cmd_method_name = f"cmd_{cmd_name}"
cmd_method = getattr(self, cmd_method_name, None)
if cmd_method:
return cmd_method(args)
else:
if not cmd_method:
self.io.tool_output(f"Error: Command {cmd_name} not found.")
return
try:
return cmd_method(args)
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to complete {cmd_name}: {err}")
def matching_commands(self, inp):
words = inp.strip().split()
@ -186,7 +215,7 @@ class Commands:
return
first_word = words[0]
rest_inp = inp[len(words[0]) :]
rest_inp = inp[len(words[0]) :].strip()
all_commands = self.get_commands()
matching_commands = [cmd for cmd in all_commands if cmd.startswith(first_word)]
@ -219,20 +248,25 @@ class Commands:
def cmd_commit(self, args=None):
"Commit edits to the repo made outside the chat (commit message optional)"
try:
self.raw_cmd_commit(args)
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to complete commit: {err}")
def raw_cmd_commit(self, args=None):
if not self.coder.repo:
self.io.tool_error("No git repository found.")
return
if not self.coder.repo.is_dirty():
self.io.tool_error("No more changes to commit.")
self.io.tool_warning("No more changes to commit.")
return
commit_message = args.strip() if args else None
self.coder.repo.commit(message=commit_message)
def cmd_lint(self, args="", fnames=None):
"Lint and fix provided files or in-chat files if none provided"
"Lint and fix in-chat files or all dirty files if none in chat"
if not self.coder.repo:
self.io.tool_error("No git repository found.")
@ -246,7 +280,7 @@ class Commands:
fnames = self.coder.repo.get_dirty_files()
if not fnames:
self.io.tool_error("No dirty files to lint.")
self.io.tool_warning("No dirty files to lint.")
return
fnames = [self.coder.abs_root_path(fname) for fname in fnames]
@ -257,18 +291,18 @@ class Commands:
errors = self.coder.linter.lint(fname)
except FileNotFoundError as err:
self.io.tool_error(f"Unable to lint {fname}")
self.io.tool_error(str(err))
self.io.tool_output(str(err))
continue
if not errors:
continue
self.io.tool_error(errors)
self.io.tool_output(errors)
if not self.io.confirm_ask(f"Fix lint errors in {fname}?", default="y"):
continue
# Commit everything before we start fixing lint errors
if self.coder.repo.is_dirty():
if self.coder.repo.is_dirty() and self.coder.dirty_commits:
self.cmd_commit("")
if not lint_coder:
@ -283,15 +317,28 @@ class Commands:
lint_coder.run(errors)
lint_coder.abs_fnames = set()
if lint_coder and self.coder.repo.is_dirty():
if lint_coder and self.coder.repo.is_dirty() and self.coder.auto_commits:
self.cmd_commit("")
def cmd_clear(self, args):
"Clear the chat history"
self._clear_chat_history()
def _drop_all_files(self):
self.coder.abs_fnames = set()
self.coder.abs_read_only_fnames = set()
def _clear_chat_history(self):
self.coder.done_messages = []
self.coder.cur_messages = []
def cmd_reset(self, args):
"Drop all files and clear the chat history"
self._drop_all_files()
self._clear_chat_history()
self.io.tool_output("All files dropped and chat history cleared.")
def cmd_tokens(self, args):
"Report on the number of tokens used by the current chat context"
@ -398,15 +445,37 @@ class Commands:
def cmd_undo(self, args):
"Undo the last git commit if it was done by aider"
try:
self.raw_cmd_undo(args)
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to complete undo: {err}")
def raw_cmd_undo(self, args):
if not self.coder.repo:
self.io.tool_error("No git repository found.")
return
last_commit = self.coder.repo.repo.head.commit
if not last_commit.parents:
last_commit = self.coder.repo.get_head_commit()
if not last_commit or not last_commit.parents:
self.io.tool_error("This is the first commit in the repository. Cannot undo.")
return
last_commit_hash = self.coder.repo.get_head_commit_sha(short=True)
last_commit_message = self.coder.repo.get_head_commit_message("(unknown)").strip()
if last_commit_hash not in self.coder.aider_commit_hashes:
self.io.tool_error("The last commit was not made by aider in this chat session.")
self.io.tool_output(
"You could try `/git reset --hard HEAD^` but be aware that this is a destructive"
" command!"
)
return
if len(last_commit.parents) > 1:
self.io.tool_error(
f"The last commit {last_commit.hexsha} has more than 1 parent, can't undo."
)
return
prev_commit = last_commit.parents[0]
changed_files_last_commit = [item.a_path for item in last_commit.diff(prev_commit)]
@ -432,7 +501,7 @@ class Commands:
try:
remote_head = self.coder.repo.repo.git.rev_parse(f"origin/{current_branch}")
has_origin = True
except git.exc.GitCommandError:
except ANY_GIT_ERROR:
has_origin = False
if has_origin:
@ -443,19 +512,25 @@ class Commands:
)
return
last_commit_hash = self.coder.repo.repo.head.commit.hexsha[:7]
last_commit_message = self.coder.repo.repo.head.commit.message.strip()
if last_commit_hash not in self.coder.aider_commit_hashes:
self.io.tool_error("The last commit was not made by aider in this chat session.")
self.io.tool_error(
"You could try `/git reset --hard HEAD^` but be aware that this is a destructive"
" command!"
)
return
# Reset only the files which are part of `last_commit`
restored = set()
unrestored = set()
for file_path in changed_files_last_commit:
self.coder.repo.repo.git.checkout("HEAD~1", file_path)
try:
self.coder.repo.repo.git.checkout("HEAD~1", file_path)
restored.add(file_path)
except ANY_GIT_ERROR:
unrestored.add(file_path)
if unrestored:
self.io.tool_error(f"Error restoring {file_path}, aborting undo.")
self.io.tool_output("Restored files:")
for file in restored:
self.io.tool_output(f" {file}")
self.io.tool_output("Unable to restore files:")
for file in unrestored:
self.io.tool_output(f" {file}")
return
# Move the HEAD back before the latest commit
self.coder.repo.repo.git.reset("--soft", "HEAD~1")
@ -463,8 +538,8 @@ class Commands:
self.io.tool_output(f"Removed: {last_commit_hash} {last_commit_message}")
# Get the current HEAD after undo
current_head_hash = self.coder.repo.repo.head.commit.hexsha[:7]
current_head_message = self.coder.repo.repo.head.commit.message.strip()
current_head_hash = self.coder.repo.get_head_commit_sha(short=True)
current_head_message = self.coder.repo.get_head_commit_message("(unknown)").strip()
self.io.tool_output(f"Now at: {current_head_hash} {current_head_message}")
if self.coder.main_model.send_undo_reply:
@ -472,11 +547,17 @@ class Commands:
def cmd_diff(self, args=""):
"Display the diff of changes since the last message"
try:
self.raw_cmd_diff(args)
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to complete diff: {err}")
def raw_cmd_diff(self, args=""):
if not self.coder.repo:
self.io.tool_error("No git repository found.")
return
current_head = self.coder.repo.get_head()
current_head = self.coder.repo.get_head_commit_sha()
if current_head is None:
self.io.tool_error("Unable to get current commit. The repository might be empty.")
return
@ -487,7 +568,7 @@ class Commands:
commit_before_message = self.coder.commit_before_message[-2]
if not commit_before_message or commit_before_message == current_head:
self.io.tool_error("No changes to display since the last message.")
self.io.tool_warning("No changes to display since the last message.")
return
self.io.tool_output(f"Diff since {commit_before_message[:7]}...")
@ -498,16 +579,69 @@ class Commands:
"HEAD",
)
# don't use io.tool_output() because we don't want to log or further colorize
print(diff)
self.io.print(diff)
def quote_fname(self, fname):
if " " in fname and '"' not in fname:
fname = f'"{fname}"'
return fname
def completions_read(self):
return self.completions_add()
def completions_raw_read_only(self, document, complete_event):
# Get the text before the cursor
text = document.text_before_cursor
# Complete on the last whitespace-separated token (the path being typed)
after_command = text.split()[-1]
# Create a new Document object with the text after the command
new_document = Document(after_command, cursor_position=len(after_command))
def get_paths():
return [self.coder.root] if self.coder.root else None
path_completer = PathCompleter(
get_paths=get_paths,
only_directories=False,
expanduser=True,
)
# Adjust the start_position to replace all of 'after_command'
adjusted_start_position = -len(after_command)
# Collect all completions
all_completions = []
# Iterate over the completions and modify them
for completion in path_completer.get_completions(new_document, complete_event):
quoted_text = self.quote_fname(after_command + completion.text)
all_completions.append(
Completion(
text=quoted_text,
start_position=adjusted_start_position,
display=completion.display,
style=completion.style,
selected_style=completion.selected_style,
)
)
# Add completions from the 'add' command
add_completions = self.completions_add()
for completion in add_completions:
if after_command in completion:
all_completions.append(
Completion(
text=completion,
start_position=adjusted_start_position,
display=completion,
)
)
# Sort all completions based on their text
sorted_completions = sorted(all_completions, key=lambda c: c.text)
# Yield the sorted completions
for completion in sorted_completions:
yield completion
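
The raw completer above drives prompt_toolkit's PathCompleter directly. A standalone sketch of that piece, with an illustrative "ai" prefix standing in for the user's partial input:

from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.document import Document

completer = PathCompleter(get_paths=lambda: ["."], only_directories=False, expanduser=True)
doc = Document("ai", cursor_position=2)
for completion in completer.get_completions(doc, None):
    print(completion.text)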
def completions_add(self):
files = set(self.coder.get_all_relative_files())
@ -516,12 +650,17 @@ class Commands:
return files
def glob_filtered_to_repo(self, pattern):
if not pattern.strip():
return []
try:
if os.path.isabs(pattern):
# Handle absolute paths
raw_matched_files = [Path(pattern)]
else:
raw_matched_files = list(Path(self.coder.root).glob(pattern))
try:
raw_matched_files = list(Path(self.coder.root).glob(pattern))
except (IndexError, AttributeError):
raw_matched_files = []
except ValueError as err:
self.io.tool_error(f"Error matching {pattern}: {err}")
raw_matched_files = []
@ -531,9 +670,9 @@ class Commands:
matched_files += expand_subdir(fn)
matched_files = [
str(Path(fn).relative_to(self.coder.root))
fn.relative_to(self.coder.root)
for fn in matched_files
if Path(fn).is_relative_to(self.coder.root)
if fn.is_relative_to(self.coder.root)
]
# if repo, filter against it
@ -545,9 +684,7 @@ class Commands:
return res
def cmd_add(self, args):
"Add files to the chat so GPT can edit them or review them in detail"
added_fnames = []
"Add files to the chat so aider can edit them or review them in detail"
all_matched_files = set()
@ -559,7 +696,7 @@ class Commands:
fname = Path(self.coder.root) / word
if self.coder.repo and self.coder.repo.ignored_file(fname):
self.io.tool_error(f"Skipping {fname} that matches aiderignore spec.")
self.io.tool_warning(f"Skipping {fname} due to aiderignore or --subtree-only.")
continue
if fname.exists():
@ -574,17 +711,25 @@ class Commands:
all_matched_files.update(matched_files)
continue
if self.io.confirm_ask(f"No files matched '{word}'. Do you want to create {fname}?"):
if "*" in str(fname) or "?" in str(fname):
self.io.tool_error(f"Cannot create file with wildcard characters: {fname}")
else:
try:
fname.touch()
all_matched_files.add(str(fname))
except OSError as e:
self.io.tool_error(f"Error creating file {fname}: {e}")
if "*" in str(fname) or "?" in str(fname):
self.io.tool_error(
f"No match, and cannot create file with wildcard characters: {fname}"
)
continue
for matched_file in all_matched_files:
if fname.exists() and fname.is_dir() and self.coder.repo:
self.io.tool_error(f"Directory {fname} is not in git.")
self.io.tool_output(f"You can add to git with: /git add {fname}")
continue
if self.io.confirm_ask(f"No files matched '{word}'. Do you want to create {fname}?"):
try:
fname.touch()
all_matched_files.add(str(fname))
except OSError as e:
self.io.tool_error(f"Error creating file {fname}: {e}")
for matched_file in sorted(all_matched_files):
abs_file_path = self.coder.abs_root_path(matched_file)
if not abs_file_path.startswith(self.coder.root) and not is_image_file(matched_file):
@ -594,7 +739,8 @@ class Commands:
continue
if abs_file_path in self.coder.abs_fnames:
self.io.tool_error(f"{matched_file} is already in the chat")
self.io.tool_error(f"{matched_file} is already in the chat as an editable file")
continue
elif abs_file_path in self.coder.abs_read_only_fnames:
if self.coder.repo and self.coder.repo.path_in_repo(matched_file):
self.coder.abs_read_only_fnames.remove(abs_file_path)
@ -602,17 +748,17 @@ class Commands:
self.io.tool_output(
f"Moved {matched_file} from read-only to editable files in the chat"
)
added_fnames.append(matched_file)
else:
self.io.tool_error(
f"Cannot add {matched_file} as it's not part of the repository"
)
else:
if is_image_file(matched_file) and not self.coder.main_model.accepts_images:
if is_image_file(matched_file) and not self.coder.main_model.info.get(
"supports_vision"
):
self.io.tool_error(
f"Cannot add image file {matched_file} as the"
f" {self.coder.main_model.name} does not support image.\nYou can run `aider"
" --4-turbo-vision` to use GPT-4 Turbo with Vision."
f" {self.coder.main_model.name} does not support images."
)
continue
content = self.io.read_text(abs_file_path)
@ -622,7 +768,6 @@ class Commands:
self.coder.abs_fnames.add(abs_file_path)
self.io.tool_output(f"Added {matched_file} to the chat")
self.coder.check_added_files()
added_fnames.append(matched_file)
def completions_drop(self):
files = self.coder.get_inchat_relative_files()
@ -636,24 +781,26 @@ class Commands:
if not args.strip():
self.io.tool_output("Dropping all files from the chat session.")
self.coder.abs_fnames = set()
self.coder.abs_read_only_fnames = set()
self._drop_all_files()
return
filenames = parse_quoted_filenames(args)
for word in filenames:
# Expand tilde in the path
expanded_word = os.path.expanduser(word)
# Handle read-only files separately, without glob_filtered_to_repo
read_only_matched = [f for f in self.coder.abs_read_only_fnames if word in f]
read_only_matched = [f for f in self.coder.abs_read_only_fnames if expanded_word in f]
if read_only_matched:
for matched_file in read_only_matched:
self.coder.abs_read_only_fnames.remove(matched_file)
self.io.tool_output(f"Removed read-only file {matched_file} from the chat")
matched_files = self.glob_filtered_to_repo(word)
matched_files = self.glob_filtered_to_repo(expanded_word)
if not matched_files:
matched_files.append(word)
matched_files.append(expanded_word)
for matched_file in matched_files:
abs_fname = self.coder.abs_root_path(matched_file)
@ -662,7 +809,7 @@ class Commands:
self.io.tool_output(f"Removed {matched_file} from the chat")
def cmd_git(self, args):
"Run a git command"
"Run a git command (output excluded from chat)"
combined_output = None
try:
args = "git " + args
@ -692,45 +839,39 @@ class Commands:
if not args and self.coder.test_cmd:
args = self.coder.test_cmd
if not args:
return
if not callable(args):
if type(args) is not str:
raise ValueError(repr(args))
return self.cmd_run(args, True)
errors = args()
if not errors:
return
self.io.tool_error(errors, strip=False)
self.io.tool_output(errors)
return errors
def cmd_run(self, args, add_on_nonzero_exit=False):
"Run a shell command and optionally add the output to the chat (alias: !)"
combined_output = None
exit_status, combined_output = run_cmd(
args, verbose=self.verbose, error_print=self.io.tool_error
)
instructions = None
try:
result = subprocess.run(
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
shell=True,
encoding=self.io.encoding,
errors="replace",
)
combined_output = result.stdout
except Exception as e:
self.io.tool_error(f"Error running command: {e}")
if combined_output is None:
return
self.io.tool_output(combined_output)
if add_on_nonzero_exit:
add = result.returncode != 0
add = exit_status != 0
else:
self.io.tool_output()
response = self.io.prompt_ask(
"Add the output to the chat?\n(Y/n/instructions)", default=""
"Add the output to the chat?\n(Y)es/(n)o/message with instructions:",
).strip()
self.io.tool_output()
if response.lower() in ["yes", "y"]:
add = True
@ -739,6 +880,9 @@ class Commands:
else:
add = True
instructions = response
if response.strip():
self.io.user_input(response, log_only=True)
self.io.add_to_input_history(response)
if add:
for line in combined_output.splitlines():
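
The subprocess handling that used to live inline here moved into aider.run_cmd; as called above, run_cmd returns an (exit_status, combined_output) pair. A rough standalone equivalent of the removed inline logic (a sketch, not the real run_cmd):

import subprocess

def run_cmd_sketch(command, encoding="utf-8"):
    # Run through the shell, folding stderr into stdout.
    result = subprocess.run(
        command,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        encoding=encoding,
        errors="replace",
    )
    return result.returncode, result.stdout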
@ -868,14 +1012,6 @@ class Commands:
show_announcements=False,
)
def clone(self):
return Commands(
self.io,
None,
voice_language=self.voice_language,
verify_ssl=self.verify_ssl,
)
def cmd_ask(self, args):
"Ask questions about the code base without editing any files"
return self._generic_chat_command(args, "ask")
@ -884,6 +1020,10 @@ class Commands:
"Ask for changes to your code"
return self._generic_chat_command(args, self.coder.main_model.edit_format)
def cmd_architect(self, args):
"Enter architect mode to discuss high-level design and architecture"
return self._generic_chat_command(args, "architect")
def _generic_chat_command(self, args, edit_format):
if not args.strip():
self.io.tool_error(f"Please provide a question or topic for the {edit_format} chat.")
@ -936,7 +1076,7 @@ class Commands:
self.io.tool_error("To use /voice you must provide an OpenAI API key.")
return
try:
self.voice = voice.Voice()
self.voice = voice.Voice(audio_format=self.args.voice_format)
except voice.SoundDeviceError:
self.io.tool_error(
"Unable to import `sounddevice` and/or `soundfile`, is portaudio installed?"
@ -968,14 +1108,15 @@ class Commands:
if text:
self.io.add_to_input_history(text)
print()
self.io.print()
self.io.user_input(text, log_only=False)
print()
self.io.print()
return text
def cmd_clipboard(self, args):
"Add image/text from the clipboard to the chat (optionally provide a name for the image)"
def cmd_paste(self, args):
"""Paste image/text from the clipboard into the chat.\
Optionally provide a name for the image."""
try:
# Check for image first
image = ImageGrab.grabclipboard()
@ -1023,25 +1164,72 @@ class Commands:
except Exception as e:
self.io.tool_error(f"Error processing clipboard content: {e}")
def cmd_read(self, args):
"Add a file to the chat that is for reference, not to be edited"
def cmd_read_only(self, args):
"Add files to the chat that are for reference, not to be edited"
if not args.strip():
self.io.tool_error("Please provide a filename to read.")
self.io.tool_error("Please provide filenames or directories to read.")
return
filename = args.strip()
abs_path = os.path.abspath(filename)
filenames = parse_quoted_filenames(args)
all_paths = []
if not os.path.exists(abs_path):
self.io.tool_error(f"File not found: {abs_path}")
# First collect all expanded paths
for pattern in filenames:
expanded_pattern = expanduser(pattern)
if os.path.isabs(expanded_pattern):
# For absolute paths, glob it
matches = list(glob.glob(expanded_pattern))
else:
# For relative paths and globs, use glob from the root directory
matches = list(Path(self.coder.root).glob(expanded_pattern))
if not matches:
self.io.tool_error(f"No matches found for: {pattern}")
else:
all_paths.extend(matches)
# Then process them in sorted order
for path in sorted(all_paths):
abs_path = self.coder.abs_root_path(path)
if os.path.isfile(abs_path):
self._add_read_only_file(abs_path, path)
elif os.path.isdir(abs_path):
self._add_read_only_directory(abs_path, path)
else:
self.io.tool_error(f"Not a file or directory: {abs_path}")
def _add_read_only_file(self, abs_path, original_name):
if abs_path in self.coder.abs_read_only_fnames:
self.io.tool_error(f"{original_name} is already in the chat as a read-only file")
return
elif abs_path in self.coder.abs_fnames:
self.coder.abs_fnames.remove(abs_path)
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(
f"Moved {original_name} from editable to read-only files in the chat"
)
else:
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(f"Added {original_name} to read-only files.")
if not os.path.isfile(abs_path):
self.io.tool_error(f"Not a file: {abs_path}")
return
def _add_read_only_directory(self, abs_path, original_name):
added_files = 0
for root, _, files in os.walk(abs_path):
for file in files:
file_path = os.path.join(root, file)
if (
file_path not in self.coder.abs_fnames
and file_path not in self.coder.abs_read_only_fnames
):
self.coder.abs_read_only_fnames.add(file_path)
added_files += 1
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(f"Added {abs_path} to read-only files.")
if added_files > 0:
self.io.tool_output(
f"Added {added_files} files from directory {original_name} to read-only files."
)
else:
self.io.tool_output(f"No new files added from directory {original_name}.")
def cmd_map(self, args):
"Print out the current repository map"
@ -1051,9 +1239,118 @@ class Commands:
else:
self.io.tool_output("No repository map available.")
def cmd_map_refresh(self, args):
"Force a refresh of the repository map"
repo_map = self.coder.get_repo_map(force_refresh=True)
if repo_map:
self.io.tool_output("The repo map has been refreshed, use /map to view it.")
def cmd_settings(self, args):
"Print out the current settings"
settings = format_settings(self.parser, self.args)
announcements = "\n".join(self.coder.get_announcements())
output = f"{announcements}\n{settings}"
self.io.tool_output(output)
def completions_raw_load(self, document, complete_event):
return self.completions_raw_read_only(document, complete_event)
def cmd_load(self, args):
"Load and execute commands from a file"
if not args.strip():
self.io.tool_error("Please provide a filename containing commands to load.")
return
try:
with open(args.strip(), "r", encoding=self.io.encoding, errors="replace") as f:
commands = f.readlines()
except FileNotFoundError:
self.io.tool_error(f"File not found: {args}")
return
except Exception as e:
self.io.tool_error(f"Error reading file: {e}")
return
for cmd in commands:
cmd = cmd.strip()
if not cmd or cmd.startswith("#"):
continue
self.io.tool_output(f"\nExecuting: {cmd}")
self.run(cmd)
def completions_raw_save(self, document, complete_event):
return self.completions_raw_read_only(document, complete_event)
def cmd_save(self, args):
"Save commands to a file that can reconstruct the current chat session's files"
if not args.strip():
self.io.tool_error("Please provide a filename to save the commands to.")
return
try:
with open(args.strip(), "w", encoding=self.io.encoding) as f:
# Write commands to add editable files
for fname in sorted(self.coder.abs_fnames):
rel_fname = self.coder.get_rel_fname(fname)
f.write(f"/add {rel_fname}\n")
# Write commands to add read-only files
for fname in sorted(self.coder.abs_read_only_fnames):
# Use absolute path for files outside repo root, relative path for files inside
if Path(fname).is_relative_to(self.coder.root):
rel_fname = self.coder.get_rel_fname(fname)
f.write(f"/read-only {rel_fname}\n")
else:
f.write(f"/read-only {fname}\n")
self.io.tool_output(f"Saved commands to {args.strip()}")
except Exception as e:
self.io.tool_error(f"Error saving commands to file: {e}")
def cmd_copy(self, args):
"Copy the last assistant message to the clipboard"
all_messages = self.coder.done_messages + self.coder.cur_messages
assistant_messages = [msg for msg in reversed(all_messages) if msg["role"] == "assistant"]
if not assistant_messages:
self.io.tool_error("No assistant messages found to copy.")
return
last_assistant_message = assistant_messages[0]["content"]
try:
pyperclip.copy(last_assistant_message)
preview = (
last_assistant_message[:50] + "..."
if len(last_assistant_message) > 50
else last_assistant_message
)
self.io.tool_output(f"Copied last assistant message to clipboard. Preview: {preview}")
except pyperclip.PyperclipException as e:
self.io.tool_error(f"Failed to copy to clipboard: {str(e)}")
self.io.tool_output(
"You may need to install xclip or xsel on Linux, or pbcopy on macOS."
)
except Exception as e:
self.io.tool_error(f"An unexpected error occurred while copying to clipboard: {str(e)}")
def cmd_report(self, args):
"Report a problem by opening a GitHub Issue"
from aider.report import report_github_issue
announcements = "\n".join(self.coder.get_announcements())
issue_text = announcements
if args.strip():
title = args.strip()
else:
title = None
report_github_issue(issue_text, title=title, confirm=False)
def expand_subdir(file_path):
file_path = Path(file_path)
if file_path.is_file():
yield file_path
return
@ -1061,7 +1358,7 @@ def expand_subdir(file_path):
if file_path.is_dir():
for file in file_path.rglob("*"):
if file.is_file():
yield str(file)
yield file
def parse_quoted_filenames(args):
@ -1071,11 +1368,7 @@ def parse_quoted_filenames(args):
def get_help_md():
from aider.coders import Coder
from aider.models import Model
coder = Coder(Model("gpt-3.5-turbo"), None)
md = coder.commands.get_help_md()
md = Commands(None, None).get_help_md()
return md

26
aider/format_settings.py Normal file
View file

@ -0,0 +1,26 @@
def scrub_sensitive_info(args, text):
# Replace sensitive information with last 4 characters
if text and args.openai_api_key:
last_4 = args.openai_api_key[-4:]
text = text.replace(args.openai_api_key, f"...{last_4}")
if text and args.anthropic_api_key:
last_4 = args.anthropic_api_key[-4:]
text = text.replace(args.anthropic_api_key, f"...{last_4}")
return text
def format_settings(parser, args):
show = scrub_sensitive_info(args, parser.format_values())
# clean up the headings for consistency w/ new lines
heading_env = "Environment Variables:"
heading_defaults = "Defaults:"
if heading_env in show:
show = show.replace(heading_env, "\n" + heading_env)
show = show.replace(heading_defaults, "\n" + heading_defaults)
show += "\n"
show += "Option settings:\n"
for arg, val in sorted(vars(args).items()):
if val:
val = scrub_sensitive_info(args, str(val))
show += f" - {arg}: {val}\n" # noqa: E221
return show
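
A quick usage sketch for scrub_sensitive_info; SimpleNamespace stands in for the parsed argument namespace, and the key is fake:

from types import SimpleNamespace

args = SimpleNamespace(openai_api_key="sk-test-1234abcd", anthropic_api_key=None)
print(scrub_sensitive_info(args, "openai_api_key: sk-test-1234abcd"))
# prints: openai_api_key: ...abcd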

View file

@ -26,6 +26,10 @@ class CaptureIO(InputOutput):
self.lines.append(msg)
super().tool_error(msg)
def tool_warning(self, msg):
self.lines.append(msg)
super().tool_warning(msg)
def get_captured_lines(self):
lines = self.lines
self.lines = []
@ -156,7 +160,7 @@ class GUI:
st.warning(
"This browser version of aider is experimental. Please share feedback in [GitHub"
" issues](https://github.com/paul-gauthier/aider/issues)."
" issues](https://github.com/Aider-AI/aider/issues)."
)
def do_settings_tab(self):
@ -524,7 +528,7 @@ def gui_main():
page_icon=urls.favicon,
menu_items={
"Get Help": urls.website,
"Report a bug": "https://github.com/paul-gauthier/aider/issues",
"Report a bug": "https://github.com/Aider-AI/aider/issues",
"About": "# Aider\nAI pair programming in your browser.",
},
)

View file

@ -1,6 +1,8 @@
#!/usr/bin/env python
import json
import os
import shutil
import warnings
from pathlib import Path
@ -38,24 +40,45 @@ def get_package_files():
def fname_to_url(filepath):
website = "website/"
index = "/index.md"
website = "website"
index = "index.md"
md = ".md"
docid = ""
if filepath.startswith("website/_includes/"):
pass
elif filepath.startswith(website):
docid = filepath[len(website) :]
# Convert backslashes to forward slashes for consistency
filepath = filepath.replace("\\", "/")
if filepath.endswith(index):
filepath = filepath[: -len(index)] + "/"
elif filepath.endswith(md):
filepath = filepath[: -len(md)] + ".html"
# Convert to Path object for easier manipulation
path = Path(filepath)
docid = "https://aider.chat/" + filepath
# Split the path into parts
parts = path.parts
return docid
# Find the 'website' part in the path
try:
website_index = [p.lower() for p in parts].index(website.lower())
except ValueError:
return "" # 'website' not found in the path
# Extract the part of the path starting from 'website'
relevant_parts = parts[website_index + 1 :]
# Handle _includes directory
if relevant_parts and relevant_parts[0].lower() == "_includes":
return ""
# Join the remaining parts
url_path = "/".join(relevant_parts)
# Handle index.md and other .md files
if url_path.lower().endswith(index.lower()):
url_path = url_path[: -len(index)]
elif url_path.lower().endswith(md.lower()):
url_path = url_path[: -len(md)] + ".html"
# Strip any leading/trailing slashes before building the final URL
url_path = url_path.strip("/")
return f"https://aider.chat/{url_path}"
def get_index():
@ -69,12 +92,17 @@ def get_index():
dname = Path.home() / ".aider" / "caches" / ("help." + __version__)
if dname.exists():
storage_context = StorageContext.from_defaults(
persist_dir=dname,
)
index = load_index_from_storage(storage_context)
else:
index = None
try:
if dname.exists():
storage_context = StorageContext.from_defaults(
persist_dir=dname,
)
index = load_index_from_storage(storage_context)
except (OSError, json.JSONDecodeError):
shutil.rmtree(dname)
if index is None:
parser = MarkdownNodeParser()
nodes = []

View file

@ -108,7 +108,9 @@ class ChatSummary:
for model in self.models:
try:
summary = simple_send_with_retries(model.name, summarize_messages)
summary = simple_send_with_retries(
model.name, summarize_messages, extra_params=model.extra_params
)
if summary is not None:
summary = prompts.summary_prefix + summary
return [dict(role="user", content=summary)]

View file

@ -1,27 +1,41 @@
import base64
import os
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from prompt_toolkit.completion import Completer, Completion
from prompt_toolkit.completion import Completer, Completion, ThreadedCompleter
from prompt_toolkit.cursor_shapes import ModalCursorShapeConfig
from prompt_toolkit.enums import EditingMode
from prompt_toolkit.history import FileHistory
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.lexers import PygmentsLexer
from prompt_toolkit.shortcuts import CompleteStyle, PromptSession, prompt
from prompt_toolkit.shortcuts import CompleteStyle, PromptSession
from prompt_toolkit.styles import Style
from pygments.lexers import MarkdownLexer, guess_lexer_for_filename
from pygments.token import Token
from pygments.util import ClassNotFound
from rich.console import Console
from rich.markdown import Markdown
from rich.style import Style as RichStyle
from rich.text import Text
from aider.mdstream import MarkdownStream
from .dump import dump # noqa: F401
from .utils import is_image_file
@dataclass
class ConfirmGroup:
preference: str = None
show_group: bool = True
def __init__(self, items=None):
if items is not None:
self.show_group = len(items) > 1
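
ConfirmGroup ties a run of related confirm_ask() calls together: answering (A)ll or (S)kip all records a preference on the group, and later prompts in the same group are answered automatically (see confirm_ask below). A usage sketch, assuming io is an InputOutput instance and create() is a hypothetical helper:

fnames = ["a.py", "b.py", "c.py"]
group = ConfirmGroup(items=fnames)
for fname in fnames:
    if io.confirm_ask(f"Create {fname}?", group=group):
        create(fname)  # hypothetical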
class AutoCompleter(Completer):
def __init__(
self, root, rel_fnames, addable_rel_fnames, commands, encoding, abs_read_only_fnames=None
@ -55,7 +69,15 @@ class AutoCompleter(Completer):
if abs_read_only_fnames:
all_fnames.extend(abs_read_only_fnames)
for fname in all_fnames:
self.all_fnames = all_fnames
self.tokenized = False
def tokenize(self):
if self.tokenized:
return
self.tokenized = True
for fname in self.all_fnames:
try:
with open(fname, "r", encoding=self.encoding) as f:
content = f.read()
@ -63,27 +85,37 @@ class AutoCompleter(Completer):
continue
try:
lexer = guess_lexer_for_filename(fname, content)
except ClassNotFound:
except Exception: # On Windows, bad ref to time.clock which is deprecated
continue
tokens = list(lexer.get_tokens(content))
self.words.update(token[1] for token in tokens if token[0] in Token.Name)
def get_command_completions(self, text, words):
candidates = []
tokens = list(lexer.get_tokens(content))
self.words.update(
(token[1], f"`{token[1]}`") for token in tokens if token[0] in Token.Name
)
def get_command_completions(self, document, complete_event, text, words):
if len(words) == 1 and not text[-1].isspace():
partial = words[0].lower()
candidates = [cmd for cmd in self.command_names if cmd.startswith(partial)]
return candidates
for candidate in sorted(candidates):
yield Completion(candidate, start_position=-len(words[-1]))
return
if len(words) <= 1:
return []
if text[-1].isspace():
return []
if len(words) <= 1 or text[-1].isspace():
return
cmd = words[0]
partial = words[-1].lower()
if cmd not in self.command_names:
matches, _, _ = self.commands.matching_commands(cmd)
if len(matches) == 1:
cmd = matches[0]
elif cmd not in matches:
return
raw_completer = self.commands.get_raw_completions(cmd)
if raw_completer:
yield from raw_completer(document, complete_event)
return
if cmd not in self.command_completions:
@ -96,38 +128,42 @@ class AutoCompleter(Completer):
return
candidates = [word for word in candidates if partial in word.lower()]
return candidates
for candidate in sorted(candidates):
yield Completion(candidate, start_position=-len(words[-1]))
def get_completions(self, document, complete_event):
self.tokenize()
text = document.text_before_cursor
words = text.split()
if not words:
return
if text and text[-1].isspace():
# don't keep completing after a space
return
if text[0] == "/":
candidates = self.get_command_completions(text, words)
if candidates is not None:
for candidate in candidates:
yield Completion(candidate, start_position=-len(words[-1]))
return
yield from self.get_command_completions(document, complete_event, text, words)
return
candidates = self.words
candidates.update(set(self.fname_to_rel_fnames))
candidates = [(word, f"`{word}`") for word in candidates]
candidates = [word if type(word) is tuple else (word, word) for word in candidates]
last_word = words[-1]
completions = []
for word_match, word_insert in candidates:
if word_match.lower().startswith(last_word.lower()):
completions.append((word_insert, -len(last_word), word_match))
rel_fnames = self.fname_to_rel_fnames.get(word_match, [])
if rel_fnames:
for rel_fname in rel_fnames:
yield Completion(
f"`{rel_fname}`", start_position=-len(last_word), display=rel_fname
)
else:
yield Completion(
word_insert, start_position=-len(last_word), display=word_match
)
completions.append((rel_fname, -len(last_word), rel_fname))
for ins, pos, match in sorted(completions):
yield Completion(ins, start_position=pos, display=match)
class InputOutput:
@ -137,7 +173,7 @@ class InputOutput:
def __init__(
self,
pretty=True,
yes=False,
yes=None,
input_history_file=None,
chat_history_file=None,
input=None,
@ -145,11 +181,20 @@ class InputOutput:
user_input_color="blue",
tool_output_color=None,
tool_error_color="red",
tool_warning_color="#FFA500",
assistant_output_color="blue",
completion_menu_color=None,
completion_menu_bg_color=None,
completion_menu_current_color=None,
completion_menu_current_bg_color=None,
code_theme="default",
encoding="utf-8",
dry_run=False,
llm_history_file=None,
editingmode=EditingMode.EMACS,
fancy_input=True,
):
self.never_prompts = set()
self.editingmode = editingmode
no_color = os.environ.get("NO_COLOR")
if no_color is not None and no_color != "":
@ -158,6 +203,14 @@ class InputOutput:
self.user_input_color = user_input_color if pretty else None
self.tool_output_color = tool_output_color if pretty else None
self.tool_error_color = tool_error_color if pretty else None
self.tool_warning_color = tool_warning_color if pretty else None
self.assistant_output_color = assistant_output_color
self.completion_menu_color = completion_menu_color if pretty else None
self.completion_menu_bg_color = completion_menu_bg_color if pretty else None
self.completion_menu_current_color = completion_menu_current_color if pretty else None
self.completion_menu_current_bg_color = completion_menu_current_bg_color if pretty else None
self.code_theme = code_theme
self.input = input
self.output = output
@ -178,19 +231,74 @@ class InputOutput:
self.encoding = encoding
self.dry_run = dry_run
if pretty:
self.console = Console()
else:
self.console = Console(force_terminal=False, no_color=True)
current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self.append_chat_history(f"\n# aider chat started at {current_time}\n\n")
self.prompt_session = None
if fancy_input:
# Initialize PromptSession
session_kwargs = {
"input": self.input,
"output": self.output,
"lexer": PygmentsLexer(MarkdownLexer),
"editing_mode": self.editingmode,
}
if self.editingmode == EditingMode.VI:
session_kwargs["cursor"] = ModalCursorShapeConfig()
if self.input_history_file is not None:
session_kwargs["history"] = FileHistory(self.input_history_file)
try:
self.prompt_session = PromptSession(**session_kwargs)
self.console = Console() # pretty console
except Exception as err:
self.console = Console(force_terminal=False, no_color=True)
self.tool_error(f"Can't initialize prompt toolkit: {err}") # non-pretty
else:
self.console = Console(force_terminal=False, no_color=True) # non-pretty
def _get_style(self):
style_dict = {}
if not self.pretty:
return Style.from_dict(style_dict)
if self.user_input_color:
style_dict.setdefault("", self.user_input_color)
style_dict.update(
{
"pygments.literal.string": f"bold italic {self.user_input_color}",
}
)
# Conditionally add 'completion-menu' style
completion_menu_style = []
if self.completion_menu_bg_color:
completion_menu_style.append(f"bg:{self.completion_menu_bg_color}")
if self.completion_menu_color:
completion_menu_style.append(self.completion_menu_color)
if completion_menu_style:
style_dict["completion-menu"] = " ".join(completion_menu_style)
# Conditionally add 'completion-menu.completion.current' style
completion_menu_current_style = []
if self.completion_menu_current_bg_color:
completion_menu_current_style.append(f"bg:{self.completion_menu_current_bg_color}")
if self.completion_menu_current_color:
completion_menu_current_style.append(self.completion_menu_current_color)
if completion_menu_current_style:
style_dict["completion-menu.completion.current"] = " ".join(
completion_menu_current_style
)
return Style.from_dict(style_dict)
def read_image(self, filename):
try:
with open(str(filename), "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
return encoded_string.decode("utf-8")
except OSError as err:
self.tool_error(f"{filename}: unable to read: {err}")
return
except FileNotFoundError:
self.tool_error(f"{filename}: file not found error")
return
@ -208,6 +316,9 @@ class InputOutput:
try:
with open(str(filename), "r", encoding=self.encoding) as f:
return f.read()
except OSError as err:
self.tool_error(f"{filename}: unable to read: {err}")
return
except FileNotFoundError:
self.tool_error(f"{filename}: file not found error")
return
@ -222,73 +333,87 @@ class InputOutput:
def write_text(self, filename, content):
if self.dry_run:
return
with open(str(filename), "w", encoding=self.encoding) as f:
f.write(content)
try:
with open(str(filename), "w", encoding=self.encoding) as f:
f.write(content)
except OSError as err:
self.tool_error(f"Unable to write file {filename}: {err}")
def get_input(self, root, rel_fnames, addable_rel_fnames, commands, abs_read_only_fnames=None):
def rule(self):
if self.pretty:
style = dict(style=self.user_input_color) if self.user_input_color else dict()
self.console.rule(**style)
else:
print()
def get_input(
self,
root,
rel_fnames,
addable_rel_fnames,
commands,
abs_read_only_fnames=None,
edit_format=None,
):
self.rule()
rel_fnames = list(rel_fnames)
show = " ".join(rel_fnames)
if len(show) > 10:
show += "\n"
show = ""
if rel_fnames:
rel_read_only_fnames = [
get_rel_fname(fname, root) for fname in (abs_read_only_fnames or [])
]
show = self.format_files_for_input(rel_fnames, rel_read_only_fnames)
if edit_format:
show += edit_format
show += "> "
inp = ""
multiline_input = False
if self.user_input_color:
style = Style.from_dict(
{
"": self.user_input_color,
"pygments.literal.string": f"bold italic {self.user_input_color}",
}
)
else:
style = None
style = self._get_style()
completer_instance = AutoCompleter(
root,
rel_fnames,
addable_rel_fnames,
commands,
self.encoding,
abs_read_only_fnames=abs_read_only_fnames,
completer_instance = ThreadedCompleter(
AutoCompleter(
root,
rel_fnames,
addable_rel_fnames,
commands,
self.encoding,
abs_read_only_fnames=abs_read_only_fnames,
)
)
kb = KeyBindings()
@kb.add("c-space")
def _(event):
"Ignore Ctrl when pressing space bar"
event.current_buffer.insert_text(" ")
@kb.add("escape", "c-m", eager=True)
def _(event):
event.current_buffer.insert_text("\n")
while True:
if multiline_input:
show = ". "
session_kwargs = {
"message": show,
"completer": completer_instance,
"reserve_space_for_menu": 4,
"complete_style": CompleteStyle.MULTI_COLUMN,
"input": self.input,
"output": self.output,
"lexer": PygmentsLexer(MarkdownLexer),
}
if style:
session_kwargs["style"] = style
if self.input_history_file is not None:
session_kwargs["history"] = FileHistory(self.input_history_file)
kb = KeyBindings()
@kb.add("escape", "c-m", eager=True)
def _(event):
event.current_buffer.insert_text("\n")
session = PromptSession(
key_bindings=kb, editing_mode=self.editingmode, **session_kwargs
)
line = session.prompt()
try:
if self.prompt_session:
line = self.prompt_session.prompt(
show,
completer=completer_instance,
reserve_space_for_menu=4,
complete_style=CompleteStyle.MULTI_COLUMN,
style=style,
key_bindings=kb,
)
else:
line = input(show)
except UnicodeEncodeError as err:
self.tool_error(str(err))
return ""
if line and line[0] == "{" and not multiline_input:
multiline_input = True
@ -311,6 +436,9 @@ class InputOutput:
if not self.input_history_file:
return
FileHistory(self.input_history_file).append_string(inp)
# Also add to the in-memory history if it exists
if hasattr(self, "session") and hasattr(self.session, "history"):
self.session.history.append_string(inp)
def get_input_history(self):
if not self.input_history_file:
@ -329,7 +457,11 @@ class InputOutput:
def user_input(self, inp, log_only=True):
if not log_only:
style = dict(style=self.user_input_color) if self.user_input_color else dict()
if self.pretty and self.user_input_color:
style = dict(style=self.user_input_color)
else:
style = dict()
self.console.print(Text(inp), **style)
prefix = "####"
@ -350,33 +482,132 @@ class InputOutput:
hist = "\n" + content.strip() + "\n\n"
self.append_chat_history(hist)
def confirm_ask(self, question, default="y"):
def confirm_ask(
self,
question,
default="y",
subject=None,
explicit_yes_required=False,
group=None,
allow_never=False,
):
self.num_user_asks += 1
question_id = (question, subject)
if question_id in self.never_prompts:
return False
if group and not group.show_group:
group = None
if group:
allow_never = True
valid_responses = ["yes", "no"]
options = " (Y)es/(N)o"
if group:
if not explicit_yes_required:
options += "/(A)ll"
valid_responses.append("all")
options += "/(S)kip all"
valid_responses.append("skip")
if allow_never:
options += "/(D)on't ask again"
valid_responses.append("don't")
question += options + " [Yes]: "
if subject:
self.tool_output()
if "\n" in subject:
lines = subject.splitlines()
max_length = max(len(line) for line in lines)
padded_lines = [line.ljust(max_length) for line in lines]
padded_subject = "\n".join(padded_lines)
self.tool_output(padded_subject, bold=True)
else:
self.tool_output(subject, bold=True)
style = self._get_style()
def is_valid_response(text):
if not text:
return True
return text.lower() in valid_responses
if self.yes is True:
res = "y"
res = "n" if explicit_yes_required else "y"
elif self.yes is False:
res = "n"
elif group and group.preference:
res = group.preference
self.user_input(f"{question}{res}", log_only=False)
else:
res = prompt(question + " ", default=default)
while True:
if self.prompt_session:
res = self.prompt_session.prompt(
question,
style=style,
)
else:
res = input(question)
res = res.lower().strip()
is_yes = res in ("y", "yes")
if not res:
res = "y" # Default to Yes if no input
break
res = res.lower()
good = any(valid_response.startswith(res) for valid_response in valid_responses)
if good:
break
hist = f"{question.strip()} {'y' if is_yes else 'n'}"
error_message = f"Please answer with one of: {', '.join(valid_responses)}"
self.tool_error(error_message)
res = res.lower()[0]
if res == "d" and allow_never:
self.never_prompts.add(question_id)
hist = f"{question.strip()} {res}"
self.append_chat_history(hist, linebreak=True, blockquote=True)
return False
if explicit_yes_required:
is_yes = res == "y"
else:
is_yes = res in ("y", "a")
is_all = res == "a" and group is not None and not explicit_yes_required
is_skip = res == "s" and group is not None
if group:
if is_all and not explicit_yes_required:
group.preference = "all"
elif is_skip:
group.preference = "skip"
hist = f"{question.strip()} {res}"
self.append_chat_history(hist, linebreak=True, blockquote=True)
return is_yes
def prompt_ask(self, question, default=None):
def prompt_ask(self, question, default="", subject=None):
self.num_user_asks += 1
if subject:
self.tool_output()
self.tool_output(subject, bold=True)
style = self._get_style()
if self.yes is True:
res = "yes"
elif self.yes is False:
res = "no"
else:
res = prompt(question + " ", default=default)
if self.prompt_session:
res = self.prompt_session.prompt(question + " ", default=default, style=style)
else:
res = input(question + " ")
hist = f"{question.strip()} {res.strip()}"
self.append_chat_history(hist, linebreak=True, blockquote=True)
@ -385,36 +616,68 @@ class InputOutput:
return res
def tool_error(self, message="", strip=True):
self.num_error_outputs += 1
def _tool_message(self, message="", strip=True, color=None):
if message.strip():
if "\n" in message:
for line in message.splitlines():
self.append_chat_history(line, linebreak=True, blockquote=True, strip=strip)
else:
if strip:
hist = message.strip()
else:
hist = message
hist = message.strip() if strip else message
self.append_chat_history(hist, linebreak=True, blockquote=True)
message = Text(message)
style = dict(style=self.tool_error_color) if self.tool_error_color else dict()
style = dict(style=color) if self.pretty and color else dict()
self.console.print(message, **style)
def tool_error(self, message="", strip=True):
self.num_error_outputs += 1
self._tool_message(message, strip, self.tool_error_color)
def tool_warning(self, message="", strip=True):
self._tool_message(message, strip, self.tool_warning_color)
def tool_output(self, *messages, log_only=False, bold=False):
if messages:
hist = " ".join(messages)
hist = f"{hist.strip()}"
self.append_chat_history(hist, linebreak=True, blockquote=True)
if not log_only:
messages = list(map(Text, messages))
style = dict(color=self.tool_output_color) if self.tool_output_color else dict()
if log_only:
return
messages = list(map(Text, messages))
style = dict()
if self.pretty:
if self.tool_output_color:
style["color"] = self.tool_output_color
style["reverse"] = bold
style = RichStyle(**style)
self.console.print(*messages, style=style)
style = RichStyle(**style)
self.console.print(*messages, style=style)
def get_assistant_mdstream(self):
mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme)
mdStream = MarkdownStream(mdargs=mdargs)
return mdStream
def assistant_output(self, message, pretty=None):
show_resp = message
# Coder will force pretty off if fence is not triple-backticks
if pretty is None:
pretty = self.pretty
if pretty:
show_resp = Markdown(
message, style=self.assistant_output_color, code_theme=self.code_theme
)
else:
show_resp = Text(message or "<no response>")
self.console.print(show_resp)
def print(self, message=""):
print(message)
def append_chat_history(self, text, linebreak=False, blockquote=False, strip=True):
if blockquote:
@ -428,5 +691,30 @@ class InputOutput:
if not text.endswith("\n"):
text += "\n"
if self.chat_history_file is not None:
with self.chat_history_file.open("a", encoding=self.encoding) as f:
f.write(text)
try:
with self.chat_history_file.open("a", encoding=self.encoding, errors="ignore") as f:
f.write(text)
except (PermissionError, OSError) as err:
print(f"Warning: Unable to write to chat history file {self.chat_history_file}.")
print(err)
self.chat_history_file = None # Disable further attempts to write
def format_files_for_input(self, rel_fnames, rel_read_only_fnames):
read_only_files = []
for full_path in sorted(rel_read_only_fnames or []):
read_only_files.append(f"{full_path} (read only)")
editable_files = []
for full_path in sorted(rel_fnames):
if full_path in rel_read_only_fnames:
continue
editable_files.append(f"{full_path}")
return "\n".join(read_only_files + editable_files) + "\n"
def get_rel_fname(fname, root):
try:
return os.path.relpath(fname, root)
except ValueError:
return fname

View file

@ -35,7 +35,10 @@ class Linter:
def get_rel_fname(self, fname):
if self.root:
return os.path.relpath(fname, self.root)
try:
return os.path.relpath(fname, self.root)
except ValueError:
return fname
else:
return fname
@ -43,14 +46,18 @@ class Linter:
cmd += " " + rel_fname
cmd = cmd.split()
process = subprocess.Popen(
cmd,
cwd=self.root,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
encoding=self.encoding,
errors="replace",
)
try:
process = subprocess.Popen(
cmd,
cwd=self.root,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
encoding=self.encoding,
errors="replace",
)
except OSError as err:
print(f"Unable to execute lint command: {err}")
return
stdout, _ = process.communicate()
errors = stdout
if process.returncode == 0:
@ -76,7 +83,11 @@ class Linter:
def lint(self, fname, cmd=None):
rel_fname = self.get_rel_fname(fname)
code = Path(fname).read_text(self.encoding)
try:
code = Path(fname).read_text(encoding=self.encoding, errors="replace")
except OSError as err:
print(f"Unable to read {fname}: {err}")
return
if cmd:
cmd = cmd.strip()
@ -198,7 +209,16 @@ def basic_lint(fname, code):
if not lang:
return
parser = get_parser(lang)
# Tree-sitter linter is not capable of working with typescript #1132
if lang == "typescript":
return
try:
parser = get_parser(lang)
except Exception as err:
print(f"Unable to load parser: {err}")
return
tree = parser.parse(bytes(code, "utf-8"))
errors = traverse_tree(tree.root_node)

View file

@ -9,6 +9,7 @@ AIDER_APP_NAME = "Aider"
os.environ["OR_SITE_URL"] = AIDER_SITE_URL
os.environ["OR_APP_NAME"] = AIDER_APP_NAME
os.environ["LITELLM_MODE"] = "PRODUCTION"
# `import litellm` takes 1.5 seconds, defer it!
@ -17,6 +18,8 @@ class LazyLiteLLM:
_lazy_module = None
def __getattr__(self, name):
if name == "_lazy_module":
return super()
self._load_litellm()
return getattr(self._lazy_module, name)
@ -29,6 +32,7 @@ class LazyLiteLLM:
self._lazy_module.suppress_debug_info = True
self._lazy_module.set_verbose = False
self._lazy_module.drop_params = True
self._lazy_module._logging._disable_debugging()
litellm = LazyLiteLLM()
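
The same deferred-import trick in miniature: attribute access triggers the real import exactly once. The module name (json) is only for illustration:

import importlib

class LazyModule:
    _mod = None

    def __init__(self, name):
        self._name = name

    def __getattr__(self, attr):
        if self._mod is None:
            self._mod = importlib.import_module(self._name)
        return getattr(self._mod, attr)

lazy_json = LazyModule("json")
print(lazy_json.dumps({"ok": True}))  # "json" is imported on first use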

View file

@ -1,34 +1,57 @@
# ai
import configparser
import json
import os
import re
import sys
import threading
import traceback
from pathlib import Path
import git
import importlib_resources
from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode
from aider import __version__, models, utils
from aider import __version__, models, urls, utils
from aider.analytics import Analytics
from aider.args import get_parser
from aider.coders import Coder
from aider.commands import Commands, SwitchCoder
from aider.format_settings import format_settings, scrub_sensitive_info
from aider.history import ChatSummary
from aider.io import InputOutput
from aider.llm import litellm # noqa: F401; properly init litellm on launch
from aider.repo import GitRepo
from aider.versioncheck import check_version
from aider.repo import ANY_GIT_ERROR, GitRepo
from aider.report import report_uncaught_exceptions
from aider.versioncheck import check_version, install_from_main_branch, install_upgrade
from .dump import dump # noqa: F401
def check_config_files_for_yes(config_files):
found = False
for config_file in config_files:
if Path(config_file).exists():
try:
with open(config_file, "r") as f:
for line in f:
if line.strip().startswith("yes:"):
print("Configuration error detected.")
print(f"The file {config_file} contains a line starting with 'yes:'")
print("Please replace 'yes:' with 'yes-always:' in this file.")
found = True
except Exception:
pass
return found
def get_git_root():
"""Try and guess the git repo, since the conf.yml can be at the repo root"""
try:
repo = git.Repo(search_parent_directories=True)
return repo.working_tree_dir
except git.InvalidGitRepositoryError:
except (git.InvalidGitRepositoryError, FileNotFoundError):
return None
@ -37,7 +60,7 @@ def guessed_wrong_repo(io, git_root, fnames, git_dname):
try:
check_repo = Path(GitRepo(io, fnames, git_dname).root).resolve()
except FileNotFoundError:
except (OSError,) + ANY_GIT_ERROR:
return
# we had no guess, rely on the "true" repo result
@ -51,15 +74,30 @@ def guessed_wrong_repo(io, git_root, fnames, git_dname):
return str(check_repo)
def make_new_repo(git_root, io):
try:
repo = git.Repo.init(git_root)
check_gitignore(git_root, io, False)
except ANY_GIT_ERROR as err: # issue #1233
io.tool_error(f"Unable to create git repo in {git_root}")
io.tool_output(str(err))
return
io.tool_output(f"Git repository created in {git_root}")
return repo
def setup_git(git_root, io):
repo = None
if git_root:
repo = git.Repo(git_root)
elif io.confirm_ask("No git repo found, create one to track GPT's changes (recommended)?"):
elif Path.cwd() == Path.home():
io.tool_warning("You should probably run aider in a directory, not your home dir.")
return
elif io.confirm_ask("No git repo found, create one to track aider's changes (recommended)?"):
git_root = str(Path.cwd().resolve())
repo = git.Repo.init(git_root)
io.tool_output("Git repository created in the current working directory.")
check_gitignore(git_root, io, False)
repo = make_new_repo(git_root, io)
if not repo:
return
@ -73,7 +111,7 @@ def setup_git(git_root, io):
pass
try:
user_email = config.get_value("user", "email", None)
except configparser.NoSectionError:
except (configparser.NoSectionError, configparser.NoOptionError):
pass
if user_name and user_email:
@ -82,10 +120,10 @@ def setup_git(git_root, io):
with repo.config_writer() as git_config:
if not user_name:
git_config.set_value("user", "name", "Your Name")
io.tool_error('Update git name with: git config user.name "Your Name"')
io.tool_warning('Update git name with: git config user.name "Your Name"')
if not user_email:
git_config.set_value("user", "email", "you@example.com")
io.tool_error('Update git email with: git config user.email "you@example.com"')
io.tool_warning('Update git email with: git config user.email "you@example.com"')
return repo.working_tree_dir
@ -96,60 +134,39 @@ def check_gitignore(git_root, io, ask=True):
try:
repo = git.Repo(git_root)
if repo.ignored(".aider"):
if repo.ignored(".aider") and repo.ignored(".env"):
return
except git.exc.InvalidGitRepositoryError:
except ANY_GIT_ERROR:
pass
pat = ".aider*"
patterns = [".aider*", ".env"]
patterns_to_add = []
gitignore_file = Path(git_root) / ".gitignore"
if gitignore_file.exists():
content = io.read_text(gitignore_file)
if content is None:
return
if pat in content.splitlines():
return
existing_lines = content.splitlines()
for pat in patterns:
if pat not in existing_lines:
patterns_to_add.append(pat)
else:
content = ""
patterns_to_add = patterns
if ask and not io.confirm_ask(f"Add {pat} to .gitignore (recommended)?"):
if not patterns_to_add:
return
if ask and not io.confirm_ask(f"Add {', '.join(patterns_to_add)} to .gitignore (recommended)?"):
return
if content and not content.endswith("\n"):
content += "\n"
content += pat + "\n"
content += "\n".join(patterns_to_add) + "\n"
io.write_text(gitignore_file, content)
io.tool_output(f"Added {pat} to .gitignore")
def format_settings(parser, args):
show = scrub_sensitive_info(args, parser.format_values())
# clean up the headings for consistency w/ new lines
heading_env = "Environment Variables:"
heading_defaults = "Defaults:"
if heading_env in show:
show = show.replace(heading_env, "\n" + heading_env)
show = show.replace(heading_defaults, "\n" + heading_defaults)
show += "\n"
show += "Option settings:\n"
for arg, val in sorted(vars(args).items()):
if val:
val = scrub_sensitive_info(args, str(val))
show += f" - {arg}: {val}\n" # noqa: E221
return show
def scrub_sensitive_info(args, text):
# Redact API keys, keeping only the last 4 characters
if text and args.openai_api_key:
last_4 = args.openai_api_key[-4:]
text = text.replace(args.openai_api_key, f"...{last_4}")
if text and args.anthropic_api_key:
last_4 = args.anthropic_api_key[-4:]
text = text.replace(args.anthropic_api_key, f"...{last_4}")
return text
io.tool_output(f"Added {', '.join(patterns_to_add)} to .gitignore")
def check_streamlit_install(io):
@ -179,7 +196,10 @@ def launch_gui(args):
"--server.runOnSave=false",
]
if "-dev" in __version__:
# https://github.com/Aider-AI/aider/issues/2193
is_dev = "-dev" in str(__version__)
if is_dev:
print("Watching for file changes.")
else:
st_args += [
@ -219,24 +239,31 @@ def parse_lint_cmds(lint_cmds, io):
res[lang] = cmd
else:
io.tool_error(f'Unable to parse --lint-cmd "{lint_cmd}"')
io.tool_error('The arg should be "language: cmd --args ..."')
io.tool_error('For example: --lint-cmd "python: flake8 --select=E9"')
io.tool_output('The arg should be "language: cmd --args ..."')
io.tool_output('For example: --lint-cmd "python: flake8 --select=E9"')
err = True
if err:
return
return res
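A usage sketch, assuming an io object exposing tool_error/tool_output (only needed on the failure path) and assuming a spec without a "language:" prefix fails to parse:
parse_lint_cmds(["python: flake8 --select=E9"], io)
# -> {"python": "flake8 --select=E9"}
parse_lint_cmds(["flake8"], io)
# -> None, after printing the guidance above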
def generate_search_path_list(default_fname, git_root, command_line_file):
def generate_search_path_list(default_file, git_root, command_line_file):
files = []
default_file = Path(default_fname)
files.append(Path.home() / default_file) # homedir
if git_root:
files.append(Path(git_root) / default_file) # git root
files.append(default_file.resolve())
files.append(default_file)
if command_line_file:
files.append(command_line_file)
files = [Path(fn).resolve() for fn in files]
resolved_files = []
for fn in files:
try:
resolved_files.append(Path(fn).resolve())
except OSError:
pass
files = resolved_files
files.reverse()
uniq = []
for fn in files:
@ -276,7 +303,7 @@ def register_models(git_root, model_settings_fname, io, verbose=False):
return None
def load_dotenv_files(git_root, dotenv_fname):
def load_dotenv_files(git_root, dotenv_fname, encoding="utf-8"):
dotenv_files = generate_search_path_list(
".env",
git_root,
@ -284,9 +311,14 @@ def load_dotenv_files(git_root, dotenv_fname):
)
loaded = []
for fname in dotenv_files:
if Path(fname).exists():
loaded.append(fname)
load_dotenv(fname, override=True)
try:
if Path(fname).exists():
load_dotenv(fname, override=True, encoding=encoding)
loaded.append(fname)
except OSError as e:
print(f"OSError loading {fname}: {e}")
except Exception as e:
print(f"Error loading {fname}: {e}")
return loaded
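A usage sketch: the candidate list comes from generate_search_path_list above, and each file that exists is loaded with override=True; the path shown is hypothetical.
loaded = load_dotenv_files(git_root="/path/to/repo", dotenv_fname=None)
for fname in loaded:
    print("loaded .env from", fname)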
@ -295,6 +327,10 @@ def register_litellm_models(git_root, model_metadata_fname, io, verbose=False):
".aider.model.metadata.json", git_root, model_metadata_fname
)
# Add the resource file path
resource_metadata = importlib_resources.files("aider.resources").joinpath("model-metadata.json")
model_metatdata_files.append(str(resource_metadata))
try:
model_metadata_files_loaded = models.register_litellm_models(model_metatdata_files)
if len(model_metadata_files_loaded) > 0 and verbose:
@ -306,7 +342,42 @@ def register_litellm_models(git_root, model_metadata_fname, io, verbose=False):
return 1
def sanity_check_repo(repo, io):
if not repo:
return True
if not repo.repo.working_tree_dir:
io.tool_error("The git repo does not seem to have a working tree?")
return False
bad_ver = False
try:
repo.get_tracked_files()
if not repo.git_repo_error:
return True
error_msg = str(repo.git_repo_error)
except ANY_GIT_ERROR as exc:
error_msg = str(exc)
bad_ver = "version in (1, 2)" in error_msg
except AssertionError as exc:
error_msg = str(exc)
bad_ver = True
if bad_ver:
io.tool_error("Aider only works with git repos with version number 1 or 2.")
io.tool_output("You may be able to convert your repo: git update-index --index-version=2")
io.tool_output("Or run aider --no-git to proceed without using git.")
io.tool_output(urls.git_index_version)
return False
io.tool_error("Unable to read git repository, it may be corrupt?")
io.tool_output(error_msg)
return False
def main(argv=None, input=None, output=None, force_git_root=None, return_coder=False):
report_uncaught_exceptions()
if argv is None:
argv = sys.argv[1:]
@ -317,7 +388,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
conf_fname = Path(".aider.conf.yml")
default_config_files = [conf_fname.resolve()] # CWD
default_config_files = []
try:
default_config_files += [conf_fname.resolve()] # CWD
except OSError:
pass
if git_root:
git_conf = Path(git_root) / conf_fname # git root
if git_conf not in default_config_files:
@ -326,7 +402,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
default_config_files = list(map(str, default_config_files))
parser = get_parser(default_config_files, git_root)
args, unknown = parser.parse_known_args(argv)
try:
args, unknown = parser.parse_known_args(argv)
except AttributeError as e:
if all(word in str(e) for word in ["bool", "object", "has", "no", "attribute", "strip"]):
if check_config_files_for_yes(default_config_files):
return 1
raise e
if args.verbose:
print("Config files search order, if no --config:")
@ -337,10 +419,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
default_config_files.reverse()
parser = get_parser(default_config_files, git_root)
args, unknown = parser.parse_known_args(argv)
# Load the .env file specified in the arguments
loaded_dotenvs = load_dotenv_files(git_root, args.env_file)
loaded_dotenvs = load_dotenv_files(git_root, args.env_file, args.encoding)
# Parse again to include any arguments that might have been defined in .env
args = parser.parse_args(argv)
@ -353,41 +436,63 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
if not args.verify_ssl:
import httpx
os.environ["SSL_VERIFY"] = ""
litellm._load_litellm()
litellm._lazy_module.client_session = httpx.Client(verify=False)
litellm._lazy_module.aclient_session = httpx.AsyncClient(verify=False)
if args.dark_mode:
args.user_input_color = "#32FF32"
args.tool_error_color = "#FF3333"
args.tool_warning_color = "#FFFF00"
args.assistant_output_color = "#00FFFF"
args.code_theme = "monokai"
if args.light_mode:
args.user_input_color = "green"
args.tool_error_color = "red"
args.tool_warning_color = "#FFA500"
args.assistant_output_color = "blue"
args.code_theme = "default"
if return_coder and args.yes is None:
args.yes = True
if return_coder and args.yes_always is None:
args.yes_always = True
editing_mode = EditingMode.VI if args.vim else EditingMode.EMACS
io = InputOutput(
args.pretty,
args.yes,
args.input_history_file,
args.chat_history_file,
input=input,
output=output,
user_input_color=args.user_input_color,
tool_output_color=args.tool_output_color,
tool_error_color=args.tool_error_color,
dry_run=args.dry_run,
encoding=args.encoding,
llm_history_file=args.llm_history_file,
editingmode=editing_mode,
)
def get_io(pretty):
return InputOutput(
pretty,
args.yes_always,
args.input_history_file,
args.chat_history_file,
input=input,
output=output,
user_input_color=args.user_input_color,
tool_output_color=args.tool_output_color,
tool_warning_color=args.tool_warning_color,
tool_error_color=args.tool_error_color,
completion_menu_color=args.completion_menu_color,
completion_menu_bg_color=args.completion_menu_bg_color,
completion_menu_current_color=args.completion_menu_current_color,
completion_menu_current_bg_color=args.completion_menu_current_bg_color,
assistant_output_color=args.assistant_output_color,
code_theme=args.code_theme,
dry_run=args.dry_run,
encoding=args.encoding,
llm_history_file=args.llm_history_file,
editingmode=editing_mode,
fancy_input=args.fancy_input,
)
io = get_io(args.pretty)
try:
io.rule()
except UnicodeEncodeError as err:
if not io.pretty:
raise err
io = get_io(False)
io.tool_warning("Terminal does not support pretty output (UnicodeDecodeError)")
analytics = Analytics(
args.analytics, logfile=args.analytics_log, permanently_disable=args.analytics_disable
@ -415,7 +520,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
io.tool_error(f"{fname} is a directory, not provided alone.")
good = False
if not good:
io.tool_error(
io.tool_output(
"Provide either a single directory of a git repo, or a list of one or more files."
)
return 1
@ -442,11 +547,19 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
update_available = check_version(io, just_check=True, verbose=args.verbose)
return 0 if not update_available else 1
if args.install_main_branch:
success = install_from_main_branch(io)
return 0 if success else 1
if args.upgrade:
success = install_upgrade(io)
return 0 if success else 1
if args.check_update:
check_version(io, verbose=args.verbose)
if args.models:
models.print_matching_models(io, args.models)
if args.list_models:
models.print_matching_models(io, args.list_models)
return 0
if args.git:
@ -462,6 +575,8 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
cmd_line = scrub_sensitive_info(args, cmd_line)
io.tool_output(cmd_line, log_only=True)
check_and_load_imports(io, verbose=args.verbose)
if args.anthropic_api_key:
os.environ["ANTHROPIC_API_KEY"] = args.anthropic_api_key
@ -480,18 +595,35 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
register_litellm_models(git_root, args.model_metadata_file, io, verbose=args.verbose)
if not args.model:
args.model = "gpt-4o"
args.model = "gpt-4o-2024-08-06"
if os.environ.get("ANTHROPIC_API_KEY"):
args.model = "claude-3-5-sonnet-20240620"
args.model = "claude-3-5-sonnet-20241022"
main_model = models.Model(args.model, weak_model=args.weak_model)
main_model = models.Model(
args.model,
weak_model=args.weak_model,
editor_model=args.editor_model,
editor_edit_format=args.editor_edit_format,
)
if args.verbose:
io.tool_output("Model info:")
io.tool_output(json.dumps(main_model.info, indent=4))
lint_cmds = parse_lint_cmds(args.lint_cmd, io)
if lint_cmds is None:
return 1
if args.show_model_warnings:
models.sanity_check_models(io, main_model)
problem = models.sanity_check_models(io, main_model)
if problem:
io.tool_output("You can skip this check with --no-show-model-warnings")
io.tool_output()
try:
if not io.confirm_ask("Proceed anyway?"):
return 1
except KeyboardInterrupt:
return 1
repo = None
if args.git:
@ -512,13 +644,29 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
except FileNotFoundError:
pass
commands = Commands(io, None, verify_ssl=args.verify_ssl)
if not args.skip_sanity_check_repo:
if not sanity_check_repo(repo, io):
return 1
commands = Commands(
io, None, verify_ssl=args.verify_ssl, args=args, parser=parser, verbose=args.verbose
)
summarizer = ChatSummary(
[main_model.weak_model, main_model],
args.max_chat_history_tokens or main_model.max_chat_history_tokens,
)
if args.cache_prompts and args.map_refresh == "auto":
args.map_refresh = "files"
if not main_model.streaming:
if args.stream:
io.tool_warning(
f"Warning: Streaming is not supported by {main_model.name}. Disabling streaming."
)
args.stream = False
try:
coder = Coder.create(
main_model=main_model,
@ -533,8 +681,6 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
dry_run=args.dry_run,
map_tokens=args.map_tokens,
verbose=args.verbose,
assistant_output_color=args.assistant_output_color,
code_theme=args.code_theme,
stream=args.stream,
use_git=args.git,
restore_chat_history=args.restore_chat_history,
@ -545,8 +691,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
commands=commands,
summarizer=summarizer,
analytics=analytics,
map_refresh=args.map_refresh,
cache_prompts=args.cache_prompts,
map_mul_no_files=args.map_multiplier_no_files,
num_cache_warming_pings=args.cache_keepalive_pings,
suggest_shell_commands=args.suggest_shell_commands,
chat_language=args.chat_language,
)
except ValueError as err:
io.tool_error(str(err))
return 1
@ -560,7 +711,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
coder.cur_messages += [
dict(role="user", content="Hello!"),
]
messages = coder.format_messages()
messages = coder.format_messages().all_messages()
utils.show_messages(messages)
return
@ -605,18 +756,24 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
io.tool_output('Use /help <question> for help, run "aider --help" to see cmd line args')
if git_root and Path.cwd().resolve() != Path(git_root).resolve():
io.tool_error(
io.tool_warning(
"Note: in-chat filenames are always relative to the git working dir, not the current"
" working dir."
)
io.tool_error(f"Cur working dir: {Path.cwd()}")
io.tool_error(f"Git working dir: {git_root}")
io.tool_output(f"Cur working dir: {Path.cwd()}")
io.tool_output(f"Git working dir: {git_root}")
if args.load:
commands.cmd_load(args.load)
if args.message:
io.add_to_input_history(args.message)
io.tool_output()
coder.run(with_message=args.message)
try:
coder.run(with_message=args.message)
except SwitchCoder:
pass
return
if args.message_file:
@ -657,19 +814,72 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
coder.show_announcements()
def load_slow_imports():
def check_and_load_imports(io, verbose=False):
installs_file = Path.home() / ".aider" / "installs.json"
key = (__version__, sys.executable)
if verbose:
io.tool_output(
f"Checking imports for version {__version__} and executable {sys.executable}"
)
io.tool_output(f"Installs file: {installs_file}")
try:
if installs_file.exists():
with open(installs_file, "r") as f:
installs = json.load(f)
if verbose:
io.tool_output("Installs file exists and loaded")
else:
installs = {}
if verbose:
io.tool_output("Installs file does not exist, creating new dictionary")
if str(key) not in installs:
if verbose:
io.tool_output(
"First run for this version and executable, loading imports synchronously"
)
try:
load_slow_imports(swallow=False)
except Exception as err:
io.tool_error(str(err))
io.tool_output("Error loading required imports. Did you install aider properly?")
io.tool_output("https://aider.chat/docs/install/install.html")
sys.exit(1)
installs[str(key)] = True
installs_file.parent.mkdir(parents=True, exist_ok=True)
with open(installs_file, "w") as f:
json.dump(installs, f, indent=4)
if verbose:
io.tool_output("Imports loaded and installs file updated")
else:
if verbose:
io.tool_output("Not first run, loading imports in background thread")
thread = threading.Thread(target=load_slow_imports)
thread.daemon = True
thread.start()
except Exception as e:
io.tool_warning(f"Error in checking imports: {e}")
if verbose:
io.tool_output(f"Full exception details: {traceback.format_exc()}")
def load_slow_imports(swallow=True):
# These imports are deferred in various ways to
# improve startup time.
# This func is called in a thread to load them in the background
# while we wait for the user to type their first message.
# This func is called either synchronously or in a thread
# depending on whether it's been run before for this version and executable.
try:
import httpx # noqa: F401
import litellm # noqa: F401
import networkx # noqa: F401
import numpy # noqa: F401
except Exception:
pass
except Exception as e:
if not swallow:
raise e
if __name__ == "__main__":

View file

@ -1,22 +1,24 @@
import difflib
import importlib
import json
import math
import os
import platform
import sys
import time
from dataclasses import dataclass, fields
from pathlib import Path
from typing import Optional
import json5
import yaml
from PIL import Image
from aider import urls
from aider.dump import dump # noqa: F401
from aider.llm import AIDER_APP_NAME, AIDER_SITE_URL, litellm
from aider.llm import litellm
DEFAULT_MODEL_NAME = "gpt-4o"
ANTHROPIC_BETA_HEADER = "prompt-caching-2024-07-31"
OPENAI_MODELS = """
gpt-4
@ -54,6 +56,7 @@ claude-3-haiku-20240307
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-5-sonnet-20240620
claude-3-5-sonnet-20241022
"""
ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.strip()]
@ -67,12 +70,17 @@ class ModelSettings:
weak_model_name: Optional[str] = None
use_repo_map: bool = False
send_undo_reply: bool = False
accepts_images: bool = False
lazy: bool = False
reminder_as_sys_msg: bool = False
reminder: str = "user"
examples_as_sys_msg: bool = False
extra_headers: Optional[dict] = None
max_tokens: Optional[int] = None
extra_params: Optional[dict] = None
cache_control: bool = False
caches_by_default: bool = False
use_system_prompt: bool = True
use_temperature: bool = True
streaming: bool = True
editor_model_name: Optional[str] = None
editor_edit_format: Optional[str] = None
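A hypothetical instance showing how these fields combine; the model names and values here are invented for illustration, and real entries follow in MODEL_SETTINGS below:
ModelSettings(
    "ollama/llama3:70b",
    "diff",
    weak_model_name="ollama/llama3:8b",
    use_repo_map=True,
    examples_as_sys_msg=True,
    extra_params={"max_tokens": 8192},
    reminder="user",
)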
# https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
@ -85,31 +93,31 @@ MODEL_SETTINGS = [
"gpt-3.5-turbo",
"whole",
weak_model_name="gpt-4o-mini",
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-3.5-turbo-0125",
"whole",
weak_model_name="gpt-4o-mini",
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-3.5-turbo-1106",
"whole",
weak_model_name="gpt-4o-mini",
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-3.5-turbo-0613",
"whole",
weak_model_name="gpt-4o-mini",
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-3.5-turbo-16k-0613",
"whole",
weak_model_name="gpt-4o-mini",
reminder_as_sys_msg=True,
reminder="sys",
),
# gpt-4
ModelSettings(
@ -117,85 +125,72 @@ MODEL_SETTINGS = [
"udiff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4-turbo",
"udiff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"openai/gpt-4o",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
editor_edit_format="editor-diff",
),
ModelSettings(
"openai/gpt-4o-2024-08-06",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4o-2024-08-06",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4o",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
editor_edit_format="editor-diff",
),
ModelSettings(
"gpt-4o-mini",
"whole",
weak_model_name="gpt-4o-mini",
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"openai/gpt-4o-mini",
"whole",
weak_model_name="openai/gpt-4o-mini",
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4-0125-preview",
"udiff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
examples_as_sys_msg=True,
),
ModelSettings(
@ -203,26 +198,22 @@ MODEL_SETTINGS = [
"udiff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4-vision-preview",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4-0314",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
reminder_as_sys_msg=True,
reminder="sys",
examples_as_sys_msg=True,
),
ModelSettings(
@ -230,16 +221,14 @@ MODEL_SETTINGS = [
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"gpt-4-32k-0613",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
reminder_as_sys_msg=True,
reminder="sys",
),
# Claude
ModelSettings(
@ -247,14 +236,12 @@ MODEL_SETTINGS = [
"diff",
weak_model_name="claude-3-haiku-20240307",
use_repo_map=True,
send_undo_reply=True,
),
ModelSettings(
"openrouter/anthropic/claude-3-opus",
"diff",
weak_model_name="openrouter/anthropic/claude-3-haiku",
use_repo_map=True,
send_undo_reply=True,
),
ModelSettings(
"claude-3-sonnet-20240229",
@ -265,38 +252,121 @@ MODEL_SETTINGS = [
"claude-3-5-sonnet-20240620",
"diff",
weak_model_name="claude-3-haiku-20240307",
editor_model_name="claude-3-5-sonnet-20240620",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
accepts_images=True,
max_tokens=8192,
extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"},
extra_params={
"extra_headers": {
"anthropic-beta": ANTHROPIC_BETA_HEADER,
},
"max_tokens": 8192,
},
cache_control=True,
reminder="user",
),
ModelSettings(
"anthropic/claude-3-5-sonnet-20240620",
"diff",
weak_model_name="claude-3-haiku-20240307",
weak_model_name="anthropic/claude-3-haiku-20240307",
editor_model_name="anthropic/claude-3-5-sonnet-20240620",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
max_tokens=8192,
extra_headers={
"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15",
"HTTP-Referer": AIDER_SITE_URL,
"X-Title": AIDER_APP_NAME,
extra_params={
"extra_headers": {
"anthropic-beta": ANTHROPIC_BETA_HEADER,
},
"max_tokens": 8192,
},
cache_control=True,
reminder="user",
),
ModelSettings(
"anthropic/claude-3-5-sonnet-20241022",
"diff",
weak_model_name="anthropic/claude-3-haiku-20240307",
editor_model_name="anthropic/claude-3-5-sonnet-20241022",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
extra_params={
"extra_headers": {
"anthropic-beta": ANTHROPIC_BETA_HEADER,
},
"max_tokens": 8192,
},
cache_control=True,
reminder="user",
),
ModelSettings(
"claude-3-5-sonnet-20241022",
"diff",
weak_model_name="claude-3-haiku-20240307",
editor_model_name="claude-3-5-sonnet-20241022",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
extra_params={
"extra_headers": {
"anthropic-beta": ANTHROPIC_BETA_HEADER,
},
"max_tokens": 8192,
},
cache_control=True,
reminder="user",
),
ModelSettings(
"anthropic/claude-3-haiku-20240307",
"whole",
weak_model_name="anthropic/claude-3-haiku-20240307",
examples_as_sys_msg=True,
extra_params={
"extra_headers": {
"anthropic-beta": ANTHROPIC_BETA_HEADER,
},
},
cache_control=True,
),
ModelSettings(
"claude-3-haiku-20240307",
"whole",
weak_model_name="claude-3-haiku-20240307",
examples_as_sys_msg=True,
extra_params={
"extra_headers": {
"anthropic-beta": ANTHROPIC_BETA_HEADER,
},
},
cache_control=True,
),
ModelSettings(
"openrouter/anthropic/claude-3.5-sonnet",
"diff",
weak_model_name="openrouter/anthropic/claude-3-haiku-20240307",
weak_model_name="openrouter/anthropic/claude-3-haiku",
editor_model_name="openrouter/anthropic/claude-3.5-sonnet",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
accepts_images=True,
max_tokens=8192,
extra_headers={
"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15",
"HTTP-Referer": "https://aider.chat",
"X-Title": "Aider",
extra_params={
"max_tokens": 8192,
},
reminder="user",
cache_control=True,
),
ModelSettings(
"openrouter/anthropic/claude-3.5-sonnet:beta",
"diff",
weak_model_name="openrouter/anthropic/claude-3-haiku:beta",
editor_model_name="openrouter/anthropic/claude-3.5-sonnet:beta",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
extra_params={
"max_tokens": 8192,
},
reminder="user",
cache_control=True,
),
# Vertex AI Claude models
# Does not yet support 8k token
@ -304,16 +374,33 @@ MODEL_SETTINGS = [
"vertex_ai/claude-3-5-sonnet@20240620",
"diff",
weak_model_name="vertex_ai/claude-3-haiku@20240307",
editor_model_name="vertex_ai/claude-3-5-sonnet@20240620",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
accepts_images=True,
extra_params={
"max_tokens": 8192,
},
reminder="user",
),
ModelSettings(
"vertex_ai/claude-3-5-sonnet-v2@20241022",
"diff",
weak_model_name="vertex_ai/claude-3-haiku@20240307",
editor_model_name="vertex_ai/claude-3-5-sonnet-v2@20241022",
editor_edit_format="editor-diff",
use_repo_map=True,
examples_as_sys_msg=True,
extra_params={
"max_tokens": 8192,
},
reminder="user",
),
ModelSettings(
"vertex_ai/claude-3-opus@20240229",
"diff",
weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True,
send_undo_reply=True,
),
ModelSettings(
"vertex_ai/claude-3-sonnet@20240229",
@ -326,7 +413,19 @@ MODEL_SETTINGS = [
"whole",
weak_model_name="command-r-plus",
use_repo_map=True,
send_undo_reply=True,
),
# New Cohere models
ModelSettings(
"command-r-08-2024",
"whole",
weak_model_name="command-r-08-2024",
use_repo_map=True,
),
ModelSettings(
"command-r-plus-08-2024",
"whole",
weak_model_name="command-r-plus-08-2024",
use_repo_map=True,
),
# Groq llama3
ModelSettings(
@ -347,65 +446,271 @@ MODEL_SETTINGS = [
examples_as_sys_msg=True,
),
# Gemini
ModelSettings(
"gemini/gemini-1.5-pro-002",
"diff",
use_repo_map=True,
),
ModelSettings(
"gemini/gemini-1.5-flash-002",
"whole",
),
ModelSettings(
"gemini/gemini-1.5-pro",
"diff-fenced",
use_repo_map=True,
send_undo_reply=True,
),
ModelSettings(
"gemini/gemini-1.5-pro-latest",
"diff-fenced",
use_repo_map=True,
send_undo_reply=True,
),
ModelSettings(
"gemini/gemini-1.5-pro-exp-0827",
"diff-fenced",
use_repo_map=True,
),
ModelSettings(
"gemini/gemini-1.5-flash-exp-0827",
"whole",
use_repo_map=False,
send_undo_reply=False,
),
ModelSettings(
"deepseek/deepseek-chat",
"diff",
use_repo_map=True,
send_undo_reply=True,
examples_as_sys_msg=True,
reminder_as_sys_msg=True,
reminder="sys",
extra_params={
"max_tokens": 8192,
},
),
ModelSettings(
"deepseek/deepseek-coder",
"diff",
use_repo_map=True,
send_undo_reply=True,
examples_as_sys_msg=True,
reminder_as_sys_msg=True,
reminder="sys",
caches_by_default=True,
extra_params={
"max_tokens": 8192,
},
),
ModelSettings(
"deepseek-chat",
"diff",
use_repo_map=True,
examples_as_sys_msg=True,
reminder="sys",
extra_params={
"max_tokens": 8192,
},
),
ModelSettings(
"deepseek-coder",
"diff",
use_repo_map=True,
examples_as_sys_msg=True,
reminder="sys",
caches_by_default=True,
extra_params={
"max_tokens": 8192,
},
),
ModelSettings(
"openrouter/deepseek/deepseek-coder",
"diff",
use_repo_map=True,
send_undo_reply=True,
examples_as_sys_msg=True,
reminder_as_sys_msg=True,
reminder="sys",
),
ModelSettings(
"openrouter/openai/gpt-4o",
"diff",
weak_model_name="openrouter/openai/gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
reminder="sys",
editor_edit_format="editor-diff",
),
ModelSettings(
"openai/o1-mini",
"whole",
weak_model_name="openai/gpt-4o-mini",
editor_model_name="openai/gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"azure/o1-mini",
"whole",
weak_model_name="azure/gpt-4o-mini",
editor_model_name="azure/gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"o1-mini",
"whole",
weak_model_name="gpt-4o-mini",
editor_model_name="gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"openai/o1-preview",
"diff",
weak_model_name="openai/gpt-4o-mini",
editor_model_name="openai/gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"azure/o1-preview",
"diff",
weak_model_name="azure/gpt-4o-mini",
editor_model_name="azure/gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"o1-preview",
"architect",
weak_model_name="gpt-4o-mini",
editor_model_name="gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"openrouter/openai/o1-mini",
"whole",
weak_model_name="openrouter/openai/gpt-4o-mini",
editor_model_name="openrouter/openai/gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
ModelSettings(
"openrouter/openai/o1-preview",
"diff",
weak_model_name="openrouter/openai/gpt-4o-mini",
editor_model_name="openrouter/openai/gpt-4o",
editor_edit_format="editor-diff",
use_repo_map=True,
reminder="user",
use_system_prompt=False,
use_temperature=False,
streaming=False,
),
]
class Model:
def __init__(self, model, weak_model=None):
# Set defaults from ModelSettings
default_settings = ModelSettings(name="")
for field in fields(ModelSettings):
setattr(self, field.name, getattr(default_settings, field.name))
model_info_url = (
"https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
)
def get_model_flexible(model, content):
info = content.get(model, dict())
if info:
return info
pieces = model.split("/")
if len(pieces) == 2:
info = content.get(pieces[1])
if info and info.get("litellm_provider") == pieces[0]:
return info
return dict()
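A worked example of the fallback: an exact key wins; otherwise a "provider/model" name matches a bare "model" entry whose litellm_provider agrees. The metadata values are invented for illustration:
content = {"gpt-4o": {"litellm_provider": "openai", "max_input_tokens": 128000}}
get_model_flexible("gpt-4o", content)         # exact hit
get_model_flexible("openai/gpt-4o", content)  # same dict, via the provider check
get_model_flexible("azure/gpt-4o", content)   # provider mismatch -> {}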
def get_model_info(model):
if not litellm._lazy_module:
cache_dir = Path.home() / ".aider" / "caches"
cache_file = cache_dir / "model_prices_and_context_window.json"
try:
cache_dir.mkdir(parents=True, exist_ok=True)
use_cache = True
except OSError:
# If we can't create the cache directory, we'll skip using the cache
use_cache = False
if use_cache:
current_time = time.time()
cache_age = (
current_time - cache_file.stat().st_mtime if cache_file.exists() else float("inf")
)
if cache_age < 60 * 60 * 24:
try:
content = json.loads(cache_file.read_text())
res = get_model_flexible(model, content)
if res:
return res
except Exception as ex:
print(str(ex))
import requests
try:
response = requests.get(model_info_url, timeout=5)
if response.status_code == 200:
content = response.json()
if use_cache:
try:
cache_file.write_text(json.dumps(content, indent=4))
except OSError:
# If we can't write to the cache file, we'll just skip caching
pass
res = get_model_flexible(model, content)
if res:
return res
except Exception as ex:
print(str(ex))
# If all else fails, do it the slow way...
try:
info = litellm.get_model_info(model)
return info
except Exception:
return dict()
class Model(ModelSettings):
def __init__(self, model, weak_model=None, editor_model=None, editor_edit_format=None):
self.name = model
self.max_chat_history_tokens = 1024
self.weak_model = None
self.editor_model = None
self.info = self.get_model_info(model)
@ -426,23 +731,13 @@ class Model:
else:
self.get_weak_model(weak_model)
def get_model_info(self, model):
# Try and do this quickly, without triggering the litellm import
spec = importlib.util.find_spec("litellm")
if spec:
origin = Path(spec.origin)
fname = origin.parent / "model_prices_and_context_window_backup.json"
if fname.exists():
data = json.loads(fname.read_text())
info = data.get(model)
if info:
return info
if editor_model is False:
self.editor_model_name = None
else:
self.get_editor_model(editor_model, editor_edit_format)
# Do it the slow way...
try:
return litellm.get_model_info(model)
except Exception:
return dict()
def get_model_info(self, model):
return get_model_info(model)
def configure_model_settings(self, model):
for ms in MODEL_SETTINGS:
@ -475,12 +770,18 @@ class Model:
return # <--
if "gpt-3.5" in model or "gpt-4" in model:
self.reminder_as_sys_msg = True
self.reminder = "sys"
if "3.5-sonnet" in model or "3-5-sonnet" in model:
self.edit_format = "diff"
self.use_repo_map = True
self.examples_as_sys_msg = True
self.reminder = "user"
if model.startswith("o1-") or "/o1-" in model:
self.use_system_prompt = False
self.use_temperature = False
self.streaming = False
# use the defaults
if self.edit_format == "diff":
@ -511,6 +812,26 @@ class Model:
def commit_message_models(self):
return [self.weak_model, self]
def get_editor_model(self, provided_editor_model_name, editor_edit_format):
# If editor_model_name is provided, override the model settings
if provided_editor_model_name:
self.editor_model_name = provided_editor_model_name
if editor_edit_format:
self.editor_edit_format = editor_edit_format
if not self.editor_model_name or self.editor_model_name == self.name:
self.editor_model = self
else:
self.editor_model = Model(
self.editor_model_name,
editor_model=False,
)
if not self.editor_edit_format:
self.editor_edit_format = self.editor_model.edit_format
return self.editor_model
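For example, per the o1-preview settings above, the defaults resolve like this (a sketch only; constructing a Model may consult the litellm metadata):
m = Model("o1-preview")
m.edit_format          # "architect"
m.editor_model.name    # "gpt-4o"
m.editor_edit_format   # "editor-diff"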
def tokenizer(self, text):
return litellm.encode(model=self.name, text=text)
@ -530,7 +851,11 @@ class Model:
else:
msgs = json.dumps(messages)
return len(self.tokenizer(msgs))
try:
return len(self.tokenizer(msgs))
except Exception as err:
print(f"Unable to count tokens: {err}")
return 0
def token_count_for_image(self, fname):
"""
@ -645,7 +970,8 @@ def register_litellm_models(model_fnames):
try:
with open(model_fname, "r") as model_def_file:
model_def = json.load(model_def_file)
model_def = json5.load(model_def_file)
litellm._load_litellm()
litellm.register_model(model_def)
except Exception as e:
raise Exception(f"Error loading model definition from {model_fname}: {e}")
@ -666,9 +992,21 @@ def validate_variables(vars):
def sanity_check_models(io, main_model):
sanity_check_model(io, main_model)
problem_main = sanity_check_model(io, main_model)
problem_weak = None
if main_model.weak_model and main_model.weak_model is not main_model:
sanity_check_model(io, main_model.weak_model)
problem_weak = sanity_check_model(io, main_model.weak_model)
problem_editor = None
if (
main_model.editor_model
and main_model.editor_model is not main_model
and main_model.editor_model is not main_model.weak_model
):
problem_editor = sanity_check_model(io, main_model.editor_model)
return problem_main or problem_weak or problem_editor
def sanity_check_model(io, model):
@ -676,9 +1014,11 @@ def sanity_check_model(io, model):
if model.missing_keys:
show = True
io.tool_error(f"Model {model}: Missing these environment variables:")
io.tool_warning(f"Warning: {model} expects these environment variables")
for key in model.missing_keys:
io.tool_error(f"- {key}")
value = os.environ.get(key, "")
status = "Set" if value else "Not set"
io.tool_output(f"- {key}: {status}")
if platform.system() == "Windows" or True:
io.tool_output(
@ -688,12 +1028,12 @@ def sanity_check_model(io, model):
elif not model.keys_in_environment:
show = True
io.tool_output(f"Model {model}: Unknown which environment variables are required.")
io.tool_warning(f"Warning for {model}: Unknown which environment variables are required.")
if not model.info:
show = True
io.tool_output(
f"Model {model}: Unknown context window size and costs, using sane defaults."
io.tool_warning(
f"Warning for {model}: Unknown context window size and costs, using sane defaults."
)
possible_matches = fuzzy_match_models(model.name)
@ -703,7 +1043,9 @@ def sanity_check_model(io, model):
io.tool_output(f"- {match}")
if show:
io.tool_output(f"For more info, see: {urls.model_warnings}\n")
io.tool_output(f"For more info, see: {urls.model_warnings}")
return show
def fuzzy_match_models(name):
@ -755,20 +1097,37 @@ def print_matching_models(io, search):
io.tool_output(f'No models match "{search}".')
def get_model_settings_as_yaml():
import yaml
model_settings_list = []
for ms in MODEL_SETTINGS:
model_settings_dict = {
field.name: getattr(ms, field.name) for field in fields(ModelSettings)
}
model_settings_list.append(model_settings_dict)
return yaml.dump(model_settings_list, default_flow_style=False)
def main():
if len(sys.argv) != 2:
print("Usage: python models.py <model_name>")
if len(sys.argv) < 2:
print("Usage: python models.py <model_name> or python models.py --yaml")
sys.exit(1)
model_name = sys.argv[1]
matching_models = fuzzy_match_models(model_name)
if matching_models:
print(f"Matching models for '{model_name}':")
for model in matching_models:
print(model)
if sys.argv[1] == "--yaml":
yaml_string = get_model_settings_as_yaml()
print(yaml_string)
else:
print(f"No matching models found for '{model_name}'.")
model_name = sys.argv[1]
matching_models = fuzzy_match_models(model_name)
if matching_models:
print(f"Matching models for '{model_name}':")
for model in matching_models:
print(model)
else:
print(f"No matching models found for '{model_name}'.")
if __name__ == "__main__":

View file

@ -5,14 +5,21 @@
# Conventional Commits text adapted from:
# https://www.conventionalcommits.org/en/v1.0.0/#summary
commit_system = """You are an expert software engineer.
commit_system = """You are an expert software engineer that generates concise, \
one-line Git commit messages based on the provided diffs.
Review the provided context and diffs which are about to be committed to a git repo.
Review the diffs carefully.
Generate a commit message for those changes.
The commit message MUST use the imperative tense.
Generate a one-line commit message for those changes.
The commit message should be structured as follows: <type>: <description>
Use these for <type>: fix, feat, build, chore, ci, docs, style, refactor, perf, test
Reply with JUST the commit message, without quotes, comments, questions, etc!
Ensure the commit message:
- Starts with the appropriate prefix.
- Is in the imperative mood (e.g., \"Add feature\" not \"Added feature\" or \"Adding feature\").
- Does not exceed 72 characters.
Reply only with the one-line commit message, without any additional text, explanations, \
or line breaks.
"""
# COMMANDS

View file

@ -10,6 +10,16 @@ from aider.sendchat import simple_send_with_retries
from .dump import dump # noqa: F401
ANY_GIT_ERROR = (
git.exc.ODBError,
git.exc.GitError,
OSError,
IndexError,
BufferError,
TypeError,
ValueError,
)
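The intended catch-all pattern, mirroring the commit() method further below (sketch only):
try:
    repo.git.commit(cmd)
except ANY_GIT_ERROR as err:
    io.tool_error(f"Unable to commit: {err}")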
class GitRepo:
repo = None
@ -19,6 +29,7 @@ class GitRepo:
aider_ignore_last_check = 0
subtree_only = False
ignore_file_cache = {}
git_repo_error = None
def __init__(
self,
@ -67,9 +78,7 @@ class GitRepo:
repo_path = git.Repo(fname, search_parent_directories=True).working_dir
repo_path = utils.safe_abs_path(repo_path)
repo_paths.append(repo_path)
except git.exc.InvalidGitRepositoryError:
pass
except git.exc.NoSuchPathError:
except ANY_GIT_ERROR:
pass
num_repos = len(set(repo_paths))
@ -116,7 +125,10 @@ class GitRepo:
if fnames:
fnames = [str(self.abs_root_path(fn)) for fn in fnames]
for fname in fnames:
self.repo.git.add(fname)
try:
self.repo.git.add(fname)
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to add {fname}: {err}")
cmd += ["--"] + fnames
else:
cmd += ["-a"]
@ -132,25 +144,27 @@ class GitRepo:
original_author_name_env = os.environ.get("GIT_AUTHOR_NAME")
os.environ["GIT_AUTHOR_NAME"] = committer_name
self.repo.git.commit(cmd)
commit_hash = self.repo.head.commit.hexsha[:7]
self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True)
try:
self.repo.git.commit(cmd)
commit_hash = self.get_head_commit_sha(short=True)
self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True)
return commit_hash, commit_message
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to commit: {err}")
finally:
# Restore the env
# Restore the env
if self.attribute_committer:
if original_committer_name_env is not None:
os.environ["GIT_COMMITTER_NAME"] = original_committer_name_env
else:
del os.environ["GIT_COMMITTER_NAME"]
if self.attribute_committer:
if original_committer_name_env is not None:
os.environ["GIT_COMMITTER_NAME"] = original_committer_name_env
else:
del os.environ["GIT_COMMITTER_NAME"]
if aider_edits and self.attribute_author:
if original_author_name_env is not None:
os.environ["GIT_AUTHOR_NAME"] = original_author_name_env
else:
del os.environ["GIT_AUTHOR_NAME"]
return commit_hash, commit_message
if aider_edits and self.attribute_author:
if original_author_name_env is not None:
os.environ["GIT_AUTHOR_NAME"] = original_author_name_env
else:
del os.environ["GIT_AUTHOR_NAME"]
def get_rel_repo_dir(self):
try:
@ -178,7 +192,9 @@ class GitRepo:
max_tokens = model.info.get("max_input_tokens") or 0
if max_tokens and num_tokens > max_tokens:
continue
commit_message = simple_send_with_retries(model.name, messages)
commit_message = simple_send_with_retries(
model.name, messages, extra_params=model.extra_params
)
if commit_message:
break
@ -201,9 +217,9 @@ class GitRepo:
try:
commits = self.repo.iter_commits(active_branch)
current_branch_has_commits = any(commits)
except git.exc.GitCommandError:
except ANY_GIT_ERROR:
pass
except TypeError:
except (TypeError,) + ANY_GIT_ERROR:
pass
if not fnames:
@ -214,18 +230,21 @@ class GitRepo:
if not self.path_in_repo(fname):
diffs += f"Added {fname}\n"
if current_branch_has_commits:
args = ["HEAD", "--"] + list(fnames)
diffs += self.repo.git.diff(*args)
try:
if current_branch_has_commits:
args = ["HEAD", "--"] + list(fnames)
diffs += self.repo.git.diff(*args)
return diffs
wd_args = ["--"] + list(fnames)
index_args = ["--cached"] + wd_args
diffs += self.repo.git.diff(*index_args)
diffs += self.repo.git.diff(*wd_args)
return diffs
wd_args = ["--"] + list(fnames)
index_args = ["--cached"] + wd_args
diffs += self.repo.git.diff(*index_args)
diffs += self.repo.git.diff(*wd_args)
return diffs
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to diff: {err}")
def diff_commits(self, pretty, from_commit, to_commit):
args = []
@ -247,15 +266,26 @@ class GitRepo:
commit = self.repo.head.commit
except ValueError:
commit = None
except ANY_GIT_ERROR as err:
self.git_repo_error = err
self.io.tool_error(f"Unable to list files in git repo: {err}")
self.io.tool_output("Is your git repo corrupted?")
return []
files = set()
if commit:
if commit in self.tree_files:
files = self.tree_files[commit]
else:
for blob in commit.tree.traverse():
if blob.type == "blob": # blob is a file
files.add(blob.path)
try:
for blob in commit.tree.traverse():
if blob.type == "blob": # blob is a file
files.add(blob.path)
except ANY_GIT_ERROR as err:
self.git_repo_error = err
self.io.tool_error(f"Unable to list files in git repo: {err}")
self.io.tool_output("Is your git repo corrupted?")
return []
files = set(self.normalize_path(path) for path in files)
self.tree_files[commit] = set(files)
@ -314,7 +344,14 @@ class GitRepo:
def ignored_file_raw(self, fname):
if self.subtree_only:
fname_path = Path(self.normalize_path(fname))
cwd_path = Path.cwd().resolve().relative_to(Path(self.root).resolve())
try:
cwd_path = Path.cwd().resolve().relative_to(Path(self.root).resolve())
except ValueError:
# Issue #1524
# ValueError: 'C:\\dev\\squid-certbot' is not in the subpath of
# 'C:\\dev\\squid-certbot'
# Clearly, fname is not under cwd... so ignore it
return True
if cwd_path not in fname_path.parents and fname_path != cwd_path:
return True
@ -332,6 +369,8 @@ class GitRepo:
def path_in_repo(self, path):
if not self.repo:
return
if not path:
return
tracked_files = set(self.get_tracked_files())
return self.normalize_path(path) in tracked_files
@ -363,8 +402,22 @@ class GitRepo:
return self.repo.is_dirty(path=path)
def get_head(self):
def get_head_commit(self):
try:
return self.repo.head.commit.hexsha
except ValueError:
return self.repo.head.commit
except (ValueError,) + ANY_GIT_ERROR:
return None
def get_head_commit_sha(self, short=False):
commit = self.get_head_commit()
if not commit:
return
if short:
return commit.hexsha[:7]
return commit.hexsha
def get_head_commit_message(self, default=None):
commit = self.get_head_commit()
if not commit:
return default
return commit.message

View file

@ -2,7 +2,10 @@ import colorsys
import math
import os
import random
import shutil
import sqlite3
import sys
import time
import warnings
from collections import Counter, defaultdict, namedtuple
from importlib import resources
@ -12,10 +15,10 @@ from diskcache import Cache
from grep_ast import TreeContext, filename_to_lang
from pygments.lexers import guess_lexer_for_filename
from pygments.token import Token
from pygments.util import ClassNotFound
from tqdm import tqdm
from aider.dump import dump
from aider.special import filter_important_files
from aider.utils import Spinner
# tree_sitter is throwing a FutureWarning
@ -25,6 +28,9 @@ from tree_sitter_languages import get_language, get_parser # noqa: E402
Tag = namedtuple("Tag", "rel_fname fname line name kind".split())
SQLITE_ERRORS = (sqlite3.OperationalError, sqlite3.DatabaseError)
class RepoMap:
CACHE_VERSION = 3
TAGS_CACHE_DIR = f".aider.tags.cache.v{CACHE_VERSION}"
@ -41,9 +47,11 @@ class RepoMap:
verbose=False,
max_context_window=None,
map_mul_no_files=8,
refresh="auto",
):
self.io = io
self.verbose = verbose
self.refresh = refresh
if not root:
root = os.getcwd()
@ -62,6 +70,14 @@ class RepoMap:
self.tree_cache = {}
self.tree_context_cache = {}
self.map_cache = {}
self.map_processing_time = 0
self.last_map = None
if self.verbose:
self.io.tool_output(
f"RepoMap initialized with map_mul_no_files: {self.map_mul_no_files}"
)
def token_count(self, text):
len_text = len(text)
@ -77,7 +93,14 @@ class RepoMap:
est_tokens = sample_tokens / len(sample_text) * len_text
return est_tokens
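A minimal standalone sketch of the same sampling idea; the real sample size lives in the elided lines above, sample_lines=10 is an assumption, and a non-empty text is assumed:
import random

def estimate_tokens(text, count_tokens, sample_lines=10):
    # Extrapolate whole-text token cost from a random sample of lines.
    lines = text.splitlines(keepends=True)
    sample = "".join(random.sample(lines, min(sample_lines, len(lines))))
    return count_tokens(sample) / len(sample) * len(text)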
def get_repo_map(self, chat_files, other_files, mentioned_fnames=None, mentioned_idents=None):
def get_repo_map(
self,
chat_files,
other_files,
mentioned_fnames=None,
mentioned_idents=None,
force_refresh=False,
):
if self.max_map_tokens <= 0:
return
if not other_files:
@ -93,7 +116,7 @@ class RepoMap:
padding = 4096
if max_map_tokens and self.max_context_window:
target = min(
max_map_tokens * self.map_mul_no_files,
int(max_map_tokens * self.map_mul_no_files),
self.max_context_window - padding,
)
else:
@ -103,7 +126,12 @@ class RepoMap:
try:
files_listing = self.get_ranked_tags_map(
chat_files, other_files, max_map_tokens, mentioned_fnames, mentioned_idents
chat_files,
other_files,
max_map_tokens,
mentioned_fnames,
mentioned_idents,
force_refresh,
)
except RecursionError:
self.io.tool_error("Disabling repo map, git repo too large?")
@ -132,17 +160,59 @@ class RepoMap:
return repo_content
def get_rel_fname(self, fname):
return os.path.relpath(fname, self.root)
try:
return os.path.relpath(fname, self.root)
except ValueError:
# Issue #1288: ValueError: path is on mount 'C:', start on mount 'D:'
# Just return the full fname.
return fname
def split_path(self, path):
path = os.path.relpath(path, self.root)
return [path + ":"]
def tags_cache_error(self, original_error=None):
"""Handle SQLite errors by trying to recreate cache, falling back to dict if needed"""
if self.verbose and original_error:
self.io.tool_warning(f"Tags cache error: {str(original_error)}")
if isinstance(getattr(self, "TAGS_CACHE", None), dict):
return
path = Path(self.root) / self.TAGS_CACHE_DIR
# Try to recreate the cache
try:
# Delete existing cache dir
if path.exists():
shutil.rmtree(path)
# Try to create new cache
new_cache = Cache(path)
# Test that it works
test_key = "test"
new_cache[test_key] = "test"
_ = new_cache[test_key]
del new_cache[test_key]
# If we got here, the new cache works
self.TAGS_CACHE = new_cache
return
except (SQLITE_ERRORS, OSError) as e:
# If anything goes wrong, warn and fall back to dict
self.io.tool_warning(
f"Unable to use tags cache at {path}, falling back to memory cache"
)
if self.verbose:
self.io.tool_warning(f"Cache recreation error: {str(e)}")
self.TAGS_CACHE = dict()
def load_tags_cache(self):
path = Path(self.root) / self.TAGS_CACHE_DIR
if not path.exists():
self.cache_missing = True
self.TAGS_CACHE = Cache(path)
try:
self.TAGS_CACHE = Cache(path)
except SQLITE_ERRORS as e:
self.tags_cache_error(e)
def save_tags_cache(self):
pass
@ -151,7 +221,7 @@ class RepoMap:
try:
return os.path.getmtime(fname)
except FileNotFoundError:
self.io.tool_error(f"File not found error: {fname}")
self.io.tool_warning(f"File not found error: {fname}")
def get_tags(self, fname, rel_fname):
# Check if the file is in the cache and if the modification time has not changed
@ -160,15 +230,30 @@ class RepoMap:
return []
cache_key = fname
if cache_key in self.TAGS_CACHE and self.TAGS_CACHE[cache_key]["mtime"] == file_mtime:
return self.TAGS_CACHE[cache_key]["data"]
try:
val = self.TAGS_CACHE.get(cache_key) # Issue #1308
except SQLITE_ERRORS as e:
self.tags_cache_error(e)
val = self.TAGS_CACHE.get(cache_key)
if val is not None and val.get("mtime") == file_mtime:
try:
return self.TAGS_CACHE[cache_key]["data"]
except SQLITE_ERRORS as e:
self.tags_cache_error(e)
return self.TAGS_CACHE[cache_key]["data"]
# miss!
data = list(self.get_tags_raw(fname, rel_fname))
# Update the cache
self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}
self.save_tags_cache()
try:
self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}
self.save_tags_cache()
except SQLITE_ERRORS as e:
self.tags_cache_error(e)
self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}
return data
def get_tags_raw(self, fname, rel_fname):
@ -176,8 +261,12 @@ class RepoMap:
if not lang:
return
language = get_language(lang)
parser = get_parser(lang)
try:
language = get_language(lang)
parser = get_parser(lang)
except Exception as err:
print(f"Skipping file {fname}: {err}")
return
query_scm = get_scm_fname(lang)
if not query_scm.exists():
@ -227,7 +316,8 @@ class RepoMap:
try:
lexer = guess_lexer_for_filename(fname, code)
except ClassNotFound:
except Exception:  # can raise more than ClassNotFound, e.g. a bad ref to the deprecated time.clock on Windows
# self.io.tool_error(f"Error lexing {fname}")
return
tokens = list(lexer.get_tokens(code))
@ -262,7 +352,13 @@ class RepoMap:
# https://networkx.org/documentation/stable/_modules/networkx/algorithms/link_analysis/pagerank_alg.html#pagerank
personalize = 100 / len(fnames)
if len(fnames) - len(self.TAGS_CACHE) > 100:
try:
cache_size = len(self.TAGS_CACHE)
except SQLITE_ERRORS as e:
self.tags_cache_error(e)
cache_size = len(self.TAGS_CACHE)
if len(fnames) - cache_size > 100:
self.io.tool_output(
"Initial repo scan can be slow in larger repos, but only happens once."
)
@ -275,16 +371,18 @@ class RepoMap:
if progress and not showing_bar:
progress()
if not Path(fname).is_file():
if fname not in self.warned_files:
if Path(fname).exists():
self.io.tool_error(
f"Repo-map can't include {fname}, it is not a normal file"
)
else:
self.io.tool_error(f"Repo-map can't include {fname}, it no longer exists")
try:
file_ok = Path(fname).is_file()
except OSError:
file_ok = False
self.warned_files.add(fname)
if not file_ok:
if fname not in self.warned_files:
self.io.tool_warning(f"Repo-map can't include {fname}")
self.io.tool_output(
"Has it been deleted from the file system but not from git?"
)
self.warned_files.add(fname)
continue
# dump(fname)
@ -356,7 +454,11 @@ class RepoMap:
try:
ranked = nx.pagerank(G, weight="weight", **pers_args)
except ZeroDivisionError:
return []
# Issue #1536
try:
ranked = nx.pagerank(G, weight="weight")
except ZeroDivisionError:
return []
# distribute the rank from each source node, across all of its out edges
ranked_definitions = defaultdict(float)
@ -373,7 +475,9 @@ class RepoMap:
ranked_definitions[(dst, ident)] += data["rank"]
ranked_tags = []
ranked_definitions = sorted(ranked_definitions.items(), reverse=True, key=lambda x: x[1])
ranked_definitions = sorted(
ranked_definitions.items(), reverse=True, key=lambda x: (x[1], x[0])
)
# dump(ranked_definitions)
@ -406,6 +510,59 @@ class RepoMap:
max_map_tokens=None,
mentioned_fnames=None,
mentioned_idents=None,
force_refresh=False,
):
# Create a cache key
cache_key = [
tuple(sorted(chat_fnames)) if chat_fnames else None,
tuple(sorted(other_fnames)) if other_fnames else None,
max_map_tokens,
]
if self.refresh == "auto":
cache_key += [
tuple(sorted(mentioned_fnames)) if mentioned_fnames else None,
tuple(sorted(mentioned_idents)) if mentioned_idents else None,
]
cache_key = tuple(cache_key)
use_cache = False
if not force_refresh:
if self.refresh == "manual" and self.last_map:
return self.last_map
if self.refresh == "always":
use_cache = False
elif self.refresh == "files":
use_cache = True
elif self.refresh == "auto":
use_cache = self.map_processing_time > 1.0
# Check if the result is in the cache
if use_cache and cache_key in self.map_cache:
return self.map_cache[cache_key]
# If not in cache or force_refresh is True, generate the map
start_time = time.time()
result = self.get_ranked_tags_map_uncached(
chat_fnames, other_fnames, max_map_tokens, mentioned_fnames, mentioned_idents
)
end_time = time.time()
self.map_processing_time = end_time - start_time
# Store the result in the cache
self.map_cache[cache_key] = result
self.last_map = result
return result
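In summary: "manual" reuses the last map until a forced refresh, "always" recomputes every time, "files" caches on the file set alone, and "auto" serves the cache only once map generation has cost more than a second. A rough usage sketch, with constructor keyword arguments assumed:
rm = RepoMap(root=".", main_model=model, io=io, refresh="auto")
rm.get_repo_map(chat_files, other_files)  # computed and timed
rm.get_repo_map(chat_files, other_files)  # cached iff the first call took > 1s
rm.get_repo_map(chat_files, other_files, force_refresh=True)  # always recomputed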
def get_ranked_tags_map_uncached(
self,
chat_fnames,
other_fnames=None,
max_map_tokens=None,
mentioned_fnames=None,
mentioned_idents=None,
):
if not other_fnames:
other_fnames = list()
@ -416,7 +573,7 @@ class RepoMap:
if not mentioned_idents:
mentioned_idents = set()
spin = Spinner("Preparing repo map")
spin = Spinner("Updating repo map")
ranked_tags = self.get_ranked_tags(
chat_fnames,
@ -426,6 +583,14 @@ class RepoMap:
progress=spin.step,
)
other_rel_fnames = sorted(set(self.get_rel_fname(fname) for fname in other_fnames))
special_fnames = filter_important_files(other_rel_fnames)
ranked_tags_fnames = set(tag[0] for tag in ranked_tags)
special_fnames = [fn for fn in special_fnames if fn not in ranked_tags_fnames]
special_fnames = [(fn,) for fn in special_fnames]
ranked_tags = special_fnames + ranked_tags
spin.step()
num_tags = len(ranked_tags)
@ -469,12 +634,16 @@ class RepoMap:
tree_cache = dict()
def render_tree(self, abs_fname, rel_fname, lois):
key = (rel_fname, tuple(sorted(lois)))
mtime = self.get_mtime(abs_fname)
key = (rel_fname, tuple(sorted(lois)), mtime)
if key in self.tree_cache:
return self.tree_cache[key]
if rel_fname not in self.tree_context_cache:
if (
rel_fname not in self.tree_context_cache
or self.tree_context_cache[rel_fname]["mtime"] != mtime
):
code = self.io.read_text(abs_fname) or ""
if not code.endswith("\n"):
code += "\n"
@ -492,9 +661,9 @@ class RepoMap:
# header_max=30,
show_top_of_file_parent_scope=False,
)
self.tree_context_cache[rel_fname] = context
self.tree_context_cache[rel_fname] = {"context": context, "mtime": mtime}
context = self.tree_context_cache[rel_fname]
context = self.tree_context_cache[rel_fname]["context"]
context.lines_of_interest = set()
context.add_lines_of_interest(lois)
context.add_context()

200
aider/report.py Normal file
View file

@ -0,0 +1,200 @@
import os
import platform
import subprocess
import sys
import traceback
import urllib.parse
import webbrowser
from aider import __version__
from aider.urls import github_issues
from aider.versioncheck import VERSION_CHECK_FNAME
FENCE = "`" * 3
def get_python_info():
implementation = platform.python_implementation()
is_venv = sys.prefix != sys.base_prefix
return (
f"Python implementation: {implementation}\nVirtual environment:"
f" {'Yes' if is_venv else 'No'}"
)
def get_os_info():
return f"OS: {platform.system()} {platform.release()} ({platform.architecture()[0]})"
def get_git_info():
try:
git_version = subprocess.check_output(["git", "--version"]).decode().strip()
return f"Git version: {git_version}"
except Exception:
return "Git information unavailable"
def report_github_issue(issue_text, title=None, confirm=True):
"""
Compose a URL to open a new GitHub issue with the given text prefilled,
and attempt to launch it in the default web browser.
:param issue_text: The text of the issue to file
:param title: The title of the issue (optional)
:param confirm: Whether to ask for confirmation before opening the browser (default: True)
:return: None
"""
version_info = f"Aider version: {__version__}\n"
python_version = f"Python version: {sys.version.split()[0]}\n"
platform_info = f"Platform: {platform.platform()}\n"
python_info = get_python_info() + "\n"
os_info = get_os_info() + "\n"
git_info = get_git_info() + "\n"
system_info = (
version_info + python_version + platform_info + python_info + os_info + git_info + "\n"
)
issue_text = system_info + issue_text
params = {"body": issue_text}
if title is None:
title = "Bug report"
params["title"] = title
issue_url = f"{github_issues}?{urllib.parse.urlencode(params)}"
if confirm:
print(f"\n# {title}\n")
print(issue_text.strip())
print()
print("Please consider reporting this bug to help improve aider!")
prompt = "Open a GitHub Issue pre-filled with the above error in your browser? (Y/n) "
confirmation = input(prompt).strip().lower()
yes = not confirmation or confirmation.startswith("y")
if not yes:
return
print("Attempting to open the issue URL in your default web browser...")
try:
if webbrowser.open(issue_url):
print("Browser window should be opened.")
except Exception:
pass
if confirm:
print()
print()
print("You can also use this URL to file the GitHub Issue:")
print()
print(issue_url)
print()
print()
def exception_handler(exc_type, exc_value, exc_traceback):
# If it's a KeyboardInterrupt, just call the default handler
if issubclass(exc_type, KeyboardInterrupt):
return sys.__excepthook__(exc_type, exc_value, exc_traceback)
# We don't want any more exceptions
sys.excepthook = None
# Check if VERSION_CHECK_FNAME exists and delete it if so
try:
if VERSION_CHECK_FNAME.exists():
VERSION_CHECK_FNAME.unlink()
except Exception:
pass # Swallow any errors
# Format the traceback
tb_lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
# Replace full paths with basenames in the traceback
tb_lines_with_basenames = []
for line in tb_lines:
try:
if "File " in line:
parts = line.split('"')
if len(parts) > 1:
full_path = parts[1]
basename = os.path.basename(full_path)
line = line.replace(full_path, basename)
except Exception:
pass
tb_lines_with_basenames.append(line)
tb_text = "".join(tb_lines_with_basenames)
# Find the innermost frame
innermost_tb = exc_traceback
while innermost_tb.tb_next:
innermost_tb = innermost_tb.tb_next
# Get the filename and line number from the innermost frame
filename = innermost_tb.tb_frame.f_code.co_filename
line_number = innermost_tb.tb_lineno
try:
basename = os.path.basename(filename)
except Exception:
basename = filename
# Get the exception type name
exception_type = exc_type.__name__
# Prepare the issue text
issue_text = f"An uncaught exception occurred:\n\n{FENCE}\n{tb_text}\n{FENCE}"
# Prepare the title
title = f"Uncaught {exception_type} in {basename} line {line_number}"
# Report the issue
report_github_issue(issue_text, title=title)
# Call the default exception handler
sys.__excepthook__(exc_type, exc_value, exc_traceback)
def report_uncaught_exceptions():
"""
Set up the global exception handler to report uncaught exceptions.
"""
sys.excepthook = exception_handler
def dummy_function1():
def dummy_function2():
def dummy_function3():
raise ValueError("boo")
dummy_function3()
dummy_function2()
def main():
report_uncaught_exceptions()
dummy_function1()
title = None
if len(sys.argv) > 2:
# Use the first command-line argument as the title and the second as the issue text
title = sys.argv[1]
issue_text = sys.argv[2]
elif len(sys.argv) > 1:
# Use the first command-line argument as the issue text
issue_text = sys.argv[1]
else:
# Read from stdin if no argument is provided
print("Enter the issue title (optional, press Enter to skip):")
title = input().strip()
if not title:
title = None
print("Enter the issue text (Ctrl+D to finish):")
issue_text = sys.stdin.read().strip()
report_github_issue(issue_text, title)
if __name__ == "__main__":
main()
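For reference, a hypothetical call to the `report_github_issue` helper defined above (the issue body and title are made up):

```python
from aider.report import report_github_issue

report_github_issue(
    "Steps to reproduce:\n1. Run aider\n2. ...",  # hypothetical issue body
    title="Example bug report",
    confirm=True,  # ask before opening the browser
)
```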

3
aider/resources/__init__.py Normal file
View file

@@ -0,0 +1,3 @@
# This ensures that importlib_resources.files("aider.resources")
# doesn't raise ImportError, even if there are no other files in this
# dir.

131
aider/run_cmd.py Normal file
View file

@@ -0,0 +1,131 @@
import os
import platform
import subprocess
import sys
from io import BytesIO
import pexpect
import psutil
def run_cmd(command, verbose=False, error_print=None):
try:
if sys.stdin.isatty() and hasattr(pexpect, "spawn") and platform.system() != "Windows":
return run_cmd_pexpect(command, verbose)
return run_cmd_subprocess(command, verbose)
except OSError as e:
error_message = f"Error occurred while running command '{command}': {str(e)}"
if error_print is None:
print(error_message)
else:
error_print(error_message)
return 1, error_message
def get_windows_parent_process_name():
try:
current_process = psutil.Process()
while True:
parent = current_process.parent()
if parent is None:
break
parent_name = parent.name().lower()
if parent_name in ["powershell.exe", "cmd.exe"]:
return parent_name
current_process = parent
return None
except Exception:
return None
def run_cmd_subprocess(command, verbose=False):
if verbose:
print("Using run_cmd_subprocess:", command)
try:
shell = os.environ.get("SHELL", "/bin/sh")
parent_process = None
# Determine the appropriate shell
if platform.system() == "Windows":
parent_process = get_windows_parent_process_name()
if parent_process == "powershell.exe":
command = f"powershell -Command {command}"
if verbose:
print("Running command:", command)
print("SHELL:", shell)
if platform.system() == "Windows":
print("Parent process:", parent_process)
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
shell=True,
encoding=sys.stdout.encoding,
errors="replace",
bufsize=0, # Set bufsize to 0 for unbuffered output
universal_newlines=True,
)
output = []
while True:
chunk = process.stdout.read(1)
if not chunk:
break
print(chunk, end="", flush=True) # Print the chunk in real-time
output.append(chunk) # Store the chunk for later use
process.wait()
return process.returncode, "".join(output)
except Exception as e:
return 1, str(e)
def run_cmd_pexpect(command, verbose=False):
"""
Run a shell command interactively using pexpect, capturing all output.
:param command: The command to run as a string.
:param verbose: If True, print output in real-time.
:return: A tuple containing (exit_status, output)
"""
if verbose:
print("Using run_cmd_pexpect:", command)
output = BytesIO()
def output_callback(b):
output.write(b)
return b
try:
# Use the SHELL environment variable, falling back to /bin/sh if not set
shell = os.environ.get("SHELL", "/bin/sh")
if verbose:
print("With shell:", shell)
if os.path.exists(shell):
# Use the shell from SHELL environment variable
if verbose:
print("Running pexpect.spawn with shell:", shell)
child = pexpect.spawn(shell, args=["-c", command], encoding="utf-8")
else:
# Fall back to spawning the command directly
if verbose:
print("Running pexpect.spawn without shell.")
child = pexpect.spawn(command, encoding="utf-8")
# Transfer control to the user, capturing output
child.interact(output_filter=output_callback)
# Wait for the command to finish and get the exit status
child.close()
return child.exitstatus, output.getvalue().decode("utf-8", errors="replace")
except (pexpect.ExceptionPexpect, TypeError, ValueError) as e:
error_msg = f"Error running command {command}: {e}"
return 1, error_msg
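A small usage sketch of `run_cmd` above: on a POSIX tty it routes through pexpect, otherwise through subprocess, and either way returns an `(exit_status, output)` tuple:

```python
from aider.run_cmd import run_cmd

exit_status, output = run_cmd("echo hello", verbose=True)
if exit_status != 0:
    print(f"Command failed ({exit_status}): {output}")
```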

View file

@@ -131,7 +131,9 @@ class Scraper:
# Internals...
def scrape_with_playwright(self, url):
import playwright
import playwright # noqa: F401
from playwright.sync_api import Error as PlaywrightError
from playwright.sync_api import TimeoutError as PlaywrightTimeoutError
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
@@ -156,18 +158,20 @@ class Scraper:
response = None
try:
response = page.goto(url, wait_until="networkidle", timeout=5000)
except playwright._impl._errors.TimeoutError:
except PlaywrightTimeoutError:
self.print_error(f"Timeout while loading {url}")
except playwright._impl._errors.Error as e:
except PlaywrightError as e:
self.print_error(f"Error navigating to {url}: {str(e)}")
return None, None
try:
content = page.content()
mime_type = (
response.header_value("content-type").split(";")[0] if response else None
)
except playwright._impl._errors.Error as e:
mime_type = None
if response:
content_type = response.header_value("content-type")
if content_type:
mime_type = content_type.split(";")[0]
except PlaywrightError as e:
self.print_error(f"Error retrieving page content: {str(e)}")
content = None
mime_type = None
@@ -181,7 +185,9 @@ class Scraper:
headers = {"User-Agent": f"Mozilla./5.0 ({aider_user_agent})"}
try:
with httpx.Client(headers=headers, verify=self.verify_ssl) as client:
with httpx.Client(
headers=headers, verify=self.verify_ssl, follow_redirects=True
) as client:
response = client.get(url)
response.raise_for_status()
return response.text, response.headers.get("content-type", "").split(";")[0]
@@ -220,7 +226,10 @@ class Scraper:
if not self.pandoc_available:
return page_source
md = pypandoc.convert_text(page_source, "markdown", format="html")
try:
md = pypandoc.convert_text(page_source, "markdown", format="html")
except OSError:
return page_source
md = re.sub(r"</div>", " ", md)
md = re.sub(r"<div>", " ", md)

View file

@@ -1,5 +1,6 @@
import hashlib
import json
import time
import backoff
@@ -13,21 +14,37 @@ CACHE_PATH = "~/.aider.send.cache.v1"
CACHE = None
# CACHE = Cache(CACHE_PATH)
RETRY_TIMEOUT = 60
def retry_exceptions():
import httpx
import openai
return (
# httpx
httpx.ConnectError,
httpx.RemoteProtocolError,
httpx.ReadTimeout,
litellm.exceptions.APIConnectionError,
litellm.exceptions.APIError,
litellm.exceptions.RateLimitError,
litellm.exceptions.ServiceUnavailableError,
litellm.exceptions.Timeout,
litellm.exceptions.InternalServerError,
litellm.llms.anthropic.AnthropicError,
#
# litellm exceptions inherit from openai exceptions
# https://docs.litellm.ai/docs/exception_mapping
#
# openai.BadRequestError,
# litellm.ContextWindowExceededError,
# litellm.ContentPolicyViolationError,
#
# openai.AuthenticationError,
# openai.PermissionDeniedError,
# openai.NotFoundError,
#
openai.APITimeoutError,
openai.UnprocessableEntityError,
openai.RateLimitError,
openai.APIConnectionError,
openai.APIError,
openai.APIStatusError,
openai.InternalServerError,
)
@@ -36,7 +53,7 @@ def lazy_litellm_retry_decorator(func):
decorated_func = backoff.on_exception(
backoff.expo,
retry_exceptions(),
max_time=60,
max_time=RETRY_TIMEOUT,
on_backoff=lambda details: print(
f"{details.get('exception', 'Exception')}\nRetry in {details['wait']:.1f} seconds."
),
@@ -47,23 +64,28 @@ def lazy_litellm_retry_decorator(func):
def send_completion(
model_name, messages, functions, stream, temperature=0, extra_headers=None, max_tokens=None
model_name,
messages,
functions,
stream,
temperature=0,
extra_params=None,
):
from aider.llm import litellm
kwargs = dict(
model=model_name,
messages=messages,
temperature=temperature,
stream=stream,
)
if temperature is not None:
kwargs["temperature"] = temperature
if functions is not None:
kwargs["functions"] = functions
if extra_headers is not None:
kwargs["extra_headers"] = extra_headers
if max_tokens is not None:
kwargs["max_tokens"] = max_tokens
function = functions[0]
kwargs["tools"] = [dict(type="function", function=function)]
kwargs["tool_choice"] = {"type": "function", "function": {"name": function["name"]}}
if extra_params is not None:
kwargs.update(extra_params)
key = json.dumps(kwargs, sort_keys=True).encode()
@@ -73,8 +95,6 @@ def send_completion(
if not stream and CACHE is not None and key in CACHE:
return hash_object, CACHE[key]
# del kwargs['stream']
res = litellm.completion(**kwargs)
if not stream and CACHE is not None:
@@ -83,15 +103,27 @@ def send_completion(
return hash_object, res
@lazy_litellm_retry_decorator
def simple_send_with_retries(model_name, messages):
try:
_hash, response = send_completion(
model_name=model_name,
messages=messages,
functions=None,
stream=False,
)
return response.choices[0].message.content
except (AttributeError, litellm.exceptions.BadRequestError):
return
def simple_send_with_retries(model_name, messages, extra_params=None):
retry_delay = 0.125
while True:
try:
kwargs = {
"model_name": model_name,
"messages": messages,
"functions": None,
"stream": False,
"extra_params": extra_params,
}
_hash, response = send_completion(**kwargs)
return response.choices[0].message.content
except retry_exceptions() as err:
print(str(err))
retry_delay *= 2
if retry_delay > RETRY_TIMEOUT:
break
print(f"Retrying in {retry_delay:.1f} seconds...")
time.sleep(retry_delay)
continue
except AttributeError:
return
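The new `simple_send_with_retries` replaces the `backoff` decorator with an explicit loop: the delay starts at 0.125s and doubles until it would exceed `RETRY_TIMEOUT` (60s). A standalone sketch of that loop, with a hypothetical `TransientError` standing in for the exceptions from `retry_exceptions()`:

```python
import time

RETRY_TIMEOUT = 60  # seconds

class TransientError(Exception):
    """Stand-in for the retryable exceptions listed in retry_exceptions()."""

def with_retries(do_request):
    retry_delay = 0.125
    while True:
        try:
            return do_request()
        except TransientError as err:
            print(str(err))
            retry_delay *= 2
            if retry_delay > RETRY_TIMEOUT:
                return  # give up, mirroring the loop above
            print(f"Retrying in {retry_delay:.1f} seconds...")
            time.sleep(retry_delay)
```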

202
aider/special.py Normal file
View file

@@ -0,0 +1,202 @@
import os
ROOT_IMPORTANT_FILES = [
# Version Control
".gitignore",
".gitattributes",
# Documentation
"README",
"README.md",
"README.txt",
"README.rst",
"CONTRIBUTING",
"CONTRIBUTING.md",
"CONTRIBUTING.txt",
"CONTRIBUTING.rst",
"LICENSE",
"LICENSE.md",
"LICENSE.txt",
"CHANGELOG",
"CHANGELOG.md",
"CHANGELOG.txt",
"CHANGELOG.rst",
"SECURITY",
"SECURITY.md",
"SECURITY.txt",
"CODEOWNERS",
# Package Management and Dependencies
"requirements.txt",
"Pipfile",
"Pipfile.lock",
"pyproject.toml",
"setup.py",
"setup.cfg",
"package.json",
"package-lock.json",
"yarn.lock",
"npm-shrinkwrap.json",
"Gemfile",
"Gemfile.lock",
"composer.json",
"composer.lock",
"pom.xml",
"build.gradle",
"build.sbt",
"go.mod",
"go.sum",
"Cargo.toml",
"Cargo.lock",
"mix.exs",
"rebar.config",
"project.clj",
"Podfile",
"Cartfile",
"dub.json",
"dub.sdl",
# Configuration and Settings
".env",
".env.example",
".editorconfig",
"tsconfig.json",
"jsconfig.json",
".babelrc",
"babel.config.js",
".eslintrc",
".eslintignore",
".prettierrc",
".stylelintrc",
"tslint.json",
".pylintrc",
".flake8",
".rubocop.yml",
".scalafmt.conf",
".dockerignore",
".gitpod.yml",
"sonar-project.properties",
"renovate.json",
"dependabot.yml",
".pre-commit-config.yaml",
"mypy.ini",
"tox.ini",
".yamllint",
"pyrightconfig.json",
# Build and Compilation
"webpack.config.js",
"rollup.config.js",
"parcel.config.js",
"gulpfile.js",
"Gruntfile.js",
"build.xml",
"build.boot",
"project.json",
"build.cake",
"MANIFEST.in",
# Testing
"pytest.ini",
"phpunit.xml",
"karma.conf.js",
"jest.config.js",
"cypress.json",
".nycrc",
".nycrc.json",
# CI/CD
".travis.yml",
".gitlab-ci.yml",
"Jenkinsfile",
"azure-pipelines.yml",
"bitbucket-pipelines.yml",
"appveyor.yml",
"circle.yml",
".circleci/config.yml",
".github/dependabot.yml",
"codecov.yml",
".coveragerc",
# Docker and Containers
"Dockerfile",
"docker-compose.yml",
"docker-compose.override.yml",
# Cloud and Serverless
"serverless.yml",
"firebase.json",
"now.json",
"netlify.toml",
"vercel.json",
"app.yaml",
"terraform.tf",
"main.tf",
"cloudformation.yaml",
"cloudformation.json",
"ansible.cfg",
"kubernetes.yaml",
"k8s.yaml",
# Database
"schema.sql",
"liquibase.properties",
"flyway.conf",
# Framework-specific
"next.config.js",
"nuxt.config.js",
"vue.config.js",
"angular.json",
"gatsby-config.js",
"gridsome.config.js",
# API Documentation
"swagger.yaml",
"swagger.json",
"openapi.yaml",
"openapi.json",
# Development environment
".nvmrc",
".ruby-version",
".python-version",
"Vagrantfile",
# Quality and metrics
".codeclimate.yml",
"codecov.yml",
# Documentation
"mkdocs.yml",
"_config.yml",
"book.toml",
"readthedocs.yml",
".readthedocs.yaml",
# Package registries
".npmrc",
".yarnrc",
# Linting and formatting
".isort.cfg",
".markdownlint.json",
".markdownlint.yaml",
# Security
".bandit",
".secrets.baseline",
# Misc
".pypirc",
".gitkeep",
".npmignore",
]
# Normalize the lists once
NORMALIZED_ROOT_IMPORTANT_FILES = set(os.path.normpath(path) for path in ROOT_IMPORTANT_FILES)
def is_important(file_path):
file_name = os.path.basename(file_path)
dir_name = os.path.normpath(os.path.dirname(file_path))
normalized_path = os.path.normpath(file_path)
# Check for GitHub Actions workflow files
if dir_name == os.path.normpath(".github/workflows") and file_name.endswith(".yml"):
return True
return normalized_path in NORMALIZED_ROOT_IMPORTANT_FILES
def filter_important_files(file_paths):
"""
Filter a list of file paths to return only those that are commonly important in codebases.
:param file_paths: List of file paths to check
:return: List of file paths that match important file patterns
"""
return list(filter(is_important, file_paths))
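A quick usage sketch of `filter_important_files` above (the paths are hypothetical):

```python
from aider.special import filter_important_files

paths = ["src/app.py", "README.md", ".github/workflows/ci.yml", "notes.txt"]
print(filter_important_files(paths))
# -> ['README.md', '.github/workflows/ci.yml']
```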

View file

@@ -8,3 +8,5 @@ model_warnings = "https://aider.chat/docs/llms/warnings.html"
token_limits = "https://aider.chat/docs/troubleshooting/token-limits.html"
llms = "https://aider.chat/docs/llms.html"
large_repos = "https://aider.chat/docs/faq.html#can-i-use-aider-in-a-large-mono-repo"
github_issues = "https://github.com/Aider-AI/aider/issues/new"
git_index_version = "https://github.com/Aider-AI/aider/issues/211"

View file

@@ -1,5 +1,8 @@
import itertools
import os
import platform
import shlex
import shutil
import subprocess
import sys
import tempfile
@@ -15,7 +18,10 @@ IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".webp"}
class IgnorantTemporaryDirectory:
def __init__(self):
self.temp_dir = tempfile.TemporaryDirectory()
if sys.version_info >= (3, 10):
self.temp_dir = tempfile.TemporaryDirectory(ignore_cleanup_errors=True)
else:
self.temp_dir = tempfile.TemporaryDirectory()
def __enter__(self):
return self.temp_dir.__enter__()
@@ -26,8 +32,8 @@ class IgnorantTemporaryDirectory:
def cleanup(self):
try:
self.temp_dir.cleanup()
except (OSError, PermissionError):
pass # Ignore errors (Windows)
except (OSError, PermissionError, RecursionError):
pass # Ignore errors (Windows and potential recursion)
def __getattr__(self, item):
return getattr(self.temp_dir, item)
@@ -188,12 +194,31 @@ def split_chat_history_markdown(text, include_tool=False):
return messages
# Copied from pip, MIT license
# https://github.com/pypa/pip/blob/b989e6ef04810bbd4033a3683020bd4ddcbdb627/src/pip/_internal/utils/entrypoints.py#L73
def get_best_invocation_for_this_python() -> str:
"""Try to figure out the best way to invoke the current Python."""
exe = sys.executable
exe_name = os.path.basename(exe)
# Try to use the basename, if it's the first executable.
found_executable = shutil.which(exe_name)
if found_executable and os.path.samefile(found_executable, exe):
return exe_name
# Use the full executable name, because we couldn't find something simpler.
return exe
def get_pip_install(args):
cmd = [
sys.executable,
get_best_invocation_for_this_python(),
"-m",
"pip",
"install",
"--upgrade",
"--upgrade-strategy",
"only-if-needed",
]
cmd += args
return cmd
@@ -201,7 +226,7 @@ def get_pip_install(args):
def run_install(cmd):
print()
print("Installing: ", " ".join(cmd))
print("Installing:", printable_shell_command(cmd))
try:
output = []
@@ -212,6 +237,8 @@ def run_install(cmd):
text=True,
bufsize=1,
universal_newlines=True,
encoding=sys.stdout.encoding,
errors="replace",
)
spinner = Spinner("Installing...")
@@ -269,31 +296,85 @@ class Spinner:
print("\r" + " " * (len(self.text) + 3))
def check_pip_install_extra(io, module, prompt, pip_install_cmd):
def find_common_root(abs_fnames):
if len(abs_fnames) == 1:
return safe_abs_path(os.path.dirname(list(abs_fnames)[0]))
elif abs_fnames:
return safe_abs_path(os.path.commonpath(list(abs_fnames)))
else:
return safe_abs_path(os.getcwd())
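Illustrative calls to `find_common_root` above (paths hypothetical; `safe_abs_path` normalizes the result):

```python
find_common_root(["/repo/src/a.py", "/repo/src/b.py"])    # -> '/repo/src'
find_common_root(["/repo/src/a.py", "/repo/tests/t.py"])  # -> '/repo'
find_common_root(["/repo/src/a.py"])  # single file -> its directory
find_common_root([])                  # empty -> current working directory
```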
def format_tokens(count):
if count < 1000:
return f"{count}"
elif count < 10000:
return f"{count / 1000:.1f}k"
else:
return f"{round(count / 1000)}k"
def touch_file(fname):
fname = Path(fname)
try:
__import__(module)
fname.parent.mkdir(parents=True, exist_ok=True)
fname.touch()
return True
except (ImportError, ModuleNotFoundError):
pass
except OSError:
return False
def check_pip_install_extra(io, module, prompt, pip_install_cmd, self_update=False):
if module:
try:
__import__(module)
return True
except (ImportError, ModuleNotFoundError, RuntimeError):
pass
cmd = get_pip_install(pip_install_cmd)
text = f"{prompt}:\n\n{' '.join(cmd)}\n"
io.tool_error(text)
if prompt:
io.tool_warning(prompt)
if not io.confirm_ask("Run pip install?", default="y"):
if self_update and platform.system() == "Windows":
io.tool_output("Run this command to update:")
print()
print(printable_shell_command(cmd)) # plain print so it doesn't line-wrap
return
if not io.confirm_ask("Run pip install?", default="y", subject=printable_shell_command(cmd)):
return
success, output = run_install(cmd)
if success:
if not module:
return True
try:
__import__(module)
return True
except (ImportError, ModuleNotFoundError) as err:
except (ImportError, ModuleNotFoundError, RuntimeError) as err:
io.tool_error(str(err))
pass
io.tool_error(output)
print()
print(f"Failed to install {pip_install_cmd[0]}")
print("Install failed, try running this command manually:")
print(printable_shell_command(cmd))
def printable_shell_command(cmd_list):
"""
Convert a list of command arguments to a properly shell-escaped string.
Args:
cmd_list (list): List of command arguments.
Returns:
str: Shell-escaped command string.
"""
if platform.system() == "Windows":
return subprocess.list2cmdline(cmd_list)
else:
return shlex.join(cmd_list)
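A usage sketch of `printable_shell_command` (the command is hypothetical):

```python
from aider.utils import printable_shell_command

cmd = ["pip", "install", "aider chat"]  # note the embedded space
print(printable_shell_command(cmd))
# POSIX (shlex.join):     pip install 'aider chat'
# Windows (list2cmdline): pip install "aider chat"
```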

View file

@@ -9,13 +9,63 @@ import aider
from aider import utils
from aider.dump import dump # noqa: F401
VERSION_CHECK_FNAME = Path.home() / ".aider" / "caches" / "versioncheck"
def install_from_main_branch(io):
"""
Install the latest version of aider from the main branch of the GitHub repository.
"""
return utils.check_pip_install_extra(
io,
None,
"Install the development version of aider from the main branch?",
["git+https://github.com/Aider-AI/aider.git"],
self_update=True,
)
def install_upgrade(io, latest_version=None):
"""
Install the latest version of aider from PyPI.
"""
if latest_version:
new_ver_text = f"Newer aider version v{latest_version} is available."
else:
new_ver_text = "Install latest version of aider?"
docker_image = os.environ.get("AIDER_DOCKER_IMAGE")
if docker_image:
text = f"""
{new_ver_text} To upgrade, run:
docker pull {docker_image}
"""
io.tool_warning(text)
return True
success = utils.check_pip_install_extra(
io,
None,
new_ver_text,
["aider-chat"],
self_update=True,
)
if success:
io.tool_output("Re-run aider to use new version.")
sys.exit()
return
def check_version(io, just_check=False, verbose=False):
fname = Path.home() / ".aider" / "caches" / "versioncheck"
if not just_check and fname.exists():
if not just_check and VERSION_CHECK_FNAME.exists():
day = 60 * 60 * 24
since = time.time() - fname.stat().st_mtime
if since < day:
since = time.time() - os.path.getmtime(VERSION_CHECK_FNAME)
if 0 < since < day:
if verbose:
hours = since / 60 / 60
io.tool_output(f"Too soon to check version: {hours:.1f} hours")
@@ -41,8 +91,11 @@ def check_version(io, just_check=False, verbose=False):
io.tool_error(f"Error checking pypi for new version: {err}")
return False
finally:
fname.parent.mkdir(parents=True, exist_ok=True)
fname.touch()
VERSION_CHECK_FNAME.parent.mkdir(parents=True, exist_ok=True)
VERSION_CHECK_FNAME.touch()
###
# is_update_available = True
if just_check or verbose:
if is_update_available:
@@ -56,31 +109,5 @@ def check_version(io, just_check=False, verbose=False):
if not is_update_available:
return False
docker_image = os.environ.get("AIDER_DOCKER_IMAGE")
if docker_image:
text = f"""
Newer aider version v{latest_version} is available. To upgrade, run:
docker pull {docker_image}
"""
io.tool_error(text)
return True
cmd = utils.get_pip_install(["--upgrade", "aider-chat"])
text = f"""
Newer aider version v{latest_version} is available. To upgrade, run:
{' '.join(cmd)}
"""
io.tool_error(text)
if io.confirm_ask("Run pip install?"):
success, output = utils.run_install(cmd)
if success:
io.tool_output("Re-run aider to use new version.")
sys.exit()
else:
io.tool_error(output)
install_upgrade(io, latest_version)
return True

View file

@@ -3,18 +3,25 @@ import os
import queue
import tempfile
import time
import warnings
from prompt_toolkit.shortcuts import prompt
from aider.llm import litellm
from .dump import dump # noqa: F401
warnings.filterwarnings(
"ignore", message="Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work"
)
from pydub import AudioSegment # noqa
try:
import soundfile as sf
except (OSError, ModuleNotFoundError):
sf = None
from prompt_toolkit.shortcuts import prompt
from .dump import dump # noqa: F401
class SoundDeviceError(Exception):
pass
@@ -27,7 +34,7 @@ class Voice:
threshold = 0.15
def __init__(self):
def __init__(self, audio_format="wav"):
if sf is None:
raise SoundDeviceError
try:
@@ -37,6 +44,9 @@ class Voice:
self.sd = sd
except (OSError, ModuleNotFoundError):
raise SoundDeviceError
if audio_format not in ["wav", "mp3", "webm"]:
raise ValueError(f"Unsupported audio format: {audio_format}")
self.audio_format = audio_format
def callback(self, indata, frames, time, status):
"""This is called (from a separate thread) for each audio block."""
@@ -72,16 +82,24 @@ class Voice:
return self.raw_record_and_transcribe(history, language)
except KeyboardInterrupt:
return
except SoundDeviceError as e:
print(f"Error: {e}")
print("Please ensure you have a working audio input device connected and try again.")
return
def raw_record_and_transcribe(self, history, language):
self.q = queue.Queue()
filename = tempfile.mktemp(suffix=".wav")
temp_wav = tempfile.mktemp(suffix=".wav")
try:
sample_rate = int(self.sd.query_devices(None, "input")["default_samplerate"])
except (TypeError, ValueError):
sample_rate = 16000 # fallback to 16kHz if unable to query device
except self.sd.PortAudioError:
raise SoundDeviceError(
"No audio input device detected. Please check your audio settings and try again."
)
self.start_time = time.time()
@@ -89,17 +107,31 @@ class Voice:
with self.sd.InputStream(samplerate=sample_rate, channels=1, callback=self.callback):
prompt(self.get_prompt, refresh_interval=0.1)
except self.sd.PortAudioError as err:
print(err)
return
raise SoundDeviceError(f"Error accessing audio input device: {err}")
with sf.SoundFile(filename, mode="x", samplerate=sample_rate, channels=1) as file:
with sf.SoundFile(temp_wav, mode="x", samplerate=sample_rate, channels=1) as file:
while not self.q.empty():
file.write(self.q.get())
if self.audio_format != "wav":
filename = tempfile.mktemp(suffix=f".{self.audio_format}")
audio = AudioSegment.from_wav(temp_wav)
audio.export(filename, format=self.audio_format)
os.remove(temp_wav)
else:
filename = temp_wav
with open(filename, "rb") as fh:
transcript = litellm.transcription(
model="whisper-1", file=fh, prompt=history, language=language
)
try:
transcript = litellm.transcription(
model="whisper-1", file=fh, prompt=history, language=language
)
except Exception as err:
print(f"Unable to transcribe {filename}: {err}")
return
if self.audio_format != "wav":
os.remove(filename)
text = transcript.text
return text
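The wav-to-mp3/webm step above relies on pydub, which shells out to ffmpeg. A minimal sketch of that conversion, with hypothetical filenames:

```python
from pydub import AudioSegment

audio = AudioSegment.from_wav("recording.wav")
audio.export("recording.mp3", format="mp3")  # requires ffmpeg on PATH
```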

View file

@@ -6,20 +6,277 @@ highlight_image: /assets/blame.jpg
description: Release notes and stats on aider writing its own code.
---
# Release history
{% include blame.md %}
<!--[[[cog
# This page is a copy of HISTORY.md, adding the front matter above.
text = open("HISTORY.md").read()
text = text.replace("# Release history", "")
cog.out(text)
]]]-->
# Release history
### main branch
- Load and save aider slash-commands to files:
- `/save <fname>` command will make a file of `/add` and `/read-only` commands that recreate the current file context in the chat.
- `/load <fname>` will replay the commands in the file.
- You can use `/load` to run any arbitrary set of slash-commands, not just `/add` and `/read-only`.
- Use `--load <fname>` to run a list of commands on launch, before the interactive chat begins.
- Aider follows litellm's `supports_vision` attribute to enable image support for models.
- Bugfix for diff mode's flexible handling of edits when the model uses the wrong filename.
- Displays filenames in sorted order for `/add` and `/read-only`.
- New `--no-fancy-input` switch disables prompt toolkit input; fancy input now remains available even with `--no-pretty`.
- Properly support all o1 models, regardless of provider.
- Improved handling of API errors, especially when accessing the weak model.
### Aider v0.60.1
- Enable image support for Sonnet 10/22.
- Display filenames in sorted order.
### Aider v0.60.0
- Full support for Sonnet 10/22, the new SOTA model on aider's code editing benchmark.
- Aider uses Sonnet 10/22 by default.
- Improved formatting of added and read-only files above chat prompt, by @jbellis.
- Improved support for o1 models by more flexibly parsing their nonconforming code edit replies.
- Corrected the diff edit format prompt to state that only the first match is replaced.
- Stronger whole edit format prompt asking for clean file names.
- Now offers to add `.env` to the `.gitignore` file.
- Ships with a small model metadata json file to handle models not yet updated in litellm.
- Model settings for o1 models on azure.
- Bugfix to properly include URLs in `/help` RAG results.
- Aider wrote 49% of the code in this release.
### Aider v0.59.1
- Check for obsolete `yes: true` in yaml config, show helpful error.
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
### Aider v0.59.0
- Improvements to `/read-only`:
- Now supports shell-style auto-complete of the full file system.
- Still auto-completes the full paths of the repo files like `/add`.
- Now supports globs like `src/**/*.py`
- Renamed `--yes` to `--yes-always`.
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
- Existing YAML and .env files will need to be updated.
- Can still abbreviate to `--yes` on the command line.
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
- `/settings` now includes the same announcement lines that would print at launch.
- Sanity checks the `--editor-model` on launch now, same as main and weak models.
- Added `--skip-sanity-check-repo` switch to speed up launch in large repos.
- Bugfix so architect mode handles Control-C properly.
- Repo-map is deterministic now, with improved caching logic.
- Improved commit message prompt.
- Aider wrote 77% of the code in this release.
### Aider v0.58.1
- Fixed bug where cache warming pings caused subsequent user messages to trigger a tight loop of LLM requests.
### Aider v0.58.0
- [Use a pair of Architect/Editor models for improved coding](https://aider.chat/2024/09/26/architect.html)
- Use a strong reasoning model like o1-preview as your Architect.
- Use a cheaper, faster model like gpt-4o as your Editor.
- New `--o1-preview` and `--o1-mini` shortcuts.
- Support for new Gemini 002 models.
- Better support for Qwen 2.5 models.
- Many confirmation questions can be skipped for the rest of the session with "(D)on't ask again" response.
- Autocomplete for `/read-only` supports the entire filesystem.
- New settings for completion menu colors.
- New `/copy` command to copy the last LLM response to the clipboard.
- Renamed `/clipboard` to `/paste`.
- Will now follow HTTP redirects when scraping urls.
- New `--voice-format` switch to send voice audio as wav/mp3/webm, by @mbailey.
- ModelSettings takes `extra_params` dict to specify any extras to pass to `litellm.completion()`.
- Support for cursor shapes when in vim mode.
- Numerous bug fixes.
- Aider wrote 53% of the code in this release.
### Aider v0.57.1
- Fixed dependency conflict between aider-chat[help] and [playwright].
### Aider v0.57.0
- Support for OpenAI o1 models:
- o1-preview now works well with diff edit format.
- o1-preview with diff now matches SOTA leaderboard result with whole edit format.
- `aider --model o1-mini`
- `aider --model o1-preview`
- On Windows, `/run` correctly uses PowerShell or cmd.exe.
- Support for new 08-2024 Cohere models, by @jalammar.
- Can now recursively add directories with `/read-only`.
- User input prompts now fall back to simple `input()` if `--no-pretty` or a Windows console is not available.
- Improved sanity check of git repo on startup.
- Improvements to prompt cache chunking strategy.
- Removed "No changes made to git tracked files".
- Numerous bug fixes for corner case crashes.
- Updated all dependency versions.
- Aider wrote 70% of the code in this release.
### Aider v0.56.0
- Enables prompt caching for Sonnet via OpenRouter by @fry69
- Enables 8k output tokens for Sonnet via VertexAI and DeepSeek V2.5.
- New `/report` command to open your browser with a pre-populated GitHub Issue.
- New `--chat-language` switch to set the spoken language.
- Now `--[no-]suggest-shell-commands` controls both prompting for and offering to execute shell commands.
- Check key imports on launch, provide helpful error message if dependencies aren't available.
- Renamed `--models` to `--list-models` by @fry69.
- Numerous bug fixes for corner case crashes.
- Aider wrote 56% of the code in this release.
### Aider v0.55.0
- Only print the pip command when self updating on Windows, without running it.
- Converted many error messages to warning messages.
- Added `--tool-warning-color` setting.
- Blanket catch and handle git errors in any `/command`.
- Catch and handle glob errors in `/add`, errors writing files.
- Disabled built in linter for typescript.
- Catch and handle terminals which don't support pretty output.
- Catch and handle playwright and pandoc errors.
- Catch `/voice` transcription exceptions, show the WAV file so the user can recover it.
- Aider wrote 53% of the code in this release.
### Aider v0.54.12
- Switched to `vX.Y.Z.dev` version naming.
### Aider v0.54.11
- Improved printed pip command output on Windows.
### Aider v0.54.10
- Bugfix to test command in platform info.
### Aider v0.54.9
- Include important devops files in the repomap.
- Print quoted pip install commands to the user.
- Adopt setuptools_scm to provide dev versions with git hashes.
- Share active test and lint commands with the LLM.
- Catch and handle most errors creating new files, reading existing files.
- Catch and handle most git errors.
- Added --verbose debug output for shell commands.
### Aider v0.54.8
- Startup QOL improvements:
- Sanity check the git repo and exit gracefully on problems.
- Pause for confirmation after model sanity check to allow user to review warnings.
- Bug fix for shell commands on Windows.
- Do not fuzzy match filenames when LLM is creating a new file, by @ozapinq
- Numerous corner case bug fixes submitted via new crash report -> GitHub Issue feature.
- Crash reports now include python version, OS, etc.
### Aider v0.54.7
- Offer to submit a GitHub issue pre-filled with uncaught exception info.
- Bugfix for infinite output.
### Aider v0.54.6
- New `/settings` command to show active settings.
- Only show cache warming status update if `--verbose`.
### Aider v0.54.5
- Bugfix for shell commands on Windows.
- Refuse to make git repo in $HOME, warn user.
- Don't ask again in current session about a file the user has said not to add to the chat.
- Added `--update` as an alias for `--upgrade`.
### Aider v0.54.4
- Bugfix to completions for `/model` command.
- Bugfix: revert home dir special case.
### Aider v0.54.3
- Dependency `watchdog<5` for docker image.
### Aider v0.54.2
- When users launch aider in their home dir, help them find/create a repo in a subdir.
- Added missing `pexpect` dependency.
### Aider v0.54.0
- Added model settings for `gemini/gemini-1.5-pro-exp-0827` and `gemini/gemini-1.5-flash-exp-0827`.
- Shell and `/run` commands can now be interactive in environments where a pty is available.
- Optionally share output of suggested shell commands back to the LLM.
- New `--[no-]suggest-shell-commands` switch to configure shell commands.
- Performance improvements for autocomplete in large/mono repos.
- New `--upgrade` switch to install latest version of aider from pypi.
- Bugfix to `--show-prompt`.
- Disabled automatic reply to the LLM on `/undo` for all models.
- Removed pager from `/web` output.
- Aider wrote 64% of the code in this release.
### Aider v0.53.0
- [Keep your prompt cache from expiring](https://aider.chat/docs/usage/caching.html#preventing-cache-expiration) with `--cache-keepalive-pings`.
- Pings the API every 5min to keep the cache warm.
- You can now bulk accept/reject a series of add url and run shell confirmations.
- Improved matching of filenames from S/R blocks with files in chat.
- Stronger prompting for Sonnet to make edits in code chat mode.
- Stronger prompting for the LLM to specify full file paths.
- Improved shell command prompting.
- Weak model now uses `extra_headers`, to support Anthropic beta features.
- New `--install-main-branch` to update to the latest dev version of aider.
- Improved error messages on attempt to add not-git subdir to chat.
- Show model metadata info on `--verbose`.
- Improved warnings when LLMs env variables aren't set.
- Bugfix to windows filenames which contain `\_`.
- Aider wrote 59% of the code in this release.
### Aider v0.52.1
- Bugfix for NameError when applying edits.
### Aider v0.52.0
- Aider now offers to run shell commands:
- Launch a browser to view updated html/css/js.
- Install new dependencies.
- Run DB migrations.
- Run the program to exercise changes.
- Run new test cases.
- `/read` and `/drop` now expand `~` to the home dir.
- Show the active chat mode at aider prompt.
- New `/reset` command to `/drop` files and `/clear` chat history.
- New `--map-multiplier-no-files` to control repo map size multiplier when no files are in the chat.
- Reduced default multiplier to 2.
- Bugfixes and improvements to auto commit sequencing.
- Improved formatting of token reports and confirmation dialogs.
- Default OpenAI model is now `gpt-4o-2024-08-06`.
- Bumped dependencies to pickup litellm bugfixes.
- Aider wrote 68% of the code in this release.
### Aider v0.51.0
- Prompt caching for Anthropic models with `--cache-prompts`.
- Caches the system prompt, repo map and `/read-only` files.
- Repo map recomputes less often in large/mono repos or when caching enabled.
- Use `--map-refresh <always|files|manual|auto>` to configure.
- Improved cost estimate logic for caching.
- Improved editing performance on Jupyter Notebook `.ipynb` files.
- Work around litellm tokenizer bug for images.
- Show which config yaml file is loaded with `--verbose`.
- Bumped dependency versions.
- Bugfix: properly load `.aider.models.metadata.json` data.
- Bugfix: Using `--msg /ask ...` caused an exception.
- Bugfix: litellm tokenizer bug for images.
- Aider wrote 56% of the code in this release.
### Aider v0.50.1
@@ -507,7 +764,7 @@ cog.out(text)
### Aider v0.14.0
- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Documentation for [running the aider benchmarking suite](https://github.com/Aider-AI/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9
@@ -552,7 +809,7 @@ cog.out(text)
- Added `/git` command to run git from inside aider chats.
- Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
- Create a `.gitignore` with `.aider*` to prevent users from accidentaly adding aider files to git.
- Create a `.gitignore` with `.aider*` to prevent users from accidentally adding aider files to git.
- Check pypi for newer versions and notify user.
- Updated keyboard interrupt logic so that 2 ^C in 2 seconds always forces aider to exit.
- Provide GPT with detailed error if it makes a bad edit block, ask for a retry.

View file

@@ -24,7 +24,7 @@ exclude:
aux_links:
"GitHub":
- "https://github.com/paul-gauthier/aider"
- "https://github.com/Aider-AI/aider"
"Discord":
- "https://discord.gg/Tv2uQnR88V"
"Blog":
@@ -32,13 +32,17 @@ aux_links:
nav_external_links:
- title: "GitHub"
url: "https://github.com/paul-gauthier/aider"
url: "https://github.com/Aider-AI/aider"
- title: "Discord"
url: "https://discord.gg/Tv2uQnR88V"
repository: paul-gauthier/aider
repository: Aider-AI/aider
callouts:
tip:
title: Tip
color: green
note:
title: Note
color: yellow

View file

@@ -0,0 +1,492 @@
- dirname: 2024-09-25-21-17-19--architect-sonnet-sonnet-diff
test_cases: 133
model: claude-3.5-sonnet
editor_model: claude-3.5-sonnet
editor_edit_format: diff
edit_format: architect
commit_hash: c18d6a8-dirty
pass_rate_1: 62.4
pass_rate_2: 80.5
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 183
lazy_comments: 6
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 25.1
total_cost: 4.9502
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: claude-3.5-sonnet
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
pass_rate_2: 77.4
percent_cases_well_formed: 99.2
error_outputs: 23
released: 2024-06-20
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --sonnet
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
total_cost: 3.6346
- dirname: 2024-09-25-21-25-01--architect-o1mini-4o-jr-diff
test_cases: 133
model: o1-mini
editor_model: gpt-4o
editor_edit_format: diff
edit_format: architect
commit_hash: 3f682ed-dirty, 25e833b
pass_rate_1: 51.1
pass_rate_2: 70.7
percent_cases_well_formed: 100.0
error_outputs: 12
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 214
lazy_comments: 6
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 23.7
total_cost: 9.3158
- dirname: 2024-09-26-15-05-58--architect-o1mini-deep-jr-whole
test_cases: 133
model: o1-mini
edit_format: architect
commit_hash: 1676653-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 51.9
pass_rate_2: 71.4
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 199
lazy_comments: 11
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-mini
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 48.2
total_cost: 5.6069
- dirname: 2024-09-25-21-33-40--architect-4o-4o-jr-diff
test_cases: 133
model: gpt-4o
editor_model: gpt-4o
editor_edit_format: diff
edit_format: architect
commit_hash: 9f3cd92
pass_rate_1: 56.4
pass_rate_2: 75.2
percent_cases_well_formed: 100.0
error_outputs: 13
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 207
lazy_comments: 8
syntax_errors: 1
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model gpt-4o
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 18.2
total_cost: 6.0918
- dirname: 2024-09-21-16-45-11--o1-preview-flex-sr-markers
test_cases: 133
model: o1-preview
edit_format: diff
commit_hash: 5493654-dirty
pass_rate_1: 57.9
pass_rate_2: 79.7
percent_cases_well_formed: 93.2
error_outputs: 11
num_malformed_responses: 11
num_with_malformed_responses: 9
user_asks: 3
lazy_comments: 0
syntax_errors: 10
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 80.9
total_cost: 63.9190
- dirname: 2024-09-25-21-39-05--architect-o1preview-4o-jr-diff
test_cases: 133
model: o1-preview
editor_model: gpt-4o
editor_edit_format: diff
edit_format: architect
commit_hash: 9f3cd92
pass_rate_1: 63.2
pass_rate_2: 80.5
percent_cases_well_formed: 100.0
error_outputs: 23
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 191
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 42.3
total_cost: 39.3766
- dirname: 2024-09-25-21-52-42--architect-o1preview-sonnet-jr-diff
test_cases: 133
model: o1-preview
editor_model: claude-3.5-sonnet
editor_edit_format: diff
edit_format: architect
commit_hash: 9f3cd92
pass_rate_1: 60.9
pass_rate_2: 82.7
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 180
lazy_comments: 3
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 44.9
total_cost: 37.6192
- dirname: 2024-09-21-16-40-56--o1-mini-flex-sr-markers
test_cases: 36
model: o1-mini
edit_format: diff
commit_hash: 5493654
pass_rate_1: 50.0
pass_rate_2: 61.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 3
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model o1-mini
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 26.7
total_cost: 2.4226
- dirname: 2024-09-25-23-12-14--architect-o1mini-deep-jr-diff
test_cases: 133
model: o1-mini
edit_format: architect
commit_hash: 9f3cd92-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 48.9
pass_rate_2: 69.2
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 202
lazy_comments: 12
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-mini
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 52.2
total_cost: 5.7927
- dirname: 2024-09-25-23-18-16--architect-o1preview-deep-jr-diff
test_cases: 133
model: o1-preview
edit_format: architect
commit_hash: 9f3cd92-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 64.7
pass_rate_2: 80.5
percent_cases_well_formed: 100.0
error_outputs: 5
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 180
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 73.2
total_cost: 35.7887
- dirname: 2024-09-25-23-30-36--architect-o1preview-deep-jr-whole
test_cases: 133
model: o1-preview
edit_format: architect
commit_hash: 9f3cd92-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 63.9
pass_rate_2: 85.0
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 181
lazy_comments: 12
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 67.4
total_cost: 35.3152
- dirname: 2024-09-26-15-15-17--architect-sonnet-deep-jr-whole
test_cases: 133
model: claude-3.5-sonnet
edit_format: architect
commit_hash: bc1559f-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 61.7
pass_rate_2: 78.9
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 184
lazy_comments: 5
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 37.2
total_cost: 2.1510
- dirname: 2024-09-26-15-33-28--costs-gpt4o-diff
test_cases: 133
model: gpt-4o
edit_format: diff
commit_hash: 89aa385-dirty
pass_rate_1: 55.6
pass_rate_2: 71.4
percent_cases_well_formed: 97.7
error_outputs: 5
num_malformed_responses: 5
num_with_malformed_responses: 3
user_asks: 10
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model gpt-4o
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 9.7
total_cost: 3.8088
- dirname: 2024-09-26-15-41-08--architect-4o-deep-jr-whole
test_cases: 133
model: gpt-4o
edit_format: architect
commit_hash: 89aa385-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 60.9
pass_rate_2: 73.7
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 187
lazy_comments: 12
syntax_errors: 5
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model gpt-4o
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 38.0
total_cost: 2.4737
- dirname: 2024-09-26-15-54-08--architect-4o-deep-jr-diff
test_cases: 133
model: gpt-4o
edit_format: architect
commit_hash: 89aa385-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 57.1
pass_rate_2: 74.4
percent_cases_well_formed: 100.0
error_outputs: 4
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 192
lazy_comments: 6
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gpt-4o
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 44.0
total_cost: 2.5498
- dirname: 2024-09-26-16-06-39--architect-sonnet-deep-jr-diff
test_cases: 133
model: claude-3.5-sonnet
edit_format: architect
commit_hash: 89aa385-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 61.7
pass_rate_2: 78.9
percent_cases_well_formed: 100.0
error_outputs: 2
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 184
lazy_comments: 2
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 43.2
total_cost: 2.1488
- dirname: 2024-09-27-18-15-32--architect-4omini-4omini
test_cases: 133
model: gpt-4o-mini
edit_format: architect
commit_hash: 0bd8058-dirty
editor_model: gpt-4o-mini
editor_edit_format: whole
pass_rate_1: 43.6
pass_rate_2: 60.2
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 208
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model gpt-4o-mini
date: 2024-09-27
versions: 0.57.2.dev
seconds_per_case: 21.0
total_cost: 0.1527
- dirname: 2024-07-18-18-57-46--gpt-4o-mini-whole
test_cases: 133
model: gpt-4o-mini
edit_format: whole
commit_hash: d31eef3-dirty
pass_rate_1: 40.6
pass_rate_2: 55.6
released: 2024-07-18
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gpt-4o-mini
date: 2024-07-18
versions: 0.44.1-dev
seconds_per_case: 7.8
total_cost: 0.0916
- dirname: 2024-09-29-22-35-36--architect-o1preview-o1mini-whole
test_cases: 133
model: o1-preview
edit_format: architect
commit_hash: 53ca83b
editor_model: o1-mini
editor_edit_format: whole
pass_rate_1: 65.4
pass_rate_2: 85.0
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 179
lazy_comments: 4
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-29
versions: 0.58.1.dev
seconds_per_case: 39.7
total_cost: 36.2078

File diff suppressed because it is too large

View file

@@ -317,29 +317,6 @@
seconds_per_case: 22.9
total_cost: 2.7494
- dirname: 2024-05-09-18-57-52--deepseek-chat-v2-diff-reverted-and-helpful-assistant2
test_cases: 133
model: DeepSeek Chat V2 (original)
released: 2024-05-06
edit_format: diff
commit_hash: 80a3f6d
pass_rate_1: 44.4
pass_rate_2: 60.9
percent_cases_well_formed: 97.0
error_outputs: 14
num_malformed_responses: 4
user_asks: 2
lazy_comments: 0
syntax_errors: 13
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model deepseek/deepseek-chat
date: 2024-05-09
versions: 0.33.1-dev
seconds_per_case: 86.8
total_cost: 0.0941
- dirname: 2024-05-07-20-32-37--qwen1.5-110b-chat-whole
test_cases: 133
model: qwen1.5-110b-chat
@@ -387,7 +364,7 @@
- dirname: 2024-05-13-17-39-05--gpt-4o-diff
test_cases: 133
model: gpt-4o
model: gpt-4o-2024-05-13
released: 2024-05-13
edit_format: diff
commit_hash: b6cd852
@@ -570,7 +547,7 @@
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: claude-3.5-sonnet
model: claude-3.5-sonnet-20240620
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
@@ -586,7 +563,7 @@
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --sonnet
command: aider --model claude-3.5-sonnet-20240620
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
@@ -665,7 +642,7 @@
- dirname: 2024-07-19-08-57-13--openrouter-deepseek-chat-v2-0628
test_cases: 133
model: DeepSeek Chat V2 0628
model: DeepSeek Chat V2 0628 (deprecated)
edit_format: diff
commit_hash: 96ff06e-dirty
pass_rate_1: 60.9
@@ -737,7 +714,7 @@
- dirname: 2024-07-24-07-10-58--deepseek-coder2-0724-diff-direct
test_cases: 133
model: DeepSeek Coder V2 0724
model: DeepSeek Coder V2 0724 (deprecated)
edit_format: diff
commit_hash: 89965bf
pass_rate_1: 57.9
@@ -855,27 +832,782 @@
seconds_per_case: 6.5
total_cost: 0.0000
- dirname: 2024-08-14-13-07-12--chatgpt-4o-latest-diff
- dirname: 2024-08-28-07-10-50--gemini-1.5-pro-exp-0827-diff-fenced
test_cases: 133
model: chatgpt-4o-latest
edit_format: diff
commit_hash: b1c3769
pass_rate_1: 53.4
pass_rate_2: 69.2
percent_cases_well_formed: 97.7
error_outputs: 27
num_malformed_responses: 5
num_with_malformed_responses: 3
user_asks: 7
model: gemini-1.5-pro-exp-0827
edit_format: diff-fenced
commit_hash: d8adc75
pass_rate_1: 54.9
pass_rate_2: 66.9
percent_cases_well_formed: 94.7
error_outputs: 112
num_malformed_responses: 26
num_with_malformed_responses: 7
user_asks: 38
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model gemini/gemini-1.5-pro-exp-0827
date: 2024-08-28
versions: 0.53.1-dev
seconds_per_case: 14.5
total_cost: 0.0000
- dirname: 2024-08-27-19-20-19--gemini-1.5-flash-exp-0827
test_cases: 133
model: gemini-1.5-flash-exp-0827
edit_format: whole
commit_hash: d8adc75
pass_rate_1: 40.6
pass_rate_2: 52.6
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 3
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model gemini/gemini-1.5-flash-exp-0827
date: 2024-08-27
versions: 0.53.1-dev
seconds_per_case: 6.3
total_cost: 0.0000
- dirname: 2024-08-27-19-42-05--gemini-1.5-flash-8b-exp-0827
test_cases: 133
model: gemini-1.5-flash-8b-exp-0827
edit_format: whole
commit_hash: d8adc75
pass_rate_1: 31.6
pass_rate_2: 38.3
percent_cases_well_formed: 100.0
error_outputs: 12
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 10
lazy_comments: 250
syntax_errors: 6
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model gemini/gemini-1.5-flash-8b-exp-0827
date: 2024-08-27
versions: 0.53.1-dev
seconds_per_case: 7.2
total_cost: 0.0000
- dirname: 2024-08-30-15-02-05--nous405b-whole
test_cases: 133
model: nousresearch/hermes-3-llama-3.1-405b
edit_format: whole
commit_hash: 2d9d605
pass_rate_1: 51.1
pass_rate_2: 63.9
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openai/chatgpt-4o-latest
date: 2024-08-14
released: 2024-08-08
versions: 0.50.2-dev
seconds_per_case: 26.3
total_cost: 3.6113
command: aider --model openrouter/nousresearch/hermes-3-llama-3.1-405b
date: 2024-08-30
versions: 0.54.8-dev
seconds_per_case: 38.3
total_cost: 0.0000
- dirname: 2024-09-04-16-08-09--yi-coder-9b-whole
test_cases: 133
model: Yi Coder 9B Chat
edit_format: whole
commit_hash: c4e4967
pass_rate_1: 46.6
pass_rate_2: 54.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 9
lazy_comments: 0
syntax_errors: 14
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model openai/hf:01-ai/Yi-Coder-9B-Chat --openai-api-base https://glhf.chat/api/openai/v1
date: 2024-09-04
versions: 0.54.13.dev
seconds_per_case: 8.3
total_cost: 0.0000
released: 2024-09-04
- dirname: 2024-09-04-16-17-33--yi-coder-9b-chat-q4_0-whole
test_cases: 133
model: yi-coder:9b-chat-q4_0
edit_format: whole
commit_hash: c4e4967
pass_rate_1: 41.4
pass_rate_2: 45.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 48
lazy_comments: 1
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model ollama/yi-coder:9b-chat-q4_0
date: 2024-09-04
versions: 0.54.13.dev
seconds_per_case: 125.3
total_cost: 0.0000
- dirname: 2024-09-05-14-50-11--deepseek-sep5-no-shell
test_cases: 133
model: DeepSeek V2.5
edit_format: diff
commit_hash: 1279c86
pass_rate_1: 54.9
pass_rate_2: 72.2
percent_cases_well_formed: 96.2
error_outputs: 5
num_malformed_responses: 5
num_with_malformed_responses: 5
user_asks: 4
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --deepseek
date: 2024-09-05
versions: 0.55.1.dev
seconds_per_case: 49.6
total_cost: 0.0998
- dirname: 2024-09-06-19-55-17--reflection-hyperbolic-whole-output2
test_cases: 133
model: Reflection-70B
edit_format: whole
commit_hash: 74631ee-dirty, 2aef59e-dirty
pass_rate_1: 33.1
pass_rate_2: 42.1
percent_cases_well_formed: 100.0
error_outputs: 2
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 10
lazy_comments: 26
syntax_errors: 1
indentation_errors: 3
exhausted_context_windows: 0
test_timeouts: 3
command: (not currently supported)
date: 2024-09-06
versions: 0.55.1.dev
seconds_per_case: 61.6
total_cost: 0.0000
- dirname: 2024-09-11-15-42-17--command-r-plus-08-2024-whole
test_cases: 133
model: Command R+ (08-24)
edit_format: whole
commit_hash: b43ed20
pass_rate_1: 27.1
pass_rate_2: 38.3
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 7
lazy_comments: 10
syntax_errors: 0
indentation_errors: 3
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model command-r-plus-08-2024
date: 2024-09-11
versions: 0.56.1.dev
seconds_per_case: 20.3
total_cost: 0.0000
- dirname: 2024-09-11-15-47-02--command-r-08-2024-whole
test_cases: 133
model: Command R (08-24)
edit_format: whole
commit_hash: b43ed20-dirty
pass_rate_1: 30.1
pass_rate_2: 38.3
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 4
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model command-r-08-2024
date: 2024-09-11
versions: 0.56.1.dev
seconds_per_case: 7.6
total_cost: 0.0000
- dirname: 2024-09-12-19-57-35--o1-mini-whole
test_cases: 133
model: o1-mini (whole)
edit_format: whole
commit_hash: 36fa773-dirty, 291b456
pass_rate_1: 49.6
pass_rate_2: 70.7
percent_cases_well_formed: 90.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 17
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 103.0
total_cost: 5.3725
- dirname: 2024-09-21-16-40-56--o1-mini-flex-sr-markers
test_cases: 36
model: o1-mini
edit_format: diff
commit_hash: 5493654
pass_rate_1: 50.0
pass_rate_2: 61.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 3
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model o1-mini
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 26.7
total_cost: 2.4226
- dirname: 2024-09-21-16-45-11--o1-preview-flex-sr-markers
test_cases: 133
model: o1-preview
edit_format: diff
commit_hash: 5493654-dirty
pass_rate_1: 57.9
pass_rate_2: 79.7
percent_cases_well_formed: 93.2
error_outputs: 11
num_malformed_responses: 11
num_with_malformed_responses: 9
user_asks: 3
lazy_comments: 0
syntax_errors: 10
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 80.9
total_cost: 63.9190
- dirname: 2024-09-19-16-58-29--qwen2.5-coder:7b-instruct-q8_0
test_cases: 133
model: qwen2.5-coder:7b-instruct-q8_0
edit_format: whole
commit_hash: 6f2b064-dirty
pass_rate_1: 45.1
pass_rate_2: 51.9
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 4
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model ollama/qwen2.5-coder:7b-instruct-q8_0
date: 2024-09-19
versions: 0.56.0
seconds_per_case: 9.3
total_cost: 0.0000
- dirname: 2024-09-20-20-20-19--qwen-2.5-72b-instruct-diff
test_cases: 133
model: qwen-2.5-72b-instruct (bf16)
edit_format: diff
commit_hash: 5139594
pass_rate_1: 53.4
pass_rate_2: 65.4
percent_cases_well_formed: 96.2
error_outputs: 9
num_malformed_responses: 9
num_with_malformed_responses: 5
user_asks: 3
lazy_comments: 0
syntax_errors: 2
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/qwen/qwen-2.5-72b-instruct
date: 2024-09-20
versions: 0.56.1.dev
seconds_per_case: 39.8
total_cost: 0.0000
- dirname: 2024-09-21-11-56-43--Codestral-22B-v0.1-Q4_K_M.gguf_whole
test_cases: 133
model: Codestral-22B-v0.1-Q4_K_M
edit_format: whole
commit_hash: 2753ac6-dirty
pass_rate_1: 36.1
pass_rate_2: 48.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 8
lazy_comments: 6
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model Codestral-22B-v0.1-Q4_K_M
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 656.4
total_cost: 0.9108
- dirname: 2024-09-24-16-26-45--gemini-1.5-pro-002-diff-fenced
test_cases: 133
model: gemini-1.5-pro-002
edit_format: diff-fenced
commit_hash: 6b5fe9b, 3edcd71
pass_rate_1: 49.6
pass_rate_2: 65.4
percent_cases_well_formed: 96.2
error_outputs: 17
num_malformed_responses: 17
num_with_malformed_responses: 5
user_asks: 3
lazy_comments: 0
syntax_errors: 2
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model gemini/gemini-1.5-pro-002
date: 2024-09-24
versions: 0.57.2.dev
seconds_per_case: 11.6
total_cost: 2.8166
- dirname: 2024-09-24-16-33-23--gemini-1.5-flash-002-whole
test_cases: 133
model: gemini-1.5-flash-002
edit_format: whole
commit_hash: 3edcd71
pass_rate_1: 37.6
pass_rate_2: 51.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 3
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gemini/gemini-1.5-flash-002
date: 2024-09-24
versions: 0.57.2.dev
seconds_per_case: 5.1
total_cost: 0.0515
- dirname: 2024-09-24-15-18-59--gemini-1.5-flash-8b-exp-0924-whole
test_cases: 133
model: gemini-1.5-flash-8b-exp-0924
edit_format: whole
commit_hash: 86faaa6
pass_rate_1: 33.1
pass_rate_2: 38.3
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 9
lazy_comments: 6
syntax_errors: 8
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model gemini/gemini-1.5-flash-8b-exp-0924
date: 2024-09-24
versions: 0.57.2.dev
seconds_per_case: 6.6
total_cost: 0.0000
- dirname: 2024-09-28-18-30-20--codestral-whole
test_cases: 133
model: ollama/codestral
edit_format: whole
commit_hash: 1971285-dirty
pass_rate_1: 33.8
pass_rate_2: 45.9
percent_cases_well_formed: 98.5
error_outputs: 8
num_malformed_responses: 8
num_with_malformed_responses: 2
user_asks: 12
lazy_comments: 6
syntax_errors: 5
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model ollama/codestral
date: 2024-09-28
versions: 0.57.2.dev
seconds_per_case: 67.2
total_cost: 0.0000
- dirname: 2024-09-29-17-51-11--codegeex4-whole-2
test_cases: 133
model: ollama/codegeex4
edit_format: whole
commit_hash: 228ae24
pass_rate_1: 28.6
pass_rate_2: 32.3
percent_cases_well_formed: 97.0
error_outputs: 20
num_malformed_responses: 20
num_with_malformed_responses: 4
user_asks: 56
lazy_comments: 5
syntax_errors: 5
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model ollama/codegeex4
date: 2024-09-29
versions: 0.57.2.dev
seconds_per_case: 128.1
total_cost: 0.0000
- dirname: 2024-09-30-00-09-00--wojtek-opencodeinterpreter-6.7b-whole-2
test_cases: 133
model: ollama/wojtek/opencodeinterpreter:6.7b
edit_format: whole
commit_hash: 6d586fd
pass_rate_1: 26.3
pass_rate_2: 30.1
percent_cases_well_formed: 91.0
error_outputs: 18
num_malformed_responses: 18
num_with_malformed_responses: 12
user_asks: 79
lazy_comments: 7
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 6
command: aider --model ollama/wojtek/opencodeinterpreter:6.7b
date: 2024-09-30
versions: 0.58.1.dev
seconds_per_case: 59.3
total_cost: 0.0000
- dirname: 2024-09-30-03-49-01--mistral-nemo-12b-instruct-2407-q4_K_M-whole-1
test_cases: 133
model: ollama/mistral-nemo:12b-instruct-2407-q4_K_M
edit_format: whole
commit_hash: ba4dec8
pass_rate_1: 22.6
pass_rate_2: 33.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 53
lazy_comments: 37
syntax_errors: 2
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model ollama/mistral-nemo:12b-instruct-2407-q4_K_M
date: 2024-09-30
versions: 0.58.1.dev
seconds_per_case: 34.7
total_cost: 0.0000
- dirname: 2024-09-30-14-09-43--qwen2.5-32b-whole-2
test_cases: 133
model: ollama/qwen2.5:32b
edit_format: whole
commit_hash: 765c4cb
pass_rate_1: 44.4
pass_rate_2: 54.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 9
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model ollama/qwen2.5:32b
date: 2024-09-30
versions: 0.58.1.dev
seconds_per_case: 134.9
total_cost: 0.0000
- dirname: 2024-09-30-19-35-40--llama3.2-3b-instruct-fp16-whole-1
test_cases: 133
model: ollama/llama3.2:3b-instruct-fp16
edit_format: whole
commit_hash: 3f12290
pass_rate_1: 20.3
pass_rate_2: 26.3
percent_cases_well_formed: 97.0
error_outputs: 21
num_malformed_responses: 21
num_with_malformed_responses: 4
user_asks: 73
lazy_comments: 11
syntax_errors: 1
indentation_errors: 3
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model ollama/llama3.2:3b-instruct-fp16
date: 2024-09-30
versions: 0.58.1.dev
seconds_per_case: 66.6
total_cost: 0.0000
- dirname: 2024-09-30-23-01-24--hermes3-8b-llama3.1-fp16-whole-2
test_cases: 133
model: ollama/hermes3:8b-llama3.1-fp16
edit_format: whole
commit_hash: c5ba4f7
pass_rate_1: 24.1
pass_rate_2: 30.1
percent_cases_well_formed: 98.5
syntax_errors: 0
exhausted_context_windows: 0
command: aider --model ollama/hermes3:8b-llama3.1-fp16
date: 2024-09-30
versions: 0.58.1.dev
seconds_per_case: 64.7
total_cost: 0.0000
- dirname: 2024-10-01-02-33-11--mistral-small-whole-1
test_cases: 133
model: ollama/mistral-small
edit_format: whole
commit_hash: 8a908fa
pass_rate_1: 30.1
pass_rate_2: 38.3
percent_cases_well_formed: 99.2
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
command: aider --model ollama/mistral-small
date: 2024-10-01
versions: 0.58.1.dev
seconds_per_case: 84.6
total_cost: 0.0000
- dirname: 2024-10-01-07-05-40--yi-coder-9b-chat-fp16-whole-1
test_cases: 133
model: ollama/yi-coder:9b-chat-fp16
edit_format: whole
commit_hash: 52c6632-dirty
pass_rate_1: 39.8
pass_rate_2: 43.6
percent_cases_well_formed: 99.2
lazy_comments: 0
indentation_errors: 0
exhausted_context_windows: 0
command: aider --model ollama/yi-coder:9b-chat-fp16
date: 2024-10-01
versions: 0.58.1.dev
seconds_per_case: 63.7
total_cost: 0.0000
- dirname: 2024-10-01-16-50-09--hermes3-whole-4
test_cases: 133
model: ollama/hermes3
edit_format: whole
commit_hash: 415e898
pass_rate_1: 21.1
pass_rate_2: 22.6
percent_cases_well_formed: 98.5
exhausted_context_windows: 0
command: aider --model ollama/hermes3
date: 2024-10-01
versions: 0.58.1.dev
seconds_per_case: 24.8
total_cost: 0.0000
- dirname: 2024-10-04-16-30-08--chatgpt-4o-latest-diff-oct4
test_cases: 133
model: openai/chatgpt-4o-latest
edit_format: diff
commit_hash: af10953
pass_rate_1: 56.4
pass_rate_2: 72.2
percent_cases_well_formed: 97.0
error_outputs: 4
num_malformed_responses: 4
num_with_malformed_responses: 4
user_asks: 21
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openai/chatgpt-4o-latest
date: 2024-10-04
versions: 0.58.2.dev
seconds_per_case: 23.7
total_cost: 4.0641
- dirname: 2024-10-05-20-03-10--dracarys-glhf-whole
test_cases: 133
model: Dracarys2-72B-Instruct
edit_format: whole
commit_hash: 04a2cbb
pass_rate_1: 55.6
pass_rate_2: 66.9
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: (via glhf.chat)
date: 2024-10-05
versions: 0.59.2.dev
seconds_per_case: 46.7
total_cost: 0.0000
- dirname: 2024-10-13-21-33-42--grok2-whole
test_cases: 133
model: Grok-2
edit_format: whole
commit_hash: 0a497b7
pass_rate_1: 45.9
pass_rate_2: 58.6
percent_cases_well_formed: 98.5
error_outputs: 7
num_malformed_responses: 7
num_with_malformed_responses: 2
user_asks: 24
lazy_comments: 4
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/x-ai/grok-2
date: 2024-10-13
versions: 0.59.2.dev
seconds_per_case: 34.6
total_cost: 0.0000
- dirname: 2024-10-13-23-58-44--grok2mini-whole
test_cases: 133
model: Grok-2-mini
edit_format: whole
commit_hash: 0a497b7-dirty, 0a497b7
pass_rate_1: 40.6
pass_rate_2: 54.9
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 8
lazy_comments: 2
syntax_errors: 2
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/x-ai/grok-2-mini
date: 2024-10-13
versions: 0.59.2.dev
seconds_per_case: 32.1
total_cost: 0.0000
- dirname: 2024-10-16-15-55-37--nemotron-glhf-whole3
test_cases: 133
model: Llama-3.1-Nemotron-70B-Instruct-HF
edit_format: whole
commit_hash: 6bb9b25-dirty
pass_rate_1: 36.8
pass_rate_2: 54.9
percent_cases_well_formed: 99.2
error_outputs: 17
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 53
lazy_comments: 17
syntax_errors: 1
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 3
command: (via glhf.chat)
date: 2024-10-16
versions: 0.59.2.dev
seconds_per_case: 64.9
total_cost: 0.0000
- dirname: 2024-10-22-17-45-28--sonnet-1022-diff-fixed-model-settings
test_cases: 133
model: claude-3-5-sonnet-20241022
edit_format: diff
commit_hash: 3b14eb9
pass_rate_1: 69.2
pass_rate_2: 84.2
percent_cases_well_formed: 99.2
error_outputs: 1
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 0
lazy_comments: 1
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model anthropic/claude-3-5-sonnet-20241022
date: 2024-10-22
versions: 0.59.2.dev
seconds_per_case: 18.6
total_cost: 0.0000

View file

@@ -0,0 +1,186 @@
- dirname: 2024-07-18-18-57-46--gpt-4o-mini-whole
test_cases: 133
model: gpt-4o-mini (whole)
edit_format: whole
commit_hash: d31eef3-dirty
pass_rate_1: 40.6
pass_rate_2: 55.6
released: 2024-07-18
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gpt-4o-mini
date: 2024-07-18
versions: 0.44.1-dev
seconds_per_case: 7.8
total_cost: 0.0916
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: claude-3.5-sonnet (diff)
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
pass_rate_2: 77.4
percent_cases_well_formed: 99.2
error_outputs: 23
released: 2024-06-20
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --sonnet
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
total_cost: 3.6346
- dirname: 2024-08-06-18-28-39--gpt-4o-2024-08-06-diff-again
test_cases: 133
model: gpt-4o-2024-08-06 (diff)
edit_format: diff
commit_hash: ed9ed89
pass_rate_1: 57.1
pass_rate_2: 71.4
percent_cases_well_formed: 98.5
error_outputs: 18
num_malformed_responses: 2
num_with_malformed_responses: 2
user_asks: 10
lazy_comments: 0
syntax_errors: 6
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 5
released: 2024-08-06
command: aider --model openai/gpt-4o-2024-08-06
date: 2024-08-06
versions: 0.48.1-dev
seconds_per_case: 6.5
total_cost: 0.0000
- dirname: 2024-09-12-19-57-35--o1-mini-whole
test_cases: 133
model: o1-mini (whole)
edit_format: whole
commit_hash: 36fa773-dirty, 291b456
pass_rate_1: 49.6
pass_rate_2: 70.7
percent_cases_well_formed: 90.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 17
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 103.0
total_cost: 5.3725
- dirname: 2024-09-12-20-56-22--o1-mini-diff
test_cases: 133
model: o1-mini (diff)
edit_format: diff
commit_hash: 4598a37-dirty, 291b456, 752e823-dirty
pass_rate_1: 45.1
pass_rate_2: 62.4
percent_cases_well_formed: 85.7
error_outputs: 26
num_malformed_responses: 26
num_with_malformed_responses: 19
user_asks: 2
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini --edit-format diff
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 177.7
total_cost: 11.1071
- dirname: 2024-09-05-21-26-49--sonnet-whole-sep5
test_cases: 133
model: claude-3.5-sonnet (whole)
edit_format: whole
commit_hash: 8cfdcbd
pass_rate_1: 55.6
pass_rate_2: 75.2
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet --edit-format whole
date: 2024-09-05
versions: 0.55.1.dev
seconds_per_case: 15.2
total_cost: 2.3502
- dirname: 2024-09-12-22-44-14--o1-preview-diff
test_cases: 133
model: o1-preview (diff)
edit_format: diff
commit_hash: 72f52bd
pass_rate_1: 56.4
pass_rate_2: 75.2
percent_cases_well_formed: 84.2
error_outputs: 27
num_malformed_responses: 27
num_with_malformed_responses: 21
user_asks: 8
lazy_comments: 0
syntax_errors: 7
indentation_errors: 3
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model o1-preview
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 95.8
total_cost: 71.7927
- dirname: 2024-09-13-02-13-59--o1-preview-whole
test_cases: 133
model: o1-preview (whole)
edit_format: whole
commit_hash: 72f52bd-dirty
pass_rate_1: 58.6
pass_rate_2: 79.7
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-preview
date: 2024-09-13
versions: 0.56.1.dev
seconds_per_case: 47.4
total_cost: 38.0612

View file

@@ -145,7 +145,7 @@
- dirname: 2024-07-01-18-30-33--refac-claude-3.5-sonnet-diff-not-lazy
test_cases: 89
model: claude-3.5-sonnet (diff)
model: claude-3.5-sonnet-20240620
edit_format: diff
commit_hash: 7396e38-dirty
pass_rate_1: 64.0
@@ -167,7 +167,7 @@
- dirname: 2024-07-24-07-49-39--refac-deepseek-coder-v2-0724
test_cases: 89
model: DeepSeek Coder V2 0724
model: DeepSeek Coder V2 0724 (deprecated)
edit_format: diff
commit_hash: bb6e597
pass_rate_1: 32.6
@@ -209,3 +209,90 @@
seconds_per_case: 16.9
total_cost: 4.0873
- dirname: 2024-09-05-15-19-05--refac-deepseek-v2.5-no-shell
test_cases: 89
model: DeepSeek Chat V2.5
edit_format: diff
commit_hash: 1279c86, 1279c86-dirty
pass_rate_1: 31.5
percent_cases_well_formed: 67.4
error_outputs: 90
num_malformed_responses: 88
num_with_malformed_responses: 29
user_asks: 8
lazy_comments: 7
syntax_errors: 0
indentation_errors: 6
exhausted_context_windows: 2
test_timeouts: 0
command: aider --deepseek
date: 2024-09-05
versions: 0.55.1.dev
seconds_per_case: 225.4
total_cost: 1.0338
- dirname: 2024-10-22-19-57-27--refac-openrouter-sonnet-1022
test_cases: 89
model: claude-3-5-sonnet-20241022
edit_format: diff
commit_hash: 4a3e6ef
pass_rate_1: 92.1
percent_cases_well_formed: 91.0
error_outputs: 13
num_malformed_responses: 12
num_with_malformed_responses: 8
user_asks: 14
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --sonnet
date: 2024-10-22
versions: 0.60.1.dev
seconds_per_case: 32.5
total_cost: 8.4644
- dirname: 2024-10-22-20-03-10--refac-o1mini
test_cases: 89
model: o1-mini
edit_format: diff
commit_hash: 4a3e6ef-dirty
pass_rate_1: 44.9
percent_cases_well_formed: 29.2
error_outputs: 151
num_malformed_responses: 150
num_with_malformed_responses: 63
user_asks: 28
lazy_comments: 2
syntax_errors: 5
indentation_errors: 4
exhausted_context_windows: 1
test_timeouts: 0
command: aider --model o1-mini
date: 2024-10-22
versions: 0.60.1.dev
seconds_per_case: 115.3
total_cost: 29.0492
- dirname: 2024-10-22-20-26-36--refac-o1preview
test_cases: 89
model: o1-preview
edit_format: diff
commit_hash: 4a3e6ef-dirty
pass_rate_1: 75.3
percent_cases_well_formed: 57.3
error_outputs: 75
num_malformed_responses: 74
num_with_malformed_responses: 38
user_asks: 19
lazy_comments: 2
syntax_errors: 2
indentation_errors: 3
exhausted_context_windows: 1
test_timeouts: 0
command: aider --model o1-preview
date: 2024-10-22
versions: 0.60.1.dev
seconds_per_case: 231.7
total_cost: 120.9850

View file

@@ -0,0 +1,459 @@
- dirname: 2024-06-20-15-16-41--claude-3.5-sonnet-diff
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 068609e-dirty
pass_rate_1: 57.9
pass_rate_2: 74.4
percent_cases_well_formed: 97.0
error_outputs: 48
num_malformed_responses: 11
num_with_malformed_responses: 4
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-20
versions: 0.38.1-dev
seconds_per_case: 21.6
total_cost: 0.0000
- dirname: 2024-06-24-12-48-43--claude-3.5-sonnet-udiff
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: udiff
commit_hash: 7be08c7
pass_rate_1: 62.4
pass_rate_2: 74.4
percent_cases_well_formed: 100.0
error_outputs: 10
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 10
lazy_comments: 0
syntax_errors: 1
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 14.3
total_cost: 0.0000
- dirname: 2024-06-24-17-44-31--claude-3.5-sonnet-diff-less-chatty
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 0d484e5
pass_rate_1: 57.9
pass_rate_2: 74.4
percent_cases_well_formed: 99.2
error_outputs: 14
num_malformed_responses: 3
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 16.0
total_cost: 0.0000
- dirname: 2024-06-24-17-50-46--claude-3.5-sonnet-diff-less-chatty2
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 3015495
pass_rate_1: 59.4
pass_rate_2: 76.7
percent_cases_well_formed: 99.2
error_outputs: 5
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 15.7
total_cost: 0.0000
- dirname: 2024-06-24-17-56-40--claude-3.5-sonnet-diff-less-chatty-sys-examples
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 3015495-dirty
pass_rate_1: 58.6
pass_rate_2: 75.9
percent_cases_well_formed: 100.0
error_outputs: 2
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 15.9
total_cost: 0.0000
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
pass_rate_2: 77.4
percent_cases_well_formed: 99.2
error_outputs: 23
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
total_cost: 3.6346
- dirname: 2024-07-06-19-39-59--claude-3.5-sonnet-diff-platform
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: e47c2a9-dirty
pass_rate_1: 57.9
pass_rate_2: 78.2
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-06
versions: 0.42.1-dev
seconds_per_case: 14.6
total_cost: 3.5616
- dirname: 2024-07-24-17-11-07--claude-3.5-sonnet-diff-july24
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 859a13e
pass_rate_1: 59.4
pass_rate_2: 78.2
percent_cases_well_formed: 99.2
error_outputs: 6
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-24
versions: 0.45.2-dev
seconds_per_case: 16.9
total_cost: 3.4981
- dirname: 2024-07-28-20-23-42--claude-3.5-sonnet-diff-no-reminder
test_cases: 94
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: e799e89-dirty
pass_rate_1: 59.6
pass_rate_2: 83.0
percent_cases_well_formed: 98.9
error_outputs: 12
num_malformed_responses: 2
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-28
versions: 0.45.2-dev
seconds_per_case: 15.7
total_cost: 2.4340
- dirname: 2024-08-14-00-46-09--claude-3.5-sonnet-diff-no-ipynb-again
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 139f799
pass_rate_1: 57.9
pass_rate_2: 75.9
percent_cases_well_formed: 98.5
error_outputs: 22
num_malformed_responses: 5
num_with_malformed_responses: 2
user_asks: 249
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-14
versions: 0.50.1-dev
seconds_per_case: 18.0
total_cost: 3.7058
- dirname: 2024-06-21-00-07-01--claude-3.5-sonnet-do-over
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: fb26174-dirty
pass_rate_1: 59.4
pass_rate_2: 80.5
percent_cases_well_formed: 99.2
error_outputs: 20
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-21
versions: 0.39.1-dev
seconds_per_case: 18.3
total_cost: 0.0000
- dirname: 2024-06-21-00-18-25--claude-3.5-sonnet-do-over2
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: fb26174-dirty
pass_rate_1: 58.6
pass_rate_2: 77.4
percent_cases_well_formed: 98.5
error_outputs: 22
num_malformed_responses: 4
num_with_malformed_responses: 2
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-21
versions: 0.39.1-dev
seconds_per_case: 17.3
total_cost: 0.0000
- dirname: 2024-06-24-00-09-40--claude-3.5-sonnet-chatty
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: b44c246-dirty
pass_rate_1: 59.4
pass_rate_2: 75.2
percent_cases_well_formed: 98.5
error_outputs: 21
num_malformed_responses: 5
num_with_malformed_responses: 2
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 15.7
total_cost: 0.0000
- dirname: 2024-06-24-00-33-35--claude-3.5-sonnet-chatty-do-over
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: bc1dfa3
pass_rate_1: 58.6
pass_rate_2: 76.7
percent_cases_well_formed: 97.7
error_outputs: 26
num_malformed_responses: 6
num_with_malformed_responses: 3
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 16.4
total_cost: 0.0000
- dirname: 2024-08-18-19-57-30--claude-3.5-sonnet-aug18
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 5099a5c
pass_rate_1: 54.9
pass_rate_2: 78.9
percent_cases_well_formed: 97.7
error_outputs: 47
num_malformed_responses: 11
num_with_malformed_responses: 3
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-18
versions: 0.50.2-dev
seconds_per_case: 22.3
total_cost: 3.9008
- dirname: 2024-08-18-20-23-50--claude-3.5-sonnet-aug18-cache-prompts
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 53db8cf-dirty
pass_rate_1: 56.4
pass_rate_2: 78.9
percent_cases_well_formed: 97.7
error_outputs: 16
num_malformed_responses: 4
num_with_malformed_responses: 3
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-18
versions: 0.50.2-dev
seconds_per_case: 21.1
total_cost: 3.6918
- dirname: 2024-08-18-23-11-04--claude-3.5-sonnet-aug18-cache-prompts-cold
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 53db8cf-dirty
pass_rate_1: 56.4
pass_rate_2: 78.2
percent_cases_well_formed: 97.0
error_outputs: 30
num_malformed_responses: 7
num_with_malformed_responses: 4
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-18
versions: 0.50.2-dev
seconds_per_case: 21.8
total_cost: 3.7858
- dirname: 2024-08-21-01-07-39--sonnet-diff-cache
test_cases: 133
model: claude-3-5-sonnet-20240620
edit_format: diff
commit_hash: e12157b-dirty
pass_rate_1: 57.1
pass_rate_2: 82.0
percent_cases_well_formed: 98.5
error_outputs: 12
num_malformed_responses: 2
num_with_malformed_responses: 2
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model claude-3-5-sonnet-20240620
date: 2024-08-21
versions: 0.51.2-dev
seconds_per_case: 14.5
total_cost: 3.1795
- dirname: 2024-08-21-00-50-49--shell-cmds-sonnet-user-remind
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 919ea05
pass_rate_1: 63.2
pass_rate_2: 79.7
percent_cases_well_formed: 98.5
error_outputs: 18
num_malformed_responses: 4
num_with_malformed_responses: 2
user_asks: 26
lazy_comments: 0
syntax_errors: 0
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-21
versions: 0.51.2-dev
seconds_per_case: 16.3
total_cost: 3.4738
- dirname: 2024-08-21-00-55-30--shell-cmds-sonnet-no-user-remind
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 5c7707a
pass_rate_1: 63.9
pass_rate_2: 80.5
percent_cases_well_formed: 97.7
error_outputs: 51
num_malformed_responses: 12
num_with_malformed_responses: 3
user_asks: 24
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-21
versions: 0.51.2-dev
seconds_per_case: 17.7
total_cost: 3.8990

View file

@@ -1,5 +1,5 @@
<canvas id="blameChart" width="800" height="360" style="margin-top: 20px"></canvas>
<canvas id="linesChart" width="800" height="360" style="margin-top: 20px"></canvas>
<canvas id="blameChart" width="800" height="360" style="margin-top: 20px"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/moment"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-moment"></script>

View file

@@ -2,7 +2,7 @@
You can get started quickly like this:
```
python -m pip install aider-chat
python -m pip install -U aider-chat
# Change directory into a git repo
cd /to/your/git/repo

View file

@@ -1,5 +1,5 @@
If you need more help, please check our
[GitHub issues](https://github.com/paul-gauthier/aider/issues)
[GitHub issues](https://github.com/Aider-AI/aider/issues)
and file a new issue if your problem isn't discussed.
Or drop into our
[Discord](https://discord.gg/Tv2uQnR88V)

View file

@@ -0,0 +1,170 @@
<canvas id="{{ include.chart_id }}" width="800" height="450" style="margin-top: 20px"></canvas>
<script>
document.addEventListener('DOMContentLoaded', function () {
var ctx = document.getElementById('{{ include.chart_id }}').getContext('2d');
var leaderboardData = {
labels: [],
datasets: [{
label: 'Percent completed correctly',
data: [],
backgroundColor: [],
borderColor: [],
borderWidth: 1
}]
};
var allData = [];
{% for row in include.data %}
allData.push({
model: '{{ row.model }}',
pass_rate: {{ row[include.pass_rate_key] }},
percent_cases_well_formed: {{ row.percent_cases_well_formed }},
edit_format: '{{ row.edit_format }}'
});
{% endfor %}
function updateChart() {
var selectedRows = document.querySelectorAll('tr.selected');
var showAll = selectedRows.length === 0;
leaderboardData.labels = [];
leaderboardData.datasets[0].data = [];
leaderboardData.datasets[0].backgroundColor = [];
leaderboardData.datasets[0].borderColor = [];
allData.forEach(function(row, index) {
var rowElement = document.getElementById('{{ include.row_prefix }}-' + index);
if (showAll) {
rowElement.classList.remove('selected');
}
if (showAll || rowElement.classList.contains('selected')) {
leaderboardData.labels.push(row.model);
leaderboardData.datasets[0].data.push(row.pass_rate);
switch (row.edit_format) {
case 'whole':
leaderboardData.datasets[0].backgroundColor.push('rgba(255, 99, 132, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(255, 99, 132, 1)');
break;
case 'diff':
leaderboardData.datasets[0].backgroundColor.push('rgba(54, 162, 235, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(54, 162, 235, 1)');
break;
case 'udiff':
leaderboardData.datasets[0].backgroundColor.push('rgba(75, 192, 192, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(75, 192, 192, 1)');
break;
case 'diff-fenced':
leaderboardData.datasets[0].backgroundColor.push('rgba(153, 102, 255, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(153, 102, 255, 1)');
break;
default:
leaderboardData.datasets[0].backgroundColor.push('rgba(201, 203, 207, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(201, 203, 207, 1)');
}
}
});
// Apply legend filtering
var meta = leaderboardChart.getDatasetMeta(0);
meta.data.forEach(function(bar, index) {
if (leaderboardData.labels.includes(allData[index].model)) {
bar.hidden = (allData[index].edit_format === 'whole' && meta.data[0].hidden) ||
(allData[index].edit_format !== 'whole' && meta.data[1].hidden);
} else {
bar.hidden = true;
}
});
leaderboardChart.update();
}
var tableBody = document.querySelector('table tbody');
allData.forEach(function(row, index) {
var tr = tableBody.children[index];
tr.id = '{{ include.row_prefix }}-' + index;
tr.style.cursor = 'pointer';
tr.onclick = function() {
this.classList.toggle('selected');
updateChart();
};
});
var leaderboardChart = new Chart(ctx, {
type: 'bar',
data: leaderboardData,
options: {
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Correct Exercises (%)'
}
},
x: {
ticks: {
autoSkip: false,
maxRotation: 90,
minRotation: 0
}
}
},
plugins: {
legend: {
display: true,
position: 'top',
labels: {
generateLabels: function(chart) {
var uniqueFormats = [...new Set(allData.map(item => item.edit_format))];
return uniqueFormats.map(format => {
var color;
switch (format) {
case 'whole':
color = { fill: 'rgba(255, 99, 132, 0.2)', stroke: 'rgba(255, 99, 132, 1)' };
break;
case 'diff':
color = { fill: 'rgba(54, 162, 235, 0.2)', stroke: 'rgba(54, 162, 235, 1)' };
break;
case 'udiff':
color = { fill: 'rgba(75, 192, 192, 0.2)', stroke: 'rgba(75, 192, 192, 1)' };
break;
case 'diff-fenced':
color = { fill: 'rgba(153, 102, 255, 0.2)', stroke: 'rgba(153, 102, 255, 1)' };
break;
default:
color = { fill: 'rgba(201, 203, 207, 0.2)', stroke: 'rgba(201, 203, 207, 1)' };
}
return {
text: format,
fillStyle: color.fill,
strokeStyle: color.stroke,
lineWidth: 1,
hidden: false
};
});
}
},
onClick: function(e, legendItem, legend) {
var ci = legend.chart;
var clickedFormat = legendItem.text;
legendItem.hidden = !legendItem.hidden;
ci.data.datasets[0].data.forEach(function(dataPoint, i) {
var meta = ci.getDatasetMeta(0);
if (allData[i].edit_format === clickedFormat) {
meta.data[i].hidden = legendItem.hidden;
}
});
ci.update();
}
}
}
}
});
updateChart();
});
</script>

View file

@@ -2,3 +2,4 @@ You can send long, multi-line messages in the chat in a few ways:
- Paste a multi-line message directly into the chat.
- Enter `{` alone on the first line to start a multiline message and `}` alone on the last line to end it, as shown in the sketch below.
- Use Meta-ENTER to start a new line without sending the message (Esc+ENTER in some environments).
- Use `/paste` to paste text from the clipboard into the chat.
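
As a minimal illustration of the `{` / `}` syntax described above (the message content here is hypothetical):

```
{
Please refactor the parser into its own module.
Keep the public API unchanged.
}
```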

View file

@@ -1,7 +1,7 @@
<footer class="site-footer">
Aider is AI pair programming in your terminal.
Aider is on
<a href="https://github.com/paul-gauthier/aider">GitHub</a>
<a href="https://github.com/Aider-AI/aider">GitHub</a>
and
<a href="https://discord.gg/Tv2uQnR88V">Discord</a>.
</footer>

View file

@@ -0,0 +1,9 @@
To use aider with pipx on replit, you can run these commands in the replit shell:
```
pip install pipx
pipx run aider-chat ...normal aider args...
```
If you install aider with pipx on replit and try to run it as just `aider`, it will crash with a missing `libstdc++.so.6` library.

View file

@@ -110,9 +110,9 @@ source code, by including the critical lines of code for each definition.
Here's a
sample of the map of the aider repo, just showing the maps of
[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
[base_coder.py](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
and
[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
[commands.py](https://github.com/Aider-AI/aider/blob/main/aider/commands.py)
:
```
@@ -188,7 +188,7 @@ It specifically uses the
[py-tree-sitter-languages](https://github.com/grantjenks/py-tree-sitter-languages)
python module,
which provides simple, pip-installable binary wheels for
[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
Tree-sitter parses source code into an Abstract Syntax Tree (AST) based
on the syntax of the programming language.
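
As a rough sketch of how this works in code, assuming the `get_parser` helper that the pip-installable `tree_sitter_languages` package exposes:

```python
from tree_sitter_languages import get_parser

parser = get_parser("python")  # prebuilt binary wheel, no compiler needed
tree = parser.parse(b"def hello():\n    pass\n")
print(tree.root_node.sexp())  # the AST, rendered as an s-expression
```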
@@ -209,7 +209,7 @@ that aider originally used.
Switching from ctags to tree-sitter provides a bunch of benefits:
- The map is richer, showing full function call signatures and other details straight from the source files.
- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install aider-chat`.
- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install -U aider-chat`.
- We remove the requirement for users to manually install `universal-ctags` via some external tool or package manager (brew, apt, choco, etc).
- Tree-sitter integration is a key enabler for future work and capabilities for aider.
@@ -245,7 +245,7 @@ just install [aider](https://aider.chat/docs/install.html).
## Credits
Aider uses
[modified versions of the tags.scm files](https://github.com/paul-gauthier/aider/tree/main/aider/queries)
[modified versions of the tags.scm files](https://github.com/Aider-AI/aider/tree/main/aider/queries)
from these
open source tree-sitter language implementations:

View file

@@ -23,14 +23,14 @@ making it the best available model for pair programming with AI.
To use Claude 3 Opus with aider:
```
python -m pip install aider-chat
python -m pip install -U aider-chat
export ANTHROPIC_API_KEY=sk-...
aider --opus
```
## Aider's code editing benchmark
[Aider](https://github.com/paul-gauthier/aider)
[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you
pair program with AI on code in your local git repo.

View file

@@ -52,7 +52,7 @@ def some_complex_method(foo, bar):
# ... implement method here ...
```
Aider uses a ["laziness" benchmark suite](https://github.com/paul-gauthier/refactor-benchmark)
Aider uses a ["laziness" benchmark suite](https://github.com/Aider-AI/refactor-benchmark)
which is designed to both provoke and quantify lazy coding.
It consists of
89 python refactoring tasks

View file

@@ -46,7 +46,7 @@ It also supports [connecting to almost any LLM](https://aider.chat/docs/llms.htm
Use the `--browser` switch to launch the browser version of aider:
```
python -m pip install aider-chat
python -m pip install -U aider-chat
export OPENAI_API_KEY=<key> # Mac/Linux
setx OPENAI_API_KEY <key> # Windows, restart shell after setx

View file

@@ -15,7 +15,7 @@ nav_exclude: true
I recently wanted to draw a graph showing how LLM code editing skill has been
changing over time as new models have been released by OpenAI, Anthropic and others.
I have all the
[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
[aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
Below is the aider chat transcript, which shows:

View file

@@ -25,7 +25,7 @@ This increases the ability of the LLM to understand the problem and
make the correct changes to resolve it.
Aider ships with basic linters built with tree-sitter that support
[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
These built-in linters will detect syntax errors and other fatal problems with the code.
You can also configure aider to use your preferred linters.

View file

@@ -76,7 +76,7 @@ The held out "acceptance tests" were *only* used
after benchmarking to compute statistics on which problems aider
correctly resolved.
The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/paul-gauthier/aider-swe-bench).
The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/Aider-AI/aider-swe-bench).
The benchmarking process was similar to how a developer might use aider to
resolve a GitHub issue:

View file

@@ -13,7 +13,7 @@ nav_exclude: true
[![self assembly](/assets/self-assembly.jpg)](https://aider.chat/assets/self-assembly.jpg)
The
[aider git repo](https://github.com/paul-gauthier/aider)
[aider git repo](https://github.com/Aider-AI/aider)
currently contains about 4K commits and 14K lines of code.
Aider made 15% of the commits, inserting 4.8K and deleting 1.5K lines of code.

View file

@@ -64,7 +64,7 @@ with the problem statement
submitted as the opening chat message from "the user".
- After that aider ran as normal, except all of aider's
suggestions were always accepted without user approval.
- A [simple harness](https://github.com/paul-gauthier/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
- A [simple harness](https://github.com/Aider-AI/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
Plausibly correct means that aider reported that it had successfully edited the repo
without causing syntax errors or breaking any *pre-existing* tests.
- If the solution from aider with GPT-4o wasn't plausible, the harness launched aider to try again from scratch using Claude 3 Opus.
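
A minimal sketch of that retry loop, with hypothetical helper names standing in for the real harness internals:

```python
def solve(problem_statement):
    # Try the primary model first, then retry from scratch with the fallback.
    for model in ("gpt-4o", "claude-3-opus"):
        result = run_aider(problem_statement, model=model)  # non-interactive aider run
        plausible = (
            result.edited_repo
            and not result.syntax_errors
            and result.preexisting_tests_pass
        )
        if plausible:
            return result.diff  # submit this candidate solution
    return None  # neither model produced a plausible edit
```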
@@ -90,7 +90,7 @@ For a detailed discussion of the benchmark
methodology, see the
[article about aider's SWE Bench Lite results](https://aider.chat/2024/05/22/swe-bench-lite.html).
Also, the
[aider SWE Bench repository on GitHub](https://github.com/paul-gauthier/aider-swe-bench)
[aider SWE Bench repository on GitHub](https://github.com/Aider-AI/aider-swe-bench)
contains the harness and statistics code used for the benchmarks.
The benchmarking process was similar to how a developer might use aider to

View file

@@ -37,8 +37,8 @@ Users who tested Sonnet with a preview of
[aider's latest release](https://aider.chat/HISTORY.html#aider-v0410)
were thrilled:
- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2200338971)
- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2196026656)
- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/Aider-AI/aider/issues/705#issuecomment-2200338971)
- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/Aider-AI/aider/issues/705#issuecomment-2196026656)
- *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143)
## Hitting the 4k token output limit
@@ -116,7 +116,7 @@ for more details, but
you can get started quickly with aider and Sonnet like this:
```
$ python -m pip install aider-chat
$ python -m pip install -U aider-chat
$ export ANTHROPIC_API_KEY=<key> # Mac/Linux
$ setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx

View file

@@ -30,7 +30,7 @@ included for scale.
You can code with all of these models using aider like this:
```
$ python -m pip install aider-chat
$ python -m pip install -U aider-chat
# Change directory into a git repo to work on
$ cd /to/your/git/repo

View file

@@ -11,7 +11,8 @@ nav_exclude: true
# LLMs are bad at returning code in JSON
LLMs produce lower quality code if they're asked to return it as part of a structured JSON response. This seems to be true for many top models, including those with specialized support for JSON. Benchmarks show that models struggle with syntactic issues related to quoting and escaping.
LLMs produce lower quality code if they're asked to return it as part of a structured JSON response. This seems to be true for many top models, including those with specialized support for JSON. Benchmarks show that models struggle with syntax errors in the code
they write, related to quoting and escaping it into JSON.
The benchmark results also imply a decreased capacity for solving coding problems due to the burden of JSON formatting.
{% include code-in-json-benchmark.js %}
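
As an illustration of that burden (a hypothetical payload, not taken from the benchmark), even a two-line Python function has to arrive as a single JSON string, with every inner quote and newline escaped correctly or the entire response is invalid:

```json
{"code": "def greet(name):\n    print(f\"Hello, {name}!\")\n"}
```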
@@ -150,7 +151,8 @@ to assess the impact of JSON-wrapping code:
- gpt-4o-2024-05-13
- gpt-4o-2024-08-06
Each combination of model and code wrapping strategy was benchmarked 5 times.
Each combination of model and code wrapping strategy was benchmarked 5 times
on all 133 problems.
### Overall coding skill
@@ -172,7 +174,11 @@ Both JSON results were well below the markdown result.
### Syntax errors
Models tend to make more syntax errors when asked to wrap code in JSON.
Models tend to make more syntax errors *in the code they write*
when asked to wrap it in JSON.
The models can reliably
produce valid JSON, but code inside is more prone to syntax errors.
Figure 2 shows the number of syntax errors found in the code produced by each
model and code wrapping strategy.
It totals up the `SyntaxError` and `IndentationError` errors from all 5 runs,

View file

@@ -0,0 +1,145 @@
---
title: Sonnet seems as good as ever
excerpt: Sonnet's score on the aider code editing benchmark has been stable since it launched.
highlight_image: /assets/sonnet-seems-fine.jpg
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# Sonnet seems as good as ever
Recently there has been a lot of speculation that Sonnet has been
dumbed-down, nerfed or is otherwise performing worse.
Sonnet seems as good as ever when performing the
[aider code editing benchmark](/docs/benchmarks.html#the-benchmark)
via the API.
Below is a graph showing the performance of Claude 3.5 Sonnet over time.
It shows every clean, comparable benchmark run performed since Sonnet launched.
Benchmarks were performed for various reasons, usually
to evaluate the effects of small changes to aider's system prompts.
The graph shows variance, but no indication of a noteworthy
degradation.
There is always some variance in benchmark results, typically +/- 2%
between runs with identical prompts.
It's worth noting that these results would not capture any changes
made to Anthropic web chat's use of Sonnet.
<div class="chart-container" style="position: relative; height:400px; width:100%">
<canvas id="sonnetPerformanceChart"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/moment@2.29.4/moment.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-moment@1.0.1/dist/chartjs-adapter-moment.min.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function() {
var ctx = document.getElementById('sonnetPerformanceChart').getContext('2d');
var sonnetData = {{ site.data.sonnet-fine | jsonify }};
var chartData = sonnetData.map(item => ({
x: moment(item.date).toDate(),
y1: item.pass_rate_1,
y2: item.pass_rate_2
})).sort((a, b) => a.x - b.x);
new Chart(ctx, {
type: 'scatter',
data: {
datasets: [{
label: 'Pass Rate 1',
data: chartData.map(item => ({ x: item.x, y: item.y1 })),
backgroundColor: 'rgb(75, 192, 192)',
pointRadius: 5,
pointHoverRadius: 7
}, {
label: 'Pass Rate 2',
data: chartData.map(item => ({ x: item.x, y: item.y2 })),
backgroundColor: 'rgb(255, 99, 132)',
pointRadius: 5,
pointHoverRadius: 7
}]
},
options: {
responsive: true,
maintainAspectRatio: false,
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Pass Rate (%)',
font: {
size: 14
}
},
ticks: {
font: {
size: 12
}
}
},
x: {
type: 'time',
time: {
unit: 'day'
},
title: {
display: true,
text: 'Date',
font: {
size: 14
}
},
ticks: {
font: {
size: 12
}
}
}
},
plugins: {
title: {
display: true,
text: 'Claude 3.5 Sonnet Performance Over Time',
font: {
size: 18
}
},
legend: {
labels: {
font: {
size: 14
}
}
},
tooltip: {
callbacks: {
label: function(context) {
let label = context.dataset.label || '';
if (label) {
label += ': ';
}
if (context.parsed.y !== null) {
label += context.parsed.y.toFixed(1) + '%';
}
return label;
}
}
}
}
}
});
});
</script>
> This graph shows the performance of Claude 3.5 Sonnet on
> [aider's code editing benchmark](/docs/benchmarks.html#the-benchmark)
> over time. 'Pass Rate 1' represents the initial success rate, while 'Pass Rate 2' shows the success rate after a second attempt with a chance to fix testing errors.
> The
> [aider LLM code editing leaderboard](https://aider.chat/docs/leaderboards/)
> ranks models based on Pass Rate 2.

View file

@@ -0,0 +1,116 @@
---
title: o1-preview is SOTA on the aider leaderboard
excerpt: Preliminary benchmark results for the new OpenAI o1 models.
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# OpenAI o1-preview is SOTA on the aider leaderboard
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
{% assign edit_sorted = site.data.o1_results | sort: 'pass_rate_2' | reverse %}
{% include leaderboard_graph.html
chart_id="editChart"
data=edit_sorted
row_prefix="edit-row"
pass_rate_key="pass_rate_2"
%}
## o1-preview
OpenAI o1-preview scored 79.7% on aider's code editing benchmark,
a state-of-the-art result.
It achieved this result with the
["whole" edit format](/docs/leaderboards/#notes-on-the-edit-format),
where the LLM returns a full copy of the source code file with changes.
It is much more practical to use aider's
["diff" edit format](/docs/leaderboards/#notes-on-the-edit-format),
which allows the LLM to return search/replace blocks to
efficiently edit the source code.
This saves significant time and token costs.
Using the diff edit format the o1-preview model had a strong
benchmark score of 75.2%.
This likely places o1-preview between Sonnet and GPT-4o for practical use,
but at significantly higher cost.
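
For reference, a search/replace block in the diff format looks roughly like this (a simplified sketch with hypothetical file contents):

```
greeting.py
<<<<<<< SEARCH
def greet():
    print("hello")
=======
def greet(name):
    print(f"hello, {name}")
>>>>>>> REPLACE
```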
## o1-mini
OpenAI o1-mini is priced similarly to GPT-4o and Claude 3.5 Sonnet,
but scored below those models.
It also works best with the whole edit format.
## Future work
The o1-preview model had trouble conforming to aider's diff edit format.
The o1-mini model had trouble conforming to both the whole and diff edit formats.
Aider is extremely permissive and tries hard to accept anything close
to the correct formats.
It is surprising that such strong models had trouble with
the syntactic requirements of simple text output formats.
It seems likely that aider could optimize its prompts and edit formats to
better harness the o1 models.
## Using aider with o1
OpenAI's new o1 models are supported in v0.57.0 of aider:
```
aider --model o1-mini
aider --model o1-preview
```
{: .note }
> These are initial benchmark results for the o1 models,
> based on aider v0.56.1-dev.
> See the [aider leaderboards](/docs/leaderboards/) for up-to-date results
> based on the latest aider releases.
<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
<thead style="background-color: #f2f2f2;">
<tr>
<th style="padding: 8px; text-align: left;">Model</th>
<th style="padding: 8px; text-align: center;">Percent completed correctly</th>
<th style="padding: 8px; text-align: center;">Percent using correct edit format</th>
<th style="padding: 8px; text-align: left;">Command</th>
<th style="padding: 8px; text-align: center;">Edit format</th>
</tr>
</thead>
<tbody>
{% for row in edit_sorted %}
<tr style="border-bottom: 1px solid #ddd;">
<td style="padding: 8px;">{{ row.model }}</td>
<td style="padding: 8px; text-align: center;">{{ row.pass_rate_2 }}%</td>
<td style="padding: 8px; text-align: center;">{{ row.percent_cases_well_formed }}%</td>
<td style="padding: 8px;"><code>{{ row.command }}</code></td>
<td style="padding: 8px; text-align: center;">{{ row.edit_format }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<style>
tr.selected {
color: #0056b3;
}
table {
table-layout: fixed;
}
td, th {
word-wrap: break-word;
overflow-wrap: break-word;
}
td:nth-child(3), td:nth-child(4) {
font-size: 12px;
}
</style>

View file

@ -0,0 +1,418 @@
---
title: Separating code reasoning and editing
excerpt: An Architect model describes how to solve the coding problem, and an Editor model translates that into file edits. This Architect/Editor approach produces SOTA benchmark results.
highlight_image: /assets/architect.jpg
draft: false
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# Separating code reasoning and editing
Aider now has experimental support for using two models to complete each coding task:
- An Architect model is asked to describe how to solve the coding problem.
- An Editor model is given the Architect's solution and asked to produce specific code editing instructions to apply those changes to existing source files.
Splitting up "code reasoning" and "code editing" in this manner
has produced SOTA results on
[aider's code editing benchmark](/docs/benchmarks.html#the-benchmark).
Using o1-preview as the Architect with either DeepSeek or o1-mini as the
Editor produced the SOTA score of 85%.
Using the Architect/Editor approach
also significantly improved the benchmark scores of many
models, compared to their previous "solo" baseline scores (striped bars).
<style>
.shaded td {
background-color: #f2f2f2;
border-top: 1px solid #ccc;
}
.table-container {
max-width: 100%;
overflow-x: auto;
}
.responsive-table {
border-collapse: separate;
border-spacing: 0;
width: 100%;
font-size: 16px;
border: 1px solid #ddd;
}
.responsive-table th, .responsive-table td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
word-break: break-word;
}
.responsive-table th {
background-color: #e2e2e2;
}
.responsive-table th:first-child,
.responsive-table td:first-child {
border-left: 1px solid #ddd;
}
.responsive-table th:last-child,
.responsive-table td:last-child {
border-right: 1px solid #ddd;
}
@media screen and (max-width: 600px) {
.responsive-table {
font-size: 12px;
}
.responsive-table th, .responsive-table td {
padding: 4px;
}
}
</style>
<style>
#passRateChart {
max-width: 100%;
height: auto !important;
}
</style>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-annotation@1.0.2"></script>
{% assign sorted_data = site.data.architect | sort: "pass_rate_2" | reverse %}
<canvas id="passRateChart" width="400" height="250"></canvas>
<script>
document.addEventListener("DOMContentLoaded", function() {
var ctx = document.getElementById('passRateChart').getContext('2d');
// Function to determine aspect ratio and base font size based on screen width
function getChartSettings() {
if (window.innerWidth < 600) {
return { aspectRatio: 1, baseFontSize: 8 }; // Tallest aspect ratio for small screens
} else if (window.innerWidth < 800) {
return { aspectRatio: 1.2, baseFontSize: 10 }; // Intermediate aspect ratio for medium screens
} else {
return { aspectRatio: 1.4, baseFontSize: 12 }; // Widest aspect ratio for larger screens
}
}
var chartSettings = getChartSettings();
var baseFontSize = chartSettings.baseFontSize;
var labels = [];
var data = [];
var colorMapping = {
"claude-3.5-sonnet": "rgba(75, 192, 192, 0.2)",
"gpt-4o": "rgba(255, 99, 132, 0.2)",
"o1-preview": "rgba(54, 162, 235, 0.2)",
"o1-mini": "rgba(255, 206, 86, 0.2)",
"gpt-4o-mini": "rgba(153, 102, 255, 0.2)"
};
var borderColorMapping = {
"claude-3.5-sonnet": "rgba(75, 192, 192, 1)",
"gpt-4o": "rgba(255, 99, 132, 1)",
"o1-preview": "rgba(54, 162, 235, 1)",
"o1-mini": "rgba(255, 206, 86, 1)",
"gpt-4o-mini": "rgba(153, 102, 255, 1)"
};
var backgroundColors = [];
var borderColors = [];
var patterns = {};
for (var key in colorMapping) {
patterns[key] = ctx.createPattern(createStripePattern(colorMapping[key]), 'repeat');
}
{% assign grouped_data = sorted_data | group_by: "model" %}
{% for group in grouped_data %}
{% for item in group.items %}
if ("{{ item.editor_model }}" == "") {
labels.push("Baseline");
} else {
labels.push("{{ item.editor_model }}/{{ item.editor_edit_format | default: item.edit_format }}");
}
data.push({{ item.pass_rate_2 }});
if ("{{ item.editor_model }}" == "") {
backgroundColors.push(patterns["{{ item.model }}"]);
} else {
backgroundColors.push(colorMapping["{{ item.model }}"]);
}
borderColors.push(borderColorMapping["{{ item.model }}"]);
{% endfor %}
{% endfor %}
labels.reverse();
data.reverse();
backgroundColors.reverse();
borderColors.reverse();
var chart = new Chart(ctx, {
type: 'bar',
data: {
labels: labels,
datasets: [{
label: 'Pass Rate',
data: data,
backgroundColor: backgroundColors,
borderColor: borderColors,
borderWidth: 1
}]
},
options: {
responsive: true,
maintainAspectRatio: true,
aspectRatio: chartSettings.aspectRatio,
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Pass Rate (%)',
font: {
size: baseFontSize + 6
}
},
ticks: {
font: {
size: baseFontSize
}
}
},
x: {
title: {
display: true,
text: 'Editor model and edit format',
font: {
size: baseFontSize + 6
}
},
ticks: {
font: {
size: baseFontSize + 4
},
maxRotation: 90, // Allow full rotation if needed
minRotation: 45 // Start rotating at 45 degrees to fit more labels
}
}
},
plugins: {
annotation: {
annotations: {
line1: {
type: 'line',
yMin: 79.7,
yMax: 79.7,
borderColor: 'rgba(255, 99, 132, 0.8)',
borderWidth: 2,
borderDash: [6, 6],
label: {
content: 'Previous SOTA',
enabled: true,
position: 'start',
xAdjust: 10,
font: {
size: baseFontSize
}
}
}
}
},
legend: {
display: true,
title: {
display: true,
text: 'Architect model',
font: {
size: baseFontSize + 2,
weight: 'bold'
}
},
labels: {
font: {
size: baseFontSize + 4
},
generateLabels: function(chart) {
var colorMapping = {
"o1-preview": "rgba(54, 162, 235, 0.2)",
"claude-3.5-sonnet": "rgba(75, 192, 192, 0.2)",
"gpt-4o": "rgba(255, 99, 132, 0.2)",
"o1-mini": "rgba(255, 206, 86, 0.2)",
"gpt-4o-mini": "rgba(153, 102, 255, 0.2)"
};
return Object.keys(colorMapping).reverse().map(function(key) {
return {
text: key,
fillStyle: colorMapping[key],
strokeStyle: colorMapping[key].replace('0.2', '1'),
lineWidth: 1
};
});
}
}
}
}
}
});
// Update aspect ratio and font sizes on window resize
window.addEventListener('resize', function() {
var newSettings = getChartSettings();
chart.options.aspectRatio = newSettings.aspectRatio;
baseFontSize = newSettings.baseFontSize;
// Update font sizes
chart.options.scales.y.title.font.size = baseFontSize + 6;
chart.options.scales.y.ticks.font.size = baseFontSize;
chart.options.scales.x.title.font.size = baseFontSize + 6;
chart.options.scales.x.ticks.font.size = baseFontSize + 4;
chart.options.plugins.annotation.annotations.line1.label.font.size = baseFontSize;
chart.options.plugins.legend.title.font.size = baseFontSize + 2;
chart.options.plugins.legend.labels.font.size = baseFontSize + 4;
chart.update();
});
});
function createStripePattern(baseColor) {
var canvas = document.createElement('canvas');
canvas.width = 10;
canvas.height = 10;
var ctx = canvas.getContext('2d');
ctx.fillStyle = baseColor;
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.strokeStyle = 'rgba(0, 0, 0, 0.1)';
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(10, 10);
ctx.stroke();
return canvas;
}
</script>
## Motivation
This approach was motivated by the release of OpenAI's o1 models.
They are strong at reasoning, but often fail to output properly formatted
code editing instructions.
It helps to instead let them describe the solution
however they prefer and then pass that output to a more traditional LLM.
This second Editor LLM can then interpret the solution description and
produce the code editing instructions needed to update
the existing source code.
This approach has recently become attractive for aider due to
rapid improvements in the speed and costs of frontier models.
Chaining two older, slower LLMs through separate inference steps would have been
incompatible with aider's goal of providing an interactive,
pair programming AI coding experience.
## Code reasoning and code editing
Normally aider handles each coding problem in a single prompt,
asking the LLM to explain the solution and return
a well formatted series of file edits.
All of [aider's editing formats](/docs/more/edit-formats.html)
require the LLM to return source code edits in a specific text
format, so that aider can process the edits and apply them to the local source files.
Because this all happens in a single prompt/response round trip to the LLM,
the model has to split its attention between
solving the coding problem and conforming to the edit format.
The Architect/Editor approach splits this into two inference steps, possibly
using two different LLMs:
1. Solve the coding problem (Architect).
2. Turn the proposed solution into a series of well formed code edits (Editor).
The Architect/Editor approach allows the Architect to focus on solving the coding problem
and *describe the solution however comes naturally to it*.
Similarly, the Editor can focus all of its attention on properly formatting the edits
without needing to reason much about how to solve the coding problem.
We can assign the Architect and Editor roles to LLMs which are well suited to their needs.
Strong reasoning models like o1-preview make excellent Architects, while
the Editor role can be assigned to an appropriate model based on cost, speed
and code editing skill.
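For example, using the new command line options in this release, an explicit pairing might look like this (a hypothetical combination; any supported Architect and Editor models can be substituted):
```
# Hypothetical pairing: o1-preview as Architect, Deepseek as Editor
aider --model o1-preview --architect \
      --editor-model deepseek/deepseek-coder --editor-edit-format diff
```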
## Results
The graph above and the table below show
[aider's code editing benchmark](/docs/benchmarks.html#the-benchmark)
scores for various combinations of Architect and Editor models.
Some noteworthy observations:
- Pairing o1-preview as Architect with either Deepseek or o1-mini as Editor sets a SOTA significantly above the previous best score. This result is obtained with the "whole" editing format, requiring the Editor to output a full updated copy of each edited source file. Both steps are therefore quite slow, so this pairing is probably not practical for interactive use with aider.
- Pairing OpenAI's o1-preview with Anthropic's Sonnet as the Editor produces the second best result. This is an entirely practical configuration for users able to work with both providers.
- Pairing many models with themselves in the Architect/Editor configuration can provide
significant benefits.
Sonnet, GPT-4o and GPT-4o-mini all scored higher when used as an Architect/Editor pair.
- Deepseek is surprisingly effective as an Editor model. It seems remarkably capable at turning proposed coding solutions into new, updated versions of the source files. Using the efficient "diff" editing format, Deepseek helps all the Architect models except for Sonnet.
## Try it!
The development version of aider
has built-in defaults to support Architect/Editor coding with
o1-preview, o1-mini, GPT-4o and Claude 3.5 Sonnet.
Run aider with `--architect` or get started quickly like this:
```
pip install -U aider-chat
# Change directory into a git repo
cd /to/your/git/repo
# Work with Claude 3.5 Sonnet as the Architect and Editor
export ANTHROPIC_API_KEY=your-key-goes-here
aider --sonnet --architect
# Work with OpenAI models, using gpt-4o as the Editor
export OPENAI_API_KEY=your-key-goes-here
aider --4o --architect
aider --o1-mini --architect
aider --o1-preview --architect
```
## More info
Aider has a number of "chat modes", and "architect" is now one of them.
The `--architect` switch is a shortcut for `--chat-mode architect`.
For more details, see documentation on
[aider's chat modes](/docs/usage/modes.html).
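For example, these two invocations launch the same architect chat mode:
```
aider --chat-mode architect
aider --architect
```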
## Full results
Below are the benchmark results using various models as the Architect, paired with
various models as the Editor.
Each section includes a "baseline" result,
where the model works
by itself in aider's normal "code" editing mode
(not as part of an Architect/Editor configuration).
This "solo" baseline represents the performance previously available when using
this model with aider.
<div class="table-container">
<table class="responsive-table">
<thead>
<tr>
<th>Architect</th>
<th>Editor</th>
<th>Edit Format</th>
<th>Pass Rate</th>
</tr>
</thead>
<tbody>
{% for group in grouped_data %}
{% assign group_class = forloop.index | modulo: 2 | plus: 1 %}
{% for item in group.items %}
<tr class="{% if group_class == 1 %}shaded{% endif %}">
<td>{{ item.model }}</td>
<td>{% if item.editor_model %}{{ item.editor_model }}{% else %}<b>Baseline</b>{% endif %}</td>
<td style="text-align: center;">{{ item.editor_edit_format | default: item.edit_format }}</td>
<td style="text-align: right;">{{ item.pass_rate_2 }}%</td>
</tr>
{% endfor %}
{% endfor %}
</tbody>
</table>
</div>

Binary file not shown (image added, 337 KiB)

Binary file not shown (image added, 307 KiB)

View file

@ -12,30 +12,30 @@
# options:
## show this help message and exit
#help:
#help: xxx
#######
# Main:
## Specify the OpenAI API key
#openai-api-key:
#openai-api-key: xxx
## Specify the Anthropic API key
#anthropic-api-key:
#anthropic-api-key: xxx
## Specify the model to use for the main chat
#model:
#model: xxx
## Use claude-3-opus-20240229 model for the main chat
#opus: false
## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#sonnet: false
## Use gpt-4-0613 model for the main chat
#4: false
## Use gpt-4o model for the main chat
## Use gpt-4o-2024-08-06 model for the main chat
#4o: false
## Use gpt-4o-mini model for the main chat
@ -50,26 +50,32 @@
## Use deepseek/deepseek-coder model for the main chat
#deepseek: false
## Use o1-mini model for the main chat
#o1-mini: false
## Use o1-preview model for the main chat
#o1-preview: false
#################
# Model Settings:
## List known models which match the (partial) MODEL name
#models:
#list-models: xxx
## Specify the api base url
#openai-api-base:
#openai-api-base: xxx
## Specify the api_type
#openai-api-type:
#openai-api-type: xxx
## Specify the api_version
#openai-api-version:
#openai-api-version: xxx
## Specify the deployment_id
#openai-api-deployment-id:
#openai-api-deployment-id: xxx
## Specify the OpenAI organization ID
#openai-organization-id:
#openai-organization-id: xxx
## Specify a file with aider model settings for unknown models
#model-settings-file: .aider.model.settings.yml
@ -81,23 +87,50 @@
#verify-ssl: true
## Specify what edit format the LLM should use (default depends on model)
#edit-format:
#edit-format: xxx
## Use architect edit format for the main chat
#architect: false
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#weak-model:
#weak-model: xxx
## Specify the model to use for editor tasks (default depends on --model)
#editor-model: xxx
## Specify the edit format for the editor model (default: depends on editor model)
#editor-edit-format: xxx
## Only work with models that have meta-data available (default: True)
#show-model-warnings: true
## Max number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens:
## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#max-chat-history-tokens:
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx
## Specify the .env file to load (default: .env in git root)
#env-file: .env
#################
# Cache Settings:
## Enable caching of prompts (default: False)
#cache-prompts: false
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#cache-keepalive-pings: 0
###################
# Repomap Settings:
## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens: xxx
## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#map-refresh: auto
## Multiplier for map tokens when no files are specified (default: 2)
#map-multiplier-no-files: 2
################
# History Files:
@ -111,7 +144,7 @@
#restore-chat-history: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
#llm-history-file: xxx
##################
# Output Settings:
@ -132,14 +165,29 @@
#user-input-color: #00cc00
## Set the color for tool output (default: None)
#tool-output-color:
#tool-output-color: xxx
## Set the color for tool error messages (default: red)
## Set the color for tool error messages (default: #FF2222)
#tool-error-color: #FF2222
## Set the color for tool warning messages (default: #FFA500)
#tool-warning-color: #FFA500
## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff
## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: xxx
## Set the background color for the completion menu (default: terminal's default background color)
#completion-menu-bg-color: xxx
## Set the color for the current item in the completion menu (default: terminal's default background color)
#completion-menu-current-color: xxx
## Set the background color for the current item in the completion menu (default: terminal's default text color)
#completion-menu-current-bg-color: xxx
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#code-theme: default
@ -183,11 +231,14 @@
#commit: false
## Specify a custom prompt for generating commit messages
#commit-prompt:
#commit-prompt: xxx
## Perform a dry run without modifying files (default: False)
#dry-run: false
## Skip the sanity check for the git repository (default: False)
#skip-sanity-check-repo: false
########################
# Fixing and committing:
@ -195,13 +246,18 @@
#lint: false
## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#lint-cmd: xxx
## Specify multiple values like this:
#lint-cmd:
# - xxx
# - yyy
# - zzz
## Enable/disable automatic linting after changes (default: True)
#auto-lint: true
## Specify command to run tests
#test-cmd:
#test-cmd: xxx
## Enable/disable automatic testing after changes (default: False)
#auto-test: false
@ -225,19 +281,29 @@
# Other Settings:
## specify a file to edit (can be used multiple times)
#file: xxx
## Specify multiple values like this:
#file:
# - xxx
# - yyy
# - zzz
## specify a read-only file (can be used multiple times)
#read: xxx
## Specify multiple values like this:
#read:
# - xxx
# - yyy
# - zzz
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en
## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx
## Show the version number and exit
#version:
#version: xxx
## Check for updates and return status in the exit code
#just-check-update: false
@ -245,11 +311,17 @@
## Check for new aider versions on launch
#check-update: true
## Install the latest version from the main branch
#install-main-branch: false
## Upgrade aider to the latest version from PyPI
#upgrade: false
## Apply the changes from the given file instead of running the chat (debug)
#apply:
#apply: xxx
## Always say yes to every confirmation
#yes: false
#yes-always: false
## Enable verbose output
#verbose: false
@ -264,16 +336,34 @@
#exit: false
## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#message:
#message: xxx
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#message-file:
#message-file: xxx
## Load and execute /commands from a file on launch
#load: xxx
## Specify the encoding for input and output (default: utf-8)
#encoding: utf-8
## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
#config:
#config: xxx
## Run aider in your browser
#gui: false
## Enable/disable suggesting shell commands (default: True)
#suggest-shell-commands: true
## Enable/disable fancy input with history and completion (default: True)
#fancy-input: true
#################
# Voice Settings:
## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#voice-format: wav
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en

View file

@ -33,13 +33,13 @@
## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS=
## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#AIDER_SONNET=
## Use gpt-4-0613 model for the main chat
#AIDER_4=
## Use gpt-4o model for the main chat
## Use gpt-4o-2024-08-06 model for the main chat
#AIDER_4O=
## Use gpt-4o-mini model for the main chat
@ -54,11 +54,17 @@
## Use deepseek/deepseek-coder model for the main chat
#AIDER_DEEPSEEK=
## Use o1-mini model for the main chat
#AIDER_O1_MINI=
## Use o1-preview model for the main chat
#AIDER_O1_PREVIEW=
#################
# Model Settings:
## List known models which match the (partial) MODEL name
#AIDER_MODELS=
#AIDER_LIST_MODELS=
## Specify the api base url
#OPENAI_API_BASE=
@ -87,21 +93,48 @@
## Specify what edit format the LLM should use (default depends on model)
#AIDER_EDIT_FORMAT=
## Use architect edit format for the main chat
#AIDER_ARCHITECT=
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#AIDER_WEAK_MODEL=
## Specify the model to use for editor tasks (default depends on --model)
#AIDER_EDITOR_MODEL=
## Specify the edit format for the editor model (default: depends on editor model)
#AIDER_EDITOR_EDIT_FORMAT=
## Only work with models that have meta-data available (default: True)
#AIDER_SHOW_MODEL_WARNINGS=true
## Max number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=
## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=
## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env
#################
# Cache Settings:
## Enable caching of prompts (default: False)
#AIDER_CACHE_PROMPTS=false
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#AIDER_CACHE_KEEPALIVE_PINGS=0
###################
# Repomap Settings:
## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=
## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#AIDER_MAP_REFRESH=auto
## Multiplier for map tokens when no files are specified (default: 2)
#AIDER_MAP_MULTIPLIER_NO_FILES=2
################
# History Files:
@ -138,12 +171,27 @@
## Set the color for tool output (default: None)
#AIDER_TOOL_OUTPUT_COLOR=
## Set the color for tool error messages (default: red)
## Set the color for tool error messages (default: #FF2222)
#AIDER_TOOL_ERROR_COLOR=#FF2222
## Set the color for tool warning messages (default: #FFA500)
#AIDER_TOOL_WARNING_COLOR=#FFA500
## Set the color for assistant output (default: #0088ff)
#AIDER_ASSISTANT_OUTPUT_COLOR=#0088ff
## Set the color for the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_COLOR=
## Set the background color for the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_BG_COLOR=
## Set the color for the current item in the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_CURRENT_COLOR=
## Set the background color for the current item in the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_CURRENT_BG_COLOR=
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#AIDER_CODE_THEME=default
@ -192,6 +240,9 @@
## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false
## Skip the sanity check for the git repository (default: False)
#AIDER_SKIP_SANITY_CHECK_REPO=false
########################
# Fixing and committing:
@ -237,8 +288,8 @@
## Use VI editing mode in the terminal (default: False)
#AIDER_VIM=false
## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en
## Specify the language to use in the chat (default: None, uses system settings)
#AIDER_CHAT_LANGUAGE=
## Check for updates and return status in the exit code
#AIDER_JUST_CHECK_UPDATE=false
@ -246,11 +297,17 @@
## Check for new aider versions on launch
#AIDER_CHECK_UPDATE=true
## Install the latest version from the main branch
#AIDER_INSTALL_MAIN_BRANCH=false
## Upgrade aider to the latest version from PyPI
#AIDER_UPGRADE=false
## Apply the changes from the given file instead of running the chat (debug)
#AIDER_APPLY=
## Always say yes to every confirmation
#AIDER_YES=
#AIDER_YES_ALWAYS=
## Enable verbose output
#AIDER_VERBOSE=false
@ -270,8 +327,26 @@
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#AIDER_MESSAGE_FILE=
## Load and execute /commands from a file on launch
#AIDER_LOAD=
## Specify the encoding for input and output (default: utf-8)
#AIDER_ENCODING=utf-8
## Run aider in your browser
#AIDER_GUI=false
## Enable/disable suggesting shell commands (default: True)
#AIDER_SUGGEST_SHELL_COMMANDS=true
## Enable/disable fancy input with history and completion (default: True)
#AIDER_FANCY_INPUT=true
#################
# Voice Settings:
## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#AIDER_VOICE_FORMAT=wav
## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en

Binary file not shown.

Binary file not shown (image added, 519 KiB)

Binary file not shown (image added, 147 KiB)

View file

@ -19,7 +19,7 @@ and there's a lot
of interest about their ability to code compared to the previous versions.
With that in mind, I've been benchmarking the new models.
[Aider](https://github.com/paul-gauthier/aider)
[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you work with GPT to edit
code in your local git repo.
To do this, aider needs to be able to reliably recognize when GPT wants to edit

View file

@ -20,7 +20,7 @@ and there's a lot
of interest about their capabilities and performance.
With that in mind, I've been benchmarking the new models.
[Aider](https://github.com/paul-gauthier/aider)
[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you work with GPT to edit
code in your local git repo.
Aider relies on a

View file

@ -55,7 +55,7 @@ about prompting GPT for complex tasks like coding. It's beneficial to
minimize the "cognitive overhead" of formatting the response, allowing
GPT to concentrate on the coding task at hand.
As a thought experiment, imagine a slack conversation with a junior developer where
As a thought experiment, imagine a Slack conversation with an editor developer where
you ask them to write the code to add some new feature to your app.
They're going to type the response back to you by hand in the chat.
Should they type out the
@ -168,7 +168,7 @@ requests:
### whole
The
[whole](https://github.com/paul-gauthier/aider/blob/main/aider/coders/wholefile_prompts.py)
[whole](https://github.com/Aider-AI/aider/blob/main/aider/coders/wholefile_prompts.py)
format asks GPT to return an updated copy of the entire file, including any changes.
The file should be
formatted with normal markdown triple-backtick fences, inlined with the rest of its response text.
@ -187,7 +187,7 @@ def main():
### diff
The [diff](https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_prompts.py)
The [diff](https://github.com/Aider-AI/aider/blob/main/aider/coders/editblock_prompts.py)
format also asks GPT to return edits as part of the normal response text,
in a simple diff format.
Each edit is a fenced code block that
@ -209,7 +209,7 @@ demo.py
### whole-func
The [whole-func](https://github.com/paul-gauthier/aider/blob/main/aider/coders/wholefile_func_coder.py)
The [whole-func](https://github.com/Aider-AI/aider/blob/main/aider/coders/wholefile_func_coder.py)
format requests updated copies of whole files to be returned using the function call API.
@ -227,7 +227,7 @@ format requests updated copies of whole files to be returned using the function
### diff-func
The
[diff-func](https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_func_coder.py)
[diff-func](https://github.com/Aider-AI/aider/blob/main/aider/coders/editblock_func_coder.py)
format requests a list of
original/updated style edits to be returned using the function call API.

File diff suppressed because it is too large.

View file

@ -21,11 +21,28 @@ load whichever is found first.
{% include env-keys-tip.md %}
## A note on lists
Lists of values can be specified either as a bulleted list:
```
read:
- CONVENTIONS.md
- anotherfile.txt
- thirdfile.py
```
Or lists can be specified using commas and square brackets:
```
read: [CONVENTIONS.md, anotherfile.txt, thirdfile.py]
```
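On the command line, the equivalent is to repeat an option once per value. For example, with the `--read` option (documented below as usable multiple times):
```
# same as listing the files under `read:` in the YAML config
aider --read CONVENTIONS.md --read anotherfile.txt --read thirdfile.py
```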
## Sample YAML config file
Below is a sample of the YAML config file, which you
can also
[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/aider/website/assets/sample.aider.conf.yml).
[download from GitHub](https://github.com/Aider-AI/aider/blob/main/aider/website/assets/sample.aider.conf.yml).
<!--[[[cog
from aider.args import get_sample_yaml
@ -51,30 +68,30 @@ cog.outl("```")
# options:
## show this help message and exit
#help:
#help: xxx
#######
# Main:
## Specify the OpenAI API key
#openai-api-key:
#openai-api-key: xxx
## Specify the Anthropic API key
#anthropic-api-key:
#anthropic-api-key: xxx
## Specify the model to use for the main chat
#model:
#model: xxx
## Use claude-3-opus-20240229 model for the main chat
#opus: false
## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#sonnet: false
## Use gpt-4-0613 model for the main chat
#4: false
## Use gpt-4o model for the main chat
## Use gpt-4o-2024-08-06 model for the main chat
#4o: false
## Use gpt-4o-mini model for the main chat
@ -89,26 +106,32 @@ cog.outl("```")
## Use deepseek/deepseek-coder model for the main chat
#deepseek: false
## Use o1-mini model for the main chat
#o1-mini: false
## Use o1-preview model for the main chat
#o1-preview: false
#################
# Model Settings:
## List known models which match the (partial) MODEL name
#models:
#list-models: xxx
## Specify the api base url
#openai-api-base:
#openai-api-base: xxx
## Specify the api_type
#openai-api-type:
#openai-api-type: xxx
## Specify the api_version
#openai-api-version:
#openai-api-version: xxx
## Specify the deployment_id
#openai-api-deployment-id:
#openai-api-deployment-id: xxx
## Specify the OpenAI organization ID
#openai-organization-id:
#openai-organization-id: xxx
## Specify a file with aider model settings for unknown models
#model-settings-file: .aider.model.settings.yml
@ -120,23 +143,50 @@ cog.outl("```")
#verify-ssl: true
## Specify what edit format the LLM should use (default depends on model)
#edit-format:
#edit-format: xxx
## Use architect edit format for the main chat
#architect: false
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#weak-model:
#weak-model: xxx
## Specify the model to use for editor tasks (default depends on --model)
#editor-model: xxx
## Specify the edit format for the editor model (default: depends on editor model)
#editor-edit-format: xxx
## Only work with models that have meta-data available (default: True)
#show-model-warnings: true
## Max number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens:
## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#max-chat-history-tokens:
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx
## Specify the .env file to load (default: .env in git root)
#env-file: .env
#################
# Cache Settings:
## Enable caching of prompts (default: False)
#cache-prompts: false
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#cache-keepalive-pings: 0
###################
# Repomap Settings:
## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens: xxx
## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#map-refresh: auto
## Multiplier for map tokens when no files are specified (default: 2)
#map-multiplier-no-files: 2
################
# History Files:
@ -150,7 +200,7 @@ cog.outl("```")
#restore-chat-history: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
#llm-history-file: xxx
##################
# Output Settings:
@ -171,14 +221,29 @@ cog.outl("```")
#user-input-color: #00cc00
## Set the color for tool output (default: None)
#tool-output-color:
#tool-output-color: xxx
## Set the color for tool error messages (default: red)
## Set the color for tool error messages (default: #FF2222)
#tool-error-color: #FF2222
## Set the color for tool warning messages (default: #FFA500)
#tool-warning-color: #FFA500
## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff
## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: xxx
## Set the background color for the completion menu (default: terminal's default background color)
#completion-menu-bg-color: xxx
## Set the color for the current item in the completion menu (default: terminal's default background color)
#completion-menu-current-color: xxx
## Set the background color for the current item in the completion menu (default: terminal's default text color)
#completion-menu-current-bg-color: xxx
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#code-theme: default
@ -222,11 +287,14 @@ cog.outl("```")
#commit: false
## Specify a custom prompt for generating commit messages
#commit-prompt:
#commit-prompt: xxx
## Perform a dry run without modifying files (default: False)
#dry-run: false
## Skip the sanity check for the git repository (default: False)
#skip-sanity-check-repo: false
########################
# Fixing and committing:
@ -234,13 +302,18 @@ cog.outl("```")
#lint: false
## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#lint-cmd: xxx
## Specify multiple values like this:
#lint-cmd:
# - xxx
# - yyy
# - zzz
## Enable/disable automatic linting after changes (default: True)
#auto-lint: true
## Specify command to run tests
#test-cmd:
#test-cmd: xxx
## Enable/disable automatic testing after changes (default: False)
#auto-test: false
@ -264,19 +337,29 @@ cog.outl("```")
# Other Settings:
## specify a file to edit (can be used multiple times)
#file: xxx
## Specify multiple values like this:
#file:
# - xxx
# - yyy
# - zzz
## specify a read-only file (can be used multiple times)
#read: xxx
## Specify multiple values like this:
#read:
# - xxx
# - yyy
# - zzz
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en
## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx
## Show the version number and exit
#version:
#version: xxx
## Check for updates and return status in the exit code
#just-check-update: false
@ -284,11 +367,17 @@ cog.outl("```")
## Check for new aider versions on launch
#check-update: true
## Install the latest version from the main branch
#install-main-branch: false
## Upgrade aider to the latest version from PyPI
#upgrade: false
## Apply the changes from the given file instead of running the chat (debug)
#apply:
#apply: xxx
## Always say yes to every confirmation
#yes: false
#yes-always: false
## Enable verbose output
#verbose: false
@ -303,18 +392,36 @@ cog.outl("```")
#exit: false
## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#message:
#message: xxx
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#message-file:
#message-file: xxx
## Load and execute /commands from a file on launch
#load: xxx
## Specify the encoding for input and output (default: utf-8)
#encoding: utf-8
## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
#config:
#config: xxx
## Run aider in your browser
#gui: false
## Enable/disable suggesting shell commands (default: True)
#suggest-shell-commands: true
## Enable/disable fancy input with history and completion (default: True)
#fancy-input: true
#################
# Voice Settings:
## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#voice-format: wav
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en
```
<!--[[[end]]]-->

View file

@ -28,7 +28,7 @@ If the files above exist, they will be loaded in that order. Files loaded last w
Below is a sample `.env` file, which you
can also
[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/aider/website/assets/sample.env).
[download from GitHub](https://github.com/Aider-AI/aider/blob/main/aider/website/assets/sample.env).
<!--[[[cog
from aider.args import get_sample_dotenv
@ -75,13 +75,13 @@ cog.outl("```")
## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS=
## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#AIDER_SONNET=
## Use gpt-4-0613 model for the main chat
#AIDER_4=
## Use gpt-4o model for the main chat
## Use gpt-4o-2024-08-06 model for the main chat
#AIDER_4O=
## Use gpt-4o-mini model for the main chat
@ -96,11 +96,17 @@ cog.outl("```")
## Use deepseek/deepseek-coder model for the main chat
#AIDER_DEEPSEEK=
## Use o1-mini model for the main chat
#AIDER_O1_MINI=
## Use o1-preview model for the main chat
#AIDER_O1_PREVIEW=
#################
# Model Settings:
## List known models which match the (partial) MODEL name
#AIDER_MODELS=
#AIDER_LIST_MODELS=
## Specify the api base url
#OPENAI_API_BASE=
@ -129,21 +135,48 @@ cog.outl("```")
## Specify what edit format the LLM should use (default depends on model)
#AIDER_EDIT_FORMAT=
## Use architect edit format for the main chat
#AIDER_ARCHITECT=
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#AIDER_WEAK_MODEL=
## Specify the model to use for editor tasks (default depends on --model)
#AIDER_EDITOR_MODEL=
## Specify the edit format for the editor model (default: depends on editor model)
#AIDER_EDITOR_EDIT_FORMAT=
## Only work with models that have meta-data available (default: True)
#AIDER_SHOW_MODEL_WARNINGS=true
## Max number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=
## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=
## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env
#################
# Cache Settings:
## Enable caching of prompts (default: False)
#AIDER_CACHE_PROMPTS=false
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#AIDER_CACHE_KEEPALIVE_PINGS=0
###################
# Repomap Settings:
## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=
## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#AIDER_MAP_REFRESH=auto
## Multiplier for map tokens when no files are specified (default: 2)
#AIDER_MAP_MULTIPLIER_NO_FILES=2
################
# History Files:
@ -180,12 +213,27 @@ cog.outl("```")
## Set the color for tool output (default: None)
#AIDER_TOOL_OUTPUT_COLOR=
## Set the color for tool error messages (default: red)
## Set the color for tool error messages (default: #FF2222)
#AIDER_TOOL_ERROR_COLOR=#FF2222
## Set the color for tool warning messages (default: #FFA500)
#AIDER_TOOL_WARNING_COLOR=#FFA500
## Set the color for assistant output (default: #0088ff)
#AIDER_ASSISTANT_OUTPUT_COLOR=#0088ff
## Set the color for the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_COLOR=
## Set the background color for the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_BG_COLOR=
## Set the color for the current item in the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_CURRENT_COLOR=
## Set the background color for the current item in the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_CURRENT_BG_COLOR=
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#AIDER_CODE_THEME=default
@ -234,6 +282,9 @@ cog.outl("```")
## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false
## Skip the sanity check for the git repository (default: False)
#AIDER_SKIP_SANITY_CHECK_REPO=false
########################
# Fixing and committing:
@ -279,8 +330,8 @@ cog.outl("```")
## Use VI editing mode in the terminal (default: False)
#AIDER_VIM=false
## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en
## Specify the language to use in the chat (default: None, uses system settings)
#AIDER_CHAT_LANGUAGE=
## Check for updates and return status in the exit code
#AIDER_JUST_CHECK_UPDATE=false
@ -288,11 +339,17 @@ cog.outl("```")
## Check for new aider versions on launch
#AIDER_CHECK_UPDATE=true
## Install the latest version from the main branch
#AIDER_INSTALL_MAIN_BRANCH=false
## Upgrade aider to the latest version from PyPI
#AIDER_UPGRADE=false
## Apply the changes from the given file instead of running the chat (debug)
#AIDER_APPLY=
## Always say yes to every confirmation
#AIDER_YES=
#AIDER_YES_ALWAYS=
## Enable verbose output
#AIDER_VERBOSE=false
@ -312,11 +369,29 @@ cog.outl("```")
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#AIDER_MESSAGE_FILE=
## Load and execute /commands from a file on launch
#AIDER_LOAD=
## Specify the encoding for input and output (default: utf-8)
#AIDER_ENCODING=utf-8
## Run aider in your browser
#AIDER_GUI=false
## Enable/disable suggesting shell commands (default: True)
#AIDER_SUGGEST_SHELL_COMMANDS=true
## Enable/disable fancy input with history and completion (default: True)
#AIDER_FANCY_INPUT=true
#################
# Voice Settings:
## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#AIDER_VOICE_FORMAT=wav
## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en
```
<!--[[[end]]]-->

View file

@ -27,21 +27,30 @@ cog.out(get_md_help())
```
usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
[--opus] [--sonnet] [--4] [--4o] [--mini] [--4-turbo]
[--35turbo] [--deepseek] [--models] [--openai-api-base]
[--openai-api-type] [--openai-api-version]
[--openai-api-deployment-id] [--openai-organization-id]
[--model-settings-file] [--model-metadata-file]
[--35turbo] [--deepseek] [--o1-mini] [--o1-preview]
[--list-models] [--openai-api-base] [--openai-api-type]
[--openai-api-version] [--openai-api-deployment-id]
[--openai-organization-id] [--model-settings-file]
[--model-metadata-file]
[--verify-ssl | --no-verify-ssl] [--edit-format]
[--weak-model]
[--architect] [--weak-model] [--editor-model]
[--editor-edit-format]
[--show-model-warnings | --no-show-model-warnings]
[--map-tokens] [--max-chat-history-tokens] [--env-file]
[--max-chat-history-tokens] [--env-file]
[--cache-prompts | --no-cache-prompts]
[--cache-keepalive-pings] [--map-tokens]
[--map-refresh] [--map-multiplier-no-files]
[--input-history-file] [--chat-history-file]
[--restore-chat-history | --no-restore-chat-history]
[--llm-history-file] [--dark-mode] [--light-mode]
[--pretty | --no-pretty] [--stream | --no-stream]
[--user-input-color] [--tool-output-color]
[--tool-error-color] [--assistant-output-color]
[--code-theme] [--show-diffs] [--git | --no-git]
[--tool-error-color] [--tool-warning-color]
[--assistant-output-color] [--completion-menu-color]
[--completion-menu-bg-color]
[--completion-menu-current-color]
[--completion-menu-current-bg-color] [--code-theme]
[--show-diffs] [--git | --no-git]
[--gitignore | --no-gitignore] [--aiderignore]
[--subtree-only] [--auto-commits | --no-auto-commits]
[--dirty-commits | --no-dirty-commits]
@ -50,14 +59,20 @@ usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
[--attribute-commit-message-author | --no-attribute-commit-message-author]
[--attribute-commit-message-committer | --no-attribute-commit-message-committer]
[--commit] [--commit-prompt] [--dry-run | --no-dry-run]
[--lint] [--lint-cmd] [--auto-lint | --no-auto-lint]
[--test-cmd] [--auto-test | --no-auto-test] [--test]
[--skip-sanity-check-repo] [--lint] [--lint-cmd]
[--auto-lint | --no-auto-lint] [--test-cmd]
[--auto-test | --no-auto-test] [--test]
[--analytics | --no-analytics] [--analytics-log]
[--analytics-disable] [--file] [--read] [--vim]
[--voice-language] [--version] [--just-check-update]
[--check-update | --no-check-update] [--apply] [--yes]
[-v] [--show-repo-map] [--show-prompts] [--exit]
[--message] [--message-file] [--encoding] [-c] [--gui]
[--chat-language] [--version] [--just-check-update]
[--check-update | --no-check-update]
[--install-main-branch] [--upgrade] [--apply]
[--yes-always] [-v] [--show-repo-map] [--show-prompts]
[--exit] [--message] [--message-file] [--load]
[--encoding] [-c] [--gui]
[--suggest-shell-commands | --no-suggest-shell-commands]
[--fancy-input | --no-fancy-input] [--voice-format]
[--voice-language]
```
@ -88,7 +103,7 @@ Use claude-3-opus-20240229 model for the main chat
Environment variable: `AIDER_OPUS`
### `--sonnet`
Use claude-3-5-sonnet-20240620 model for the main chat
Use claude-3-5-sonnet-20241022 model for the main chat
Environment variable: `AIDER_SONNET`
### `--4`
@ -99,7 +114,7 @@ Aliases:
- `-4`
### `--4o`
Use gpt-4o model for the main chat
Use gpt-4o-2024-08-06 model for the main chat
Environment variable: `AIDER_4O`
### `--mini`
@ -123,11 +138,22 @@ Aliases:
Use deepseek/deepseek-coder model for the main chat
Environment variable: `AIDER_DEEPSEEK`
### `--o1-mini`
Use o1-mini model for the main chat
Environment variable: `AIDER_O1_MINI`
### `--o1-preview`
Use o1-preview model for the main chat
Environment variable: `AIDER_O1_PREVIEW`
## Model Settings:
### `--models MODEL`
### `--list-models MODEL`
List known models which match the (partial) MODEL name
Environment variable: `AIDER_MODELS`
Environment variable: `AIDER_LIST_MODELS`
Aliases:
- `--list-models MODEL`
- `--models MODEL`
### `--openai-api-base OPENAI_API_BASE`
Specify the api base url
@ -174,10 +200,22 @@ Aliases:
- `--edit-format EDIT_FORMAT`
- `--chat-mode EDIT_FORMAT`
### `--architect`
Use architect edit format for the main chat
Environment variable: `AIDER_ARCHITECT`
### `--weak-model WEAK_MODEL`
Specify the model to use for commit messages and chat history summarization (default depends on --model)
Environment variable: `AIDER_WEAK_MODEL`
### `--editor-model EDITOR_MODEL`
Specify the model to use for editor tasks (default depends on --model)
Environment variable: `AIDER_EDITOR_MODEL`
### `--editor-edit-format EDITOR_EDIT_FORMAT`
Specify the edit format for the editor model (default: depends on editor model)
Environment variable: `AIDER_EDITOR_EDIT_FORMAT`
### `--show-model-warnings`
Only work with models that have meta-data available (default: True)
Default: True
@ -186,12 +224,8 @@ Aliases:
- `--show-model-warnings`
- `--no-show-model-warnings`
### `--map-tokens VALUE`
Max number of tokens to use for repo map, use 0 to disable (default: 1024)
Environment variable: `AIDER_MAP_TOKENS`
### `--max-chat-history-tokens VALUE`
Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
Environment variable: `AIDER_MAX_CHAT_HISTORY_TOKENS`
### `--env-file ENV_FILE`
@ -199,6 +233,37 @@ Specify the .env file to load (default: .env in git root)
Default: .env
Environment variable: `AIDER_ENV_FILE`
## Cache Settings:
### `--cache-prompts`
Enable caching of prompts (default: False)
Default: False
Environment variable: `AIDER_CACHE_PROMPTS`
Aliases:
- `--cache-prompts`
- `--no-cache-prompts`
### `--cache-keepalive-pings VALUE`
Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
Default: 0
Environment variable: `AIDER_CACHE_KEEPALIVE_PINGS`
## Repomap Settings:
### `--map-tokens VALUE`
Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
Environment variable: `AIDER_MAP_TOKENS`
### `--map-refresh VALUE`
Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
Default: auto
Environment variable: `AIDER_MAP_REFRESH`
### `--map-multiplier-no-files VALUE`
Multiplier for map tokens when no files are specified (default: 2)
Default: 2
Environment variable: `AIDER_MAP_MULTIPLIER_NO_FILES`
## History Files:
### `--input-history-file INPUT_HISTORY_FILE`
@ -261,15 +326,36 @@ Set the color for tool output (default: None)
Environment variable: `AIDER_TOOL_OUTPUT_COLOR`
### `--tool-error-color VALUE`
Set the color for tool error messages (default: red)
Set the color for tool error messages (default: #FF2222)
Default: #FF2222
Environment variable: `AIDER_TOOL_ERROR_COLOR`
### `--tool-warning-color VALUE`
Set the color for tool warning messages (default: #FFA500)
Default: #FFA500
Environment variable: `AIDER_TOOL_WARNING_COLOR`
### `--assistant-output-color VALUE`
Set the color for assistant output (default: #0088ff)
Default: #0088ff
Environment variable: `AIDER_ASSISTANT_OUTPUT_COLOR`
### `--completion-menu-color COLOR`
Set the color for the completion menu (default: terminal's default text color)
Environment variable: `AIDER_COMPLETION_MENU_COLOR`
### `--completion-menu-bg-color COLOR`
Set the background color for the completion menu (default: terminal's default background color)
Environment variable: `AIDER_COMPLETION_MENU_BG_COLOR`
### `--completion-menu-current-color COLOR`
Set the color for the current item in the completion menu (default: terminal's default background color)
Environment variable: `AIDER_COMPLETION_MENU_CURRENT_COLOR`
### `--completion-menu-current-bg-color COLOR`
Set the background color for the current item in the completion menu (default: terminal's default text color)
Environment variable: `AIDER_COMPLETION_MENU_CURRENT_BG_COLOR`
### `--code-theme VALUE`
Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
Default: default
@ -373,6 +459,11 @@ Aliases:
- `--dry-run`
- `--no-dry-run`
### `--skip-sanity-check-repo`
Skip the sanity check for the git repository (default: False)
Default: False
Environment variable: `AIDER_SKIP_SANITY_CHECK_REPO`
## Fixing and committing:
### `--lint`
@ -426,7 +517,7 @@ Specify a file to log analytics events
Environment variable: `AIDER_ANALYTICS_LOG`
### `--analytics-disable`
Disable analytics forever
Permanently disable analytics
Default: False
Environment variable: `AIDER_ANALYTICS_DISABLE`
@ -445,10 +536,9 @@ Use VI editing mode in the terminal (default: False)
Default: False
Environment variable: `AIDER_VIM`
### `--voice-language VOICE_LANGUAGE`
Specify the language for voice using ISO 639-1 code (default: auto)
Default: en
Environment variable: `AIDER_VOICE_LANGUAGE`
### `--chat-language CHAT_LANGUAGE`
Specify the language to use in the chat (default: None, uses system settings)
Environment variable: `AIDER_CHAT_LANGUAGE`
### `--version`
Show the version number and exit
@ -466,13 +556,26 @@ Aliases:
- `--check-update`
- `--no-check-update`
### `--install-main-branch`
Install the latest version from the main branch
Default: False
Environment variable: `AIDER_INSTALL_MAIN_BRANCH`
### `--upgrade`
Upgrade aider to the latest version from PyPI
Default: False
Environment variable: `AIDER_UPGRADE`
Aliases:
- `--upgrade`
- `--update`
### `--apply FILE`
Apply the changes from the given file instead of running the chat (debug)
Environment variable: `AIDER_APPLY`
### `--yes`
### `--yes-always`
Always say yes to every confirmation
Environment variable: `AIDER_YES`
Environment variable: `AIDER_YES_ALWAYS`
### `--verbose`
Enable verbose output
@ -512,6 +615,10 @@ Aliases:
- `--message-file MESSAGE_FILE`
- `-f MESSAGE_FILE`
### `--load LOAD_FILE`
Load and execute /commands from a file on launch
Environment variable: `AIDER_LOAD`
### `--encoding VALUE`
Specify the encoding for input and output (default: utf-8)
Default: utf-8
@ -530,4 +637,32 @@ Environment variable: `AIDER_GUI`
Aliases:
- `--gui`
- `--browser`
### `--suggest-shell-commands`
Enable/disable suggesting shell commands (default: True)
Default: True
Environment variable: `AIDER_SUGGEST_SHELL_COMMANDS`
Aliases:
- `--suggest-shell-commands`
- `--no-suggest-shell-commands`
### `--fancy-input`
Enable/disable fancy input with history and completion (default: True)
Default: True
Environment variable: `AIDER_FANCY_INPUT`
Aliases:
- `--fancy-input`
- `--no-fancy-input`
## Voice Settings:
### `--voice-format VOICE_FORMAT`
Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
Default: wav
Environment variable: `AIDER_VOICE_FORMAT`
### `--voice-language VOICE_LANGUAGE`
Specify the language for voice using ISO 639-1 code (default: auto)
Default: en
Environment variable: `AIDER_VOICE_LANGUAGE`
<!--[[[end]]]-->

View file

@ -12,6 +12,7 @@ nav_exclude: true
![robot flowchat](/assets/robot-flowchart.png)
## Updated
Aider no longer uses ctags to build a repo map.
@ -111,9 +112,9 @@ like functions and methods also include their signatures.
Here's a
sample of the map of the aider repo, just showing the maps of
[main.py](https://github.com/paul-gauthier/aider/blob/main/aider/main.py)
[main.py](https://github.com/Aider-AI/aider/blob/main/aider/main.py)
and
[io.py](https://github.com/paul-gauthier/aider/blob/main/aider/io.py)
[io.py](https://github.com/Aider-AI/aider/blob/main/aider/io.py)
:
```
@ -228,7 +229,7 @@ Some possible approaches to reducing the amount of map data are:
- Distill the global map, to prioritize important symbols and discard "internal" or otherwise less globally relevant identifiers. Possibly enlist `gpt-3.5-turbo` to perform this distillation in a flexible and language agnostic way.
- Provide a mechanism for GPT to start with a distilled subset of the global map, and let it ask to see more detail about subtrees or keywords that it feels are relevant to the current coding task.
- Attempt to analyize the natural language coding task given by the user and predict which subset of the repo map is relevant. Possibly by analysis of prior coding chats within the specific repo. Work on certain files or types of features may require certain somewhat predictable context from elsewhere in the repo. Vector and keyword search against the chat history, repo map or codebase may help here.
- Attempt to analyze the natural language coding task given by the user and predict which subset of the repo map is relevant. Possibly by analysis of prior coding chats within the specific repo. Work on certain files or types of features may require certain somewhat predictable context from elsewhere in the repo. Vector and keyword search against the chat history, repo map or codebase may help here.
One key goal is to prefer solutions which are language agnostic or
which can be easily deployed against most popular code languages.

Some files were not shown because too many files have changed in this diff.