Merge branch 'main' into ts-pack

Paul Gauthier 2024-11-27 08:54:03 -08:00
commit ee837889db
221 changed files with 19622 additions and 3306 deletions


@@ -5,6 +5,7 @@ on:
     paths-ignore:
       - 'aider/website/**'
       - README.md
+      - HISTORY.md
     branches:
       - main
   pull_request:
@@ -26,22 +27,24 @@ jobs:
       - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
       - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
+      - name: Login to DockerHub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_PASSWORD }}
+        env:
+          dockerhub_username: ${{ secrets.DOCKERHUB_USERNAME }}
+          dockerhub_password: ${{ secrets.DOCKERHUB_PASSWORD }}
+        if: ${{ env.dockerhub_username }} && ${{ env.dockerhub_password }}
-      - name: Build Docker image
+      - name: Build Docker standard image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./docker/Dockerfile
          platforms: linux/amd64,linux/arm64
          push: false
+          target: aider
+      - name: Build Docker full image
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          file: ./docker/Dockerfile
+          platforms: linux/amd64,linux/arm64
+          push: false
+          target: aider-full
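
The added login step uses a common guard pattern so that forks without DockerHub credentials skip the login instead of failing: a step-level `if:` cannot read `secrets.*` directly, so the secrets are first mirrored into `env` vars that the condition can test. A minimal sketch of just that pattern (step name and secret names mirror the hunk above; this is illustrative, not part of the repo):

```yaml
# Sketch: skip a step when repository secrets are absent (e.g. in forks).
- name: Login to DockerHub
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_PASSWORD }}
  env:
    # secrets.* is not usable in `if:`, so mirror into env first
    dockerhub_username: ${{ secrets.DOCKERHUB_USERNAME }}
    dockerhub_password: ${{ secrets.DOCKERHUB_PASSWORD }}
  if: ${{ env.dockerhub_username }} && ${{ env.dockerhub_password }}
```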


@@ -70,15 +70,15 @@ jobs:
         id: deployment
        uses: actions/deploy-pages@v2
-      - name: Set up Python ${{ matrix.python-version }}
+      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
-          python-version: ${{ matrix.python-version }}
+          python-version: '3.12'
       - name: Install linkchecker
        run: |
          python -m pip install --upgrade pip
-          pip install linkchecker
+          python -m pip install linkchecker
       - name: Run linkchecker
        run: |


@@ -5,6 +5,7 @@ on:
     paths-ignore:
       - 'aider/website/**'
       - README.md
+      - HISTORY.md
     branches:
       - main
   pull_request:


@@ -5,6 +5,7 @@ on:
     paths-ignore:
       - 'aider/website/**'
       - README.md
+      - HISTORY.md
     branches:
       - main
   pull_request:

.gitignore

@@ -10,3 +10,6 @@ Gemfile.lock
 _site
 .jekyll-cache/
 .jekyll-metadata
+aider/__version__.py
+.venv/
+.gitattributes


@@ -14,3 +14,9 @@ repos:
     hooks:
       - id: flake8
         args: ["--show-source"]
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.2.6
+    hooks:
+      - id: codespell
+        additional_dependencies:
+          - tomli


@@ -17,10 +17,10 @@ Contributions of
 [LLM benchmark results](https://aider.chat/docs/leaderboards/)
 are welcome!
 See the
-[benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
+[benchmark README](https://github.com/Aider-AI/aider/blob/main/benchmark/README.md)
 for information on running aider's code editing benchmarks.
 Submit results by opening a PR with edits to the
-[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/_data/).
+[benchmark results data files](https://github.com/Aider-AI/aider/blob/main/aider/website/_data/).

 ## Pull Requests
@@ -33,19 +33,16 @@ ensure that your contributions can be integrated smoothly.

 ## Licensing

-By contributing to this project, you agree that your contributions
-will be licensed under the Apache License 2.0. Additionally, you
-understand and agree that contributions may be subject to a different
-license, should the project maintainers decide to change the licensing
-terms.
+Before contributing a PR, please review our
+[Individual Contributor License Agreement](https://aider.chat/docs/legal/contributor-agreement.html).
+All contributors will be asked to complete the agreement as part of the PR process.

 ## Setting up a Development Environment

 ### Clone the Repository

 ```
-git clone https://github.com/paul-gauthier/aider.git
+git clone https://github.com/Aider-AI/aider.git
 cd aider
 ```
@@ -154,6 +151,10 @@ The project's documentation is built using Jekyll and hosted on GitHub Pages. To
 ```
 bundle exec jekyll build
 ```
+5. Preview the website while editing (optional):
+```
+bundle exec jekyll serve
+```

 The built documentation will be available in the `aider/website/_site` directory.
@@ -186,8 +187,8 @@ pytest

 You can also run specific test files or test cases by providing the file path or test name:

 ```
-pytest aider/tests/test_coder.py
-pytest aider/tests/test_coder.py::TestCoder::test_specific_case
+pytest tests/basic/test_coder.py
+pytest tests/basic/test_coder.py::TestCoder::test_specific_case
 ```

 #### Continuous Integration


@@ -1,6 +1,324 @@
# Release history

### main branch
- PDF support for Sonnet and Gemini models.
- Set cwd to repo root when running shell commands.
- Improved error handling for failed .gitignore file operations.
- Improved error handling for input history file permissions.
- Improved error handling for analytics file access.
- Aider wrote 85% of the code in this release.
### Aider v0.65.1
- Bugfix to `--alias`.
### Aider v0.65.0
- Added `--alias` config to define [custom model aliases](https://aider.chat/docs/config/model-aliases.html).
- Added `--[no-]detect-urls` flag to disable detecting and offering to scrape URLs found in the chat.
- Ollama models now default to an 8k context window.
- Added [RepoMap support for Dart language](https://aider.chat/docs/languages.html) by @malkoG.
- Ask 2.5% of users if they want to opt-in to [analytics](https://aider.chat/docs/more/analytics.html).
- Skip suggesting files that share names with files already in chat.
- `/editor` returns and prefills the file content into the prompt, so you can use `/editor` to compose messages that start with `/commands`, etc.
- Enhanced error handling for analytics.
- Improved handling of UnknownEditFormat exceptions with helpful documentation links.
- Bumped dependencies to pick up grep-ast 0.4.0 for Dart language support.
- Aider wrote 81% of the code in this release.
### Aider v0.64.1
- Disable streaming for o1 on OpenRouter.
### Aider v0.64.0
- Added [`/editor` command](https://aider.chat/docs/usage/commands.html) to open system editor for writing prompts, by @thehunmonkgroup.
- Full support for `gpt-4o-2024-11-20`.
- Stream o1 models by default.
- `/run` and suggested shell commands are less mysterious and now confirm that they "Added XX lines of output to the chat."
- Ask 1% of users if they want to opt-in to [analytics](https://aider.chat/docs/more/analytics.html).
- Added support for [optional multiline input tags](https://aider.chat/docs/usage/commands.html#entering-multi-line-chat-messages) with matching closing tags.
- Improved [model settings configuration](https://aider.chat/docs/config/adv-model-settings.html#global-extra-params) with support for global `extra_params` for `litellm.completion()`.
- Architect mode now asks to add files suggested by the LLM.
- Fixed bug in fuzzy model name matching.
- Added Timeout exception to handle API provider timeouts.
- Added `--show-release-notes` to control release notes display on first run of new version.
- Save empty dict to cache file on model metadata download failure, to delay retry.
- Improved error handling and code formatting.
- Aider wrote 74% of the code in this release.
### Aider v0.63.2
- Fixed bug in fuzzy model name matching when litellm provider info is missing.
- Modified model metadata file loading to allow override of resource file.
- Allow recursive loading of dirs using `--read`.
- Updated dependency versions to pick up litellm fix for ollama models.
- Added exponential backoff retry when writing files to handle editor file locks.
- Updated Qwen 2.5 Coder 32B model configuration.
### Aider v0.63.1
- Fixed bug in git ignored file handling.
- Improved error handling for git operations.
### Aider v0.63.0
- Support for Qwen 2.5 Coder 32B.
- `/web` command just adds the page to the chat, without triggering an LLM response.
- Improved prompting for the user's preferred chat language.
- Improved handling of LiteLLM exceptions.
- Bugfix for double-counting tokens when reporting cache stats.
- Bugfix for the LLM creating new files.
- Other small bug fixes.
- Aider wrote 55% of the code in this release.
### Aider v0.62.0
- Full support for Claude 3.5 Haiku
- Scored 75% on [aider's code editing leaderboard](https://aider.chat/docs/leaderboards/).
- Almost as good as Sonnet at much lower cost.
- Launch with `--haiku` to use it.
- Easily apply file edits from ChatGPT, Claude or other web apps
- Chat with ChatGPT or Claude via their web app.
- Give it your source files and ask for the changes you want.
- Use the web app's "copy response" button to copy the entire reply from the LLM.
- Run `aider --apply-clipboard-edits file-to-edit.js`.
- Aider will edit your file with the LLM's changes.
- Bugfix for creating new files.
- Aider wrote 84% of the code in this release.
### Aider v0.61.0
- Load and save aider slash-commands to files:
- `/save <fname>` command will make a file of `/add` and `/read-only` commands that recreate the current file context in the chat.
- `/load <fname>` will replay the commands in the file.
- You can use `/load` to run any arbitrary set of slash-commands, not just `/add` and `/read-only`.
- Use `--load <fname>` to run a list of commands on launch, before the interactive chat begins.
- Anonymous, opt-in [analytics](https://aider.chat/docs/more/analytics.html) with no personal data sharing.
- Aider follows litellm's `supports_vision` attribute to enable image support for models.
- Bugfix for when diff mode flexibly handles the model using the wrong filename.
- Displays filenames in sorted order for `/add` and `/read-only`.
- New `--no-fancy-input` switch disables prompt toolkit input, now still available with `--no-pretty`.
- Override browser config with `--no-browser` or `--no-gui`.
- Offer to open documentation URLs when errors occur.
- Properly support all o1 models, regardless of provider.
- Improved layout of filenames above input prompt.
- Better handle corrupted repomap tags cache.
- Improved handling of API errors, especially when accessing the weak model.
- Aider wrote 68% of the code in this release.
### Aider v0.60.1
- Enable image support for Sonnet 10/22.
- Display filenames in sorted order.
### Aider v0.60.0
- Full support for Sonnet 10/22, the new SOTA model on aider's code editing benchmark.
- Aider uses Sonnet 10/22 by default.
- Improved formatting of added and read-only files above chat prompt, by @jbellis.
- Improved support for o1 models by more flexibly parsing their nonconforming code edit replies.
- Corrected diff edit format prompt that only the first match is replaced.
- Stronger whole edit format prompt asking for clean file names.
- Now offers to add `.env` to the `.gitignore` file.
- Ships with a small model metadata json file to handle models not yet updated in litellm.
- Model settings for o1 models on azure.
- Bugfix to properly include URLs in `/help` RAG results.
- Aider wrote 49% of the code in this release.
### Aider v0.59.1
- Check for obsolete `yes: true` in yaml config, show helpful error.
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
### Aider v0.59.0
- Improvements to `/read-only`:
- Now supports shell-style auto-complete of the full file system.
- Still auto-completes the full paths of the repo files like `/add`.
- Now supports globs like `src/**/*.py`
- Renamed `--yes` to `--yes-always`.
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
- Existing YAML and .env files will need to be updated.
- Can still abbreviate to `--yes` on the command line.
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
- `/settings` now includes the same announcement lines that would print at launch.
- Sanity checks the `--editor-model` on launch now, same as main and weak models.
- Added `--skip-sanity-check-repo` switch to speedup launch in large repos.
- Bugfix so architect mode handles Control-C properly.
- Repo-map is deterministic now, with improved caching logic.
- Improved commit message prompt.
- Aider wrote 77% of the code in this release.
### Aider v0.58.1
- Fixed bug where cache warming pings caused subsequent user messages to trigger a tight loop of LLM requests.
### Aider v0.58.0
- [Use a pair of Architect/Editor models for improved coding](https://aider.chat/2024/09/26/architect.html)
- Use a strong reasoning model like o1-preview as your Architect.
- Use a cheaper, faster model like gpt-4o as your Editor.
- New `--o1-preview` and `--o1-mini` shortcuts.
- Support for new Gemini 002 models.
- Better support for Qwen 2.5 models.
- Many confirmation questions can be skipped for the rest of the session with "(D)on't ask again" response.
- Autocomplete for `/read-only` supports the entire filesystem.
- New settings for completion menu colors.
- New `/copy` command to copy the last LLM response to the clipboard.
- Renamed `/clipboard` to `/paste`.
- Will now follow HTTP redirects when scraping urls.
- New `--voice-format` switch to send voice audio as wav/mp3/webm, by @mbailey.
- ModelSettings takes `extra_params` dict to specify any extras to pass to `litellm.completion()`.
- Support for cursor shapes when in vim mode.
- Numerous bug fixes.
- Aider wrote 53% of the code in this release.
### Aider v0.57.1
- Fixed dependency conflict between aider-chat[help] and [playwright].
### Aider v0.57.0
- Support for OpenAI o1 models:
- o1-preview now works well with diff edit format.
- o1-preview with diff now matches SOTA leaderboard result with whole edit format.
- `aider --model o1-mini`
- `aider --model o1-preview`
- On Windows, `/run` correctly uses PowerShell or cmd.exe.
- Support for new 08-2024 Cohere models, by @jalammar.
- Can now recursively add directories with `/read-only`.
- User input prompts now fall back to simple `input()` if `--no-pretty` or a Windows console is not available.
- Improved sanity check of git repo on startup.
- Improvements to prompt cache chunking strategy.
- Removed "No changes made to git tracked files".
- Numerous bug fixes for corner case crashes.
- Updated all dependency versions.
- Aider wrote 70% of the code in this release.
### Aider v0.56.0
- Enables prompt caching for Sonnet via OpenRouter by @fry69
- Enables 8k output tokens for Sonnet via VertexAI and DeepSeek V2.5.
- New `/report` command to open your browser with a pre-populated GitHub Issue.
- New `--chat-language` switch to set the spoken language.
- Now `--[no-]suggest-shell-commands` controls both prompting for and offering to execute shell commands.
- Check key imports on launch, provide helpful error message if dependencies aren't available.
- Renamed `--models` to `--list-models` by @fry69.
- Numerous bug fixes for corner case crashes.
- Aider wrote 56% of the code in this release.
### Aider v0.55.0
- Only print the pip command when self updating on Windows, without running it.
- Converted many error messages to warning messages.
- Added `--tool-warning-color` setting.
- Blanket catch and handle git errors in any `/command`.
- Catch and handle glob errors in `/add`, errors writing files.
- Disabled built in linter for typescript.
- Catch and handle terminals which don't support pretty output.
- Catch and handle playwright and pandoc errors.
- Catch `/voice` transcription exceptions, show the WAV file so the user can recover it.
- Aider wrote 53% of the code in this release.
### Aider v0.54.12
- Switched to `vX.Y.Z.dev` version naming.
### Aider v0.54.11
- Improved printed pip command output on Windows.
### Aider v0.54.10
- Bugfix to test command in platform info.
### Aider v0.54.9
- Include important devops files in the repomap.
- Print quoted pip install commands to the user.
- Adopt setuptools_scm to provide dev versions with git hashes.
- Share active test and lint commands with the LLM.
- Catch and handle most errors creating new files, reading existing files.
- Catch and handle most git errors.
- Added --verbose debug output for shell commands.
### Aider v0.54.8
- Startup QOL improvements:
- Sanity check the git repo and exit gracefully on problems.
- Pause for confirmation after model sanity check to allow user to review warnings.
- Bug fix for shell commands on Windows.
- Do not fuzzy match filenames when LLM is creating a new file, by @ozapinq
- Numerous corner case bug fixes submitted via new crash report -> GitHub Issue feature.
- Crash reports now include python version, OS, etc.
### Aider v0.54.7
- Offer to submit a GitHub issue pre-filled with uncaught exception info.
- Bugfix for infinite output.
### Aider v0.54.6
- New `/settings` command to show active settings.
- Only show cache warming status update if `--verbose`.
### Aider v0.54.5
- Bugfix for shell commands on Windows.
- Refuse to make git repo in $HOME, warn user.
- Don't ask again in current session about a file the user has said not to add to the chat.
- Added `--update` as an alias for `--upgrade`.
### Aider v0.54.4
- Bugfix to completions for `/model` command.
- Bugfix: revert home dir special case.
### Aider v0.54.3
- Dependency `watchdog<5` for docker image.
### Aider v0.54.2
- When users launch aider in their home dir, help them find/create a repo in a subdir.
- Added missing `pexpect` dependency.
### Aider v0.54.0
- Added model settings for `gemini/gemini-1.5-pro-exp-0827` and `gemini/gemini-1.5-flash-exp-0827`.
- Shell and `/run` commands can now be interactive in environments where a pty is available.
- Optionally share output of suggested shell commands back to the LLM.
- New `--[no-]suggest-shell-commands` switch to configure shell commands.
- Performance improvements for autocomplete in large/mono repos.
- New `--upgrade` switch to install latest version of aider from pypi.
- Bugfix to `--show-prompt`.
- Disabled automatic reply to the LLM on `/undo` for all models.
- Removed pager from `/web` output.
- Aider wrote 64% of the code in this release.
### Aider v0.53.0
- [Keep your prompt cache from expiring](https://aider.chat/docs/usage/caching.html#preventing-cache-expiration) with `--cache-keepalive-pings`.
- Pings the API every 5min to keep the cache warm.
- You can now bulk accept/reject a series of add url and run shell confirmations.
- Improved matching of filenames from S/R blocks with files in chat.
- Stronger prompting for Sonnet to make edits in code chat mode.
- Stronger prompting for the LLM to specify full file paths.
- Improved shell command prompting.
- Weak model now uses `extra_headers`, to support Anthropic beta features.
- New `--install-main-branch` to update to the latest dev version of aider.
- Improved error messages on attempt to add not-git subdir to chat.
- Show model metadata info on `--verbose`.
- Improved warnings when LLMs env variables aren't set.
- Bugfix to windows filenames which contain `\_`.
- Aider wrote 59% of the code in this release.
### Aider v0.52.1
- Bugfix for NameError when applying edits.
### Aider v0.52.0

- Aider now offers to run shell commands:
@@ -521,7 +839,7 @@
### Aider v0.14.0

- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
-- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
+- Documentation for [running the aider benchmarking suite](https://github.com/Aider-AI/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9
@@ -566,7 +884,7 @@
- Added `/git` command to run git from inside aider chats.
- Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
-- Create a `.gitignore` with `.aider*` to prevent users from accidentaly adding aider files to git.
+- Create a `.gitignore` with `.aider*` to prevent users from accidentally adding aider files to git.
- Check pypi for newer versions and notify user.
- Updated keyboard interrupt logic so that 2 ^C in 2 seconds always forces aider to exit.
- Provide GPT with detailed error if it makes a bad edit block, ask for a retry.


@@ -9,12 +9,23 @@ Start a new project or work with an existing git repo.
 Aider works best with GPT-4o & Claude 3.5 Sonnet and can
 [connect to almost any LLM](https://aider.chat/docs/llms.html).

+<!-- SCREENCAST START -->
 <p align="center">
   <img
     src="https://aider.chat/assets/screencast.svg"
     alt="aider screencast"
   >
 </p>
+<!-- SCREENCAST END -->
+
+<!-- VIDEO START
+<p align="center">
+  <video style="max-width: 100%; height: auto;" autoplay loop muted playsinline>
+    <source src="/assets/shell-cmds-small.mp4" type="video/mp4">
+    Your browser does not support the video tag.
+  </video>
+</p>
+VIDEO END -->

 <p align="center">
   <a href="https://discord.gg/Tv2uQnR88V">
@@ -35,7 +46,7 @@ cog.out(open("aider/website/_includes/get-started.md").read())
 You can get started quickly like this:

 ```
-python -m pip install aider-chat
+python -m pip install -U aider-chat

 # Change directory into a git repo
 cd /to/your/git/repo
@@ -96,7 +107,7 @@ projects like django, scikitlearn, matplotlib, etc.
 - [Configuration](https://aider.chat/docs/config.html)
 - [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
 - [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
-- [GitHub](https://github.com/paul-gauthier/aider)
+- [GitHub](https://github.com/Aider-AI/aider)
 - [Discord](https://discord.gg/Tv2uQnR88V)
 - [Blog](https://aider.chat/blog/)
@@ -107,14 +118,14 @@ projects like django, scikitlearn, matplotlib, etc.
 - *The best AI coding assistant so far.* -- [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
 - *Aider ... has easily quadrupled my coding productivity.* -- [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
 - *It's a cool workflow... Aider's ergonomics are perfect for me.* -- [qup](https://news.ycombinator.com/item?id=38185326)
-- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/paul-gauthier/aider/issues/124)
+- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/Aider-AI/aider/issues/124)
-- *What an amazing tool. It's incredible.* -- [valyagolev](https://github.com/paul-gauthier/aider/issues/6#issue-1722897858)
+- *What an amazing tool. It's incredible.* -- [valyagolev](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
-- *Aider is such an astounding thing!* -- [cgrothaus](https://github.com/paul-gauthier/aider/issues/82#issuecomment-1631876700)
+- *Aider is such an astounding thing!* -- [cgrothaus](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
 - *It was WAY faster than I would be getting off the ground and making the first few working versions.* -- [Daniel Feldman](https://twitter.com/d_feldman/status/1662295077387923456)
 - *THANK YOU for Aider! It really feels like a glimpse into the future of coding.* -- [derwiki](https://news.ycombinator.com/item?id=38205643)
 - *It's just amazing. It is freeing me to do things I felt were out my comfort zone before.* -- [Dougie](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
-- *This project is stellar.* -- [funkytaco](https://github.com/paul-gauthier/aider/issues/112#issuecomment-1637429008)
+- *This project is stellar.* -- [funkytaco](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
-- *Amazing project, definitely the best AI coding assistant I've used.* -- [joshuavial](https://github.com/paul-gauthier/aider/issues/84)
+- *Amazing project, definitely the best AI coding assistant I've used.* -- [joshuavial](https://github.com/Aider-AI/aider/issues/84)
 - *I absolutely love using Aider ... It makes software development feel so much lighter as an experience.* -- [principalideal0](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
 - *I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.* -- [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
 - *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)


@@ -1 +1,6 @@
-__version__ = "0.52.1-dev"
+try:
+    from aider.__version__ import __version__
+except Exception:
+    __version__ = "0.65.2.dev"
+__all__ = [__version__]
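
The new `__init__.py` prefers a generated `aider/__version__.py` (written at build time, per the setuptools_scm adoption noted in the v0.54.9 release notes) and falls back to a baked-in version string if that file is absent. A standalone sketch of the same import-with-fallback pattern, using a hypothetical module name so the fallback path is exercised:

```python
# Illustration of the version-fallback pattern; module name is hypothetical.
def resolve_version():
    try:
        # In aider this is `from aider.__version__ import __version__`,
        # a build-time generated file; here we simulate its absence.
        from nonexistent_generated_module import __version__  # noqa: F401
    except Exception:
        __version__ = "0.0.0.dev"  # fallback when no generated file exists
    return __version__

print(resolve_version())  # → 0.0.0.dev (the import fails in this sketch)
```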

aider/analytics.py

@ -0,0 +1,220 @@
import json
import platform
import sys
import time
import uuid
from pathlib import Path
from mixpanel import Mixpanel, MixpanelException
from posthog import Posthog
from aider import __version__
from aider.dump import dump # noqa: F401
from aider.models import model_info_manager
mixpanel_project_token = "6da9a43058a5d1b9f3353153921fb04d"
posthog_project_api_key = "phc_99T7muzafUMMZX15H8XePbMSreEUzahHbtWjy3l5Qbv"
posthog_host = "https://us.i.posthog.com"
class Analytics:
# providers
mp = None
ph = None
# saved
user_id = None
permanently_disable = None
asked_opt_in = None
# ephemeral
logfile = None
def __init__(self, logfile=None, permanently_disable=False):
self.logfile = logfile
self.get_or_create_uuid()
if self.permanently_disable or permanently_disable or not self.asked_opt_in:
self.disable(permanently_disable)
def enable(self):
if not self.user_id:
self.disable(False)
return
if self.permanently_disable:
self.disable(True)
return
if not self.asked_opt_in:
self.disable(False)
return
self.mp = Mixpanel(mixpanel_project_token)
self.ph = Posthog(project_api_key=posthog_project_api_key, host=posthog_host)
def disable(self, permanently):
self.mp = None
self.ph = None
if permanently:
self.asked_opt_in = True
self.permanently_disable = True
self.save_data()
def need_to_ask(self, args_analytics):
if args_analytics is False:
return False
could_ask = not self.asked_opt_in and not self.permanently_disable
if not could_ask:
return False
if args_analytics is True:
return True
assert args_analytics is None, args_analytics
if not self.user_id:
return False
PERCENT = 2.5
return self.is_uuid_in_percentage(self.user_id, PERCENT)
def is_uuid_in_percentage(self, uuid_str, percent):
"""Check if a UUID string falls within the first X percent of the UUID space.
Args:
uuid_str: UUID string to test
percent: Percentage threshold (0-100)
Returns:
bool: True if UUID falls within the first X percent
"""
if not (0 <= percent <= 100):
raise ValueError("Percentage must be between 0 and 100")
if not uuid_str:
return False
# Convert percentage to hex threshold (1% = "04...", 10% = "1a...", etc)
# Using first 6 hex digits
if percent == 0:
return False
threshold = format(int(0xFFFFFF * percent / 100), "06x")
return uuid_str[:6] <= threshold
def get_data_file_path(self):
try:
data_file = Path.home() / ".aider" / "analytics.json"
data_file.parent.mkdir(parents=True, exist_ok=True)
return data_file
except OSError:
# If we can't create/access the directory, just disable analytics
self.disable(permanently=False)
return None
def get_or_create_uuid(self):
self.load_data()
if self.user_id:
return
self.user_id = str(uuid.uuid4())
self.save_data()
def load_data(self):
data_file = self.get_data_file_path()
if not data_file:
return
if data_file.exists():
try:
data = json.loads(data_file.read_text())
self.permanently_disable = data.get("permanently_disable")
self.user_id = data.get("uuid")
self.asked_opt_in = data.get("asked_opt_in", False)
except (json.decoder.JSONDecodeError, OSError):
self.disable(permanently=False)
def save_data(self):
data_file = self.get_data_file_path()
if not data_file:
return
data = dict(
uuid=self.user_id,
permanently_disable=self.permanently_disable,
asked_opt_in=self.asked_opt_in,
)
try:
data_file.write_text(json.dumps(data, indent=4))
except OSError:
# If we can't write the file, just disable analytics
self.disable(permanently=False)
def get_system_info(self):
return {
"python_version": sys.version.split()[0],
"os_platform": platform.system(),
"os_release": platform.release(),
"machine": platform.machine(),
}
def _redact_model_name(self, model):
if not model:
return None
info = model_info_manager.get_model_from_cached_json_db(model.name)
if info:
return model.name
elif "/" in model.name:
return model.name.split("/")[0] + "/REDACTED"
return None
def event(self, event_name, main_model=None, **kwargs):
if not self.mp and not self.ph and not self.logfile:
return
properties = {}
if main_model:
properties["main_model"] = self._redact_model_name(main_model)
properties["weak_model"] = self._redact_model_name(main_model.weak_model)
properties["editor_model"] = self._redact_model_name(main_model.editor_model)
properties.update(kwargs)
properties.update(self.get_system_info()) # Add system info to all events
# Handle numeric values
for key, value in properties.items():
if isinstance(value, (int, float)):
properties[key] = value
else:
properties[key] = str(value)
properties["aider_version"] = __version__
if self.mp:
try:
self.mp.track(self.user_id, event_name, dict(properties))
except MixpanelException:
self.mp = None # Disable mixpanel on connection errors
if self.ph:
self.ph.capture(self.user_id, event_name, dict(properties))
if self.logfile:
log_entry = {
"event": event_name,
"properties": properties,
"user_id": self.user_id,
"time": int(time.time()),
}
with open(self.logfile, "a") as f:
json.dump(log_entry, f)
f.write("\n")
def __del__(self):
if self.ph:
self.ph.shutdown()
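The percentage gate above is deterministic: a given user UUID always lands on the same side of the threshold, so the same users stay opted in across runs. A minimal standalone sketch of that logic (the free-function form and the name `is_uuid_in_percentage` are assumptions; above it is a method of the analytics class):

```python
import uuid


def is_uuid_in_percentage(uuid_str: str, percent: float) -> bool:
    """Deterministically select ~percent% of UUIDs by comparing
    the first 6 hex digits against a fixed threshold."""
    if not (0 <= percent <= 100):
        raise ValueError("Percentage must be between 0 and 100")
    if not uuid_str or percent == 0:
        return False

    # 10% of the 24-bit space 0x000000..0xFFFFFF is "199999"
    threshold = format(int(0xFFFFFF * percent / 100), "06x")
    return uuid_str[:6] <= threshold


# Over many random UUIDs, roughly percent% pass the gate.
sample = [str(uuid.uuid4()) for _ in range(10_000)]
hits = sum(is_uuid_in_percentage(u, 10) for u in sample)
```

Because the comparison is lexicographic over lowercase hex strings, it is equivalent to comparing the integer value of the first 24 bits of the UUID.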

View file

@@ -22,9 +22,10 @@ def default_env_file(git_root):
 def get_parser(default_config_files, git_root):
     parser = configargparse.ArgumentParser(
-        description="aider is GPT powered coding in your terminal",
+        description="aider is AI pair programming in your terminal",
         add_config_file_help=True,
         default_config_files=default_config_files,
+        config_file_parser_class=configargparse.YAMLConfigFileParser,
         auto_env_var_prefix="AIDER_",
     )
     group = parser.add_argument_group("Main")
@@ -57,7 +58,7 @@ def get_parser(default_config_files, git_root):
         const=opus_model,
         help=f"Use {opus_model} model for the main chat",
     )
-    sonnet_model = "claude-3-5-sonnet-20240620"
+    sonnet_model = "claude-3-5-sonnet-20241022"
     group.add_argument(
         "--sonnet",
         action="store_const",
@@ -65,6 +66,14 @@ def get_parser(default_config_files, git_root):
         const=sonnet_model,
         help=f"Use {sonnet_model} model for the main chat",
     )
+    haiku_model = "claude-3-5-haiku-20241022"
+    group.add_argument(
+        "--haiku",
+        action="store_const",
+        dest="model",
+        const=haiku_model,
+        help=f"Use {haiku_model} model for the main chat",
+    )
     gpt_4_model = "gpt-4-0613"
     group.add_argument(
         "--4",
@@ -117,10 +126,27 @@ def get_parser(default_config_files, git_root):
         const=deepseek_model,
         help=f"Use {deepseek_model} model for the main chat",
     )
+    o1_mini_model = "o1-mini"
+    group.add_argument(
+        "--o1-mini",
+        action="store_const",
+        dest="model",
+        const=o1_mini_model,
+        help=f"Use {o1_mini_model} model for the main chat",
+    )
+    o1_preview_model = "o1-preview"
+    group.add_argument(
+        "--o1-preview",
+        action="store_const",
+        dest="model",
+        const=o1_preview_model,
+        help=f"Use {o1_preview_model} model for the main chat",
+    )

     ##########
     group = parser.add_argument_group("Model Settings")
     group.add_argument(
+        "--list-models",
         "--models",
         metavar="MODEL",
         help="List known models which match the (partial) MODEL name",
@@ -167,6 +193,12 @@ def get_parser(default_config_files, git_root):
         default=".aider.model.metadata.json",
         help="Specify a file with context window and costs for unknown models",
     )
+    group.add_argument(
+        "--alias",
+        action="append",
+        metavar="ALIAS:MODEL",
+        help="Add a model alias (can be used multiple times)",
+    )
     group.add_argument(
         "--verify-ssl",
         action=argparse.BooleanOptionalAction,
@@ -180,6 +212,13 @@ def get_parser(default_config_files, git_root):
         default=None,
         help="Specify what edit format the LLM should use (default depends on model)",
     )
+    group.add_argument(
+        "--architect",
+        action="store_const",
+        dest="edit_format",
+        const="architect",
+        help="Use architect edit format for the main chat",
+    )
     group.add_argument(
         "--weak-model",
         metavar="WEAK_MODEL",
@@ -189,12 +228,59 @@ def get_parser(default_config_files, git_root):
             " depends on --model)"
         ),
     )
+    group.add_argument(
+        "--editor-model",
+        metavar="EDITOR_MODEL",
+        default=None,
+        help="Specify the model to use for editor tasks (default depends on --model)",
+    )
+    group.add_argument(
+        "--editor-edit-format",
+        metavar="EDITOR_EDIT_FORMAT",
+        default=None,
+        help="Specify the edit format for the editor model (default: depends on editor model)",
+    )
     group.add_argument(
         "--show-model-warnings",
         action=argparse.BooleanOptionalAction,
         default=True,
         help="Only work with models that have meta-data available (default: True)",
     )
+    group.add_argument(
+        "--max-chat-history-tokens",
+        type=int,
+        default=None,
+        help=(
+            "Soft limit on tokens for chat history, after which summarization begins."
+            " If unspecified, defaults to the model's max_chat_history_tokens."
+        ),
+    )
+    # This is a duplicate of the argument in the preparser and is a no-op by this time of
+    # argument parsing, but it's here so that the help is displayed as expected.
+    group.add_argument(
+        "--env-file",
+        metavar="ENV_FILE",
+        default=default_env_file(git_root),
+        help="Specify the .env file to load (default: .env in git root)",
+    )
+
+    ##########
+    group = parser.add_argument_group("Cache Settings")
+    group.add_argument(
+        "--cache-prompts",
+        action=argparse.BooleanOptionalAction,
+        default=False,
+        help="Enable caching of prompts (default: False)",
+    )
+    group.add_argument(
+        "--cache-keepalive-pings",
+        type=int,
+        default=0,
+        help="Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)",
+    )
+
+    ##########
+    group = parser.add_argument_group("Repomap Settings")
     group.add_argument(
         "--map-tokens",
         type=int,
@@ -205,13 +291,10 @@ def get_parser(default_config_files, git_root):
         "--map-refresh",
         choices=["auto", "always", "files", "manual"],
         default="auto",
-        help="Control how often the repo map is refreshed (default: auto)",
-    )
-    group.add_argument(
-        "--cache-prompts",
-        action=argparse.BooleanOptionalAction,
-        default=False,
-        help="Enable caching of prompts (default: False)",
+        help=(
+            "Control how often the repo map is refreshed. Options: auto, always, files, manual"
+            " (default: auto)"
+        ),
     )
     group.add_argument(
         "--map-multiplier-no-files",
@@ -219,23 +302,6 @@ def get_parser(default_config_files, git_root):
         default=2,
         help="Multiplier for map tokens when no files are specified (default: 2)",
     )
-    group.add_argument(
-        "--max-chat-history-tokens",
-        type=int,
-        default=None,
-        help=(
-            "Maximum number of tokens to use for chat history. If not specified, uses the model's"
-            " max_chat_history_tokens."
-        ),
-    )
-    # This is a duplicate of the argument in the preparser and is a no-op by this time of
-    # argument parsing, but it's here so that the help is displayed as expected.
-    group.add_argument(
-        "--env-file",
-        metavar="ENV_FILE",
-        default=default_env_file(git_root),
-        help="Specify the .env file to load (default: .env in git root)",
-    )

     ##########
     group = parser.add_argument_group("History Files")
@@ -309,13 +375,51 @@ def get_parser(default_config_files, git_root):
     group.add_argument(
         "--tool-error-color",
         default="#FF2222",
-        help="Set the color for tool error messages (default: red)",
+        help="Set the color for tool error messages (default: #FF2222)",
+    )
+    group.add_argument(
+        "--tool-warning-color",
+        default="#FFA500",
+        help="Set the color for tool warning messages (default: #FFA500)",
     )
     group.add_argument(
         "--assistant-output-color",
         default="#0088ff",
         help="Set the color for assistant output (default: #0088ff)",
     )
+    group.add_argument(
+        "--completion-menu-color",
+        metavar="COLOR",
+        default=None,
+        help="Set the color for the completion menu (default: terminal's default text color)",
+    )
+    group.add_argument(
+        "--completion-menu-bg-color",
+        metavar="COLOR",
+        default=None,
+        help=(
+            "Set the background color for the completion menu (default: terminal's default"
+            " background color)"
+        ),
+    )
+    group.add_argument(
+        "--completion-menu-current-color",
+        metavar="COLOR",
+        default=None,
+        help=(
+            "Set the color for the current item in the completion menu (default: terminal's default"
+            " background color)"
+        ),
+    )
+    group.add_argument(
+        "--completion-menu-current-bg-color",
+        metavar="COLOR",
+        default=None,
+        help=(
+            "Set the background color for the current item in the completion menu (default:"
+            " terminal's default text color)"
+        ),
+    )
     group.add_argument(
         "--code-theme",
         default="default",
@@ -413,6 +517,12 @@ def get_parser(default_config_files, git_root):
         default=False,
         help="Perform a dry run without modifying files (default: False)",
     )
+    group.add_argument(
+        "--skip-sanity-check-repo",
+        action="store_true",
+        help="Skip the sanity check for the git repository (default: False)",
+        default=False,
+    )
     group = parser.add_argument_group("Fixing and committing")
     group.add_argument(
         "--lint",
@@ -454,6 +564,25 @@ def get_parser(default_config_files, git_root):
     )

     ##########
+    group = parser.add_argument_group("Analytics")
+    group.add_argument(
+        "--analytics",
+        action=argparse.BooleanOptionalAction,
+        default=None,
+        help="Enable/disable analytics for current session (default: random)",
+    )
+    group.add_argument(
+        "--analytics-log",
+        metavar="ANALYTICS_LOG_FILE",
+        help="Specify a file to log analytics events",
+    )
+    group.add_argument(
+        "--analytics-disable",
+        action="store_true",
+        help="Permanently disable analytics",
+        default=False,
+    )
+
+    ##########
     group = parser.add_argument_group("Other Settings")
     group.add_argument(
         "--file",
@@ -474,10 +603,10 @@ def get_parser(default_config_files, git_root):
         default=False,
     )
     group.add_argument(
-        "--voice-language",
-        metavar="VOICE_LANGUAGE",
-        default="en",
-        help="Specify the language for voice using ISO 639-1 code (default: auto)",
+        "--chat-language",
+        metavar="CHAT_LANGUAGE",
+        default=None,
+        help="Specify the language to use in the chat (default: None, uses system settings)",
     )
     group.add_argument(
         "--version",
@@ -497,13 +626,38 @@ def get_parser(default_config_files, git_root):
         help="Check for new aider versions on launch",
         default=True,
     )
+    group.add_argument(
+        "--show-release-notes",
+        action=argparse.BooleanOptionalAction,
+        help="Show release notes on first run of new version (default: None, ask user)",
+        default=None,
+    )
+    group.add_argument(
+        "--install-main-branch",
+        action="store_true",
+        help="Install the latest version from the main branch",
+        default=False,
+    )
+    group.add_argument(
+        "--upgrade",
+        "--update",
+        action="store_true",
+        help="Upgrade aider to the latest version from PyPI",
+        default=False,
+    )
     group.add_argument(
         "--apply",
         metavar="FILE",
         help="Apply the changes from the given file instead of running the chat (debug)",
     )
     group.add_argument(
-        "--yes",
+        "--apply-clipboard-edits",
+        action="store_true",
+        help="Apply clipboard contents as edits using the main model's editor format",
+        default=False,
+    )
+    group.add_argument(
+        "--yes-always",
         action="store_true",
         help="Always say yes to every confirmation",
         default=None,
@@ -551,6 +705,11 @@ def get_parser(default_config_files, git_root):
             " (disables chat mode)"
         ),
     )
+    group.add_argument(
+        "--load",
+        metavar="LOAD_FILE",
+        help="Load and execute /commands from a file on launch",
+    )
     group.add_argument(
         "--encoding",
         default="utf-8",
@@ -569,10 +728,48 @@ def get_parser(default_config_files, git_root):
     group.add_argument(
         "--gui",
         "--browser",
-        action="store_true",
-        help="Run aider in your browser",
+        action=argparse.BooleanOptionalAction,
+        help="Run aider in your browser (default: False)",
         default=False,
     )
+    group.add_argument(
+        "--suggest-shell-commands",
+        action=argparse.BooleanOptionalAction,
+        default=True,
+        help="Enable/disable suggesting shell commands (default: True)",
+    )
+    group.add_argument(
+        "--fancy-input",
+        action=argparse.BooleanOptionalAction,
+        default=True,
+        help="Enable/disable fancy input with history and completion (default: True)",
+    )
+    group.add_argument(
+        "--detect-urls",
+        action=argparse.BooleanOptionalAction,
+        default=True,
+        help="Enable/disable detection and offering to add URLs to chat (default: True)",
+    )
+    group.add_argument(
+        "--editor",
+        help="Specify which editor to use for the /editor command",
+    )
+
+    ##########
+    group = parser.add_argument_group("Voice Settings")
+    group.add_argument(
+        "--voice-format",
+        metavar="VOICE_FORMAT",
+        default="wav",
+        choices=["wav", "mp3", "webm"],
+        help="Audio format for voice recording (default: wav). webm and mp3 require ffmpeg",
+    )
+    group.add_argument(
+        "--voice-language",
+        metavar="VOICE_LANGUAGE",
+        default="en",
+        help="Specify the language for voice using ISO 639-1 code (default: auto)",
+    )

     return parser
@@ -588,7 +785,6 @@ def get_md_help():
     parser.formatter_class = MarkdownHelpFormatter

     return argparse.ArgumentParser.format_help(parser)
-    return parser.format_help()


 def get_sample_yaml():
@@ -602,7 +798,6 @@ def get_sample_yaml():
     parser.formatter_class = YamlHelpFormatter

     return argparse.ArgumentParser.format_help(parser)
-    return parser.format_help()


 def get_sample_dotenv():
@@ -616,7 +811,6 @@ def get_sample_dotenv():
     parser.formatter_class = DotEnvFormatter

     return argparse.ArgumentParser.format_help(parser)
-    return parser.format_help()


 def main():

View file

@@ -144,8 +144,15 @@ class YamlHelpFormatter(argparse.HelpFormatter):
             if default:
                 parts.append(f"#{switch}: {default}\n")
+            elif action.nargs in ("*", "+") or isinstance(action, argparse._AppendAction):
+                parts.append(f"#{switch}: xxx")
+                parts.append("## Specify multiple values like this:")
+                parts.append(f"#{switch}:")
+                parts.append(f"# - xxx")
+                parts.append(f"# - yyy")
+                parts.append(f"# - zzz")
             else:
-                parts.append(f"#{switch}:\n")
+                parts.append(f"#{switch}: xxx\n")

             ###
             # parts.append(str(action))

View file

@ -1,13 +1,16 @@
from .architect_coder import ArchitectCoder
from .ask_coder import AskCoder from .ask_coder import AskCoder
from .base_coder import Coder from .base_coder import Coder
from .editblock_coder import EditBlockCoder from .editblock_coder import EditBlockCoder
from .editblock_fenced_coder import EditBlockFencedCoder from .editblock_fenced_coder import EditBlockFencedCoder
from .editor_editblock_coder import EditorEditBlockCoder
from .editor_whole_coder import EditorWholeFileCoder
from .help_coder import HelpCoder from .help_coder import HelpCoder
# from .single_wholefile_func_coder import SingleWholeFileFunctionCoder
from .udiff_coder import UnifiedDiffCoder from .udiff_coder import UnifiedDiffCoder
from .wholefile_coder import WholeFileCoder from .wholefile_coder import WholeFileCoder
# from .single_wholefile_func_coder import SingleWholeFileFunctionCoder
__all__ = [ __all__ = [
HelpCoder, HelpCoder,
AskCoder, AskCoder,
@ -17,4 +20,7 @@ __all__ = [
WholeFileCoder, WholeFileCoder,
UnifiedDiffCoder, UnifiedDiffCoder,
# SingleWholeFileFunctionCoder, # SingleWholeFileFunctionCoder,
ArchitectCoder,
EditorEditBlockCoder,
EditorWholeFileCoder,
] ]

View file

@@ -0,0 +1,47 @@
+from .architect_prompts import ArchitectPrompts
+from .ask_coder import AskCoder
+from .base_coder import Coder
+
+
+class ArchitectCoder(AskCoder):
+    edit_format = "architect"
+    gpt_prompts = ArchitectPrompts()
+
+    def reply_completed(self):
+        content = self.partial_response_content
+
+        if not content or not content.strip():
+            return
+
+        if not self.io.confirm_ask("Edit the files?"):
+            return
+
+        kwargs = dict()
+
+        # Use the editor_model from the main_model if it exists, otherwise use the main_model itself
+        editor_model = self.main_model.editor_model or self.main_model
+
+        kwargs["main_model"] = editor_model
+        kwargs["edit_format"] = self.main_model.editor_edit_format
+        kwargs["suggest_shell_commands"] = False
+        kwargs["map_tokens"] = 0
+        kwargs["total_cost"] = self.total_cost
+        kwargs["cache_prompts"] = False
+        kwargs["num_cache_warming_pings"] = 0
+        kwargs["summarize_from_coder"] = False
+
+        new_kwargs = dict(io=self.io, from_coder=self)
+        new_kwargs.update(kwargs)
+
+        editor_coder = Coder.create(**new_kwargs)
+        editor_coder.cur_messages = []
+        editor_coder.done_messages = []
+
+        if self.verbose:
+            editor_coder.show_announcements()
+
+        editor_coder.run(with_message=content, preproc=False)
+
+        self.move_back_cur_messages("I made those changes to the files.")
+        self.total_cost = editor_coder.total_cost
+        self.aider_commit_hashes = editor_coder.aider_commit_hashes
View file

@@ -0,0 +1,40 @@
+# flake8: noqa: E501
+
+from .base_prompts import CoderPrompts
+
+
+class ArchitectPrompts(CoderPrompts):
+    main_system = """Act as an expert architect engineer and provide direction to your editor engineer.
+Study the change request and the current code.
+Describe how to modify the code to complete the request.
+The editor engineer will rely solely on your instructions, so make them unambiguous and complete.
+Explain all needed code changes clearly and completely, but concisely.
+Just show the changes needed.
+
+DO NOT show the entire updated function/file/etc!
+
+Always reply to the user in {language}.
+"""
+
+    example_messages = []
+
+    files_content_prefix = """I have *added these files to the chat* so you see all of their contents.
+*Trust this message as the true contents of the files!*
+Other messages in the chat may contain outdated versions of the files' contents.
+"""  # noqa: E501
+
+    files_content_assistant_reply = (
+        "Ok, I will use that as the true, current contents of the files."
+    )
+
+    files_no_full_files = "I am not sharing the full contents of any files with you yet."
+
+    files_no_full_files_with_repo_map = ""
+    files_no_full_files_with_repo_map_reply = ""
+
+    repo_content_prefix = """I am working with you on code in a git repository.
+Here are summaries of some files present in my git repo.
+If you need to see the full contents of any files to answer my questions, ask me to *add them to the chat*.
+"""
+
+    system_reminder = ""

View file

@@ -6,8 +6,7 @@ from .base_prompts import CoderPrompts
 class AskPrompts(CoderPrompts):
     main_system = """Act as an expert code analyst.
 Answer questions about the supplied code.
-
-Always reply to the user in the same language they are using.
+Always reply to the user in {language}.
 """

     example_messages = []
@@ -17,6 +16,10 @@ Always reply to the user in the same language they are using.
 Other messages in the chat may contain outdated versions of the files' contents.
 """  # noqa: E501

+    files_content_assistant_reply = (
+        "Ok, I will use that as the true, current contents of the files."
+    )
+
     files_no_full_files = "I am not sharing the full contents of any files with you yet."

     files_no_full_files_with_repo_map = ""

File diff suppressed because it is too large

View file

@@ -22,6 +22,8 @@ You always COMPLETELY IMPLEMENT the needed code!
 Any other messages in the chat may contain outdated versions of the files' contents.
 """  # noqa: E501

+    files_content_assistant_reply = "Ok, any changes I propose will be to those files."
+
     files_no_full_files = "I am not sharing any files that you can edit yet."

     files_no_full_files_with_repo_map = """Don't try and edit any existing code without asking me to add the files to the chat!
@@ -43,3 +45,8 @@ If you need to edit any of these files, ask me to *add them to the chat* first.
     read_only_files_prefix = """Here are some READ ONLY files, provided for your reference.
 Do not edit these files!
 """
+
+    shell_cmd_prompt = ""
+    shell_cmd_reminder = ""
+    no_shell_cmd_prompt = ""
+    no_shell_cmd_reminder = ""

View file

@@ -31,10 +31,12 @@ class ChatChunks:
         else:
             self.add_cache_control(self.system)

-        if self.readonly_files:
-            self.add_cache_control(self.readonly_files)
-        else:
-            self.add_cache_control(self.repo)
+        if self.repo:
+            # this will mark both the readonly_files and repomap chunk as cacheable
+            self.add_cache_control(self.repo)
+        else:
+            # otherwise, just cache readonly_files if there are any
+            self.add_cache_control(self.readonly_files)

         self.add_cache_control(self.chat_files)
@@ -51,3 +53,12 @@ class ChatChunks:
             content["cache_control"] = {"type": "ephemeral"}

         messages[-1]["content"] = [content]
+
+    def cacheable_messages(self):
+        messages = self.all_messages()
+        for i, message in enumerate(reversed(messages)):
+            if isinstance(message.get("content"), list) and message["content"][0].get(
+                "cache_control"
+            ):
+                return messages[: len(messages) - i]
+        return messages
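The new `cacheable_messages` method walks the message list from the end and keeps only the prefix ending at the last message that carries a `cache_control` marker, since anything after that point is not covered by the provider's prompt cache. A standalone sketch of the same idea (a plain function rather than a method, with hypothetical message shapes):

```python
def cacheable_messages(messages):
    """Return the prefix of `messages` that ends at the last message
    whose content list starts with a cache_control marker."""
    for i, message in enumerate(reversed(messages)):
        content = message.get("content")
        if isinstance(content, list) and content and content[0].get("cache_control"):
            # i messages from the end come after the marker; drop them
            return messages[: len(messages) - i]
    # No marker anywhere: the whole list is returned unchanged
    return messages


marked = {"role": "system", "content": [{"type": "text", "cache_control": {"type": "ephemeral"}}]}
plain = {"role": "user", "content": "hello"}

prefix = cacheable_messages([marked, plain])  # everything after `marked` is dropped
```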

View file

@ -1,7 +1,6 @@
import difflib import difflib
import math import math
import re import re
import subprocess
import sys import sys
from difflib import SequenceMatcher from difflib import SequenceMatcher
from pathlib import Path from pathlib import Path
@ -23,55 +22,60 @@ class EditBlockCoder(Coder):
content = self.partial_response_content content = self.partial_response_content
# might raise ValueError for malformed ORIG/UPD blocks # might raise ValueError for malformed ORIG/UPD blocks
edits = list(find_original_update_blocks(content, self.fence)) edits = list(
find_original_update_blocks(
content,
self.fence,
self.get_inchat_relative_files(),
)
)
self.shell_commands += [edit[1] for edit in edits if edit[0] is None] self.shell_commands += [edit[1] for edit in edits if edit[0] is None]
edits = [edit for edit in edits if edit[0] is not None] edits = [edit for edit in edits if edit[0] is not None]
return edits return edits
def run_interactive_subprocess(self, command): def apply_edits_dry_run(self, edits):
try: return self.apply_edits(edits, dry_run=True)
result = subprocess.run(
command,
text=True,
shell=True,
encoding=self.io.encoding,
errors="replace",
)
if result.returncode == 0:
return
self.io.tool_error(f"Command '{command}' exited with status {result.returncode}")
except Exception as e:
self.io.tool_error(f"Error running command '{command}': {str(e)}")
self.io.tool_output(f"To retry and share output with the LLM: /run {command}") def apply_edits(self, edits, dry_run=False):
self.io.tool_output("You can find this command in your input history with up-arrow.")
def apply_edits(self, edits):
failed = [] failed = []
passed = [] passed = []
updated_edits = []
for edit in edits: for edit in edits:
path, original, updated = edit path, original, updated = edit
full_path = self.abs_root_path(path) full_path = self.abs_root_path(path)
content = self.io.read_text(full_path) new_content = None
new_content = do_replace(full_path, content, original, updated, self.fence)
if not new_content: if Path(full_path).exists():
content = self.io.read_text(full_path)
new_content = do_replace(full_path, content, original, updated, self.fence)
# If the edit failed, and
# this is not a "create a new file" with an empty original...
# https://github.com/Aider-AI/aider/issues/2258
if not new_content and original.strip():
# try patching any of the other files in the chat # try patching any of the other files in the chat
dump(self.abs_fnames)
for full_path in self.abs_fnames: for full_path in self.abs_fnames:
content = self.io.read_text(full_path) content = self.io.read_text(full_path)
new_content = do_replace(full_path, content, original, updated, self.fence) new_content = do_replace(full_path, content, original, updated, self.fence)
if new_content: if new_content:
path = self.get_rel_fname(full_path)
break break
updated_edits.append((path, original, updated))
if new_content: if new_content:
self.io.write_text(full_path, new_content) if not dry_run:
self.io.write_text(full_path, new_content)
passed.append(edit) passed.append(edit)
else: else:
failed.append(edit) failed.append(edit)
if dry_run:
return updated_edits
if not failed: if not failed:
return return
@ -379,9 +383,13 @@ def do_replace(fname, content, before_text, after_text, fence=None):
return new_content return new_content
HEAD = "<<<<<<< SEARCH" HEAD = r"^<{5,9} SEARCH\s*$"
DIVIDER = "=======" DIVIDER = r"^={5,9}\s*$"
UPDATED = ">>>>>>> REPLACE" UPDATED = r"^>{5,9} REPLACE\s*$"
HEAD_ERR = "<<<<<<< SEARCH"
DIVIDER_ERR = "======="
UPDATED_ERR = ">>>>>>> REPLACE"
separators = "|".join([HEAD, DIVIDER, UPDATED]) separators = "|".join([HEAD, DIVIDER, UPDATED])
@ -409,16 +417,22 @@ def strip_filename(filename, fence):
filename = filename.strip() filename = filename.strip()
filename = filename.strip("`") filename = filename.strip("`")
filename = filename.strip("*") filename = filename.strip("*")
filename = filename.replace("\\_", "_")
# https://github.com/Aider-AI/aider/issues/1158
# filename = filename.replace("\\_", "_")
return filename return filename
def find_original_update_blocks(content, fence=DEFAULT_FENCE): def find_original_update_blocks(content, fence=DEFAULT_FENCE, valid_fnames=None):
lines = content.splitlines(keepends=True) lines = content.splitlines(keepends=True)
i = 0 i = 0
current_filename = None current_filename = None
head_pattern = re.compile(HEAD)
divider_pattern = re.compile(DIVIDER)
updated_pattern = re.compile(UPDATED)
while i < len(lines): while i < len(lines):
line = lines[i] line = lines[i]
@ -437,7 +451,7 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE):
"```csh", "```csh",
"```tcsh", "```tcsh",
] ]
next_is_editblock = i + 1 < len(lines) and lines[i + 1].rstrip() == HEAD next_is_editblock = i + 1 < len(lines) and head_pattern.match(lines[i + 1].strip())
if any(line.strip().startswith(start) for start in shell_starts) and not next_is_editblock: if any(line.strip().startswith(start) for start in shell_starts) and not next_is_editblock:
shell_content = [] shell_content = []
@ -452,9 +466,14 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE):
continue continue
# Check for SEARCH/REPLACE blocks # Check for SEARCH/REPLACE blocks
if line.strip() == HEAD: if head_pattern.match(line.strip()):
try: try:
filename = find_filename(lines[max(0, i - 3) : i], fence) # if next line after HEAD exists and is DIVIDER, it's a new file
if i + 1 < len(lines) and divider_pattern.match(lines[i + 1].strip()):
filename = find_filename(lines[max(0, i - 3) : i], fence, None)
else:
filename = find_filename(lines[max(0, i - 3) : i], fence, valid_fnames)
if not filename: if not filename:
if current_filename: if current_filename:
filename = current_filename filename = current_filename
@ -465,21 +484,27 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE):
original_text = [] original_text = []
i += 1 i += 1
while i < len(lines) and not lines[i].strip() == DIVIDER: while i < len(lines) and not divider_pattern.match(lines[i].strip()):
original_text.append(lines[i]) original_text.append(lines[i])
i += 1 i += 1
if i >= len(lines) or lines[i].strip() != DIVIDER: if i >= len(lines) or not divider_pattern.match(lines[i].strip()):
raise ValueError(f"Expected `{DIVIDER}`") raise ValueError(f"Expected `{DIVIDER_ERR}`")
updated_text = [] updated_text = []
i += 1 i += 1
while i < len(lines) and not lines[i].strip() in (UPDATED, DIVIDER): while i < len(lines) and not (
updated_pattern.match(lines[i].strip())
or divider_pattern.match(lines[i].strip())
):
updated_text.append(lines[i]) updated_text.append(lines[i])
i += 1 i += 1
if i >= len(lines) or lines[i].strip() not in (UPDATED, DIVIDER): if i >= len(lines) or not (
raise ValueError(f"Expected `{UPDATED}` or `{DIVIDER}`") updated_pattern.match(lines[i].strip())
or divider_pattern.match(lines[i].strip())
):
raise ValueError(f"Expected `{UPDATED_ERR}` or `{DIVIDER_ERR}`")
yield filename, "".join(original_text), "".join(updated_text) yield filename, "".join(original_text), "".join(updated_text)
@ -491,7 +516,7 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE):
i += 1 i += 1
def find_filename(lines, fence): def find_filename(lines, fence, valid_fnames):
""" """
Deepseek Coder v2 has been doing this: Deepseek Coder v2 has been doing this:
@ -505,19 +530,54 @@ def find_filename(lines, fence):
This is a more flexible search back for filenames. This is a more flexible search back for filenames.
""" """
if valid_fnames is None:
valid_fnames = []
# Go back through the 3 preceding lines # Go back through the 3 preceding lines
lines.reverse() lines.reverse()
lines = lines[:3] lines = lines[:3]
filenames = []
for line in lines: for line in lines:
# If we find a filename, done # If we find a filename, done
filename = strip_filename(line, fence) filename = strip_filename(line, fence)
if filename: if filename:
return filename filenames.append(filename)
# Only continue as long as we keep seeing fences # Only continue as long as we keep seeing fences
if not line.startswith(fence[0]): if not line.startswith(fence[0]):
return break
if not filenames:
return
# pick the *best* filename found
# Check for exact match first
for fname in filenames:
if fname in valid_fnames:
return fname
# Check for partial match (basename match)
for fname in filenames:
for vfn in valid_fnames:
if fname == Path(vfn).name:
return vfn
# Perform fuzzy matching with valid_fnames
for fname in filenames:
close_matches = difflib.get_close_matches(fname, valid_fnames, n=1, cutoff=0.8)
if len(close_matches) == 1:
return close_matches[0]
# If no fuzzy match, look for a file w/extension
for fname in filenames:
if "." in fname:
return fname
if filenames:
return filenames[0]
 def find_similar_lines(search_lines, content_lines, threshold=0.6):
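The filename-ranking cascade this hunk adds (exact match, then basename match, then fuzzy match, then any name with an extension) can be sketched as a standalone helper. `pick_best_filename` is a hypothetical name for illustration, not aider's actual API:

```python
import difflib
from pathlib import Path


def pick_best_filename(candidates, valid_fnames):
    """Rank candidate filenames against a list of known repo files."""
    # 1. Exact match against the known file list
    for name in candidates:
        if name in valid_fnames:
            return name
    # 2. Basename match: "main.py" should resolve to "src/main.py"
    for name in candidates:
        for valid in valid_fnames:
            if name == Path(valid).name:
                return valid
    # 3. Fuzzy match tolerates small typos like "mian.py"
    for name in candidates:
        close = difflib.get_close_matches(name, valid_fnames, n=1, cutoff=0.8)
        if len(close) == 1:
            return close[0]
    # 4. Fall back to anything that at least looks like a filename
    for name in candidates:
        if "." in name:
            return name
    return candidates[0] if candidates else None
```

The `cutoff=0.8` threshold mirrors the diff: `get_close_matches` only returns names whose `SequenceMatcher` ratio clears the cutoff, so a one-character transposition matches but an unrelated name does not.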
@@ -111,9 +111,9 @@ class EditBlockFunctionCoder(Coder):
         updated = get_arg(edit, "updated_lines")

         # gpt-3.5 returns lists even when instructed to return a string!
-        if self.code_format == "list" or type(original) == list:
+        if self.code_format == "list" or type(original) is list:
             original = "\n".join(original)
-        if self.code_format == "list" or type(updated) == list:
+        if self.code_format == "list" or type(updated) is list:
             updated = "\n".join(updated)

         if original and not original.endswith("\n"):
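A minimal illustration of the `type(x) is list` idiom this hunk switches to (flake8's E721 flags `type(...) == ...`; identity comparison is the conventional form for exact type checks). The helper name is ours, not from the diff:

```python
def normalize_lines(value):
    """Join a list of lines into a string; pass strings through unchanged.

    Some models return a list of lines even when asked for a string, so
    edits must be normalized before use. `type(value) is list` is an exact
    type check: unlike isinstance(), it will not accept list subclasses.
    """
    if type(value) is list:
        return "\n".join(value)
    return value
```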
@@ -11,7 +11,7 @@ Respect and use existing conventions, libraries, etc that are already present in
 Take requests for changes to the supplied code.
 If the request is ambiguous, ask questions.

-Always reply to the user in the same language they are using.
+Always reply to the user in {language}.

 Once you understand the request you MUST:
@@ -27,14 +27,18 @@ You can keep asking if you then decide you need to edit more files.
 All changes to files must use this *SEARCH/REPLACE block* format.
 ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
+{shell_cmd_prompt}
+"""
+
+shell_cmd_prompt = """
 4. *Concisely* suggest any shell commands the user might want to run in ```bash blocks.

 Just suggest shell commands this way, not example code.
+Only suggest complete shell commands that are ready to execute, without placeholders.
+Only suggest at most a few shell commands at a time, not more than 1-3.

 Use the appropriate shell based on the user's system info:
 {platform}

 Examples of when to suggest shell commands:
 - If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
@@ -45,6 +49,10 @@ Examples of when to suggest shell commands:
 - Etc.
 """

+no_shell_cmd_prompt = """
+Keep in mind these details about the user's platform and environment:
+{platform}
+"""

 example_messages = [
     dict(
         role="user",
@@ -137,7 +145,7 @@ from hello import hello
 system_reminder = """# *SEARCH/REPLACE block* Rules:

 Every *SEARCH/REPLACE block* must use this format:
-1. The file path alone on a line, verbatim. No bold asterisks, no quotes around it, no escaping of characters, etc.
+1. The *FULL* file path alone on a line, verbatim. No bold asterisks, no quotes around it, no escaping of characters, etc.
 2. The opening fence and code language, eg: {fence[0]}python
 3. The start of search block: <<<<<<< SEARCH
 4. A contiguous chunk of lines to search for in the existing source code
@@ -146,11 +154,14 @@ Every *SEARCH/REPLACE block* must use this format:
 7. The end of the replace block: >>>>>>> REPLACE
 8. The closing fence: {fence[1]}

+Use the *FULL* file path, as shown to you by the user.
+
 Every *SEARCH* section must *EXACTLY MATCH* the existing file content, character for character, including all comments, docstrings, etc.
 If the file contains code or other data wrapped/escaped in json/xml/quotes or other containers, you need to propose edits to the literal contents of the file, including the container markup.

-*SEARCH/REPLACE* blocks will replace *all* matching occurrences.
-Include enough lines to make the SEARCH blocks uniquely match the lines to change.
+*SEARCH/REPLACE* blocks will *only* replace the first match occurrence.
+Including multiple unique *SEARCH/REPLACE* blocks if needed.
+Include enough lines in each SEARCH section to uniquely match each set of lines that need to change.

 Keep *SEARCH/REPLACE* blocks concise.
 Break large *SEARCH/REPLACE* blocks into a series of smaller blocks that each change a small portion of the file.
@@ -161,16 +172,21 @@ Only create *SEARCH/REPLACE* blocks for files that the user has added to the chat!

 To move code within a file, use 2 *SEARCH/REPLACE* blocks: 1 to delete it from its current location, 1 to insert it in the new location.

+Pay attention to which filenames the user wants you to edit, especially if they are asking you to create a new file.
+
 If you want to put code in a new file, use a *SEARCH/REPLACE block* with:
 - A new file path, including dir name if needed
 - An empty `SEARCH` section
 - The new file's contents in the `REPLACE` section

-To rename files which have been added to the chat, use shell commands.
+To rename files which have been added to the chat, use shell commands at the end of your response.

 {lazy_prompt}
 ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
+{shell_cmd_reminder}
+"""
+
+shell_cmd_reminder = """
 Examples of when to suggest shell commands:
 - If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
@@ -0,0 +1,7 @@
+from .editblock_coder import EditBlockCoder
+from .editor_editblock_prompts import EditorEditBlockPrompts
+
+
+class EditorEditBlockCoder(EditBlockCoder):
+    edit_format = "editor-diff"
+    gpt_prompts = EditorEditBlockPrompts()
@@ -0,0 +1,16 @@
+# flake8: noqa: E501
+
+from .editblock_prompts import EditBlockPrompts
+
+
+class EditorEditBlockPrompts(EditBlockPrompts):
+    main_system = """Act as an expert software developer who edits source code.
+{lazy_prompt}
+Describe each change with a *SEARCH/REPLACE block* per the examples below.
+All changes to files must use this *SEARCH/REPLACE block* format.
+ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
+"""
+
+    shell_cmd_prompt = ""
+    no_shell_cmd_prompt = ""
+    shell_cmd_reminder = ""
@@ -0,0 +1,7 @@
+from .editor_whole_prompts import EditorWholeFilePrompts
+from .wholefile_coder import WholeFileCoder
+
+
+class EditorWholeFileCoder(WholeFileCoder):
+    edit_format = "editor-whole"
+    gpt_prompts = EditorWholeFilePrompts()
@@ -0,0 +1,10 @@
+# flake8: noqa: E501
+
+from .wholefile_prompts import WholeFilePrompts
+
+
+class EditorWholeFilePrompts(WholeFilePrompts):
+    main_system = """Act as an expert software developer and make changes to source code.
+{lazy_prompt}
+Output a copy of each file that needs changes.
+"""
@@ -484,7 +484,7 @@ def git_cherry_pick_osr_onto_o(texts):
     # cherry pick R onto original
     try:
         repo.git.cherry_pick(replace_hash, "--minimal")
-    except git.exc.GitCommandError:
+    except (git.exc.ODBError, git.exc.GitError):
         # merge conflicts!
         return
@@ -522,7 +522,7 @@ def git_cherry_pick_sr_onto_so(texts):
     # cherry pick replace onto original
     try:
         repo.git.cherry_pick(replace_hash, "--minimal")
-    except git.exc.GitCommandError:
+    except (git.exc.ODBError, git.exc.GitError):
        # merge conflicts!
        return
@@ -12,7 +12,7 @@ Respect and use existing conventions, libraries, etc that are already present in
 Take requests for changes to the supplied code.
 If the request is ambiguous, ask questions.

-Always reply to the user in the same language they are using.
+Always reply to the user in {language}.

 For each file that needs to be changed, write out the changes similar to a unified diff like `diff -U0` would produce.
 """
@@ -58,6 +58,12 @@ class WholeFileCoder(Coder):
                 fname = fname.strip("*")  # handle **filename.py**
                 fname = fname.rstrip(":")
                 fname = fname.strip("`")
+                fname = fname.lstrip("#")
+                fname = fname.strip()
+
+                # Issue #1232
+                if len(fname) > 250:
+                    fname = ""

                 # Did gpt prepend a bogus dir? It especially likes to
                 # include the path/to prefix from the one-shot example in
@@ -123,15 +129,16 @@ class WholeFileCoder(Coder):
     def do_live_diff(self, full_path, new_lines, final):
         if Path(full_path).exists():
-            orig_lines = self.io.read_text(full_path).splitlines(keepends=True)
+            orig_lines = self.io.read_text(full_path)
+            if orig_lines is not None:
+                orig_lines = orig_lines.splitlines(keepends=True)

-            show_diff = diffs.diff_partial_update(
-                orig_lines,
-                new_lines,
-                final=final,
-            ).splitlines()
-            output = show_diff
-        else:
-            output = ["```"] + new_lines + ["```"]
+                show_diff = diffs.diff_partial_update(
+                    orig_lines,
+                    new_lines,
+                    final=final,
+                ).splitlines()
+                return show_diff

+        output = ["```"] + new_lines + ["```"]
         return output
@@ -8,7 +8,7 @@ class WholeFilePrompts(CoderPrompts):
 Take requests for changes to the supplied code.
 If the request is ambiguous, ask questions.

-Always reply to the user in the same language they are using.
+Always reply to the user in {language}.
 {lazy_prompt}
 Once you understand the request you MUST:
@@ -52,7 +52,7 @@ path/to/filename.js
 {fence[1]}

 Every *file listing* MUST use this format:
-- First line: the filename with any originally provided path
+- First line: the filename with any originally provided path; no extra markup, punctuation, comments, etc. **JUST** the filename with path.
 - Second line: opening {fence[0]}
 - ... entire content of the file ...
 - Final line: closing {fence[1]}
@@ -1,19 +1,25 @@
+import glob
 import os
 import re
 import subprocess
 import sys
 import tempfile
 from collections import OrderedDict
+from os.path import expanduser
 from pathlib import Path

-import git
 import pyperclip
 from PIL import Image, ImageGrab
-from rich.text import Text
+from prompt_toolkit.completion import Completion, PathCompleter
+from prompt_toolkit.document import Document

 from aider import models, prompts, voice
+from aider.editor import pipe_editor
+from aider.format_settings import format_settings
 from aider.help import Help, install_help_extra
 from aider.llm import litellm
+from aider.repo import ANY_GIT_ERROR
+from aider.run_cmd import run_cmd
 from aider.scrape import Scraper, install_playwright
 from aider.utils import is_image_file
@@ -29,9 +35,32 @@ class Commands:
     voice = None
     scraper = None

-    def __init__(self, io, coder, voice_language=None, verify_ssl=True):
+    def clone(self):
+        return Commands(
+            self.io,
+            None,
+            voice_language=self.voice_language,
+            verify_ssl=self.verify_ssl,
+            args=self.args,
+            parser=self.parser,
+        )
+
+    def __init__(
+        self,
+        io,
+        coder,
+        voice_language=None,
+        verify_ssl=True,
+        args=None,
+        parser=None,
+        verbose=False,
+        editor=None,
+    ):
         self.io = io
         self.coder = coder
+        self.parser = parser
+        self.args = args
+        self.verbose = verbose
         self.verify_ssl = verify_ssl

         if voice_language == "auto":
@@ -40,6 +69,7 @@ class Commands:
         self.voice_language = voice_language

         self.help = None
+        self.editor = editor

     def cmd_model(self, args):
         "Switch to a new LLM"
@@ -119,8 +149,8 @@ class Commands:
         else:
             self.io.tool_output("Please provide a partial model name to search for.")

-    def cmd_web(self, args, paginate=True):
-        "Scrape a webpage, convert to markdown and add to the chat"
+    def cmd_web(self, args, return_content=False):
+        "Scrape a webpage, convert to markdown and send in a message"

         url = args.strip()
         if not url:
@@ -131,30 +161,40 @@ class Commands:
         if not self.scraper:
             res = install_playwright(self.io)
             if not res:
-                self.io.tool_error("Unable to initialize playwright.")
+                self.io.tool_warning("Unable to initialize playwright.")

             self.scraper = Scraper(
                 print_error=self.io.tool_error, playwright_available=res, verify_ssl=self.verify_ssl
             )

         content = self.scraper.scrape(url) or ""
-        content = f"{url}:\n\n" + content
+        content = f"Here is the content of {url}:\n\n" + content

+        if return_content:
+            return content
+
-        self.io.tool_output("... done.")
+        self.io.tool_output("... added to chat.")

-        if paginate:
-            with self.io.console.pager():
-                self.io.console.print(Text(content))
+        self.coder.cur_messages += [
+            dict(role="user", content=content),
+            dict(role="assistant", content="Ok."),
+        ]

-        return content

     def is_command(self, inp):
         return inp[0] in "/!"
+    def get_raw_completions(self, cmd):
+        assert cmd.startswith("/")
+        cmd = cmd[1:]
+
+        cmd = cmd.replace("-", "_")
+        raw_completer = getattr(self, f"completions_raw_{cmd}", None)
+        return raw_completer
+
     def get_completions(self, cmd):
         assert cmd.startswith("/")
         cmd = cmd[1:]

+        cmd = cmd.replace("-", "_")
         fun = getattr(self, f"completions_{cmd}", None)
         if not fun:
             return
@@ -175,10 +215,14 @@ class Commands:
         cmd_name = cmd_name.replace("-", "_")
         cmd_method_name = f"cmd_{cmd_name}"
         cmd_method = getattr(self, cmd_method_name, None)
-        if cmd_method:
-            return cmd_method(args)
-        else:
+        if not cmd_method:
             self.io.tool_output(f"Error: Command {cmd_name} not found.")
+            return
+
+        try:
+            return cmd_method(args)
+        except ANY_GIT_ERROR as err:
+            self.io.tool_error(f"Unable to complete {cmd_name}: {err}")

     def matching_commands(self, inp):
         words = inp.strip().split()
@@ -186,7 +230,7 @@ class Commands:
             return

         first_word = words[0]
-        rest_inp = inp[len(words[0]) :]
+        rest_inp = inp[len(words[0]) :].strip()

         all_commands = self.get_commands()
         matching_commands = [cmd for cmd in all_commands if cmd.startswith(first_word)]
@@ -194,6 +238,7 @@ class Commands:

     def run(self, inp):
         if inp.startswith("!"):
+            self.coder.event("command_run")
             return self.do_run("run", inp[1:])

         res = self.matching_commands(inp)
@@ -201,9 +246,13 @@ class Commands:
             return
         matching_commands, first_word, rest_inp = res
         if len(matching_commands) == 1:
-            return self.do_run(matching_commands[0][1:], rest_inp)
+            command = matching_commands[0][1:]
+            self.coder.event(f"command_{command}")
+            return self.do_run(command, rest_inp)
         elif first_word in matching_commands:
-            return self.do_run(first_word[1:], rest_inp)
+            command = first_word[1:]
+            self.coder.event(f"command_{command}")
+            return self.do_run(command, rest_inp)
         elif len(matching_commands) > 1:
             self.io.tool_error(f"Ambiguous command: {', '.join(matching_commands)}")
         else:
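The `getattr`-based dispatch this hunk hardens (look up `cmd_<name>`, report unknown commands, contain expected errors) can be sketched as a tiny standalone class. The names and the `RuntimeError` stand-in are ours; aider catches its `ANY_GIT_ERROR` tuple instead:

```python
class MiniCommands:
    """Minimal sketch of the /command dispatch pattern."""

    def cmd_hello(self, args):
        return f"hello {args}"

    def cmd_fail(self, args):
        raise RuntimeError("boom")

    def do_run(self, cmd_name, args):
        # Map "-" to "_" so "/read-only" can find cmd_read_only
        method = getattr(self, f"cmd_{cmd_name.replace('-', '_')}", None)
        if method is None:
            return f"Error: Command {cmd_name} not found."
        try:
            return method(args)
        except RuntimeError as err:
            # Contain anticipated failures instead of crashing the REPL
            return f"Unable to complete {cmd_name}: {err}"
```

Wrapping the dispatch (rather than each command body) in the `try/except` is the point of the diff: every command gets the same error containment for free.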
@@ -214,20 +263,25 @@ class Commands:

     def cmd_commit(self, args=None):
         "Commit edits to the repo made outside the chat (commit message optional)"
+        try:
+            self.raw_cmd_commit(args)
+        except ANY_GIT_ERROR as err:
+            self.io.tool_error(f"Unable to complete commit: {err}")
+
+    def raw_cmd_commit(self, args=None):
         if not self.coder.repo:
             self.io.tool_error("No git repository found.")
             return

         if not self.coder.repo.is_dirty():
-            self.io.tool_error("No more changes to commit.")
+            self.io.tool_warning("No more changes to commit.")
             return

         commit_message = args.strip() if args else None
         self.coder.repo.commit(message=commit_message)

     def cmd_lint(self, args="", fnames=None):
-        "Lint and fix provided files or in-chat files if none provided"
+        "Lint and fix in-chat files or all dirty files if none in chat"

         if not self.coder.repo:
             self.io.tool_error("No git repository found.")
@@ -241,7 +295,7 @@ class Commands:
             fnames = self.coder.repo.get_dirty_files()

         if not fnames:
-            self.io.tool_error("No dirty files to lint.")
+            self.io.tool_warning("No dirty files to lint.")
             return

         fnames = [self.coder.abs_root_path(fname) for fname in fnames]
@@ -252,18 +306,18 @@ class Commands:
                 errors = self.coder.linter.lint(fname)
             except FileNotFoundError as err:
                 self.io.tool_error(f"Unable to lint {fname}")
-                self.io.tool_error(str(err))
+                self.io.tool_output(str(err))
                 continue

             if not errors:
                 continue

-            self.io.tool_error(errors)
+            self.io.tool_output(errors)
             if not self.io.confirm_ask(f"Fix lint errors in {fname}?", default="y"):
                 continue

             # Commit everything before we start fixing lint errors
-            if self.coder.repo.is_dirty():
+            if self.coder.repo.is_dirty() and self.coder.dirty_commits:
                 self.cmd_commit("")

             if not lint_coder:
@@ -278,7 +332,7 @@ class Commands:
             lint_coder.run(errors)
             lint_coder.abs_fnames = set()

-        if lint_coder and self.coder.repo.is_dirty():
+        if lint_coder and self.coder.repo.is_dirty() and self.coder.auto_commits:
             self.cmd_commit("")

     def cmd_clear(self, args):
@@ -406,15 +460,37 @@ class Commands:

     def cmd_undo(self, args):
         "Undo the last git commit if it was done by aider"
+        try:
+            self.raw_cmd_undo(args)
+        except ANY_GIT_ERROR as err:
+            self.io.tool_error(f"Unable to complete undo: {err}")
+
+    def raw_cmd_undo(self, args):
         if not self.coder.repo:
             self.io.tool_error("No git repository found.")
             return

-        last_commit = self.coder.repo.repo.head.commit
-        if not last_commit.parents:
+        last_commit = self.coder.repo.get_head_commit()
+        if not last_commit or not last_commit.parents:
             self.io.tool_error("This is the first commit in the repository. Cannot undo.")
             return

+        last_commit_hash = self.coder.repo.get_head_commit_sha(short=True)
+        last_commit_message = self.coder.repo.get_head_commit_message("(unknown)").strip()
+        if last_commit_hash not in self.coder.aider_commit_hashes:
+            self.io.tool_error("The last commit was not made by aider in this chat session.")
+            self.io.tool_output(
+                "You could try `/git reset --hard HEAD^` but be aware that this is a destructive"
+                " command!"
+            )
+            return
+
+        if len(last_commit.parents) > 1:
+            self.io.tool_error(
+                f"The last commit {last_commit.hexsha} has more than 1 parent, can't undo."
+            )
+            return
+
         prev_commit = last_commit.parents[0]
         changed_files_last_commit = [item.a_path for item in last_commit.diff(prev_commit)]
@@ -440,7 +516,7 @@ class Commands:
         try:
             remote_head = self.coder.repo.repo.git.rev_parse(f"origin/{current_branch}")
             has_origin = True
-        except git.exc.GitCommandError:
+        except ANY_GIT_ERROR:
             has_origin = False

         if has_origin:
@@ -451,19 +527,25 @@ class Commands:
             )
             return

-        last_commit_hash = self.coder.repo.repo.head.commit.hexsha[:7]
-        last_commit_message = self.coder.repo.repo.head.commit.message.strip()
-        if last_commit_hash not in self.coder.aider_commit_hashes:
-            self.io.tool_error("The last commit was not made by aider in this chat session.")
-            self.io.tool_error(
-                "You could try `/git reset --hard HEAD^` but be aware that this is a destructive"
-                " command!"
-            )
-            return
-
         # Reset only the files which are part of `last_commit`
+        restored = set()
+        unrestored = set()
         for file_path in changed_files_last_commit:
-            self.coder.repo.repo.git.checkout("HEAD~1", file_path)
+            try:
+                self.coder.repo.repo.git.checkout("HEAD~1", file_path)
+                restored.add(file_path)
+            except ANY_GIT_ERROR:
+                unrestored.add(file_path)
+
+        if unrestored:
+            self.io.tool_error(f"Error restoring {file_path}, aborting undo.")
+            self.io.tool_output("Restored files:")
+            for file in restored:
+                self.io.tool_output(f"  {file}")
+            self.io.tool_output("Unable to restore files:")
+            for file in unrestored:
+                self.io.tool_output(f"  {file}")
+            return

         # Move the HEAD back before the latest commit
         self.coder.repo.repo.git.reset("--soft", "HEAD~1")
@@ -471,8 +553,8 @@ class Commands:
         self.io.tool_output(f"Removed: {last_commit_hash} {last_commit_message}")

         # Get the current HEAD after undo
-        current_head_hash = self.coder.repo.repo.head.commit.hexsha[:7]
-        current_head_message = self.coder.repo.repo.head.commit.message.strip()
+        current_head_hash = self.coder.repo.get_head_commit_sha(short=True)
+        current_head_message = self.coder.repo.get_head_commit_message("(unknown)").strip()
         self.io.tool_output(f"Now at: {current_head_hash} {current_head_message}")

         if self.coder.main_model.send_undo_reply:
@@ -480,11 +562,17 @@ class Commands:

     def cmd_diff(self, args=""):
         "Display the diff of changes since the last message"
+        try:
+            self.raw_cmd_diff(args)
+        except ANY_GIT_ERROR as err:
+            self.io.tool_error(f"Unable to complete diff: {err}")
+
+    def raw_cmd_diff(self, args=""):
         if not self.coder.repo:
             self.io.tool_error("No git repository found.")
             return

-        current_head = self.coder.repo.get_head()
+        current_head = self.coder.repo.get_head_commit_sha()
         if current_head is None:
             self.io.tool_error("Unable to get current commit. The repository might be empty.")
             return
@@ -495,7 +583,7 @@ class Commands:
             commit_before_message = self.coder.commit_before_message[-2]

         if not commit_before_message or commit_before_message == current_head:
-            self.io.tool_error("No changes to display since the last message.")
+            self.io.tool_warning("No changes to display since the last message.")
             return

         self.io.tool_output(f"Diff since {commit_before_message[:7]}...")
@@ -506,16 +594,69 @@ class Commands:
             "HEAD",
         )

-        # don't use io.tool_output() because we don't want to log or further colorize
-        print(diff)
+        self.io.print(diff)

     def quote_fname(self, fname):
         if " " in fname and '"' not in fname:
             fname = f'"{fname}"'
         return fname
-    def completions_read(self):
-        return self.completions_add()
+    def completions_raw_read_only(self, document, complete_event):
+        # Get the text before the cursor
+        text = document.text_before_cursor
+
+        # Skip the first word and the space after it
+        after_command = text.split()[-1]
+
+        # Create a new Document object with the text after the command
+        new_document = Document(after_command, cursor_position=len(after_command))
+
+        def get_paths():
+            return [self.coder.root] if self.coder.root else None
+
+        path_completer = PathCompleter(
+            get_paths=get_paths,
+            only_directories=False,
+            expanduser=True,
+        )
+
+        # Adjust the start_position to replace all of 'after_command'
+        adjusted_start_position = -len(after_command)
+
+        # Collect all completions
+        all_completions = []
+
+        # Iterate over the completions and modify them
+        for completion in path_completer.get_completions(new_document, complete_event):
+            quoted_text = self.quote_fname(after_command + completion.text)
+            all_completions.append(
+                Completion(
+                    text=quoted_text,
+                    start_position=adjusted_start_position,
+                    display=completion.display,
+                    style=completion.style,
+                    selected_style=completion.selected_style,
+                )
+            )
+
+        # Add completions from the 'add' command
+        add_completions = self.completions_add()
+        for completion in add_completions:
+            if after_command in completion:
+                all_completions.append(
+                    Completion(
+                        text=completion,
+                        start_position=adjusted_start_position,
+                        display=completion,
+                    )
+                )
+
+        # Sort all completions based on their text
+        sorted_completions = sorted(all_completions, key=lambda c: c.text)
+
+        # Yield the sorted completions
+        for completion in sorted_completions:
+            yield completion
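Stripped of the prompt_toolkit `Completion` objects, the merge-and-sort logic this method implements reduces to a few lines. `merge_completions` is a pure-Python sketch of ours, not part of aider or prompt_toolkit:

```python
def merge_completions(path_matches, add_matches, typed):
    """Merge filesystem matches with in-chat file matches and sort them.

    path_matches: names found by path completion
    add_matches: names from the /add completion source
    typed: the fragment the user has typed so far
    """
    merged = set(path_matches)
    # Only keep /add candidates that contain what the user typed,
    # mirroring the `if after_command in completion` filter above
    merged.update(name for name in add_matches if typed in name)
    return sorted(merged)
```

Sorting at the end (rather than per source) is what gives the user one alphabetized list regardless of which completer produced each entry.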
     def completions_add(self):
         files = set(self.coder.get_all_relative_files())
@@ -524,12 +665,17 @@ class Commands:
         return files

     def glob_filtered_to_repo(self, pattern):
+        if not pattern.strip():
+            return []
         try:
             if os.path.isabs(pattern):
                 # Handle absolute paths
                 raw_matched_files = [Path(pattern)]
             else:
-                raw_matched_files = list(Path(self.coder.root).glob(pattern))
+                try:
+                    raw_matched_files = list(Path(self.coder.root).glob(pattern))
+                except (IndexError, AttributeError):
+                    raw_matched_files = []
         except ValueError as err:
             self.io.tool_error(f"Error matching {pattern}: {err}")
             raw_matched_files = []
@@ -539,9 +685,9 @@ class Commands:
             matched_files += expand_subdir(fn)

         matched_files = [
-            str(Path(fn).relative_to(self.coder.root))
+            fn.relative_to(self.coder.root)
             for fn in matched_files
-            if Path(fn).is_relative_to(self.coder.root)
+            if fn.is_relative_to(self.coder.root)
         ]

         # if repo, filter against it
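# 
The `relative_to`/`is_relative_to` pairing this hunk simplifies is a standard pathlib pattern (`Path.is_relative_to` requires Python 3.9+). A self-contained sketch under assumed POSIX-style paths, with a helper name of our own:

```python
from pathlib import Path


def filter_to_root(paths, root):
    """Keep only paths under `root`, expressed relative to it.

    is_relative_to() replaces the older try/except ValueError dance
    around relative_to().
    """
    root = Path(root)
    return [
        p.relative_to(root)
        for p in map(Path, paths)
        if p.is_relative_to(root)
    ]
```

Guarding with `is_relative_to` before calling `relative_to` keeps the comprehension exception-free: `relative_to` raises `ValueError` on paths outside the root.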
@ -553,9 +699,7 @@ class Commands:
return res return res
def cmd_add(self, args): def cmd_add(self, args):
"Add files to the chat so GPT can edit them or review them in detail" "Add files to the chat so aider can edit them or review them in detail"
added_fnames = []
all_matched_files = set() all_matched_files = set()
@ -567,7 +711,7 @@ class Commands:
fname = Path(self.coder.root) / word fname = Path(self.coder.root) / word
if self.coder.repo and self.coder.repo.ignored_file(fname): if self.coder.repo and self.coder.repo.ignored_file(fname):
self.io.tool_error(f"Skipping {fname} due to aiderignore or --subtree-only.") self.io.tool_warning(f"Skipping {fname} due to aiderignore or --subtree-only.")
continue continue
if fname.exists(): if fname.exists():
@ -582,17 +726,25 @@ class Commands:
all_matched_files.update(matched_files) all_matched_files.update(matched_files)
continue continue
if self.io.confirm_ask(f"No files matched '{word}'. Do you want to create {fname}?"): if "*" in str(fname) or "?" in str(fname):
if "*" in str(fname) or "?" in str(fname): self.io.tool_error(
self.io.tool_error(f"Cannot create file with wildcard characters: {fname}") f"No match, and cannot create file with wildcard characters: {fname}"
else: )
try: continue
fname.touch()
all_matched_files.add(str(fname))
except OSError as e:
self.io.tool_error(f"Error creating file {fname}: {e}")
for matched_file in all_matched_files: if fname.exists() and fname.is_dir() and self.coder.repo:
self.io.tool_error(f"Directory {fname} is not in git.")
self.io.tool_output(f"You can add to git with: /git add {fname}")
continue
if self.io.confirm_ask(f"No files matched '{word}'. Do you want to create {fname}?"):
try:
fname.touch()
all_matched_files.add(str(fname))
except OSError as e:
self.io.tool_error(f"Error creating file {fname}: {e}")
for matched_file in sorted(all_matched_files):
abs_file_path = self.coder.abs_root_path(matched_file)
if not abs_file_path.startswith(self.coder.root) and not is_image_file(matched_file):
@ -601,8 +753,13 @@ class Commands:
)
continue
if self.coder.repo and self.coder.repo.git_ignored_file(matched_file):
self.io.tool_error(f"Can't add {matched_file} which is in gitignore")
continue
if abs_file_path in self.coder.abs_fnames:
self.io.tool_error(f"{matched_file} is already in the chat") self.io.tool_error(f"{matched_file} is already in the chat as an editable file")
continue
elif abs_file_path in self.coder.abs_read_only_fnames:
if self.coder.repo and self.coder.repo.path_in_repo(matched_file):
self.coder.abs_read_only_fnames.remove(abs_file_path)
@ -610,17 +767,17 @@ class Commands:
self.io.tool_output(
f"Moved {matched_file} from read-only to editable files in the chat"
)
added_fnames.append(matched_file)
else:
self.io.tool_error(
f"Cannot add {matched_file} as it's not part of the repository"
)
else:
if is_image_file(matched_file) and not self.coder.main_model.accepts_images: if is_image_file(matched_file) and not self.coder.main_model.info.get(
"supports_vision"
):
self.io.tool_error(
f"Cannot add image file {matched_file} as the"
f" {self.coder.main_model.name} does not support image.\nYou can run `aider" f" {self.coder.main_model.name} does not support images."
" --4-turbo-vision` to use GPT-4 Turbo with Vision."
)
continue
content = self.io.read_text(abs_file_path)
@ -630,7 +787,6 @@ class Commands:
self.coder.abs_fnames.add(abs_file_path)
self.io.tool_output(f"Added {matched_file} to the chat")
self.coder.check_added_files()
added_fnames.append(matched_file)
def completions_drop(self):
files = self.coder.get_inchat_relative_files()
@ -672,7 +828,7 @@ class Commands:
self.io.tool_output(f"Removed {matched_file} from the chat")
def cmd_git(self, args):
"Run a git command" "Run a git command (output excluded from chat)"
combined_output = None
try:
args = "git " + args
@ -702,67 +858,49 @@ class Commands:
if not args and self.coder.test_cmd:
args = self.coder.test_cmd
if not args:
return
if not callable(args):
if type(args) is not str:
raise ValueError(repr(args))
return self.cmd_run(args, True)
errors = args()
if not errors:
return
self.io.tool_error(errors, strip=False) self.io.tool_output(errors)
return errors
def cmd_run(self, args, add_on_nonzero_exit=False):
"Run a shell command and optionally add the output to the chat (alias: !)"
combined_output = None exit_status, combined_output = run_cmd(
instructions = None args, verbose=self.verbose, error_print=self.io.tool_error, cwd=self.coder.root
try: )
result = subprocess.run(
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
shell=True,
encoding=self.io.encoding,
errors="replace",
)
combined_output = result.stdout
except Exception as e:
self.io.tool_error(f"Error running command: {e}")
if combined_output is None:
return
self.io.tool_output(combined_output)
if add_on_nonzero_exit:
add = result.returncode != 0 add = exit_status != 0
else:
response = self.io.prompt_ask( add = self.io.confirm_ask("Add command output to the chat?")
"Add the output to the chat?\n[Y/n/instructions]",
).strip()
if response.lower() in ["yes", "y"]:
add = True
elif response.lower() in ["no", "n"]:
add = False
else:
add = True
instructions = response
if add:
for line in combined_output.splitlines(): num_lines = len(combined_output.strip().splitlines())
self.io.tool_output(line, log_only=True) line_plural = "line" if num_lines == 1 else "lines"
self.io.tool_output(f"Added {num_lines} {line_plural} of output to the chat.")
msg = prompts.run_output.format(
command=args,
output=combined_output,
)
if instructions: self.coder.cur_messages += [
msg = instructions + "\n\n" + msg dict(role="user", content=msg),
dict(role="assistant", content="Ok."),
return msg ]
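The rewritten `cmd_run` hunk above delegates process handling to `run_cmd` and unpacks an `(exit_status, combined_output)` pair. A minimal self-contained sketch of that contract — `run_cmd_sketch` is a hypothetical stand-in, not aider's actual `run_cmd`, which also handles pexpect, encodings, and a custom error printer:

```python
import subprocess

def run_cmd_sketch(command, cwd=None):
    """Run a shell command, merging stderr into stdout, and return
    the (exit_status, combined_output) pair that cmd_run unpacks."""
    try:
        result = subprocess.run(
            command,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,  # one interleaved transcript
            text=True,
            shell=True,
            cwd=cwd,
            errors="replace",
        )
        return result.returncode, result.stdout
    except OSError as e:
        return 1, f"Error running command: {e}"

status, output = run_cmd_sketch("echo hello")
```

Folding stderr into stdout is what makes a single transcript possible to offer back to the chat as one message.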
def cmd_exit(self, args):
"Exit the application"
@ -834,6 +972,7 @@ class Commands:
self.basic_help()
return
self.coder.event("interactive help")
from aider.coders import Coder
if not self.help:
@ -877,14 +1016,6 @@ class Commands:
show_announcements=False,
)
def clone(self):
return Commands(
self.io,
None,
voice_language=self.voice_language,
verify_ssl=self.verify_ssl,
)
def cmd_ask(self, args):
"Ask questions about the code base without editing any files"
return self._generic_chat_command(args, "ask")
@ -893,6 +1024,10 @@ class Commands:
"Ask for changes to your code"
return self._generic_chat_command(args, self.coder.main_model.edit_format)
def cmd_architect(self, args):
"Enter architect mode to discuss high-level design and architecture"
return self._generic_chat_command(args, "architect")
def _generic_chat_command(self, args, edit_format):
if not args.strip():
self.io.tool_error(f"Please provide a question or topic for the {edit_format} chat.")
@ -945,7 +1080,7 @@ class Commands:
self.io.tool_error("To use /voice you must provide an OpenAI API key.")
return
try:
self.voice = voice.Voice() self.voice = voice.Voice(audio_format=self.args.voice_format)
except voice.SoundDeviceError:
self.io.tool_error(
"Unable to import `sounddevice` and/or `soundfile`, is portaudio installed?"
@ -977,14 +1112,15 @@ class Commands:
if text:
self.io.add_to_input_history(text)
print() self.io.print()
self.io.user_input(text, log_only=False)
print() self.io.print()
return text
def cmd_clipboard(self, args): def cmd_paste(self, args):
"Add image/text from the clipboard to the chat (optionally provide a name for the image)" """Paste image/text from the clipboard into the chat.\
Optionally provide a name for the image."""
try:
# Check for image first
image = ImageGrab.grabclipboard()
@ -1035,33 +1171,76 @@ class Commands:
def cmd_read_only(self, args):
"Add files to the chat that are for reference, not to be edited"
if not args.strip():
self.io.tool_error("Please provide filenames to read.") self.io.tool_error("Please provide filenames or directories to read.")
return
filenames = parse_quoted_filenames(args)
for word in filenames: all_paths = []
# Expand the home directory if the path starts with "~"
expanded_path = os.path.expanduser(word)
abs_path = self.coder.abs_root_path(expanded_path)
if not os.path.exists(abs_path): # First collect all expanded paths
self.io.tool_error(f"File not found: {abs_path}") for pattern in filenames:
continue expanded_pattern = expanduser(pattern)
if os.path.isabs(expanded_pattern):
# For absolute paths, glob it
matches = list(glob.glob(expanded_pattern))
else:
# For relative paths and globs, use glob from the root directory
matches = list(Path(self.coder.root).glob(expanded_pattern))
if not os.path.isfile(abs_path): if not matches:
self.io.tool_error(f"Not a file: {abs_path}") self.io.tool_error(f"No matches found for: {pattern}")
continue else:
all_paths.extend(matches)
if abs_path in self.coder.abs_fnames: # Then process them in sorted order
self.io.tool_error(f"{word} is already in the chat as an editable file") for path in sorted(all_paths):
continue abs_path = self.coder.abs_root_path(path)
if os.path.isfile(abs_path):
self._add_read_only_file(abs_path, path)
elif os.path.isdir(abs_path):
self._add_read_only_directory(abs_path, path)
else:
self.io.tool_error(f"Not a file or directory: {abs_path}")
if abs_path in self.coder.abs_read_only_fnames: def _add_read_only_file(self, abs_path, original_name):
self.io.tool_error(f"{word} is already in the chat as a read-only file") if is_image_file(original_name) and not self.coder.main_model.info.get("supports_vision"):
continue self.io.tool_error(
f"Cannot add image file {original_name} as the"
f" {self.coder.main_model.name} does not support images."
)
return
if abs_path in self.coder.abs_read_only_fnames:
self.io.tool_error(f"{original_name} is already in the chat as a read-only file")
return
elif abs_path in self.coder.abs_fnames:
self.coder.abs_fnames.remove(abs_path)
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(f"Added {word} to read-only files.") self.io.tool_output(
f"Moved {original_name} from editable to read-only files in the chat"
)
else:
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(f"Added {original_name} to read-only files.")
def _add_read_only_directory(self, abs_path, original_name):
added_files = 0
for root, _, files in os.walk(abs_path):
for file in files:
file_path = os.path.join(root, file)
if (
file_path not in self.coder.abs_fnames
and file_path not in self.coder.abs_read_only_fnames
):
self.coder.abs_read_only_fnames.add(file_path)
added_files += 1
if added_files > 0:
self.io.tool_output(
f"Added {added_files} files from directory {original_name} to read-only files."
)
else:
self.io.tool_output(f"No new files added from directory {original_name}.")
def cmd_map(self, args):
"Print out the current repository map"
@ -1077,9 +1256,120 @@ class Commands:
if repo_map:
self.io.tool_output("The repo map has been refreshed, use /map to view it.")
def cmd_settings(self, args):
"Print out the current settings"
settings = format_settings(self.parser, self.args)
announcements = "\n".join(self.coder.get_announcements())
output = f"{announcements}\n{settings}"
self.io.tool_output(output)
def completions_raw_load(self, document, complete_event):
return self.completions_raw_read_only(document, complete_event)
def cmd_load(self, args):
"Load and execute commands from a file"
if not args.strip():
self.io.tool_error("Please provide a filename containing commands to load.")
return
try:
with open(args.strip(), "r", encoding=self.io.encoding, errors="replace") as f:
commands = f.readlines()
except FileNotFoundError:
self.io.tool_error(f"File not found: {args}")
return
except Exception as e:
self.io.tool_error(f"Error reading file: {e}")
return
for cmd in commands:
cmd = cmd.strip()
if not cmd or cmd.startswith("#"):
continue
self.io.tool_output(f"\nExecuting: {cmd}")
self.run(cmd)
def completions_raw_save(self, document, complete_event):
return self.completions_raw_read_only(document, complete_event)
def cmd_save(self, args):
"Save commands to a file that can reconstruct the current chat session's files"
if not args.strip():
self.io.tool_error("Please provide a filename to save the commands to.")
return
try:
with open(args.strip(), "w", encoding=self.io.encoding) as f:
f.write("/drop\n")
# Write commands to add editable files
for fname in sorted(self.coder.abs_fnames):
rel_fname = self.coder.get_rel_fname(fname)
f.write(f"/add {rel_fname}\n")
# Write commands to add read-only files
for fname in sorted(self.coder.abs_read_only_fnames):
# Use absolute path for files outside repo root, relative path for files inside
if Path(fname).is_relative_to(self.coder.root):
rel_fname = self.coder.get_rel_fname(fname)
f.write(f"/read-only {rel_fname}\n")
else:
f.write(f"/read-only {fname}\n")
self.io.tool_output(f"Saved commands to {args.strip()}")
except Exception as e:
self.io.tool_error(f"Error saving commands to file: {e}")
def cmd_copy(self, args):
"Copy the last assistant message to the clipboard"
all_messages = self.coder.done_messages + self.coder.cur_messages
assistant_messages = [msg for msg in reversed(all_messages) if msg["role"] == "assistant"]
if not assistant_messages:
self.io.tool_error("No assistant messages found to copy.")
return
last_assistant_message = assistant_messages[0]["content"]
try:
pyperclip.copy(last_assistant_message)
preview = (
last_assistant_message[:50] + "..."
if len(last_assistant_message) > 50
else last_assistant_message
)
self.io.tool_output(f"Copied last assistant message to clipboard. Preview: {preview}")
except pyperclip.PyperclipException as e:
self.io.tool_error(f"Failed to copy to clipboard: {str(e)}")
self.io.tool_output(
"You may need to install xclip or xsel on Linux, or pbcopy on macOS."
)
except Exception as e:
self.io.tool_error(f"An unexpected error occurred while copying to clipboard: {str(e)}")
def cmd_report(self, args):
"Report a problem by opening a GitHub Issue"
from aider.report import report_github_issue
announcements = "\n".join(self.coder.get_announcements())
issue_text = announcements
if args.strip():
title = args.strip()
else:
title = None
report_github_issue(issue_text, title=title, confirm=False)
def cmd_editor(self, initial_content=""):
"Open an editor to write a prompt"
user_input = pipe_editor(initial_content, suffix="md", editor=self.editor)
if user_input.strip():
self.io.set_placeholder(user_input.rstrip())
def expand_subdir(file_path):
file_path = Path(file_path)
if file_path.is_file():
yield file_path
return
@ -1087,7 +1377,7 @@ def expand_subdir(file_path):
if file_path.is_dir():
for file in file_path.rglob("*"):
if file.is_file():
yield str(file) yield file
def parse_quoted_filenames(args):
@ -1097,11 +1387,7 @@ def parse_quoted_filenames(args):
def get_help_md():
from aider.coders import Coder md = Commands(None, None).get_help_md()
from aider.models import Model
coder = Coder(Model("gpt-3.5-turbo"), None)
md = coder.commands.get_help_md()
return md


@ -50,7 +50,6 @@ def diff_partial_update(lines_orig, lines_updated, final=False, fname=None):
# dump(lines_orig)
# dump(lines_updated)
assert_newlines(lines_orig)
assert_newlines(lines_orig)
num_orig_lines = len(lines_orig)

aider/editor.py Normal file (146 lines)

@ -0,0 +1,146 @@
"""
Editor module for handling system text editor interactions.
This module provides functionality to:
- Discover and launch the system's configured text editor
- Create and manage temporary files for editing
- Handle editor preferences from environment variables
- Support cross-platform editor operations
"""
import os
import platform
import shlex
import subprocess
import tempfile
from rich.console import Console
DEFAULT_EDITOR_NIX = "vi"
DEFAULT_EDITOR_OS_X = "vim"
DEFAULT_EDITOR_WINDOWS = "notepad"
console = Console()
def print_status_message(success, message, style=None):
"""
Print a status message with appropriate styling.
:param success: Whether the operation was successful
:param message: The message to display
:param style: Optional style override. If None, uses green for success and red for failure
"""
if style is None:
style = "bold green" if success else "bold red"
console.print(message, style=style)
print("")
def write_temp_file(
input_data="",
suffix=None,
prefix=None,
dir=None,
):
"""
Create a temporary file with the given input data.
:param input_data: Content to write to the temporary file
:param suffix: Optional file extension (without the dot)
:param prefix: Optional prefix for the temporary filename
:param dir: Optional directory to create the file in
:return: Path to the created temporary file
:raises: OSError if file creation or writing fails
"""
kwargs = {"prefix": prefix, "dir": dir}
if suffix:
kwargs["suffix"] = f".{suffix}"
fd, filepath = tempfile.mkstemp(**kwargs)
try:
with os.fdopen(fd, "w") as f:
f.write(input_data)
except Exception:
os.close(fd)
raise
return filepath
def get_environment_editor(default=None):
"""
Fetches the preferred editor from the environment variables.
This function checks the following environment variables in order to
determine the user's preferred editor:
- VISUAL
- EDITOR
:param default: The default editor to return if no environment variable is set.
:type default: str or None
:return: The preferred editor as specified by environment variables or the default value.
:rtype: str or None
"""
editor = os.environ.get("VISUAL", os.environ.get("EDITOR", default))
return editor
def discover_editor(editor_override=None):
"""
Discovers and returns the appropriate editor command as a list of arguments.
Handles cases where the editor command includes arguments, including quoted arguments
with spaces (e.g. 'vim -c "set noswapfile"').
:return: A list of command parts ready for subprocess execution
:rtype: list[str]
"""
system = platform.system()
if system == "Windows":
default_editor = DEFAULT_EDITOR_WINDOWS
elif system == "Darwin":
default_editor = DEFAULT_EDITOR_OS_X
else:
default_editor = DEFAULT_EDITOR_NIX
if editor_override:
editor = editor_override
else:
editor = get_environment_editor(default_editor)
try:
return shlex.split(editor)
except ValueError as e:
raise RuntimeError(f"Invalid editor command format '{editor}': {e}")
def pipe_editor(input_data="", suffix=None, editor=None):
"""
Opens the system editor with optional input data and returns the edited content.
This function creates a temporary file with the provided input data, opens it in
the system editor, waits for the user to make changes and close the editor, then
reads and returns the modified content. The temporary file is deleted afterwards.
:param input_data: Initial content to populate the editor with
:type input_data: str
:param suffix: Optional file extension for the temporary file (e.g. 'txt', 'md'); the leading dot is added by write_temp_file
:type suffix: str or None
:return: The edited content after the editor is closed
:rtype: str
"""
filepath = write_temp_file(input_data, suffix)
command_parts = discover_editor(editor)
command_parts.append(filepath)
subprocess.call(command_parts)
with open(filepath, "r") as f:
output_data = f.read()
try:
os.remove(filepath)
except PermissionError:
print_status_message(
False,
(
f"WARNING: Unable to delete temporary file {filepath!r}. You may need to delete it"
" manually."
),
)
return output_data
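The `VISUAL` → `EDITOR` → platform-default fallback implemented above is easy to exercise in isolation. A condensed sketch of the same resolution logic — `pick_editor` is a made-up name, and taking the environment as a parameter (rather than reading `os.environ` directly, as `get_environment_editor` does) keeps it testable:

```python
import platform
import shlex

DEFAULTS = {"Windows": "notepad", "Darwin": "vim"}

def pick_editor(environ, override=None):
    """Resolve the editor command: explicit override, then VISUAL,
    then EDITOR, then a platform default. The command is split into
    argv parts with shlex so quoted arguments like
    'vim -c "set noswapfile"' survive intact."""
    default = DEFAULTS.get(platform.system(), "vi")
    editor = override or environ.get("VISUAL") or environ.get("EDITOR") or default
    try:
        return shlex.split(editor)
    except ValueError as e:
        raise RuntimeError(f"Invalid editor command format {editor!r}: {e}")

argv = pick_editor({"EDITOR": 'vim -c "set noswapfile"'})
```

Note one simplification: the `or`-chain here also skips over empty-string variables, which is slightly stricter than the `.get` chain in `get_environment_editor`.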

aider/exceptions.py Normal file (81 lines)

@ -0,0 +1,81 @@
from dataclasses import dataclass
@dataclass
class ExInfo:
name: str
retry: bool
description: str
EXCEPTIONS = [
ExInfo("APIConnectionError", True, None),
ExInfo("APIError", True, None),
ExInfo("APIResponseValidationError", True, None),
ExInfo(
"AuthenticationError",
False,
"The API provider is not able to authenticate you. Check your API key.",
),
ExInfo("AzureOpenAIError", True, None),
ExInfo("BadRequestError", False, None),
ExInfo("BudgetExceededError", True, None),
ExInfo(
"ContentPolicyViolationError",
True,
"The API provider has refused the request due to a safety policy about the content.",
),
ExInfo("ContextWindowExceededError", False, None), # special case handled in base_coder
ExInfo("InternalServerError", True, "The API provider's servers are down or overloaded."),
ExInfo("InvalidRequestError", True, None),
ExInfo("JSONSchemaValidationError", True, None),
ExInfo("NotFoundError", False, None),
ExInfo("OpenAIError", True, None),
ExInfo(
"RateLimitError",
True,
"The API provider has rate limited you. Try again later or check your quotas.",
),
ExInfo("RouterRateLimitError", True, None),
ExInfo("ServiceUnavailableError", True, "The API provider's servers are down or overloaded."),
ExInfo("UnprocessableEntityError", True, None),
ExInfo("UnsupportedParamsError", True, None),
ExInfo(
"Timeout",
True,
"The API provider timed out without returning a response. They may be down or overloaded.",
),
]
class LiteLLMExceptions:
exceptions = dict()
def __init__(self):
self._load()
def _load(self, strict=False):
import litellm
for var in dir(litellm):
if not var.endswith("Error"):
continue
ex_info = None
for exi in EXCEPTIONS:
if var == exi.name:
ex_info = exi
break
if strict and not ex_info:
raise ValueError(f"{var} is in litellm but not in aider's exceptions list")
ex = getattr(litellm, var)
self.exceptions[ex] = ex_info
def exceptions_tuple(self):
return tuple(self.exceptions)
def get_ex_info(self, ex):
"""Return the ExInfo for a given exception instance"""
return self.exceptions.get(ex.__class__, ExInfo(None, None, None))
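`get_ex_info` is a plain class-keyed registry lookup with a null-object fallback. The pattern can be sketched without litellm installed — the exception class and registry contents below are made-up stand-ins:

```python
from dataclasses import dataclass

@dataclass
class ExInfo:
    name: str
    retry: bool
    description: str

# Hypothetical stand-in for a provider exception such as litellm.Timeout.
class ProviderTimeout(Exception):
    pass

# Class-keyed registry, analogous to what _load builds from dir(litellm).
REGISTRY = {
    ProviderTimeout: ExInfo("Timeout", True, "The API provider timed out."),
}

def get_ex_info(ex):
    # Unknown exception types fall back to an empty ExInfo null object,
    # so callers never need to special-case a missing entry.
    return REGISTRY.get(ex.__class__, ExInfo(None, None, None))

info = get_ex_info(ProviderTimeout())
unknown = get_ex_info(ValueError("boom"))
```

The null-object fallback is what lets calling code read `ex_info.retry` and `ex_info.description` unconditionally.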

aider/format_settings.py Normal file (26 lines)

@ -0,0 +1,26 @@
def scrub_sensitive_info(args, text):
# Replace sensitive information with last 4 characters
if text and args.openai_api_key:
last_4 = args.openai_api_key[-4:]
text = text.replace(args.openai_api_key, f"...{last_4}")
if text and args.anthropic_api_key:
last_4 = args.anthropic_api_key[-4:]
text = text.replace(args.anthropic_api_key, f"...{last_4}")
return text
def format_settings(parser, args):
show = scrub_sensitive_info(args, parser.format_values())
# clean up the headings for consistency w/ new lines
heading_env = "Environment Variables:"
heading_defaults = "Defaults:"
if heading_env in show:
show = show.replace(heading_env, "\n" + heading_env)
show = show.replace(heading_defaults, "\n" + heading_defaults)
show += "\n"
show += "Option settings:\n"
for arg, val in sorted(vars(args).items()):
if val:
val = scrub_sensitive_info(args, str(val))
show += f" - {arg}: {val}\n" # noqa: E221
return show
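Since `scrub_sensitive_info` only reads two attributes from `args`, it can be exercised with a simple namespace. A self-contained copy of the function for illustration — the key value below is fabricated:

```python
from types import SimpleNamespace

def scrub_sensitive_info(args, text):
    # Replace each configured API key with its last 4 characters,
    # as in aider/format_settings.py.
    if text and args.openai_api_key:
        last_4 = args.openai_api_key[-4:]
        text = text.replace(args.openai_api_key, f"...{last_4}")
    if text and args.anthropic_api_key:
        last_4 = args.anthropic_api_key[-4:]
        text = text.replace(args.anthropic_api_key, f"...{last_4}")
    return text

# SimpleNamespace mimics the parsed-args object; the key is made up.
args = SimpleNamespace(openai_api_key="sk-test1234abcd", anthropic_api_key=None)
scrubbed = scrub_sensitive_info(args, "openai_api_key=sk-test1234abcd")
```

Keeping the last four characters lets users recognize which key leaked into the settings dump without exposing the full secret.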


@ -26,6 +26,10 @@ class CaptureIO(InputOutput):
self.lines.append(msg)
super().tool_error(msg)
def tool_warning(self, msg):
self.lines.append(msg)
super().tool_warning(msg)
def get_captured_lines(self):
lines = self.lines
self.lines = []
@ -156,7 +160,7 @@ class GUI:
st.warning(
"This browser version of aider is experimental. Please share feedback in [GitHub"
" issues](https://github.com/paul-gauthier/aider/issues)." " issues](https://github.com/Aider-AI/aider/issues)."
)
def do_settings_tab(self):
@ -524,7 +528,7 @@ def gui_main():
page_icon=urls.favicon,
menu_items={
"Get Help": urls.website,
"Report a bug": "https://github.com/paul-gauthier/aider/issues", "Report a bug": "https://github.com/Aider-AI/aider/issues",
"About": "# Aider\nAI pair programming in your browser.",
},
)


@ -1,6 +1,8 @@
#!/usr/bin/env python
import json
import os
import shutil
import warnings
from pathlib import Path
@ -38,24 +40,45 @@ def get_package_files():
def fname_to_url(filepath):
website = "website/" website = "website"
index = "/index.md" index = "index.md"
md = ".md"
docid = "" # Convert backslashes to forward slashes for consistency
if filepath.startswith("website/_includes/"): filepath = filepath.replace("\\", "/")
pass
elif filepath.startswith(website):
docid = filepath[len(website) :]
if filepath.endswith(index): # Convert to Path object for easier manipulation
filepath = filepath[: -len(index)] + "/" path = Path(filepath)
elif filepath.endswith(md):
filepath = filepath[: -len(md)] + ".html"
docid = "https://aider.chat/" + filepath # Split the path into parts
parts = path.parts
return docid # Find the 'website' part in the path
try:
website_index = [p.lower() for p in parts].index(website.lower())
except ValueError:
return "" # 'website' not found in the path
# Extract the part of the path starting from 'website'
relevant_parts = parts[website_index + 1 :]
# Handle _includes directory
if relevant_parts and relevant_parts[0].lower() == "_includes":
return ""
# Join the remaining parts
url_path = "/".join(relevant_parts)
# Handle index.md and other .md files
if url_path.lower().endswith(index.lower()):
url_path = url_path[: -len(index)]
elif url_path.lower().endswith(md.lower()):
url_path = url_path[: -len(md)] + ".html"
# Strip any leading/trailing slashes from the URL path
url_path = url_path.strip("/")
return f"https://aider.chat/{url_path}"
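The rewritten `fname_to_url` normalizes Windows separators, anchors on the `website` path component, and skips `_includes`. A condensed, self-contained copy of that logic pins down the mapping (a sketch of the function above, not a verbatim import from aider/help.py):

```python
from pathlib import PurePosixPath

def fname_to_url(filepath):
    """Map a docs source path under website/ to its published
    https://aider.chat/ URL; _includes and non-website paths map to ""."""
    # Normalize backslashes so Windows paths split the same way.
    parts = PurePosixPath(filepath.replace("\\", "/")).parts
    try:
        idx = [p.lower() for p in parts].index("website")
    except ValueError:
        return ""  # no website/ component in the path
    rel = parts[idx + 1:]
    if rel and rel[0].lower() == "_includes":
        return ""
    url = "/".join(rel)
    if url.lower().endswith("index.md"):
        url = url[: -len("index.md")]
    elif url.lower().endswith(".md"):
        url = url[: -len(".md")] + ".html"
    return f"https://aider.chat/{url.strip('/')}"

url = fname_to_url("website/docs/usage.md")
```

Index pages collapse to their directory URL, while other Markdown sources keep their name with an `.html` extension.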
def get_index():
@ -69,12 +92,17 @@ def get_index():
dname = Path.home() / ".aider" / "caches" / ("help." + __version__)
if dname.exists(): index = None
storage_context = StorageContext.from_defaults( try:
persist_dir=dname, if dname.exists():
) storage_context = StorageContext.from_defaults(
index = load_index_from_storage(storage_context) persist_dir=dname,
else: )
index = load_index_from_storage(storage_context)
except (OSError, json.JSONDecodeError):
shutil.rmtree(dname)
if index is None:
parser = MarkdownNodeParser()
nodes = []


@ -7,4 +7,5 @@ exclude_website_pats = [
"docs/unified-diffs.md",
"docs/leaderboards/index.md",
"assets/**",
"**/.DS_Store",
]


@ -108,7 +108,9 @@ class ChatSummary:
for model in self.models:
try:
summary = simple_send_with_retries(model.name, summarize_messages) summary = simple_send_with_retries(
model.name, summarize_messages, extra_params=model.extra_params
)
if summary is not None:
summary = prompts.summary_prefix + summary
return [dict(role="user", content=summary)]


@ -1,29 +1,45 @@
import base64
import os
import time
import webbrowser
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from io import StringIO
from pathlib import Path
from prompt_toolkit import prompt from prompt_toolkit.completion import Completer, Completion, ThreadedCompleter
from prompt_toolkit.completion import Completer, Completion from prompt_toolkit.cursor_shapes import ModalCursorShapeConfig
from prompt_toolkit.enums import EditingMode
from prompt_toolkit.history import FileHistory
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.lexers import PygmentsLexer
from prompt_toolkit.shortcuts import CompleteStyle, PromptSession
from prompt_toolkit.styles import Style
from prompt_toolkit.validation import Validator
from pygments.lexers import MarkdownLexer, guess_lexer_for_filename
from pygments.token import Token
from pygments.util import ClassNotFound from rich.columns import Columns
from rich.console import Console
from rich.markdown import Markdown
from rich.style import Style as RichStyle
from rich.text import Text
from aider.mdstream import MarkdownStream
from .dump import dump # noqa: F401
from .utils import is_image_file
@dataclass
class ConfirmGroup:
preference: str = None
show_group: bool = True
def __init__(self, items=None):
if items is not None:
self.show_group = len(items) > 1
class AutoCompleter(Completer):
def __init__(
self, root, rel_fnames, addable_rel_fnames, commands, encoding, abs_read_only_fnames=None
@ -57,7 +73,15 @@ class AutoCompleter(Completer):
if abs_read_only_fnames:
all_fnames.extend(abs_read_only_fnames)
for fname in all_fnames: self.all_fnames = all_fnames
self.tokenized = False
def tokenize(self):
if self.tokenized:
return
self.tokenized = True
for fname in self.all_fnames:
try:
with open(fname, "r", encoding=self.encoding) as f:
content = f.read()
@ -65,27 +89,37 @@ class AutoCompleter(Completer):
continue
try:
lexer = guess_lexer_for_filename(fname, content)
except ClassNotFound: except Exception: # On Windows, bad ref to time.clock which is deprecated
continue
tokens = list(lexer.get_tokens(content))
self.words.update(token[1] for token in tokens if token[0] in Token.Name)
def get_command_completions(self, text, words): tokens = list(lexer.get_tokens(content))
candidates = [] self.words.update(
(token[1], f"`{token[1]}`") for token in tokens if token[0] in Token.Name
)
def get_command_completions(self, document, complete_event, text, words):
if len(words) == 1 and not text[-1].isspace():
partial = words[0].lower()
candidates = [cmd for cmd in self.command_names if cmd.startswith(partial)]
return candidates for candidate in sorted(candidates):
yield Completion(candidate, start_position=-len(words[-1]))
return
if len(words) <= 1: if len(words) <= 1 or text[-1].isspace():
return [] return
if text[-1].isspace():
return []
cmd = words[0]
partial = words[-1].lower()
if cmd not in self.command_names: matches, _, _ = self.commands.matching_commands(cmd)
if len(matches) == 1:
cmd = matches[0]
elif cmd not in matches:
return
raw_completer = self.commands.get_raw_completions(cmd)
if raw_completer:
yield from raw_completer(document, complete_event)
return return
if cmd not in self.command_completions: if cmd not in self.command_completions:
@ -98,41 +132,42 @@ class AutoCompleter(Completer):
return return
candidates = [word for word in candidates if partial in word.lower()] candidates = [word for word in candidates if partial in word.lower()]
return candidates for candidate in sorted(candidates):
yield Completion(candidate, start_position=-len(words[-1]))
    def get_completions(self, document, complete_event):
        self.tokenize()

        text = document.text_before_cursor
        words = text.split()
        if not words:
            return

        if text and text[-1].isspace():
            # don't keep completing after a space
            return

        if text[0] == "/":
            yield from self.get_command_completions(document, complete_event, text, words)
            return

        candidates = self.words
        candidates.update(set(self.fname_to_rel_fnames))
        candidates = [word if type(word) is tuple else (word, word) for word in candidates]

        last_word = words[-1]
        completions = []
        for word_match, word_insert in candidates:
            if word_match.lower().startswith(last_word.lower()):
                completions.append((word_insert, -len(last_word), word_match))

                rel_fnames = self.fname_to_rel_fnames.get(word_match, [])
                if rel_fnames:
                    for rel_fname in rel_fnames:
                        completions.append((rel_fname, -len(last_word), rel_fname))

        for ins, pos, match in sorted(completions):
            yield Completion(ins, start_position=pos, display=match)
class InputOutput:

@@ -142,7 +177,7 @@ class InputOutput:
    def __init__(
        self,
        pretty=True,
        yes=None,
        input_history_file=None,
        chat_history_file=None,
        input=None,

@@ -150,11 +185,21 @@ class InputOutput:
        user_input_color="blue",
        tool_output_color=None,
        tool_error_color="red",
        tool_warning_color="#FFA500",
        assistant_output_color="blue",
        completion_menu_color=None,
        completion_menu_bg_color=None,
        completion_menu_current_color=None,
        completion_menu_current_bg_color=None,
        code_theme="default",
        encoding="utf-8",
        dry_run=False,
        llm_history_file=None,
        editingmode=EditingMode.EMACS,
        fancy_input=True,
    ):
        self.placeholder = None
        self.never_prompts = set()
        self.editingmode = editingmode
        no_color = os.environ.get("NO_COLOR")
        if no_color is not None and no_color != "":
@@ -163,6 +208,14 @@ class InputOutput:
        self.user_input_color = user_input_color if pretty else None
        self.tool_output_color = tool_output_color if pretty else None
        self.tool_error_color = tool_error_color if pretty else None
        self.tool_warning_color = tool_warning_color if pretty else None
        self.assistant_output_color = assistant_output_color
        self.completion_menu_color = completion_menu_color if pretty else None
        self.completion_menu_bg_color = completion_menu_bg_color if pretty else None
        self.completion_menu_current_color = completion_menu_current_color if pretty else None
        self.completion_menu_current_bg_color = completion_menu_current_bg_color if pretty else None
        self.code_theme = code_theme

        self.input = input
        self.output = output
@@ -183,19 +236,74 @@ class InputOutput:
        self.encoding = encoding
        self.dry_run = dry_run

        current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        self.append_chat_history(f"\n# aider chat started at {current_time}\n\n")

        self.prompt_session = None
        if fancy_input:
            # Initialize PromptSession
            session_kwargs = {
                "input": self.input,
                "output": self.output,
                "lexer": PygmentsLexer(MarkdownLexer),
                "editing_mode": self.editingmode,
            }
            if self.editingmode == EditingMode.VI:
                session_kwargs["cursor"] = ModalCursorShapeConfig()
            if self.input_history_file is not None:
                session_kwargs["history"] = FileHistory(self.input_history_file)
            try:
                self.prompt_session = PromptSession(**session_kwargs)
                self.console = Console()  # pretty console
            except Exception as err:
                self.console = Console(force_terminal=False, no_color=True)
                self.tool_error(f"Can't initialize prompt toolkit: {err}")  # non-pretty
        else:
            self.console = Console(force_terminal=False, no_color=True)  # non-pretty

    def _get_style(self):
        style_dict = {}
        if not self.pretty:
            return Style.from_dict(style_dict)

        if self.user_input_color:
            style_dict.setdefault("", self.user_input_color)
            style_dict.update(
                {
                    "pygments.literal.string": f"bold italic {self.user_input_color}",
                }
            )

        # Conditionally add 'completion-menu' style
        completion_menu_style = []
        if self.completion_menu_bg_color:
            completion_menu_style.append(f"bg:{self.completion_menu_bg_color}")
        if self.completion_menu_color:
            completion_menu_style.append(self.completion_menu_color)
        if completion_menu_style:
            style_dict["completion-menu"] = " ".join(completion_menu_style)

        # Conditionally add 'completion-menu.completion.current' style
        completion_menu_current_style = []
        if self.completion_menu_current_bg_color:
            completion_menu_current_style.append(f"bg:{self.completion_menu_current_bg_color}")
        if self.completion_menu_current_color:
            completion_menu_current_style.append(self.completion_menu_current_color)
        if completion_menu_current_style:
            style_dict["completion-menu.completion.current"] = " ".join(
                completion_menu_current_style
            )

        return Style.from_dict(style_dict)
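`_get_style` only emits a `completion-menu` rule when at least one of the menu colors is configured, so unset options add no styling at all. A minimal sketch of that conditional assembly using a plain dict (the mapping shape is what `prompt_toolkit`'s `Style.from_dict` accepts; the helper name is illustrative):

```python
def build_style_dict(menu_color=None, menu_bg=None):
    """Assemble a prompt_toolkit-style mapping, adding a 'completion-menu'
    rule only when at least one color is configured."""
    style_dict = {}
    parts = []
    if menu_bg:
        parts.append(f"bg:{menu_bg}")  # background comes first, as above
    if menu_color:
        parts.append(menu_color)
    if parts:
        style_dict["completion-menu"] = " ".join(parts)
    return style_dict
```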
    def read_image(self, filename):
        try:
            with open(str(filename), "rb") as image_file:
                encoded_string = base64.b64encode(image_file.read())
                return encoded_string.decode("utf-8")
        except FileNotFoundError:
            self.tool_error(f"{filename}: file not found error")
            return
        except OSError as err:
            self.tool_error(f"{filename}: unable to read: {err}")
            return

@@ -213,6 +321,9 @@ class InputOutput:
        try:
            with open(str(filename), "r", encoding=self.encoding) as f:
                return f.read()
        except FileNotFoundError:
            self.tool_error(f"{filename}: file not found error")
            return
        except OSError as err:
            self.tool_error(f"{filename}: unable to read: {err}")
            return
@@ -224,11 +335,43 @@ class InputOutput:
            self.tool_error("Use --encoding to set the unicode encoding.")
            return

    def write_text(self, filename, content, max_retries=5, initial_delay=0.1):
        """
        Writes content to a file, retrying with progressive backoff if the file is locked.

        :param filename: Path to the file to write.
        :param content: Content to write to the file.
        :param max_retries: Maximum number of retries if a file lock is encountered.
        :param initial_delay: Initial delay (in seconds) before the first retry.
        """
        if self.dry_run:
            return

        delay = initial_delay
        for attempt in range(max_retries):
            try:
                with open(str(filename), "w", encoding=self.encoding) as f:
                    f.write(content)
                return  # Successfully wrote the file
            except PermissionError as err:
                if attempt < max_retries - 1:
                    time.sleep(delay)
                    delay *= 2  # Exponential backoff
                else:
                    self.tool_error(
                        f"Unable to write file {filename} after {max_retries} attempts: {err}"
                    )
                    raise
            except OSError as err:
                self.tool_error(f"Unable to write file {filename}: {err}")
                raise

    def rule(self):
        if self.pretty:
            style = dict(style=self.user_input_color) if self.user_input_color else dict()
            self.console.rule(**style)
        else:
            print()
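The `write_text` retry loop above doubles its delay after each `PermissionError` until the write succeeds or the attempts run out. A standalone sketch of that progressive-backoff pattern (the `write_with_retry` helper is illustrative, with the sleep function injectable so the timing can be observed):

```python
import time

def write_with_retry(write_fn, max_retries=5, initial_delay=0.1, sleep=time.sleep):
    """Call write_fn(), retrying on PermissionError with exponential backoff."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return write_fn()
        except PermissionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            sleep(delay)
            delay *= 2  # double the wait before the next attempt
```

A locked file that frees up after two attempts succeeds on the third, having waited 0.1s then 0.2s.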
    def get_input(
        self,

@@ -239,16 +382,15 @@ class InputOutput:
        abs_read_only_fnames=None,
        edit_format=None,
    ):
        self.rule()

        rel_fnames = list(rel_fnames)
        show = ""
        if rel_fnames:
            rel_read_only_fnames = [
                get_rel_fname(fname, root) for fname in (abs_read_only_fnames or [])
            ]
            show = self.format_files_for_input(rel_fnames, rel_read_only_fnames)
        if edit_format:
            show += edit_format
        show += "> "
@@ -256,62 +398,87 @@ class InputOutput:
        inp = ""
        multiline_input = False

        style = self._get_style()

        completer_instance = ThreadedCompleter(
            AutoCompleter(
                root,
                rel_fnames,
                addable_rel_fnames,
                commands,
                self.encoding,
                abs_read_only_fnames=abs_read_only_fnames,
            )
        )

        kb = KeyBindings()

        @kb.add("c-space")
        def _(event):
            "Ignore Ctrl when pressing space bar"
            event.current_buffer.insert_text(" ")

        @kb.add("escape", "c-m", eager=True)
        def _(event):
            event.current_buffer.insert_text("\n")

        while True:
            if multiline_input:
                show = ". "

            try:
                if self.prompt_session:
                    # Use placeholder if set, then clear it
                    default = self.placeholder or ""
                    self.placeholder = None

                    line = self.prompt_session.prompt(
                        show,
                        default=default,
                        completer=completer_instance,
                        reserve_space_for_menu=4,
                        complete_style=CompleteStyle.MULTI_COLUMN,
                        style=style,
                        key_bindings=kb,
                    )
                else:
                    line = input(show)
            except UnicodeEncodeError as err:
                self.tool_error(str(err))
                return ""

            if line.strip("\r\n") and not multiline_input:
                stripped = line.strip("\r\n")
                if stripped == "{":
                    multiline_input = True
                    multiline_tag = None
                    inp += ""
                elif stripped[0] == "{":
                    # Extract tag if it exists (only alphanumeric chars)
                    tag = "".join(c for c in stripped[1:] if c.isalnum())
                    if stripped == "{" + tag:
                        multiline_input = True
                        multiline_tag = tag
                        inp += ""
                    else:
                        inp = line
                        break
                else:
                    inp = line
                    break
                continue
            elif multiline_input and line.strip():
                if multiline_tag:
                    # Check if line is exactly "tag}"
                    if line.strip("\r\n") == f"{multiline_tag}}}":
                        break
                    else:
                        inp += line + "\n"
                # Check if line is exactly "}"
                elif line.strip("\r\n") == "}":
                    break
                else:
                    inp += line + "\n"
            elif multiline_input:
                inp += line + "\n"
            else:
@@ -325,10 +492,13 @@ class InputOutput:
    def add_to_input_history(self, inp):
        if not self.input_history_file:
            return
        try:
            FileHistory(self.input_history_file).append_string(inp)
            # Also add to the in-memory history if it exists
            if self.prompt_session and self.prompt_session.history:
                self.prompt_session.history.append_string(inp)
        except OSError as err:
            self.tool_warning(f"Unable to write to input history file: {err}")
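The `get_input` loop above treats a line of `{tag` as opening a multiline message that only `tag}` closes, with a bare `{` closing on `}`. A condensed sketch of the tag extraction and close detection (the helper names are illustrative; the logic mirrors the loop):

```python
def open_tag(stripped):
    """Return (is_multiline, tag) for a line that may open a multiline block."""
    if stripped == "{":
        return True, None
    if stripped.startswith("{"):
        # Extract tag if it exists (only alphanumeric chars)
        tag = "".join(c for c in stripped[1:] if c.isalnum())
        if stripped == "{" + tag:
            return True, tag
    return False, None

def is_close(stripped, tag):
    """A block opened with '{tag' closes on 'tag}'; a bare '{' closes on '}'."""
    return stripped == (f"{tag}}}" if tag else "}")
```

A line like `{not a tag` does not open a block, because its alphanumeric tag (`notatag`) does not reconstruct the original line.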
    def get_input_history(self):
        if not self.input_history_file:

@@ -345,10 +515,17 @@ class InputOutput:
            log_file.write(f"{role.upper()} {timestamp}\n")
            log_file.write(content + "\n")

    def display_user_input(self, inp):
        if self.pretty and self.user_input_color:
            style = dict(style=self.user_input_color)
        else:
            style = dict()

        self.console.print(Text(inp), **style)

    def user_input(self, inp, log_only=True):
        if not log_only:
            self.display_user_input(inp)

        prefix = "####"
        if inp:
@@ -368,15 +545,49 @@ class InputOutput:
        hist = "\n" + content.strip() + "\n\n"
        self.append_chat_history(hist)

    def offer_url(self, url, prompt="Open URL for more info?", allow_never=True):
        """Offer to open a URL in the browser, returns True if opened."""
        if url in self.never_prompts:
            return False
        if self.confirm_ask(prompt, subject=url, allow_never=allow_never):
            webbrowser.open(url)
            return True
        return False

    def confirm_ask(
        self,
        question,
        default="y",
        subject=None,
        explicit_yes_required=False,
        group=None,
        allow_never=False,
    ):
        self.num_user_asks += 1

        question_id = (question, subject)

        if question_id in self.never_prompts:
            return False

        if group and not group.show_group:
            group = None
        if group:
            allow_never = True

        valid_responses = ["yes", "no"]
        options = " (Y)es/(N)o"
        if group:
            if not explicit_yes_required:
                options += "/(A)ll"
                valid_responses.append("all")
            options += "/(S)kip all"
            valid_responses.append("skip")
        if allow_never:
            options += "/(D)on't ask again"
            valid_responses.append("don't")

        question += options + " [Yes]: "

        if subject:
            self.tool_output()

@@ -389,37 +600,64 @@ class InputOutput:
        else:
            self.tool_output(subject, bold=True)

        style = self._get_style()

        def is_valid_response(text):
            if not text:
                return True
            return text.lower() in valid_responses

        if self.yes is True:
            res = "n" if explicit_yes_required else "y"
        elif self.yes is False:
            res = "n"
        elif group and group.preference:
            res = group.preference
            self.user_input(f"{question}{res}", log_only=False)
        else:
            while True:
                if self.prompt_session:
                    res = self.prompt_session.prompt(
                        question,
                        style=style,
                    )
                else:
                    res = input(question)

                if not res:
                    res = "y"  # Default to Yes if no input
                    break
                res = res.lower()
                good = any(valid_response.startswith(res) for valid_response in valid_responses)
                if good:
                    break

                error_message = f"Please answer with one of: {', '.join(valid_responses)}"
                self.tool_error(error_message)

        res = res.lower()[0]

        if res == "d" and allow_never:
            self.never_prompts.add(question_id)
            hist = f"{question.strip()} {res}"
            self.append_chat_history(hist, linebreak=True, blockquote=True)
            return False

        if explicit_yes_required:
            is_yes = res == "y"
        else:
            is_yes = res in ("y", "a")

        is_all = res == "a" and group is not None and not explicit_yes_required
        is_skip = res == "s" and group is not None

        if group:
            if is_all and not explicit_yes_required:
                group.preference = "all"
            elif is_skip:
                group.preference = "skip"

        hist = f"{question.strip()} {res}"
        self.append_chat_history(hist, linebreak=True, blockquote=True)

        return is_yes
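`confirm_ask` accepts any prefix of a valid response, so "y", "ye", and "yes" all count as yes, and a single "s" matches "skip". A sketch of that prefix matching (the `match_response` helper is illustrative):

```python
def match_response(res, valid_responses):
    """Accept any prefix of a valid response, mirroring the confirm_ask
    loop: 'y' matches 'yes', 's' matches 'skip', 'd' matches "don't"."""
    res = res.lower()
    return any(valid.startswith(res) for valid in valid_responses)

valid = ["yes", "no", "all", "skip", "don't"]
```

Note that the empty string would match everything, which is why the loop above handles an empty reply separately (defaulting it to yes) before this check runs.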
@@ -431,17 +669,17 @@ class InputOutput:
        self.tool_output()
        self.tool_output(subject, bold=True)

        style = self._get_style()

        if self.yes is True:
            res = "yes"
        elif self.yes is False:
            res = "no"
        else:
            if self.prompt_session:
                res = self.prompt_session.prompt(question + " ", default=default, style=style)
            else:
                res = input(question + " ")

        hist = f"{question.strip()} {res.strip()}"
        self.append_chat_history(hist, linebreak=True, blockquote=True)

@@ -450,36 +688,72 @@ class InputOutput:
        return res
    def _tool_message(self, message="", strip=True, color=None):
        if message.strip():
            if "\n" in message:
                for line in message.splitlines():
                    self.append_chat_history(line, linebreak=True, blockquote=True, strip=strip)
            else:
                hist = message.strip() if strip else message
                self.append_chat_history(hist, linebreak=True, blockquote=True)

        message = Text(message)
        style = dict(style=color) if self.pretty and color else dict()
        self.console.print(message, **style)

    def tool_error(self, message="", strip=True):
        self.num_error_outputs += 1
        self._tool_message(message, strip, self.tool_error_color)

    def tool_warning(self, message="", strip=True):
        self._tool_message(message, strip, self.tool_warning_color)

    def tool_output(self, *messages, log_only=False, bold=False):
        if messages:
            hist = " ".join(messages)
            hist = f"{hist.strip()}"
            self.append_chat_history(hist, linebreak=True, blockquote=True)

        if log_only:
            return

        messages = list(map(Text, messages))
        style = dict()
        if self.pretty:
            if self.tool_output_color:
                style["color"] = self.tool_output_color
            style["reverse"] = bold

        style = RichStyle(**style)
        self.console.print(*messages, style=style)

    def get_assistant_mdstream(self):
        mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme)
        mdStream = MarkdownStream(mdargs=mdargs)
        return mdStream

    def assistant_output(self, message, pretty=None):
        show_resp = message

        # Coder will force pretty off if fence is not triple-backticks
        if pretty is None:
            pretty = self.pretty

        if pretty:
            show_resp = Markdown(
                message, style=self.assistant_output_color, code_theme=self.code_theme
            )
        else:
            show_resp = Text(message or "<no response>")

        self.console.print(show_resp)

    def set_placeholder(self, placeholder):
        """Set a one-time placeholder text for the next input prompt."""
        self.placeholder = placeholder

    def print(self, message=""):
        print(message)
    def append_chat_history(self, text, linebreak=False, blockquote=False, strip=True):
        if blockquote:

@@ -493,5 +767,58 @@ class InputOutput:
        if not text.endswith("\n"):
            text += "\n"
        if self.chat_history_file is not None:
            try:
                with self.chat_history_file.open("a", encoding=self.encoding, errors="ignore") as f:
                    f.write(text)
            except (PermissionError, OSError) as err:
                print(f"Warning: Unable to write to chat history file {self.chat_history_file}.")
                print(err)
                self.chat_history_file = None  # Disable further attempts to write

    def format_files_for_input(self, rel_fnames, rel_read_only_fnames):
        if not self.pretty:
            read_only_files = []
            for full_path in sorted(rel_read_only_fnames or []):
                read_only_files.append(f"{full_path} (read only)")

            editable_files = []
            for full_path in sorted(rel_fnames):
                if full_path in (rel_read_only_fnames or []):
                    continue
                editable_files.append(f"{full_path}")

            return "\n".join(read_only_files + editable_files) + "\n"

        output = StringIO()
        console = Console(file=output, force_terminal=False)

        read_only_files = sorted(rel_read_only_fnames or [])
        editable_files = [f for f in sorted(rel_fnames) if f not in read_only_files]

        read_only_lines = []
        if read_only_files:
            files_with_label = ["Readonly:"] + read_only_files
            read_only_output = StringIO()
            Console(file=read_only_output, force_terminal=False).print(Columns(files_with_label))
            read_only_lines = read_only_output.getvalue().splitlines()
            console.print(Columns(files_with_label))

        if editable_files:
            files_with_label = editable_files
            if read_only_files:
                files_with_label = ["Editable:"] + editable_files
            editable_output = StringIO()
            Console(file=editable_output, force_terminal=False).print(Columns(files_with_label))
            editable_lines = editable_output.getvalue().splitlines()

            if len(read_only_lines) > 1 or len(editable_lines) > 1:
                console.print()
            console.print(Columns(files_with_label))

        return output.getvalue()


def get_rel_fname(fname, root):
    try:
        return os.path.relpath(fname, root)
    except ValueError:
        return fname
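`os.path.relpath` raises `ValueError` on Windows when the two paths are on different drives, which is why `get_rel_fname` falls back to the original name. A sketch exercising the fallback (the relpath callable is injectable here so the cross-drive case can be simulated on any platform; paths are illustrative):

```python
import os

def safe_rel_fname(fname, root, relpath=os.path.relpath):
    """Relative path when possible; the original name when relpath refuses,
    e.g. Windows paths on different drives raise ValueError."""
    try:
        return relpath(fname, root)
    except ValueError:
        return fname

def cross_drive(fname, root):
    # Simulates os.path.relpath on Windows with paths on different drives
    raise ValueError("path is on mount 'C:', start on mount 'D:'")
```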
View file
@@ -35,7 +35,10 @@ class Linter:
    def get_rel_fname(self, fname):
        if self.root:
            try:
                return os.path.relpath(fname, self.root)
            except ValueError:
                return fname
        else:
            return fname

@@ -43,14 +46,18 @@ class Linter:
        cmd += " " + rel_fname
        cmd = cmd.split()

        try:
            process = subprocess.Popen(
                cmd,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                encoding=self.encoding,
                errors="replace",
                cwd=self.root,
            )
        except OSError as err:
            print(f"Unable to execute lint command: {err}")
            return
        stdout, _ = process.communicate()
        errors = stdout
        if process.returncode == 0:

@@ -76,7 +83,11 @@ class Linter:
    def lint(self, fname, cmd=None):
        rel_fname = self.get_rel_fname(fname)
        try:
            code = Path(fname).read_text(encoding=self.encoding, errors="replace")
        except OSError as err:
            print(f"Unable to read {fname}: {err}")
            return

        if cmd:
            cmd = cmd.strip()

@@ -141,12 +152,12 @@ class Linter:
        try:
            result = subprocess.run(
                flake8_cmd,
                capture_output=True,
                text=True,
                check=False,
                encoding=self.encoding,
                errors="replace",
                cwd=self.root,
            )
            errors = result.stdout + result.stderr
        except Exception as e:

@@ -198,10 +209,24 @@ def basic_lint(fname, code):
    if not lang:
        return

    # Tree-sitter linter is not capable of working with typescript #1132
    if lang == "typescript":
        return

    try:
        parser = get_parser(lang)
    except Exception as err:
        print(f"Unable to load parser: {err}")
        return

    tree = parser.parse(bytes(code, "utf-8"))

    try:
        errors = traverse_tree(tree.root_node)
    except RecursionError:
        print(f"Unable to lint {fname} due to RecursionError")
        return

    if not errors:
        return
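`basic_lint` wraps its tree walk in a `RecursionError` guard because a deeply nested parse tree can exceed Python's recursion limit. A standalone sketch of the same guard on a toy nested-list tree (not tree-sitter; the helper names are illustrative):

```python
import sys

def traverse(node):
    """Recursively visit nested lists; deep nesting can blow the stack."""
    for child in node:
        if isinstance(child, list):
            traverse(child)

def lint_tree(tree):
    """Walk the tree, degrading gracefully instead of crashing on deep input."""
    try:
        traverse(tree)
        return "ok"
    except RecursionError:
        return "too deep"

# Build a list nested far beyond the recursion limit
deep = []
node = deep
for _ in range(sys.getrecursionlimit() * 2):
    node.append([])
    node = node[0]
```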
View file
@@ -9,6 +9,7 @@ AIDER_APP_NAME = "Aider"
os.environ["OR_SITE_URL"] = AIDER_SITE_URL
os.environ["OR_APP_NAME"] = AIDER_APP_NAME
os.environ["LITELLM_MODE"] = "PRODUCTION"

# `import litellm` takes 1.5 seconds, defer it!

@@ -31,6 +32,7 @@ class LazyLiteLLM:
        self._lazy_module.suppress_debug_info = True
        self._lazy_module.set_verbose = False
        self._lazy_module.drop_params = True
        self._lazy_module._logging._disable_debugging()


litellm = LazyLiteLLM()
View file
@@ -1,33 +1,60 @@
import configparser
import json
import os
import re
import sys
import threading
import traceback
import webbrowser
from dataclasses import fields
from pathlib import Path

import git
import importlib_resources
from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode

from aider import __version__, models, urls, utils
from aider.analytics import Analytics
from aider.args import get_parser
from aider.coders import Coder
from aider.coders.base_coder import UnknownEditFormat
from aider.commands import Commands, SwitchCoder
from aider.format_settings import format_settings, scrub_sensitive_info
from aider.history import ChatSummary
from aider.io import InputOutput
from aider.llm import litellm  # noqa: F401; properly init litellm on launch
from aider.models import ModelSettings
from aider.repo import ANY_GIT_ERROR, GitRepo
from aider.report import report_uncaught_exceptions
from aider.versioncheck import check_version, install_from_main_branch, install_upgrade

from .dump import dump  # noqa: F401


def check_config_files_for_yes(config_files):
    found = False
    for config_file in config_files:
        if Path(config_file).exists():
            try:
                with open(config_file, "r") as f:
                    for line in f:
                        if line.strip().startswith("yes:"):
                            print("Configuration error detected.")
                            print(f"The file {config_file} contains a line starting with 'yes:'")
                            print("Please replace 'yes:' with 'yes-always:' in this file.")
                            found = True
            except Exception:
                pass
    return found
def get_git_root(): def get_git_root():
"""Try and guess the git repo, since the conf.yml can be at the repo root""" """Try and guess the git repo, since the conf.yml can be at the repo root"""
try: try:
repo = git.Repo(search_parent_directories=True) repo = git.Repo(search_parent_directories=True)
return repo.working_tree_dir return repo.working_tree_dir
except git.InvalidGitRepositoryError: except (git.InvalidGitRepositoryError, FileNotFoundError):
return None return None
@ -36,7 +63,7 @@ def guessed_wrong_repo(io, git_root, fnames, git_dname):
try: try:
check_repo = Path(GitRepo(io, fnames, git_dname).root).resolve() check_repo = Path(GitRepo(io, fnames, git_dname).root).resolve()
except FileNotFoundError: except (OSError,) + ANY_GIT_ERROR:
return return
# we had no guess, rely on the "true" repo result # we had no guess, rely on the "true" repo result
@ -50,15 +77,40 @@ def guessed_wrong_repo(io, git_root, fnames, git_dname):
return str(check_repo) return str(check_repo)
def setup_git(git_root, io): def make_new_repo(git_root, io):
repo = None try:
if git_root:
repo = git.Repo(git_root)
elif io.confirm_ask("No git repo found, create one to track GPT's changes (recommended)?"):
git_root = str(Path.cwd().resolve())
repo = git.Repo.init(git_root) repo = git.Repo.init(git_root)
io.tool_output("Git repository created in the current working directory.")
check_gitignore(git_root, io, False) check_gitignore(git_root, io, False)
except ANY_GIT_ERROR as err: # issue #1233
io.tool_error(f"Unable to create git repo in {git_root}")
io.tool_output(str(err))
return
io.tool_output(f"Git repository created in {git_root}")
return repo
def setup_git(git_root, io):
try:
cwd = Path.cwd()
except OSError:
cwd = None
repo = None
if git_root:
try:
repo = git.Repo(git_root)
except ANY_GIT_ERROR:
pass
elif cwd == Path.home():
io.tool_warning("You should probably run aider in a directory, not your home dir.")
return
elif cwd and io.confirm_ask(
"No git repo found, create one to track aider's changes (recommended)?"
):
git_root = str(cwd.resolve())
repo = make_new_repo(git_root, io)
if not repo: if not repo:
return return
@ -72,7 +124,7 @@ def setup_git(git_root, io):
pass pass
try: try:
user_email = config.get_value("user", "email", None) user_email = config.get_value("user", "email", None)
except configparser.NoSectionError: except (configparser.NoSectionError, configparser.NoOptionError):
pass pass
if user_name and user_email: if user_name and user_email:
@@ -81,10 +133,10 @@ def setup_git(git_root, io):
    with repo.config_writer() as git_config:
        if not user_name:
            git_config.set_value("user", "name", "Your Name")
-            io.tool_error('Update git name with: git config user.name "Your Name"')
+            io.tool_warning('Update git name with: git config user.name "Your Name"')
        if not user_email:
            git_config.set_value("user", "email", "you@example.com")
-            io.tool_error('Update git email with: git config user.email "you@example.com"')
+            io.tool_warning('Update git email with: git config user.email "you@example.com"')

    return repo.working_tree_dir
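The hunk above broadens the `configparser` exceptions caught when reading the user's git identity: `get_value` raises `NoOptionError` when a `[user]` section exists but lacks the requested key, which the old code did not handle. A minimal sketch of the same defensive read using plain `configparser` (the helper name and inputs are illustrative, not part of the diff):

```python
import configparser

def read_git_identity(config_text):
    # Parse an INI-style git config and return (name, email),
    # tolerating both a missing [user] section and a missing option.
    config = configparser.ConfigParser()
    config.read_string(config_text)
    identity = []
    for key in ("name", "email"):
        try:
            identity.append(config.get("user", key))
        except (configparser.NoSectionError, configparser.NoOptionError):
            identity.append(None)
    return tuple(identity)
```

For example, a config with only a name set yields `("Alice", None)` instead of raising.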
@@ -95,60 +147,51 @@ def check_gitignore(git_root, io, ask=True):
    try:
        repo = git.Repo(git_root)
-        if repo.ignored(".aider"):
+        if repo.ignored(".aider") and repo.ignored(".env"):
            return
-    except git.exc.InvalidGitRepositoryError:
+    except ANY_GIT_ERROR:
        pass

-    pat = ".aider*"
+    patterns = [".aider*", ".env"]
+    patterns_to_add = []

    gitignore_file = Path(git_root) / ".gitignore"
    if gitignore_file.exists():
-        content = io.read_text(gitignore_file)
-        if content is None:
-            return
-        if pat in content.splitlines():
-            return
+        try:
+            content = io.read_text(gitignore_file)
+            if content is None:
+                return
+            existing_lines = content.splitlines()
+            for pat in patterns:
+                if pat not in existing_lines:
+                    patterns_to_add.append(pat)
+        except OSError as e:
+            io.tool_error(f"Error when trying to read {gitignore_file}: {e}")
+            return
    else:
        content = ""
+        patterns_to_add = patterns

-    if ask and not io.confirm_ask(f"Add {pat} to .gitignore (recommended)?"):
+    if not patterns_to_add:
+        return
+
+    if ask and not io.confirm_ask(f"Add {', '.join(patterns_to_add)} to .gitignore (recommended)?"):
        return

    if content and not content.endswith("\n"):
        content += "\n"
-    content += pat + "\n"
-    io.write_text(gitignore_file, content)
-    io.tool_output(f"Added {pat} to .gitignore")
+    content += "\n".join(patterns_to_add) + "\n"
+
+    try:
+        io.write_text(gitignore_file, content)
+        io.tool_output(f"Added {', '.join(patterns_to_add)} to .gitignore")
+    except OSError as e:
+        io.tool_error(f"Error when trying to write to {gitignore_file}: {e}")
+        io.tool_output(
+            "Try running with appropriate permissions or manually add these patterns to .gitignore:"
+        )
+        for pattern in patterns_to_add:
+            io.tool_output(f"  {pattern}")


def format_settings(parser, args):
    show = scrub_sensitive_info(args, parser.format_values())
    # clean up the headings for consistency w/ new lines
    heading_env = "Environment Variables:"
    heading_defaults = "Defaults:"
    if heading_env in show:
        show = show.replace(heading_env, "\n" + heading_env)
        show = show.replace(heading_defaults, "\n" + heading_defaults)
    show += "\n"
    show += "Option settings:\n"
    for arg, val in sorted(vars(args).items()):
        if val:
            val = scrub_sensitive_info(args, str(val))
            show += f"  - {arg}: {val}\n"  # noqa: E221
    return show


def scrub_sensitive_info(args, text):
    # Replace sensitive information with last 4 characters
    if text and args.openai_api_key:
        last_4 = args.openai_api_key[-4:]
        text = text.replace(args.openai_api_key, f"...{last_4}")
    if text and args.anthropic_api_key:
        last_4 = args.anthropic_api_key[-4:]
        text = text.replace(args.anthropic_api_key, f"...{last_4}")
    return text
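`scrub_sensitive_info` above masks each configured API key by keeping only its last four characters. The core masking rule can be sketched as a standalone helper (the function name is illustrative):

```python
def mask_secret(text, secret):
    # Replace every occurrence of `secret` in `text` with "...XXXX",
    # where XXXX are the secret's last 4 characters.
    if not text or not secret:
        return text
    return text.replace(secret, f"...{secret[-4:]}")
```

This keeps logs and `--verbose` output diffable while still letting a user recognize which key was in use.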
def check_streamlit_install(io):
@@ -178,7 +221,10 @@ def launch_gui(args):
        "--server.runOnSave=false",
    ]

-    if "-dev" in __version__:
+    # https://github.com/Aider-AI/aider/issues/2193
+    is_dev = "-dev" in str(__version__)
+    if is_dev:
        print("Watching for file changes.")
    else:
        st_args += [
@@ -218,24 +264,31 @@ def parse_lint_cmds(lint_cmds, io):
            res[lang] = cmd
        else:
            io.tool_error(f'Unable to parse --lint-cmd "{lint_cmd}"')
-            io.tool_error('The arg should be "language: cmd --args ..."')
-            io.tool_error('For example: --lint-cmd "python: flake8 --select=E9"')
+            io.tool_output('The arg should be "language: cmd --args ..."')
+            io.tool_output('For example: --lint-cmd "python: flake8 --select=E9"')
            err = True
    if err:
        return
    return res
-def generate_search_path_list(default_fname, git_root, command_line_file):
+def generate_search_path_list(default_file, git_root, command_line_file):
    files = []
-    default_file = Path(default_fname)
    files.append(Path.home() / default_file)  # homedir
    if git_root:
        files.append(Path(git_root) / default_file)  # git root
-    files.append(default_file.resolve())
+    files.append(default_file)
    if command_line_file:
        files.append(command_line_file)
-    files = [Path(fn).resolve() for fn in files]
+
+    resolved_files = []
+    for fn in files:
+        try:
+            resolved_files.append(Path(fn).resolve())
+        except OSError:
+            pass
+
+    files = resolved_files
    files.reverse()
    uniq = []
    for fn in files:
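The hunk above builds the candidate list in ascending priority (home dir, git root, CWD, command line), resolves each path defensively, then reverses and deduplicates so the highest-priority copy of each path survives. That reverse-dedupe idiom can be sketched on its own (assuming, as the visible code suggests, that later entries should win):

```python
def dedupe_keep_last(paths):
    # Keep one copy of each item; when duplicates exist, the LAST
    # occurrence wins: reverse, keep first-seen, restore the order.
    seen = set()
    uniq = []
    for p in reversed(paths):
        if p not in seen:
            seen.add(p)
            uniq.append(p)
    uniq.reverse()
    return uniq
```

So a `.env` found in both the home dir and the CWD is only loaded once, with the CWD copy taking precedence.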
@@ -275,7 +328,7 @@ def register_models(git_root, model_settings_fname, io, verbose=False):
    return None


-def load_dotenv_files(git_root, dotenv_fname):
+def load_dotenv_files(git_root, dotenv_fname, encoding="utf-8"):
    dotenv_files = generate_search_path_list(
        ".env",
        git_root,
@@ -283,14 +336,25 @@ def load_dotenv_files(git_root, dotenv_fname):
    )
    loaded = []
    for fname in dotenv_files:
-        if Path(fname).exists():
-            loaded.append(fname)
-            load_dotenv(fname, override=True)
+        try:
+            if Path(fname).exists():
+                load_dotenv(fname, override=True, encoding=encoding)
+                loaded.append(fname)
+        except OSError as e:
+            print(f"OSError loading {fname}: {e}")
+        except Exception as e:
+            print(f"Error loading {fname}: {e}")
    return loaded
def register_litellm_models(git_root, model_metadata_fname, io, verbose=False):
-    model_metatdata_files = generate_search_path_list(
+    model_metatdata_files = []
+
+    # Add the resource file path
+    resource_metadata = importlib_resources.files("aider.resources").joinpath("model-metadata.json")
+    model_metatdata_files.append(str(resource_metadata))
+
+    model_metatdata_files += generate_search_path_list(
        ".aider.model.metadata.json", git_root, model_metadata_fname
    )
@@ -305,7 +369,42 @@ def register_litellm_models(git_root, model_metadata_fname, io, verbose=False):
    return 1
+def sanity_check_repo(repo, io):
+    if not repo:
+        return True
+
+    if not repo.repo.working_tree_dir:
+        io.tool_error("The git repo does not seem to have a working tree?")
+        return False
+
+    bad_ver = False
+    try:
+        repo.get_tracked_files()
+        if not repo.git_repo_error:
+            return True
+        error_msg = str(repo.git_repo_error)
+    except ANY_GIT_ERROR as exc:
+        error_msg = str(exc)
+        bad_ver = "version in (1, 2)" in error_msg
+    except AssertionError as exc:
+        error_msg = str(exc)
+        bad_ver = True
+
+    if bad_ver:
+        io.tool_error("Aider only works with git repos with version number 1 or 2.")
+        io.tool_output("You may be able to convert your repo: git update-index --index-version=2")
+        io.tool_output("Or run aider --no-git to proceed without using git.")
+        io.offer_url(urls.git_index_version, "Open documentation url for more info?")
+        return False
+
+    io.tool_error("Unable to read git repository, it may be corrupt?")
+    io.tool_output(error_msg)
+    return False
def main(argv=None, input=None, output=None, force_git_root=None, return_coder=False):
+    report_uncaught_exceptions()
+
    if argv is None:
        argv = sys.argv[1:]
@@ -316,7 +415,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    conf_fname = Path(".aider.conf.yml")

-    default_config_files = [conf_fname.resolve()]  # CWD
+    default_config_files = []
+    try:
+        default_config_files += [conf_fname.resolve()]  # CWD
+    except OSError:
+        pass
+
    if git_root:
        git_conf = Path(git_root) / conf_fname  # git root
        if git_conf not in default_config_files:
@@ -325,7 +429,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    default_config_files = list(map(str, default_config_files))

    parser = get_parser(default_config_files, git_root)
-    args, unknown = parser.parse_known_args(argv)
+    try:
+        args, unknown = parser.parse_known_args(argv)
+    except AttributeError as e:
+        if all(word in str(e) for word in ["bool", "object", "has", "no", "attribute", "strip"]):
+            if check_config_files_for_yes(default_config_files):
+                return 1
+        raise e

    if args.verbose:
        print("Config files search order, if no --config:")
@@ -336,56 +446,109 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    default_config_files.reverse()

    parser = get_parser(default_config_files, git_root)
    args, unknown = parser.parse_known_args(argv)

    # Load the .env file specified in the arguments
-    loaded_dotenvs = load_dotenv_files(git_root, args.env_file)
+    loaded_dotenvs = load_dotenv_files(git_root, args.env_file, args.encoding)

    # Parse again to include any arguments that might have been defined in .env
    args = parser.parse_args(argv)

+    if args.analytics_disable:
+        analytics = Analytics(permanently_disable=True)
+        print("Analytics have been permanently disabled.")
+
    if not args.verify_ssl:
        import httpx

+        os.environ["SSL_VERIFY"] = ""
        litellm._load_litellm()
        litellm._lazy_module.client_session = httpx.Client(verify=False)
+        litellm._lazy_module.aclient_session = httpx.AsyncClient(verify=False)
    if args.dark_mode:
        args.user_input_color = "#32FF32"
        args.tool_error_color = "#FF3333"
+        args.tool_warning_color = "#FFFF00"
        args.assistant_output_color = "#00FFFF"
        args.code_theme = "monokai"

    if args.light_mode:
        args.user_input_color = "green"
        args.tool_error_color = "red"
+        args.tool_warning_color = "#FFA500"
        args.assistant_output_color = "blue"
        args.code_theme = "default"

-    if return_coder and args.yes is None:
-        args.yes = True
+    if return_coder and args.yes_always is None:
+        args.yes_always = True

    editing_mode = EditingMode.VI if args.vim else EditingMode.EMACS
-    io = InputOutput(
-        args.pretty,
-        args.yes,
-        args.input_history_file,
-        args.chat_history_file,
-        input=input,
-        output=output,
-        user_input_color=args.user_input_color,
-        tool_output_color=args.tool_output_color,
-        tool_error_color=args.tool_error_color,
-        dry_run=args.dry_run,
-        encoding=args.encoding,
-        llm_history_file=args.llm_history_file,
-        editingmode=editing_mode,
-    )
+    def get_io(pretty):
+        return InputOutput(
+            pretty,
+            args.yes_always,
+            args.input_history_file,
+            args.chat_history_file,
+            input=input,
+            output=output,
+            user_input_color=args.user_input_color,
+            tool_output_color=args.tool_output_color,
+            tool_warning_color=args.tool_warning_color,
+            tool_error_color=args.tool_error_color,
+            completion_menu_color=args.completion_menu_color,
+            completion_menu_bg_color=args.completion_menu_bg_color,
+            completion_menu_current_color=args.completion_menu_current_color,
+            completion_menu_current_bg_color=args.completion_menu_current_bg_color,
+            assistant_output_color=args.assistant_output_color,
+            code_theme=args.code_theme,
+            dry_run=args.dry_run,
+            encoding=args.encoding,
+            llm_history_file=args.llm_history_file,
+            editingmode=editing_mode,
+            fancy_input=args.fancy_input,
+        )
+
+    io = get_io(args.pretty)
+    try:
+        io.rule()
+    except UnicodeEncodeError as err:
+        if not io.pretty:
+            raise err
+        io = get_io(False)
+        io.tool_warning("Terminal does not support pretty output (UnicodeDecodeError)")
+    analytics = Analytics(logfile=args.analytics_log, permanently_disable=args.analytics_disable)
+    if args.analytics is not False:
+        if analytics.need_to_ask(args.analytics):
+            io.tool_output(
+                "Aider respects your privacy and never collects your code, chat messages, keys or"
+                " personal info."
+            )
+            io.tool_output(f"For more info: {urls.analytics}")
+            disable = not io.confirm_ask(
+                "Allow collection of anonymous analytics to help improve aider?"
+            )
+            analytics.asked_opt_in = True
+            if disable:
+                analytics.disable(permanently=True)
+                io.tool_output("Analytics have been permanently disabled.")
+            analytics.save_data()
+            io.tool_output()
+
+    # This is a no-op if the user has opted out
+    analytics.enable()
+    analytics.event("launched")
    if args.gui and not return_coder:
        if not check_streamlit_install(io):
            return
+        analytics.event("gui session")
        launch_gui(argv)
        return
@@ -395,7 +558,14 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    all_files = args.files + (args.file or [])
    fnames = [str(Path(fn).resolve()) for fn in all_files]
-    read_only_fnames = [str(Path(fn).resolve()) for fn in (args.read or [])]
+    read_only_fnames = []
+    for fn in args.read or []:
+        path = Path(fn).resolve()
+        if path.is_dir():
+            read_only_fnames.extend(str(f) for f in path.rglob("*") if f.is_file())
+        else:
+            read_only_fnames.append(str(path))

    if len(all_files) > 1:
        good = True
        for fname in all_files:
@@ -403,7 +573,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
            io.tool_error(f"{fname} is a directory, not provided alone.")
            good = False
    if not good:
-        io.tool_error(
+        io.tool_output(
            "Provide either a single directory of a git repo, or a list of one or more files."
        )
        return 1
@@ -430,11 +600,19 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
        update_available = check_version(io, just_check=True, verbose=args.verbose)
        return 0 if not update_available else 1

+    if args.install_main_branch:
+        success = install_from_main_branch(io)
+        return 0 if success else 1
+
+    if args.upgrade:
+        success = install_upgrade(io)
+        return 0 if success else 1
+
    if args.check_update:
        check_version(io, verbose=args.verbose)

-    if args.models:
-        models.print_matching_models(io, args.models)
+    if args.list_models:
+        models.print_matching_models(io, args.list_models)
        return 0

    if args.git:
@@ -450,6 +628,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
        cmd_line = scrub_sensitive_info(args, cmd_line)
        io.tool_output(cmd_line, log_only=True)

+    is_first_run = is_first_run_of_new_version(io, verbose=args.verbose)
+    check_and_load_imports(io, is_first_run, verbose=args.verbose)
+
    if args.anthropic_api_key:
        os.environ["ANTHROPIC_API_KEY"] = args.anthropic_api_key
@@ -467,19 +648,55 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    register_models(git_root, args.model_settings_file, io, verbose=args.verbose)
    register_litellm_models(git_root, args.model_metadata_file, io, verbose=args.verbose)

+    # Process any command line aliases
+    if args.alias:
+        for alias_def in args.alias:
+            # Split on first colon only
+            parts = alias_def.split(":", 1)
+            if len(parts) != 2:
+                io.tool_error(f"Invalid alias format: {alias_def}")
+                io.tool_output("Format should be: alias:model-name")
+                return 1
+            alias, model = parts
+            models.MODEL_ALIASES[alias.strip()] = model.strip()
+
    if not args.model:
        args.model = "gpt-4o-2024-08-06"
        if os.environ.get("ANTHROPIC_API_KEY"):
-            args.model = "claude-3-5-sonnet-20240620"
+            args.model = "claude-3-5-sonnet-20241022"

-    main_model = models.Model(args.model, weak_model=args.weak_model)
+    main_model = models.Model(
+        args.model,
+        weak_model=args.weak_model,
+        editor_model=args.editor_model,
+        editor_edit_format=args.editor_edit_format,
+    )

+    if args.verbose:
+        io.tool_output("Model metadata:")
+        io.tool_output(json.dumps(main_model.info, indent=4))
+
+        io.tool_output("Model settings:")
+        for attr in sorted(fields(ModelSettings), key=lambda x: x.name):
+            val = getattr(main_model, attr.name)
+            val = json.dumps(val, indent=4)
+            io.tool_output(f"{attr.name}: {val}")
+
    lint_cmds = parse_lint_cmds(args.lint_cmd, io)
    if lint_cmds is None:
        return 1

    if args.show_model_warnings:
-        models.sanity_check_models(io, main_model)
+        problem = models.sanity_check_models(io, main_model)
+        if problem:
+            analytics.event("model warning", main_model=main_model)
+            io.tool_output("You can skip this check with --no-show-model-warnings")
+
+            try:
+                io.offer_url(urls.model_warnings, "Open documentation url for more info?")
+                io.tool_output()
+            except KeyboardInterrupt:
+                return 1
    repo = None
    if args.git:
@@ -500,7 +717,19 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
        except FileNotFoundError:
            pass

-    commands = Commands(io, None, verify_ssl=args.verify_ssl)
+    if not args.skip_sanity_check_repo:
+        if not sanity_check_repo(repo, io):
+            return 1
+
+    commands = Commands(
+        io,
+        None,
+        verify_ssl=args.verify_ssl,
+        args=args,
+        parser=parser,
+        verbose=args.verbose,
+        editor=args.editor,
+    )
    summarizer = ChatSummary(
        [main_model.weak_model, main_model],
@@ -510,6 +739,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    if args.cache_prompts and args.map_refresh == "auto":
        args.map_refresh = "files"

+    if not main_model.streaming:
+        if args.stream:
+            io.tool_warning(
+                f"Warning: Streaming is not supported by {main_model.name}. Disabling streaming."
+            )
+        args.stream = False
+
    try:
        coder = Coder.create(
            main_model=main_model,
@@ -524,8 +760,6 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
            dry_run=args.dry_run,
            map_tokens=args.map_tokens,
            verbose=args.verbose,
-            assistant_output_color=args.assistant_output_color,
-            code_theme=args.code_theme,
            stream=args.stream,
            use_git=args.git,
            restore_chat_history=args.restore_chat_history,
@@ -535,10 +769,19 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
            test_cmd=args.test_cmd,
            commands=commands,
            summarizer=summarizer,
+            analytics=analytics,
            map_refresh=args.map_refresh,
            cache_prompts=args.cache_prompts,
            map_mul_no_files=args.map_multiplier_no_files,
+            num_cache_warming_pings=args.cache_keepalive_pings,
+            suggest_shell_commands=args.suggest_shell_commands,
+            chat_language=args.chat_language,
+            detect_urls=args.detect_urls,
        )
+    except UnknownEditFormat as err:
+        io.tool_error(str(err))
+        io.offer_url(urls.edit_formats, "Open documentation about edit formats?")
+        return 1
    except ValueError as err:
        io.tool_error(str(err))
        return 1
@@ -546,14 +789,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    if return_coder:
        return coder

-    io.tool_output()
    coder.show_announcements()

    if args.show_prompts:
        coder.cur_messages += [
            dict(role="user", content="Hello!"),
        ]
-        messages = coder.format_messages()
+        messages = coder.format_messages().all_messages()
        utils.show_messages(messages)
        return
@@ -591,20 +833,39 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
        coder.apply_updates()
        return

+    if args.apply_clipboard_edits:
+        args.edit_format = main_model.editor_edit_format
+        args.message = "/paste"
+
    if "VSCODE_GIT_IPC_HANDLE" in os.environ:
        args.pretty = False
        io.tool_output("VSCode terminal detected, pretty output has been disabled.")

    io.tool_output('Use /help <question> for help, run "aider --help" to see cmd line args')

+    if args.show_release_notes is True:
+        io.tool_output(f"Opening release notes: {urls.release_notes}")
+        io.tool_output()
+        webbrowser.open(urls.release_notes)
+    elif args.show_release_notes is None and is_first_run:
+        io.tool_output()
+        io.offer_url(
+            urls.release_notes,
+            "Would you like to see what's new in this version?",
+            allow_never=False,
+        )
+
    if git_root and Path.cwd().resolve() != Path(git_root).resolve():
-        io.tool_error(
+        io.tool_warning(
            "Note: in-chat filenames are always relative to the git working dir, not the current"
            " working dir."
        )
-        io.tool_error(f"Cur working dir: {Path.cwd()}")
-        io.tool_error(f"Git working dir: {git_root}")
+        io.tool_output(f"Cur working dir: {Path.cwd()}")
+        io.tool_output(f"Git working dir: {git_root}")

+    if args.load:
+        commands.cmd_load(args.load)
+
    if args.message:
        io.add_to_input_history(args.message)
@@ -631,9 +892,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
    if args.exit:
        return

-    thread = threading.Thread(target=load_slow_imports)
-    thread.daemon = True
-    thread.start()
+    analytics.event("cli session", main_model=main_model, edit_format=main_model.edit_format)

    while True:
        try:
-def load_slow_imports():
+def is_first_run_of_new_version(io, verbose=False):
+    """Check if this is the first run of a new version/executable combination"""
+    installs_file = Path.home() / ".aider" / "installs.json"
+    key = (__version__, sys.executable)
+
+    if verbose:
+        io.tool_output(
+            f"Checking imports for version {__version__} and executable {sys.executable}"
+        )
+        io.tool_output(f"Installs file: {installs_file}")
+    try:
+        if installs_file.exists():
+            with open(installs_file, "r") as f:
+                installs = json.load(f)
+            if verbose:
+                io.tool_output("Installs file exists and loaded")
+        else:
+            installs = {}
+            if verbose:
+                io.tool_output("Installs file does not exist, creating new dictionary")
+
+        is_first_run = str(key) not in installs
+
+        if is_first_run:
+            installs[str(key)] = True
+            installs_file.parent.mkdir(parents=True, exist_ok=True)
+            with open(installs_file, "w") as f:
+                json.dump(installs, f, indent=4)
+
+        return is_first_run
+
+    except Exception as e:
+        io.tool_warning(f"Error checking version: {e}")
+        if verbose:
+            io.tool_output(f"Full exception details: {traceback.format_exc()}")
+        return True  # Safer to assume it's a first run if we hit an error
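`is_first_run_of_new_version` above keys `installs.json` on the string form of a `(version, executable)` tuple, so upgrading aider or switching Python interpreters both count as a fresh install. The record-and-check pattern, sketched against an in-memory dict with the file I/O omitted:

```python
def check_first_run(installs, version, executable):
    # Return True on the first sighting of this (version, executable)
    # pair, recording it so that later calls return False.
    key = str((version, executable))
    if key in installs:
        return False
    installs[key] = True
    return True
```

Each distinct pair triggers the synchronous import check exactly once; subsequent runs take the fast background-thread path.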
+def check_and_load_imports(io, is_first_run, verbose=False):
+    try:
+        if is_first_run:
+            if verbose:
+                io.tool_output(
+                    "First run for this version and executable, loading imports synchronously"
+                )
+            try:
+                load_slow_imports(swallow=False)
+            except Exception as err:
+                io.tool_error(str(err))
+                io.tool_output("Error loading required imports. Did you install aider properly?")
+                io.offer_url(urls.install_properly, "Open documentation url for more info?")
+                sys.exit(1)
+
+            if verbose:
+                io.tool_output("Imports loaded and installs file updated")
+        else:
+            if verbose:
+                io.tool_output("Not first run, loading imports in background thread")
+            thread = threading.Thread(target=load_slow_imports)
+            thread.daemon = True
+            thread.start()
+    except Exception as e:
+        io.tool_warning(f"Error in loading imports: {e}")
+        if verbose:
+            io.tool_output(f"Full exception details: {traceback.format_exc()}")
-def load_slow_imports():
+def load_slow_imports(swallow=True):
    # These imports are deferred in various ways to
    # improve startup time.
-    # This func is called in a thread to load them in the background
-    # while we wait for the user to type their first message.
+    # This func is called either synchronously or in a thread
+    # depending on whether it's been run before for this version and executable.
    try:
        import httpx  # noqa: F401
        import litellm  # noqa: F401
        import networkx  # noqa: F401
        import numpy  # noqa: F401
-    except Exception:
-        pass
+    except Exception as e:
+        if not swallow:
+            raise e


if __name__ == "__main__":


@@ -5,14 +5,21 @@
# Conventional Commits text adapted from:
# https://www.conventionalcommits.org/en/v1.0.0/#summary

-commit_system = """You are an expert software engineer.
+commit_system = """You are an expert software engineer that generates concise, \
+one-line Git commit messages based on the provided diffs.
Review the provided context and diffs which are about to be committed to a git repo.
Review the diffs carefully.
-Generate a commit message for those changes.
-The commit message MUST use the imperative tense.
+Generate a one-line commit message for those changes.
The commit message should be structured as follows: <type>: <description>
Use these for <type>: fix, feat, build, chore, ci, docs, style, refactor, perf, test
-Reply with JUST the commit message, without quotes, comments, questions, etc!
+Ensure the commit message:
+- Starts with the appropriate prefix.
+- Is in the imperative mood (e.g., \"Add feature\" not \"Added feature\" or \"Adding feature\").
+- Does not exceed 72 characters.
+
+Reply only with the one-line commit message, without any additional text, explanations, \
+or line breaks.
"""
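The revised `commit_system` prompt above constrains replies to a single line of at most 72 characters that starts with a Conventional Commits type. Those constraints can be expressed as a small validator; this helper is illustrative and not part of the diff:

```python
COMMIT_TYPES = ("fix", "feat", "build", "chore", "ci", "docs", "style", "refactor", "perf", "test")

def is_valid_commit_message(message):
    # One line, at most 72 chars, shaped "<type>: <description>"
    # with a known Conventional Commits type.
    if "\n" in message or len(message) > 72:
        return False
    prefix, sep, description = message.partition(": ")
    return sep != "" and prefix in COMMIT_TYPES and bool(description)
```

A check like this could gate a generated message before it reaches `git commit`, retrying the model when the reply drifts from the requested shape.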
# COMMANDS


@@ -0,0 +1,91 @@
(class_definition
name: (identifier) @name.definition.class) @definition.class
(method_signature
(function_signature)) @definition.method
(type_alias
(type_identifier) @name.definition.type) @definition.type
(method_signature
(getter_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(setter_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(function_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(factory_constructor_signature
(identifier) @name.definition.method)) @definition.method
(method_signature
(constructor_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(operator_signature)) @definition.method
(method_signature) @definition.method
(mixin_declaration
(mixin)
(identifier) @name.definition.mixin) @definition.mixin
(extension_declaration
name: (identifier) @name.definition.extension) @definition.extension
(enum_declaration
name: (identifier) @name.definition.enum) @definition.enum
(function_signature
name: (identifier) @name.definition.function) @definition.function
(new_expression
(type_identifier) @name.reference.class) @reference.class
(initialized_variable_definition
name: (identifier)
value: (identifier) @name.reference.class
value: (selector
"!"?
(argument_part
(arguments
(argument)*))?)?) @reference.class
(assignment_expression
left: (assignable_expression
(identifier)
(unconditional_assignable_selector
"."
(identifier) @name.reference.call))) @reference.call
(assignment_expression
left: (assignable_expression
(identifier)
(conditional_assignable_selector
"?."
(identifier) @name.reference.call))) @reference.call
((identifier) @name
(selector
"!"?
(conditional_assignable_selector
"?." (identifier) @name.reference.call)?
(unconditional_assignable_selector
"."? (identifier) @name.reference.call)?
(argument_part
(arguments
(argument)*))?)*
(cascade_section
(cascade_selector
(identifier)) @name.reference.call
(argument_part
(arguments
(argument)*))?)?) @reference.call


@@ -10,6 +10,16 @@ from aider.sendchat import simple_send_with_retries

from .dump import dump  # noqa: F401

+ANY_GIT_ERROR = (
+    git.exc.ODBError,
+    git.exc.GitError,
+    OSError,
+    IndexError,
+    BufferError,
+    TypeError,
+    ValueError,
+)


class GitRepo:
    repo = None
@@ -19,6 +29,7 @@ class GitRepo:
    aider_ignore_last_check = 0
    subtree_only = False
    ignore_file_cache = {}
+    git_repo_error = None
    def __init__(
        self,
@@ -67,9 +78,7 @@ class GitRepo:
                repo_path = git.Repo(fname, search_parent_directories=True).working_dir
                repo_path = utils.safe_abs_path(repo_path)
                repo_paths.append(repo_path)
-            except git.exc.InvalidGitRepositoryError:
-                pass
-            except git.exc.NoSuchPathError:
+            except ANY_GIT_ERROR:
                pass

        num_repos = len(set(repo_paths))
@@ -116,7 +125,10 @@ class GitRepo:
        if fnames:
            fnames = [str(self.abs_root_path(fn)) for fn in fnames]
            for fname in fnames:
-                self.repo.git.add(fname)
+                try:
+                    self.repo.git.add(fname)
+                except ANY_GIT_ERROR as err:
+                    self.io.tool_error(f"Unable to add {fname}: {err}")
            cmd += ["--"] + fnames
        else:
            cmd += ["-a"]
@ -132,30 +144,32 @@ class GitRepo:
original_auther_name_env = os.environ.get("GIT_AUTHOR_NAME") original_auther_name_env = os.environ.get("GIT_AUTHOR_NAME")
os.environ["GIT_AUTHOR_NAME"] = committer_name os.environ["GIT_AUTHOR_NAME"] = committer_name
self.repo.git.commit(cmd) try:
commit_hash = self.repo.head.commit.hexsha[:7] self.repo.git.commit(cmd)
self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True) commit_hash = self.get_head_commit_sha(short=True)
self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True)
return commit_hash, commit_message
except ANY_GIT_ERROR as err:
self.io.tool_error(f"Unable to commit: {err}")
finally:
# Restore the env
# Restore the env if self.attribute_committer:
if original_committer_name_env is not None:
os.environ["GIT_COMMITTER_NAME"] = original_committer_name_env
else:
del os.environ["GIT_COMMITTER_NAME"]
if self.attribute_committer: if aider_edits and self.attribute_author:
if original_committer_name_env is not None: if original_auther_name_env is not None:
os.environ["GIT_COMMITTER_NAME"] = original_committer_name_env os.environ["GIT_AUTHOR_NAME"] = original_auther_name_env
else: else:
del os.environ["GIT_COMMITTER_NAME"] del os.environ["GIT_AUTHOR_NAME"]
if aider_edits and self.attribute_author:
if original_auther_name_env is not None:
os.environ["GIT_AUTHOR_NAME"] = original_auther_name_env
else:
del os.environ["GIT_AUTHOR_NAME"]
return commit_hash, commit_message
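The commit path now restores `GIT_COMMITTER_NAME` / `GIT_AUTHOR_NAME` inside a `finally` block, so a failed commit can no longer leak the temporary identity into the environment. A minimal standalone sketch of that restore pattern (the helper name and the demo variable are illustrative, not part of aider's API):

```python
import os


def set_env_temporarily(name, value, action):
    """Set an env var, run action(), and always restore the old value afterward."""
    original = os.environ.get(name)
    os.environ[name] = value
    try:
        return action()
    finally:
        # Runs even if action() raised, mirroring the finally block above
        if original is not None:
            os.environ[name] = original
        else:
            del os.environ[name]


seen = set_env_temporarily(
    "AIDER_DEMO_AUTHOR", "Temp Author", lambda: os.environ["AIDER_DEMO_AUTHOR"]
)
print(seen)  # Temp Author
print("AIDER_DEMO_AUTHOR" in os.environ)  # False
```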
     def get_rel_repo_dir(self):
         try:
             return os.path.relpath(self.repo.git_dir, os.getcwd())
-        except ValueError:
+        except (ValueError, OSError):
             return self.repo.git_dir

     def get_commit_message(self, diffs, context):
@@ -178,7 +192,9 @@ class GitRepo:
             max_tokens = model.info.get("max_input_tokens") or 0
             if max_tokens and num_tokens > max_tokens:
                 continue
-            commit_message = simple_send_with_retries(model.name, messages)
+            commit_message = simple_send_with_retries(
+                model.name, messages, extra_params=model.extra_params
+            )
             if commit_message:
                 break

@@ -201,9 +217,9 @@ class GitRepo:
         try:
             commits = self.repo.iter_commits(active_branch)
             current_branch_has_commits = any(commits)
-        except git.exc.GitCommandError:
+        except ANY_GIT_ERROR:
             pass
-        except TypeError:
+        except (TypeError,) + ANY_GIT_ERROR:
             pass

         if not fnames:
@@ -214,18 +230,21 @@ class GitRepo:
             if not self.path_in_repo(fname):
                 diffs += f"Added {fname}\n"

-        if current_branch_has_commits:
-            args = ["HEAD", "--"] + list(fnames)
-            diffs += self.repo.git.diff(*args)
-            return diffs
-
-        wd_args = ["--"] + list(fnames)
-        index_args = ["--cached"] + wd_args
-
-        diffs += self.repo.git.diff(*index_args)
-        diffs += self.repo.git.diff(*wd_args)
-
-        return diffs
+        try:
+            if current_branch_has_commits:
+                args = ["HEAD", "--"] + list(fnames)
+                diffs += self.repo.git.diff(*args)
+                return diffs
+
+            wd_args = ["--"] + list(fnames)
+            index_args = ["--cached"] + wd_args
+
+            diffs += self.repo.git.diff(*index_args)
+            diffs += self.repo.git.diff(*wd_args)
+            return diffs
+        except ANY_GIT_ERROR as err:
+            self.io.tool_error(f"Unable to diff: {err}")

     def diff_commits(self, pretty, from_commit, to_commit):
         args = []
@@ -247,15 +266,26 @@ class GitRepo:
             commit = self.repo.head.commit
         except ValueError:
             commit = None
+        except ANY_GIT_ERROR as err:
+            self.git_repo_error = err
+            self.io.tool_error(f"Unable to list files in git repo: {err}")
+            self.io.tool_output("Is your git repo corrupted?")
+            return []

         files = set()
         if commit:
             if commit in self.tree_files:
                 files = self.tree_files[commit]
             else:
-                for blob in commit.tree.traverse():
-                    if blob.type == "blob":  # blob is a file
-                        files.add(blob.path)
+                try:
+                    for blob in commit.tree.traverse():
+                        if blob.type == "blob":  # blob is a file
+                            files.add(blob.path)
+                except ANY_GIT_ERROR as err:
+                    self.git_repo_error = err
+                    self.io.tool_error(f"Unable to list files in git repo: {err}")
+                    self.io.tool_output("Is your git repo corrupted?")
+                    return []
                 files = set(self.normalize_path(path) for path in files)
                 self.tree_files[commit] = set(files)
@@ -301,6 +331,15 @@ class GitRepo:
             lines,
         )

+    def git_ignored_file(self, path):
+        if not self.repo:
+            return
+        try:
+            if self.repo.ignored(path):
+                return True
+        except ANY_GIT_ERROR:
+            return False
+
     def ignored_file(self, fname):
         self.refresh_aider_ignore()

@@ -314,7 +353,14 @@ class GitRepo:
     def ignored_file_raw(self, fname):
         if self.subtree_only:
             fname_path = Path(self.normalize_path(fname))
-            cwd_path = Path.cwd().resolve().relative_to(Path(self.root).resolve())
+            try:
+                cwd_path = Path.cwd().resolve().relative_to(Path(self.root).resolve())
+            except ValueError:
+                # Issue #1524
+                # ValueError: 'C:\\dev\\squid-certbot' is not in the subpath of
+                # 'C:\\dev\\squid-certbot'
+                # Clearly, fname is not under cwd... so ignore it
+                return True

             if cwd_path not in fname_path.parents and fname_path != cwd_path:
                 return True

@@ -332,6 +378,8 @@ class GitRepo:
     def path_in_repo(self, path):
         if not self.repo:
             return
+        if not path:
+            return

         tracked_files = set(self.get_tracked_files())
         return self.normalize_path(path) in tracked_files

@@ -363,8 +411,22 @@ class GitRepo:
         return self.repo.is_dirty(path=path)

-    def get_head(self):
-        try:
-            return self.repo.head.commit.hexsha
-        except ValueError:
-            return None
+    def get_head_commit(self):
+        try:
+            return self.repo.head.commit
+        except (ValueError,) + ANY_GIT_ERROR:
+            return None
+
+    def get_head_commit_sha(self, short=False):
+        commit = self.get_head_commit()
+        if not commit:
+            return
+        if short:
+            return commit.hexsha[:7]
+        return commit.hexsha
+
+    def get_head_commit_message(self, default=None):
+        commit = self.get_head_commit()
+        if not commit:
+            return default
+        return commit.message

aider/repomap.py
@@ -2,6 +2,8 @@ import colorsys
 import math
 import os
 import random
+import shutil
+import sqlite3
 import sys
 import time
 import warnings

@@ -13,10 +15,10 @@ from diskcache import Cache
 from grep_ast import TreeContext, filename_to_lang
 from pygments.lexers import guess_lexer_for_filename
 from pygments.token import Token
-from pygments.util import ClassNotFound
 from tqdm import tqdm

 from aider.dump import dump
+from aider.special import filter_important_files
 from aider.utils import Spinner

 # tree_sitter is throwing a FutureWarning
@@ -26,6 +28,9 @@ from tree_sitter_language_pack import get_language, get_parser  # noqa: E402

 Tag = namedtuple("Tag", "rel_fname fname line name kind".split())

+SQLITE_ERRORS = (sqlite3.OperationalError, sqlite3.DatabaseError, OSError)
+
+
 class RepoMap:
     CACHE_VERSION = 3
     TAGS_CACHE_DIR = f".aider.tags.cache.v{CACHE_VERSION}"
@@ -155,17 +160,59 @@ class RepoMap:
         return repo_content

     def get_rel_fname(self, fname):
-        return os.path.relpath(fname, self.root)
+        try:
+            return os.path.relpath(fname, self.root)
+        except ValueError:
+            # Issue #1288: ValueError: path is on mount 'C:', start on mount 'D:'
+            # Just return the full fname.
+            return fname

-    def split_path(self, path):
-        path = os.path.relpath(path, self.root)
-        return [path + ":"]
+    def tags_cache_error(self, original_error=None):
+        """Handle SQLite errors by trying to recreate cache, falling back to dict if needed"""
+
+        if self.verbose and original_error:
+            self.io.tool_warning(f"Tags cache error: {str(original_error)}")
+
+        if isinstance(getattr(self, "TAGS_CACHE", None), dict):
+            return
+
+        path = Path(self.root) / self.TAGS_CACHE_DIR
+
+        # Try to recreate the cache
+        try:
+            # Delete existing cache dir
+            if path.exists():
+                shutil.rmtree(path)
+
+            # Try to create new cache
+            new_cache = Cache(path)
+
+            # Test that it works
+            test_key = "test"
+            new_cache[test_key] = "test"
+            _ = new_cache[test_key]
+            del new_cache[test_key]
+
+            # If we got here, the new cache works
+            self.TAGS_CACHE = new_cache
+            return
+
+        except SQLITE_ERRORS as e:
+            # If anything goes wrong, warn and fall back to dict
+            self.io.tool_warning(
+                f"Unable to use tags cache at {path}, falling back to memory cache"
+            )
+            if self.verbose:
+                self.io.tool_warning(f"Cache recreation error: {str(e)}")
+
+        self.TAGS_CACHE = dict()

     def load_tags_cache(self):
         path = Path(self.root) / self.TAGS_CACHE_DIR
-        if not path.exists():
-            self.cache_missing = True
-        self.TAGS_CACHE = Cache(path)
+        try:
+            self.TAGS_CACHE = Cache(path)
+        except SQLITE_ERRORS as e:
+            self.tags_cache_error(e)

     def save_tags_cache(self):
         pass
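`tags_cache_error` degrades from an on-disk `diskcache.Cache` to a plain dict when SQLite misbehaves. A self-contained sketch of that degradation, using a hypothetical `FlakyCache` stand-in for the real disk cache:

```python
import sqlite3

# Same error tuple the repo map uses to detect a broken on-disk cache
SQLITE_ERRORS = (sqlite3.OperationalError, sqlite3.DatabaseError, OSError)


class FlakyCache:
    """Hypothetical on-disk cache that starts failing after two writes."""

    def __init__(self):
        self.store = {}
        self.writes = 0

    def __setitem__(self, key, value):
        self.writes += 1
        if self.writes > 2:
            raise sqlite3.OperationalError("database is locked")
        self.store[key] = value


cache = FlakyCache()


def cache_put(key, value):
    """Write through to the cache, degrading to an in-memory dict on SQLite errors."""
    global cache
    try:
        cache[key] = value
    except SQLITE_ERRORS:
        cache = dict(cache.store)  # fall back, keeping whatever was already cached
        cache[key] = value


for i in range(4):
    cache_put(f"k{i}", i)

print(type(cache).__name__)  # dict
```

After the third write fails, the cache silently becomes a dict and later writes keep working, which is exactly the user-visible behavior the diff is after.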
@@ -174,7 +221,7 @@ class RepoMap:
         try:
             return os.path.getmtime(fname)
         except FileNotFoundError:
-            self.io.tool_error(f"File not found error: {fname}")
+            self.io.tool_warning(f"File not found error: {fname}")

     def get_tags(self, fname, rel_fname):
         # Check if the file is in the cache and if the modification time has not changed
@@ -183,15 +230,30 @@ class RepoMap:
             return []

         cache_key = fname
-        if cache_key in self.TAGS_CACHE and self.TAGS_CACHE[cache_key]["mtime"] == file_mtime:
-            return self.TAGS_CACHE[cache_key]["data"]
+        try:
+            val = self.TAGS_CACHE.get(cache_key)  # Issue #1308
+        except SQLITE_ERRORS as e:
+            self.tags_cache_error(e)
+            val = self.TAGS_CACHE.get(cache_key)
+
+        if val is not None and val.get("mtime") == file_mtime:
+            try:
+                return self.TAGS_CACHE[cache_key]["data"]
+            except SQLITE_ERRORS as e:
+                self.tags_cache_error(e)
+                return self.TAGS_CACHE[cache_key]["data"]

         # miss!
         data = list(self.get_tags_raw(fname, rel_fname))

         # Update the cache
-        self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}
-        self.save_tags_cache()
+        try:
+            self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}
+            self.save_tags_cache()
+        except SQLITE_ERRORS as e:
+            self.tags_cache_error(e)
+            self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}

         return data

     def get_tags_raw(self, fname, rel_fname):
@@ -199,8 +261,12 @@ class RepoMap:
         if not lang:
             return

-        language = get_language(lang)
-        parser = get_parser(lang)
+        try:
+            language = get_language(lang)
+            parser = get_parser(lang)
+        except Exception as err:
+            print(f"Skipping file {fname}: {err}")
+            return

         query_scm = get_scm_fname(lang)
         if not query_scm.exists():

@@ -253,7 +319,8 @@ class RepoMap:
         try:
             lexer = guess_lexer_for_filename(fname, code)
-        except ClassNotFound:
+        except Exception:  # On Windows, bad ref to time.clock which is deprecated?
+            # self.io.tool_error(f"Error lexing {fname}")
             return

         tokens = list(lexer.get_tokens(code))

@@ -288,7 +355,13 @@ class RepoMap:
         # https://networkx.org/documentation/stable/_modules/networkx/algorithms/link_analysis/pagerank_alg.html#pagerank
         personalize = 100 / len(fnames)

-        if len(fnames) - len(self.TAGS_CACHE) > 100:
+        try:
+            cache_size = len(self.TAGS_CACHE)
+        except SQLITE_ERRORS as e:
+            self.tags_cache_error(e)
+            cache_size = len(self.TAGS_CACHE)
+
+        if len(fnames) - cache_size > 100:
             self.io.tool_output(
                 "Initial repo scan can be slow in larger repos, but only happens once."
             )
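`get_tags` keys each cache entry on the file's mtime, so an edited file automatically invalidates its stale entry. A minimal sketch of that invalidation scheme (the upper-casing stands in for real tag extraction):

```python
import os
import tempfile

tags_cache = {}


def get_tags(path):
    """Return cached data unless the file's mtime changed since it was stored."""
    mtime = os.path.getmtime(path)
    val = tags_cache.get(path)
    if val is not None and val.get("mtime") == mtime:
        return val["data"]  # cache hit

    # miss: recompute and store alongside the current mtime
    with open(path) as f:
        data = f.read().upper()  # stand-in for real tag extraction
    tags_cache[path] = {"mtime": mtime, "data": data}
    return data


with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("def foo(): pass")
    fname = f.name

first = get_tags(fname)
second = get_tags(fname)  # served from the cache, mtime unchanged
os.unlink(fname)
```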
@@ -298,19 +371,23 @@ class RepoMap:
         showing_bar = False

         for fname in fnames:
+            if self.verbose:
+                self.io.tool_output(f"Processing {fname}")
             if progress and not showing_bar:
                 progress()

-            if not Path(fname).is_file():
-                if fname not in self.warned_files:
-                    if Path(fname).exists():
-                        self.io.tool_error(
-                            f"Repo-map can't include {fname}, it is not a normal file"
-                        )
-                    else:
-                        self.io.tool_error(f"Repo-map can't include {fname}, it no longer exists")
-
-                self.warned_files.add(fname)
+            try:
+                file_ok = Path(fname).is_file()
+            except OSError:
+                file_ok = False
+
+            if not file_ok:
+                if fname not in self.warned_files:
+                    self.io.tool_warning(f"Repo-map can't include {fname}")
+                    self.io.tool_output(
+                        "Has it been deleted from the file system but not from git?"
+                    )
+                    self.warned_files.add(fname)
                 continue

             # dump(fname)

@@ -382,7 +459,11 @@ class RepoMap:
         try:
             ranked = nx.pagerank(G, weight="weight", **pers_args)
         except ZeroDivisionError:
-            return []
+            # Issue #1536
+            try:
+                ranked = nx.pagerank(G, weight="weight")
+            except ZeroDivisionError:
+                return []

         # distribute the rank from each source node, across all of its out edges
         ranked_definitions = defaultdict(float)

@@ -399,7 +480,9 @@ class RepoMap:
                     ranked_definitions[(dst, ident)] += data["rank"]

         ranked_tags = []
-        ranked_definitions = sorted(ranked_definitions.items(), reverse=True, key=lambda x: x[1])
+        ranked_definitions = sorted(
+            ranked_definitions.items(), reverse=True, key=lambda x: (x[1], x[0])
+        )

         # dump(ranked_definitions)
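Switching the sort key from `x[1]` (rank alone) to `(x[1], x[0])` breaks rank ties by name, making the ordering deterministic across runs:

```python
ranked = {("b.py", "foo"): 0.5, ("a.py", "bar"): 0.5, ("c.py", "baz"): 0.9}

# Keying on (rank, name) instead of rank alone makes ties deterministic
ordered = sorted(ranked.items(), reverse=True, key=lambda x: (x[1], x[0]))
names = [key for key, rank in ordered]
print(names)  # [('c.py', 'baz'), ('b.py', 'foo'), ('a.py', 'bar')]
```

With the old key, the two 0.5-ranked entries could land in either order depending on dict iteration; with the tuple key, the tie always resolves the same way.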
@@ -435,12 +518,20 @@ class RepoMap:
         force_refresh=False,
     ):
         # Create a cache key
-        cache_key = (
+        cache_key = [
             tuple(sorted(chat_fnames)) if chat_fnames else None,
             tuple(sorted(other_fnames)) if other_fnames else None,
             max_map_tokens,
-        )
+        ]
+
+        if self.refresh == "auto":
+            cache_key += [
+                tuple(sorted(mentioned_fnames)) if mentioned_fnames else None,
+                tuple(sorted(mentioned_idents)) if mentioned_idents else None,
+            ]
+        cache_key = tuple(cache_key)

+        use_cache = False
         if not force_refresh:
             if self.refresh == "manual" and self.last_map:
                 return self.last_map

@@ -497,6 +588,14 @@ class RepoMap:
             progress=spin.step,
         )

+        other_rel_fnames = sorted(set(self.get_rel_fname(fname) for fname in other_fnames))
+        special_fnames = filter_important_files(other_rel_fnames)
+        ranked_tags_fnames = set(tag[0] for tag in ranked_tags)
+        special_fnames = [fn for fn in special_fnames if fn not in ranked_tags_fnames]
+        special_fnames = [(fn,) for fn in special_fnames]
+
+        ranked_tags = special_fnames + ranked_tags
+
         spin.step()

         num_tags = len(ranked_tags)

aider/report.py (new file, 200 lines)

@@ -0,0 +1,200 @@
import os
import platform
import subprocess
import sys
import traceback
import urllib.parse
import webbrowser

from aider import __version__
from aider.urls import github_issues
from aider.versioncheck import VERSION_CHECK_FNAME

FENCE = "`" * 3


def get_python_info():
    implementation = platform.python_implementation()
    is_venv = sys.prefix != sys.base_prefix
    return (
        f"Python implementation: {implementation}\nVirtual environment:"
        f" {'Yes' if is_venv else 'No'}"
    )


def get_os_info():
    return f"OS: {platform.system()} {platform.release()} ({platform.architecture()[0]})"


def get_git_info():
    try:
        git_version = subprocess.check_output(["git", "--version"]).decode().strip()
        return f"Git version: {git_version}"
    except Exception:
        return "Git information unavailable"


def report_github_issue(issue_text, title=None, confirm=True):
    """
    Compose a URL to open a new GitHub issue with the given text prefilled,
    and attempt to launch it in the default web browser.

    :param issue_text: The text of the issue to file
    :param title: The title of the issue (optional)
    :param confirm: Whether to ask for confirmation before opening the browser (default: True)
    :return: None
    """
    version_info = f"Aider version: {__version__}\n"
    python_version = f"Python version: {sys.version.split()[0]}\n"
    platform_info = f"Platform: {platform.platform()}\n"
    python_info = get_python_info() + "\n"
    os_info = get_os_info() + "\n"
    git_info = get_git_info() + "\n"

    system_info = (
        version_info + python_version + platform_info + python_info + os_info + git_info + "\n"
    )

    issue_text = system_info + issue_text
    params = {"body": issue_text}
    if title is None:
        title = "Bug report"
    params["title"] = title
    issue_url = f"{github_issues}?{urllib.parse.urlencode(params)}"

    if confirm:
        print(f"\n# {title}\n")
        print(issue_text.strip())
        print()
        print("Please consider reporting this bug to help improve aider!")
        prompt = "Open a GitHub Issue pre-filled with the above error in your browser? (Y/n) "
        confirmation = input(prompt).strip().lower()

        yes = not confirmation or confirmation.startswith("y")
        if not yes:
            return

    print("Attempting to open the issue URL in your default web browser...")
    try:
        if webbrowser.open(issue_url):
            print("Browser window should be opened.")
    except Exception:
        pass

    if confirm:
        print()
        print()
        print("You can also use this URL to file the GitHub Issue:")
        print()
        print(issue_url)
        print()
        print()


def exception_handler(exc_type, exc_value, exc_traceback):
    # If it's a KeyboardInterrupt, just call the default handler
    if issubclass(exc_type, KeyboardInterrupt):
        return sys.__excepthook__(exc_type, exc_value, exc_traceback)

    # We don't want any more exceptions
    sys.excepthook = None

    # Check if VERSION_CHECK_FNAME exists and delete it if so
    try:
        if VERSION_CHECK_FNAME.exists():
            VERSION_CHECK_FNAME.unlink()
    except Exception:
        pass  # Swallow any errors

    # Format the traceback
    tb_lines = traceback.format_exception(exc_type, exc_value, exc_traceback)

    # Replace full paths with basenames in the traceback
    tb_lines_with_basenames = []
    for line in tb_lines:
        try:
            if "File " in line:
                parts = line.split('"')
                if len(parts) > 1:
                    full_path = parts[1]
                    basename = os.path.basename(full_path)
                    line = line.replace(full_path, basename)
        except Exception:
            pass
        tb_lines_with_basenames.append(line)

    tb_text = "".join(tb_lines_with_basenames)

    # Find the innermost frame
    innermost_tb = exc_traceback
    while innermost_tb.tb_next:
        innermost_tb = innermost_tb.tb_next

    # Get the filename and line number from the innermost frame
    filename = innermost_tb.tb_frame.f_code.co_filename
    line_number = innermost_tb.tb_lineno
    try:
        basename = os.path.basename(filename)
    except Exception:
        basename = filename

    # Get the exception type name
    exception_type = exc_type.__name__

    # Prepare the issue text
    issue_text = f"An uncaught exception occurred:\n\n{FENCE}\n{tb_text}\n{FENCE}"

    # Prepare the title
    title = f"Uncaught {exception_type} in {basename} line {line_number}"

    # Report the issue
    report_github_issue(issue_text, title=title)

    # Call the default exception handler
    sys.__excepthook__(exc_type, exc_value, exc_traceback)


def report_uncaught_exceptions():
    """
    Set up the global exception handler to report uncaught exceptions.
    """
    sys.excepthook = exception_handler


def dummy_function1():
    def dummy_function2():
        def dummy_function3():
            raise ValueError("boo")

        dummy_function3()

    dummy_function2()


def main():
    report_uncaught_exceptions()
    dummy_function1()

    title = None
    if len(sys.argv) > 2:
        # Use the first command-line argument as the title and the second as the issue text
        title = sys.argv[1]
        issue_text = sys.argv[2]
    elif len(sys.argv) > 1:
        # Use the first command-line argument as the issue text
        issue_text = sys.argv[1]
    else:
        # Read from stdin if no argument is provided
        print("Enter the issue title (optional, press Enter to skip):")
        title = input().strip()
        if not title:
            title = None
        print("Enter the issue text (Ctrl+D to finish):")
        issue_text = sys.stdin.read().strip()

    report_github_issue(issue_text, title)


if __name__ == "__main__":
    main()
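`report_github_issue` builds its prefilled link by URL-encoding the body and title onto the issues endpoint. A sketch of just that step (the URL shown here is an assumption for illustration; the real value comes from `aider.urls.github_issues`):

```python
import urllib.parse

# Hypothetical issues URL; aider reads the real one from aider.urls.github_issues
github_issues = "https://github.com/Aider-AI/aider/issues/new"

params = {"body": "An uncaught exception occurred...", "title": "Bug report"}
issue_url = f"{github_issues}?{urllib.parse.urlencode(params)}"
print(issue_url)
```

`urlencode` takes care of escaping newlines and spaces, so the multi-line system-info block survives the round trip into the browser's issue form.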

aider/resources/__init__.py (new file)

@@ -0,0 +1,3 @@
# This ensures that importlib_resources.files("aider.resources")
# doesn't raise ImportError, even if there are no other files in this
# dir.

aider/run_cmd.py (new file, 132 lines)

@@ -0,0 +1,132 @@
import os
import platform
import subprocess
import sys
from io import BytesIO

import pexpect
import psutil


def run_cmd(command, verbose=False, error_print=None, cwd=None):
    try:
        if sys.stdin.isatty() and hasattr(pexpect, "spawn") and platform.system() != "Windows":
            return run_cmd_pexpect(command, verbose, cwd)

        return run_cmd_subprocess(command, verbose, cwd)
    except OSError as e:
        error_message = f"Error occurred while running command '{command}': {str(e)}"
        if error_print is None:
            print(error_message)
        else:
            error_print(error_message)
        return 1, error_message


def get_windows_parent_process_name():
    try:
        current_process = psutil.Process()
        while True:
            parent = current_process.parent()
            if parent is None:
                break
            parent_name = parent.name().lower()
            if parent_name in ["powershell.exe", "cmd.exe"]:
                return parent_name
            current_process = parent
        return None
    except Exception:
        return None


def run_cmd_subprocess(command, verbose=False, cwd=None):
    if verbose:
        print("Using run_cmd_subprocess:", command)

    try:
        shell = os.environ.get("SHELL", "/bin/sh")
        parent_process = None

        # Determine the appropriate shell
        if platform.system() == "Windows":
            parent_process = get_windows_parent_process_name()
            if parent_process == "powershell.exe":
                command = f"powershell -Command {command}"

        if verbose:
            print("Running command:", command)
            print("SHELL:", shell)
            if platform.system() == "Windows":
                print("Parent process:", parent_process)

        process = subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
            shell=True,
            encoding=sys.stdout.encoding,
            errors="replace",
            bufsize=0,  # Set bufsize to 0 for unbuffered output
            universal_newlines=True,
            cwd=cwd,
        )

        output = []
        while True:
            chunk = process.stdout.read(1)
            if not chunk:
                break
            print(chunk, end="", flush=True)  # Print the chunk in real-time
            output.append(chunk)  # Store the chunk for later use

        process.wait()
        return process.returncode, "".join(output)
    except Exception as e:
        return 1, str(e)


def run_cmd_pexpect(command, verbose=False, cwd=None):
    """
    Run a shell command interactively using pexpect, capturing all output.

    :param command: The command to run as a string.
    :param verbose: If True, print output in real-time.
    :return: A tuple containing (exit_status, output)
    """
    if verbose:
        print("Using run_cmd_pexpect:", command)

    output = BytesIO()

    def output_callback(b):
        output.write(b)
        return b

    try:
        # Use the SHELL environment variable, falling back to /bin/sh if not set
        shell = os.environ.get("SHELL", "/bin/sh")
        if verbose:
            print("With shell:", shell)

        if os.path.exists(shell):
            # Use the shell from SHELL environment variable
            if verbose:
                print("Running pexpect.spawn with shell:", shell)
            child = pexpect.spawn(shell, args=["-c", command], encoding="utf-8", cwd=cwd)
        else:
            # Fall back to spawning the command directly
            if verbose:
                print("Running pexpect.spawn without shell.")
            child = pexpect.spawn(command, encoding="utf-8", cwd=cwd)

        # Transfer control to the user, capturing output
        child.interact(output_filter=output_callback)

        # Wait for the command to finish and get the exit status
        child.close()
        return child.exitstatus, output.getvalue().decode("utf-8", errors="replace")
    except (pexpect.ExceptionPexpect, TypeError, ValueError) as e:
        error_msg = f"Error running command {command}: {e}"
        return 1, error_msg
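`run_cmd_subprocess` reads stdout one character at a time so output appears as it is produced rather than after the process exits. A trimmed-down sketch of that loop, without the Windows/PowerShell handling and real-time echoing:

```python
import subprocess


def stream_cmd(command):
    """Run a command through the shell, collecting output chunk by chunk (same loop as above)."""
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr into stdout, like run_cmd_subprocess
        text=True,
        shell=True,
    )
    chunks = []
    while True:
        chunk = process.stdout.read(1)  # single-character reads keep latency minimal
        if not chunk:
            break
        chunks.append(chunk)
    process.wait()
    return process.returncode, "".join(chunks)


code, out = stream_cmd("echo hello")
```

The `bufsize=0` in the real code matters for the same reason: it keeps the pipe unbuffered so each character is available as soon as the child writes it.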

aider/scraper.py
@@ -131,7 +131,9 @@ class Scraper:
     # Internals...

     def scrape_with_playwright(self, url):
-        import playwright
+        import playwright  # noqa: F401
+        from playwright.sync_api import Error as PlaywrightError
+        from playwright.sync_api import TimeoutError as PlaywrightTimeoutError
         from playwright.sync_api import sync_playwright

         with sync_playwright() as p:

@@ -156,18 +158,20 @@ class Scraper:
             response = None
             try:
                 response = page.goto(url, wait_until="networkidle", timeout=5000)
-            except playwright._impl._errors.TimeoutError:
+            except PlaywrightTimeoutError:
                 self.print_error(f"Timeout while loading {url}")
-            except playwright._impl._errors.Error as e:
+            except PlaywrightError as e:
                 self.print_error(f"Error navigating to {url}: {str(e)}")
                 return None, None

             try:
                 content = page.content()
-                mime_type = (
-                    response.header_value("content-type").split(";")[0] if response else None
-                )
-            except playwright._impl._errors.Error as e:
+                mime_type = None
+                if response:
+                    content_type = response.header_value("content-type")
+                    if content_type:
+                        mime_type = content_type.split(";")[0]
+            except PlaywrightError as e:
                 self.print_error(f"Error retrieving page content: {str(e)}")
                 content = None
                 mime_type = None

@@ -181,7 +185,9 @@ class Scraper:
         headers = {"User-Agent": f"Mozilla./5.0 ({aider_user_agent})"}
         try:
-            with httpx.Client(headers=headers, verify=self.verify_ssl) as client:
+            with httpx.Client(
+                headers=headers, verify=self.verify_ssl, follow_redirects=True
+            ) as client:
                 response = client.get(url)
                 response.raise_for_status()
                 return response.text, response.headers.get("content-type", "").split(";")[0]

@@ -220,7 +226,10 @@ class Scraper:
         if not self.pandoc_available:
             return page_source

-        md = pypandoc.convert_text(page_source, "markdown", format="html")
+        try:
+            md = pypandoc.convert_text(page_source, "markdown", format="html")
+        except OSError:
+            return page_source

         md = re.sub(r"</div>", " ", md)
         md = re.sub(r"<div>", " ", md)
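The new `mime_type` handling tolerates both a missing response and a missing `Content-Type` header, where the old one-liner would raise `AttributeError` on `None`. The core of it as a standalone helper (hypothetical name, for illustration):

```python
def parse_mime_type(content_type):
    """Reduce a Content-Type header value to the bare mime type, tolerating None."""
    if not content_type:
        return None
    return content_type.split(";")[0]


print(parse_mime_type("text/html; charset=utf-8"))  # text/html
print(parse_mime_type(None))  # None
```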

aider/sendchat.py
@@ -1,9 +1,9 @@
 import hashlib
 import json
+import time

-import backoff
-
 from aider.dump import dump  # noqa: F401
+from aider.exceptions import LiteLLMExceptions
 from aider.llm import litellm

 # from diskcache import Cache

@@ -13,59 +13,32 @@
 CACHE_PATH = "~/.aider.send.cache.v1"
 CACHE = None
 # CACHE = Cache(CACHE_PATH)

-
-def retry_exceptions():
-    import httpx
-
-    return (
-        httpx.ConnectError,
-        httpx.RemoteProtocolError,
-        httpx.ReadTimeout,
-        litellm.exceptions.APIConnectionError,
-        litellm.exceptions.APIError,
-        litellm.exceptions.RateLimitError,
-        litellm.exceptions.ServiceUnavailableError,
-        litellm.exceptions.Timeout,
-        litellm.exceptions.InternalServerError,
-        litellm.llms.anthropic.AnthropicError,
-    )
-
-
-def lazy_litellm_retry_decorator(func):
-    def wrapper(*args, **kwargs):
-        decorated_func = backoff.on_exception(
-            backoff.expo,
-            retry_exceptions(),
-            max_time=60,
-            on_backoff=lambda details: print(
-                f"{details.get('exception', 'Exception')}\nRetry in {details['wait']:.1f} seconds."
-            ),
-        )(func)
-        return decorated_func(*args, **kwargs)
-
-    return wrapper
+RETRY_TIMEOUT = 60


 def send_completion(
-    model_name, messages, functions, stream, temperature=0, extra_headers=None, max_tokens=None
+    model_name,
+    messages,
+    functions,
+    stream,
+    temperature=0,
+    extra_params=None,
 ):
-    from aider.llm import litellm
-
     kwargs = dict(
         model=model_name,
         messages=messages,
-        temperature=temperature,
         stream=stream,
     )
+    if temperature is not None:
+        kwargs["temperature"] = temperature

     if functions is not None:
         function = functions[0]
         kwargs["tools"] = [dict(type="function", function=function)]
         kwargs["tool_choice"] = {"type": "function", "function": {"name": function["name"]}}
-    if extra_headers is not None:
-        kwargs["extra_headers"] = extra_headers
-    if max_tokens is not None:
-        kwargs["max_tokens"] = max_tokens
+
+    if extra_params is not None:
+        kwargs.update(extra_params)

     key = json.dumps(kwargs, sort_keys=True).encode()

@@ -75,8 +48,6 @@ def send_completion(
     if not stream and CACHE is not None and key in CACHE:
         return hash_object, CACHE[key]

-    # del kwargs['stream']
-
     res = litellm.completion(**kwargs)

     if not stream and CACHE is not None:

@@ -85,15 +56,42 @@ def send_completion(
     return hash_object, res


-@lazy_litellm_retry_decorator
-def simple_send_with_retries(model_name, messages):
-    try:
-        _hash, response = send_completion(
-            model_name=model_name,
-            messages=messages,
-            functions=None,
-            stream=False,
-        )
-        return response.choices[0].message.content
-    except (AttributeError, litellm.exceptions.BadRequestError):
-        return
+def simple_send_with_retries(model_name, messages, extra_params=None):
+    litellm_ex = LiteLLMExceptions()
+
+    retry_delay = 0.125
+    while True:
+        try:
+            kwargs = {
+                "model_name": model_name,
+                "messages": messages,
+                "functions": None,
+                "stream": False,
+                "extra_params": extra_params,
+            }
+
+            _hash, response = send_completion(**kwargs)
+            if not response or not hasattr(response, "choices") or not response.choices:
+                return None
+            return response.choices[0].message.content
+        except litellm_ex.exceptions_tuple() as err:
+            ex_info = litellm_ex.get_ex_info(err)
+
+            print(str(err))
+            if ex_info.description:
+                print(ex_info.description)
+
+            should_retry = ex_info.retry
+            if should_retry:
+                retry_delay *= 2
+                if retry_delay > RETRY_TIMEOUT:
+                    should_retry = False
+
+            if not should_retry:
+                return None
+
+            print(f"Retrying in {retry_delay:.1f} seconds...")
+            time.sleep(retry_delay)
+            continue
+        except AttributeError:
+            return None
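The rewritten `simple_send_with_retries` replaces the `backoff` decorator with an explicit loop: the delay starts at 0.125s, doubles on each retryable error, and gives up once it would exceed `RETRY_TIMEOUT`. The delay schedule that loop sleeps through can be enumerated directly:

```python
RETRY_TIMEOUT = 60


def backoff_delays(initial=0.125, cap=RETRY_TIMEOUT):
    """Yield the doubling delays the retry loop above sleeps for, stopping past the cap."""
    delay = initial
    while True:
        delay *= 2  # the loop doubles before sleeping, so the first sleep is 0.25s
        if delay > cap:
            return
        yield delay


delays = list(backoff_delays())
print(delays)  # [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Eight retries over roughly a minute, with no external dependency and with the retry/no-retry decision delegated to `LiteLLMExceptions` per exception type.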

aider/special.py (new file, 202 lines)

@@ -0,0 +1,202 @@
import os
ROOT_IMPORTANT_FILES = [
# Version Control
".gitignore",
".gitattributes",
# Documentation
"README",
"README.md",
"README.txt",
"README.rst",
"CONTRIBUTING",
"CONTRIBUTING.md",
"CONTRIBUTING.txt",
"CONTRIBUTING.rst",
"LICENSE",
"LICENSE.md",
"LICENSE.txt",
"CHANGELOG",
"CHANGELOG.md",
"CHANGELOG.txt",
"CHANGELOG.rst",
"SECURITY",
"SECURITY.md",
"SECURITY.txt",
"CODEOWNERS",
# Package Management and Dependencies
"requirements.txt",
"Pipfile",
"Pipfile.lock",
"pyproject.toml",
"setup.py",
"setup.cfg",
"package.json",
"package-lock.json",
"yarn.lock",
"npm-shrinkwrap.json",
"Gemfile",
"Gemfile.lock",
"composer.json",
"composer.lock",
"pom.xml",
"build.gradle",
"build.sbt",
"go.mod",
"go.sum",
"Cargo.toml",
"Cargo.lock",
"mix.exs",
"rebar.config",
"project.clj",
"Podfile",
"Cartfile",
"dub.json",
"dub.sdl",
# Configuration and Settings
".env",
".env.example",
".editorconfig",
"tsconfig.json",
"jsconfig.json",
".babelrc",
"babel.config.js",
".eslintrc",
".eslintignore",
".prettierrc",
".stylelintrc",
"tslint.json",
".pylintrc",
".flake8",
".rubocop.yml",
".scalafmt.conf",
".dockerignore",
".gitpod.yml",
"sonar-project.properties",
"renovate.json",
"dependabot.yml",
".pre-commit-config.yaml",
"mypy.ini",
"tox.ini",
".yamllint",
"pyrightconfig.json",
# Build and Compilation
"webpack.config.js",
"rollup.config.js",
"parcel.config.js",
"gulpfile.js",
"Gruntfile.js",
"build.xml",
"build.boot",
"project.json",
"build.cake",
"MANIFEST.in",
# Testing
"pytest.ini",
"phpunit.xml",
"karma.conf.js",
"jest.config.js",
"cypress.json",
".nycrc",
".nycrc.json",
# CI/CD
".travis.yml",
".gitlab-ci.yml",
"Jenkinsfile",
"azure-pipelines.yml",
"bitbucket-pipelines.yml",
"appveyor.yml",
"circle.yml",
".circleci/config.yml",
".github/dependabot.yml",
"codecov.yml",
".coveragerc",
# Docker and Containers
"Dockerfile",
"docker-compose.yml",
"docker-compose.override.yml",
# Cloud and Serverless
"serverless.yml",
"firebase.json",
"now.json",
"netlify.toml",
"vercel.json",
"app.yaml",
"terraform.tf",
"main.tf",
"cloudformation.yaml",
"cloudformation.json",
"ansible.cfg",
"kubernetes.yaml",
"k8s.yaml",
# Database
"schema.sql",
"liquibase.properties",
"flyway.conf",
# Framework-specific
"next.config.js",
"nuxt.config.js",
"vue.config.js",
"angular.json",
"gatsby-config.js",
"gridsome.config.js",
# API Documentation
"swagger.yaml",
"swagger.json",
"openapi.yaml",
"openapi.json",
# Development environment
".nvmrc",
".ruby-version",
".python-version",
"Vagrantfile",
# Quality and metrics
".codeclimate.yml",
"codecov.yml",
# Documentation
"mkdocs.yml",
"_config.yml",
"book.toml",
"readthedocs.yml",
".readthedocs.yaml",
# Package registries
".npmrc",
".yarnrc",
# Linting and formatting
".isort.cfg",
".markdownlint.json",
".markdownlint.yaml",
# Security
".bandit",
".secrets.baseline",
# Misc
".pypirc",
".gitkeep",
".npmignore",
]
# Normalize the lists once
NORMALIZED_ROOT_IMPORTANT_FILES = set(os.path.normpath(path) for path in ROOT_IMPORTANT_FILES)


def is_important(file_path):
    file_name = os.path.basename(file_path)
    dir_name = os.path.normpath(os.path.dirname(file_path))
    normalized_path = os.path.normpath(file_path)

    # Check for GitHub Actions workflow files
    if dir_name == os.path.normpath(".github/workflows") and file_name.endswith(".yml"):
        return True

    return normalized_path in NORMALIZED_ROOT_IMPORTANT_FILES


def filter_important_files(file_paths):
    """
    Filter a list of file paths to return only those that are commonly important in codebases.

    :param file_paths: List of file paths to check
    :return: List of file paths that match important file patterns
    """
    return list(filter(is_important, file_paths))
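For a quick sense of how these helpers behave, here is a minimal standalone sketch, with a trimmed-down stand-in for the full `ROOT_IMPORTANT_FILES` list above:

```python
import os

# Trimmed-down stand-in for the full ROOT_IMPORTANT_FILES list above.
NORMALIZED = {os.path.normpath(p) for p in ["pyproject.toml", "Dockerfile", ".gitignore"]}


def is_important(path):
    name = os.path.basename(path)
    dirname = os.path.normpath(os.path.dirname(path))
    # GitHub Actions workflows are always treated as important.
    if dirname == os.path.normpath(".github/workflows") and name.endswith(".yml"):
        return True
    return os.path.normpath(path) in NORMALIZED


paths = ["pyproject.toml", "src/app.py", ".github/workflows/ci.yml", "docs/notes.txt"]
print([p for p in paths if is_important(p)])
# ['pyproject.toml', '.github/workflows/ci.yml']
```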


@@ -8,3 +8,9 @@ model_warnings = "https://aider.chat/docs/llms/warnings.html"
token_limits = "https://aider.chat/docs/troubleshooting/token-limits.html"
llms = "https://aider.chat/docs/llms.html"
large_repos = "https://aider.chat/docs/faq.html#can-i-use-aider-in-a-large-mono-repo"
github_issues = "https://github.com/Aider-AI/aider/issues/new"
git_index_version = "https://github.com/Aider-AI/aider/issues/211"
install_properly = "https://aider.chat/docs/troubleshooting/imports.html"
analytics = "https://aider.chat/docs/more/analytics.html"
release_notes = "https://aider.chat/HISTORY.html#release-notes"
edit_formats = "https://aider.chat/docs/more/edit-formats.html"


@@ -1,5 +1,8 @@
import itertools
import os
import platform
import shlex
import shutil
import subprocess
import sys
import tempfile
@@ -10,7 +13,7 @@ import git
from aider.dump import dump  # noqa: F401

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".webp", ".pdf"}


class IgnorantTemporaryDirectory:
@@ -191,12 +194,31 @@ def split_chat_history_markdown(text, include_tool=False):
    return messages


# Copied from pip, MIT license
# https://github.com/pypa/pip/blob/b989e6ef04810bbd4033a3683020bd4ddcbdb627/src/pip/_internal/utils/entrypoints.py#L73
def get_best_invocation_for_this_python() -> str:
    """Try to figure out the best way to invoke the current Python."""
    exe = sys.executable
    exe_name = os.path.basename(exe)

    # Try to use the basename, if it's the first executable.
    found_executable = shutil.which(exe_name)
    if found_executable and os.path.samefile(found_executable, exe):
        return exe_name

    # Use the full executable name, because we couldn't find something simpler.
    return exe


def get_pip_install(args):
    cmd = [
        get_best_invocation_for_this_python(),
        "-m",
        "pip",
        "install",
        "--upgrade",
        "--upgrade-strategy",
        "only-if-needed",
    ]
    cmd += args
    return cmd
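The resulting command list can be previewed with a stub version of the helper above (the interpreter lookup is replaced by a plain parameter, so this is a sketch rather than the real `get_pip_install`):

```python
import sys


def get_pip_install(args, python=sys.executable):
    # Mirrors the helper above, with the interpreter lookup stubbed out.
    cmd = [
        python,
        "-m",
        "pip",
        "install",
        "--upgrade",
        "--upgrade-strategy",
        "only-if-needed",
    ]
    return cmd + args


print(get_pip_install(["aider-chat"], python="python3"))
# ['python3', '-m', 'pip', 'install', '--upgrade', '--upgrade-strategy', 'only-if-needed', 'aider-chat']
```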
@@ -204,7 +226,7 @@ def get_pip_install(args):
def run_install(cmd):
    print()
    print("Installing:", printable_shell_command(cmd))

    try:
        output = []
@@ -215,6 +237,8 @@ def run_install(cmd):
            text=True,
            bufsize=1,
            universal_newlines=True,
            encoding=sys.stdout.encoding,
            errors="replace",
        )

        spinner = Spinner("Installing...")
@@ -251,8 +275,12 @@ class Spinner:
        self.start_time = time.time()
        self.last_update = 0
        self.visible = False
        self.is_tty = sys.stdout.isatty()

    def step(self):
        if not self.is_tty:
            return

        current_time = time.time()
        if not self.visible and current_time - self.start_time >= 0.5:
            self.visible = True
@@ -268,7 +296,7 @@ class Spinner:
        print(f"\r{self.text} {next(self.spinner_chars)}\r{self.text} ", end="", flush=True)

    def end(self):
        if self.visible and self.is_tty:
            print("\r" + " " * (len(self.text) + 3))
@@ -281,29 +309,76 @@ def find_common_root(abs_fnames):
    return safe_abs_path(os.getcwd())


def format_tokens(count):
    if count < 1000:
        return f"{count}"
    elif count < 10000:
        return f"{count / 1000:.1f}k"
    else:
        return f"{round(count / 1000)}k"


def touch_file(fname):
    fname = Path(fname)
    try:
        fname.parent.mkdir(parents=True, exist_ok=True)
        fname.touch()
        return True
    except OSError:
        return False


def check_pip_install_extra(io, module, prompt, pip_install_cmd, self_update=False):
    if module:
        try:
            __import__(module)
            return True
        except (ImportError, ModuleNotFoundError, RuntimeError):
            pass

    cmd = get_pip_install(pip_install_cmd)

    if prompt:
        io.tool_warning(prompt)

    if self_update and platform.system() == "Windows":
        io.tool_output("Run this command to update:")
        print()
        print(printable_shell_command(cmd))  # plain print so it doesn't line-wrap
        return

    if not io.confirm_ask("Run pip install?", default="y", subject=printable_shell_command(cmd)):
        return

    success, output = run_install(cmd)
    if success:
        if not module:
            return True
        try:
            __import__(module)
            return True
        except (ImportError, ModuleNotFoundError, RuntimeError) as err:
            io.tool_error(str(err))
            pass

    io.tool_error(output)

    print()
    print("Install failed, try running this command manually:")
    print(printable_shell_command(cmd))


def printable_shell_command(cmd_list):
    """
    Convert a list of command arguments to a properly shell-escaped string.

    Args:
        cmd_list (list): List of command arguments.

    Returns:
        str: Shell-escaped command string.
    """
    if platform.system() == "Windows":
        return subprocess.list2cmdline(cmd_list)
    else:
        return shlex.join(cmd_list)
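Since `printable_shell_command` depends only on the standard library, it is easy to exercise directly; a minimal sketch:

```python
import platform
import shlex
import subprocess


def printable_shell_command(cmd_list):
    # Windows quoting rules differ from POSIX, so pick the right escaper.
    if platform.system() == "Windows":
        return subprocess.list2cmdline(cmd_list)
    return shlex.join(cmd_list)


print(printable_shell_command(["pip", "install", "aider-chat[help]"]))
# POSIX output: pip install 'aider-chat[help]'
```

Note how `shlex.join` quotes the bracketed extra so the printed command can be pasted into a shell without the brackets being interpreted as a glob.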


@@ -9,13 +9,63 @@ import aider
from aider import utils
from aider.dump import dump  # noqa: F401

VERSION_CHECK_FNAME = Path.home() / ".aider" / "caches" / "versioncheck"


def install_from_main_branch(io):
    """
    Install the latest version of aider from the main branch of the GitHub repository.
    """
    return utils.check_pip_install_extra(
        io,
        None,
        "Install the development version of aider from the main branch?",
        ["git+https://github.com/Aider-AI/aider.git"],
        self_update=True,
    )


def install_upgrade(io, latest_version=None):
    """
    Install the latest version of aider from PyPI.
    """
    if latest_version:
        new_ver_text = f"Newer aider version v{latest_version} is available."
    else:
        new_ver_text = "Install latest version of aider?"

    docker_image = os.environ.get("AIDER_DOCKER_IMAGE")
    if docker_image:
        text = f"""
{new_ver_text} To upgrade, run:

    docker pull {docker_image}
"""
        io.tool_warning(text)
        return True

    success = utils.check_pip_install_extra(
        io,
        None,
        new_ver_text,
        ["aider-chat"],
        self_update=True,
    )

    if success:
        io.tool_output("Re-run aider to use new version.")
        sys.exit()

    return


def check_version(io, just_check=False, verbose=False):
    if not just_check and VERSION_CHECK_FNAME.exists():
        day = 60 * 60 * 24
        since = time.time() - os.path.getmtime(VERSION_CHECK_FNAME)
        if 0 < since < day:
            if verbose:
                hours = since / 60 / 60
                io.tool_output(f"Too soon to check version: {hours:.1f} hours")
@@ -41,8 +91,11 @@ def check_version(io, just_check=False, verbose=False):
        io.tool_error(f"Error checking pypi for new version: {err}")
        return False
    finally:
        VERSION_CHECK_FNAME.parent.mkdir(parents=True, exist_ok=True)
        VERSION_CHECK_FNAME.touch()

    ###
    # is_update_available = True

    if just_check or verbose:
        if is_update_available:
@@ -56,27 +109,5 @@ def check_version(io, just_check=False, verbose=False):
    if not is_update_available:
        return False

    install_upgrade(io, latest_version)
    return True
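The once-a-day throttle in `check_version` can be isolated into a small helper; a minimal sketch (`checked_recently` is a hypothetical name, not aider's API):

```python
import os
import tempfile
import time
from pathlib import Path


def checked_recently(stamp: Path, max_age=24 * 60 * 60) -> bool:
    """True if the stamp file was touched within the last max_age seconds."""
    if not stamp.exists():
        return False
    since = time.time() - os.path.getmtime(stamp)
    # Guard 0 < since so a clock set backwards doesn't suppress checks forever.
    return 0 < since < max_age


stamp = Path(tempfile.mkdtemp()) / "versioncheck"
print(checked_recently(stamp))  # False: no stamp file yet
stamp.touch()
time.sleep(0.05)
print(checked_recently(stamp))  # True: just touched
```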


@@ -3,18 +3,25 @@ import os
import queue
import tempfile
import time
import warnings

from prompt_toolkit.shortcuts import prompt

from aider.llm import litellm

from .dump import dump  # noqa: F401

warnings.filterwarnings(
    "ignore", message="Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work"
)
from pydub import AudioSegment  # noqa

try:
    import soundfile as sf
except (OSError, ModuleNotFoundError):
    sf = None


class SoundDeviceError(Exception):
    pass
@@ -27,7 +34,7 @@ class Voice:
    threshold = 0.15

    def __init__(self, audio_format="wav"):
        if sf is None:
            raise SoundDeviceError
        try:
@@ -37,6 +44,9 @@ class Voice:
            self.sd = sd
        except (OSError, ModuleNotFoundError):
            raise SoundDeviceError
        if audio_format not in ["wav", "mp3", "webm"]:
            raise ValueError(f"Unsupported audio format: {audio_format}")
        self.audio_format = audio_format

    def callback(self, indata, frames, time, status):
        """This is called (from a separate thread) for each audio block."""
@@ -72,16 +82,24 @@ class Voice:
            return self.raw_record_and_transcribe(history, language)
        except KeyboardInterrupt:
            return
        except SoundDeviceError as e:
            print(f"Error: {e}")
            print("Please ensure you have a working audio input device connected and try again.")
            return

    def raw_record_and_transcribe(self, history, language):
        self.q = queue.Queue()

        temp_wav = tempfile.mktemp(suffix=".wav")

        try:
            sample_rate = int(self.sd.query_devices(None, "input")["default_samplerate"])
        except (TypeError, ValueError):
            sample_rate = 16000  # fallback to 16kHz if unable to query device
        except self.sd.PortAudioError:
            raise SoundDeviceError(
                "No audio input device detected. Please check your audio settings and try again."
            )

        self.start_time = time.time()
@@ -89,17 +107,31 @@ class Voice:
            with self.sd.InputStream(samplerate=sample_rate, channels=1, callback=self.callback):
                prompt(self.get_prompt, refresh_interval=0.1)
        except self.sd.PortAudioError as err:
            raise SoundDeviceError(f"Error accessing audio input device: {err}")

        with sf.SoundFile(temp_wav, mode="x", samplerate=sample_rate, channels=1) as file:
            while not self.q.empty():
                file.write(self.q.get())

        if self.audio_format != "wav":
            filename = tempfile.mktemp(suffix=f".{self.audio_format}")
            audio = AudioSegment.from_wav(temp_wav)
            audio.export(filename, format=self.audio_format)
            os.remove(temp_wav)
        else:
            filename = temp_wav

        with open(filename, "rb") as fh:
            try:
                transcript = litellm.transcription(
                    model="whisper-1", file=fh, prompt=history, language=language
                )
            except Exception as err:
                print(f"Unable to transcribe (unknown): {err}")
                return

        if self.audio_format != "wav":
            os.remove(filename)

        text = transcript.text
        return text


@@ -1,20 +1,347 @@
---
title: Release history
parent: More info
nav_order: 900
highlight_image: /assets/blame.jpg
description: Release notes and stats on aider writing its own code.
---

# Release history

{% include blame.md %}

The above
[stats are based on the git commit history](/docs/faq.html#how-are-the-aider-wrote-xx-of-code-stats-computed)
of the aider repo.

## Release notes

<!--[[[cog
# This page is a copy of HISTORY.md, adding the front matter above.
text = open("HISTORY.md").read()
text = text.replace("# Release history", "")
cog.out(text)
]]]-->
### main branch
- PDF support for Sonnet and Gemini models.
- Set cwd to repo root when running shell commands.
- Improved error handling for failed .gitignore file operations.
- Improved error handling for input history file permissions.
- Improved error handling for analytics file access.
- Aider wrote 85% of the code in this release.
### Aider v0.65.1
- Bugfix to `--alias`.
### Aider v0.65.0
- Added `--alias` config to define [custom model aliases](https://aider.chat/docs/config/model-aliases.html).
- Added `--[no-]detect-urls` flag to disable detecting and offering to scrape URLs found in the chat.
- Ollama models now default to an 8k context window.
- Added [RepoMap support for Dart language](https://aider.chat/docs/languages.html) by @malkoG.
- Ask 2.5% of users if they want to opt-in to [analytics](https://aider.chat/docs/more/analytics.html).
- Skip suggesting files that share names with files already in chat.
- `/editor` returns and prefill the file content into the prompt, so you can use `/editor` to compose messages that start with `/commands`, etc.
- Enhanced error handling for analytics.
- Improved handling of UnknownEditFormat exceptions with helpful documentation links.
- Bumped dependencies to pick up grep-ast 0.4.0 for Dart language support.
- Aider wrote 81% of the code in this release.
### Aider v0.64.1
- Disable streaming for o1 on OpenRouter.
### Aider v0.64.0
- Added [`/editor` command](https://aider.chat/docs/usage/commands.html) to open system editor for writing prompts, by @thehunmonkgroup.
- Full support for `gpt-4o-2024-11-20`.
- Stream o1 models by default.
- `/run` and suggested shell commands are less mysterious and now confirm that they "Added XX lines of output to the chat."
- Ask 1% of users if they want to opt-in to [analytics](https://aider.chat/docs/more/analytics.html).
- Added support for [optional multiline input tags](https://aider.chat/docs/usage/commands.html#entering-multi-line-chat-messages) with matching closing tags.
- Improved [model settings configuration](https://aider.chat/docs/config/adv-model-settings.html#global-extra-params) with support for global `extra_params` for `litellm.completion()`.
- Architect mode now asks to add files suggested by the LLM.
- Fixed bug in fuzzy model name matching.
- Added Timeout exception to handle API provider timeouts.
- Added `--show-release-notes` to control release notes display on first run of new version.
- Save empty dict to cache file on model metadata download failure, to delay retry.
- Improved error handling and code formatting.
- Aider wrote 74% of the code in this release.
### Aider v0.63.2
- Fixed bug in fuzzy model name matching when litellm provider info is missing.
- Modified model metadata file loading to allow override of resource file.
- Allow recursive loading of dirs using `--read`.
- Updated dependency versions to pick up litellm fix for ollama models.
- Added exponential backoff retry when writing files to handle editor file locks.
- Updated Qwen 2.5 Coder 32B model configuration.
### Aider v0.63.1
- Fixed bug in git ignored file handling.
- Improved error handling for git operations.
### Aider v0.63.0
- Support for Qwen 2.5 Coder 32B.
- `/web` command just adds the page to the chat, without triggering an LLM response.
- Improved prompting for the user's preferred chat language.
- Improved handling of LiteLLM exceptions.
- Bugfix for double-counting tokens when reporting cache stats.
- Bugfix for the LLM creating new files.
- Other small bug fixes.
- Aider wrote 55% of the code in this release.
### Aider v0.62.0
- Full support for Claude 3.5 Haiku
- Scored 75% on [aider's code editing leaderboard](https://aider.chat/docs/leaderboards/).
- Almost as good as Sonnet at much lower cost.
- Launch with `--haiku` to use it.
- Easily apply file edits from ChatGPT, Claude or other web apps
- Chat with ChatGPT or Claude via their web app.
- Give it your source files and ask for the changes you want.
- Use the web app's "copy response" button to copy the entire reply from the LLM.
- Run `aider --apply-clipboard-edits file-to-edit.js`.
- Aider will edit your file with the LLM's changes.
- Bugfix for creating new files.
- Aider wrote 84% of the code in this release.
### Aider v0.61.0
- Load and save aider slash-commands to files:
- `/save <fname>` command will make a file of `/add` and `/read-only` commands that recreate the current file context in the chat.
- `/load <fname>` will replay the commands in the file.
- You can use `/load` to run any arbitrary set of slash-commands, not just `/add` and `/read-only`.
- Use `--load <fname>` to run a list of commands on launch, before the interactive chat begins.
- Anonymous, opt-in [analytics](https://aider.chat/docs/more/analytics.html) with no personal data sharing.
- Aider follows litellm's `supports_vision` attribute to enable image support for models.
- Bugfix for when diff mode flexibly handles the model using the wrong filename.
- Displays filenames in sorted order for `/add` and `/read-only`.
- New `--no-fancy-input` switch disables prompt toolkit input, now still available with `--no-pretty`.
- Override browser config with `--no-browser` or `--no-gui`.
- Offer to open documentation URLs when errors occur.
- Properly support all o1 models, regardless of provider.
- Improved layout of filenames above input prompt.
- Better handle corrupted repomap tags cache.
- Improved handling of API errors, especially when accessing the weak model.
- Aider wrote 68% of the code in this release.
### Aider v0.60.1
- Enable image support for Sonnet 10/22.
- Display filenames in sorted order.
### Aider v0.60.0
- Full support for Sonnet 10/22, the new SOTA model on aider's code editing benchmark.
- Aider uses Sonnet 10/22 by default.
- Improved formatting of added and read-only files above chat prompt, by @jbellis.
- Improved support for o1 models by more flexibly parsing their nonconforming code edit replies.
- Corrected diff edit format prompt that only the first match is replaced.
- Stronger whole edit format prompt asking for clean file names.
- Now offers to add `.env` to the `.gitignore` file.
- Ships with a small model metadata json file to handle models not yet updated in litellm.
- Model settings for o1 models on azure.
- Bugfix to properly include URLs in `/help` RAG results.
- Aider wrote 49% of the code in this release.
### Aider v0.59.1
- Check for obsolete `yes: true` in yaml config, show helpful error.
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
### Aider v0.59.0
- Improvements to `/read-only`:
- Now supports shell-style auto-complete of the full file system.
- Still auto-completes the full paths of the repo files like `/add`.
- Now supports globs like `src/**/*.py`
- Renamed `--yes` to `--yes-always`.
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
- Existing YAML and .env files will need to be updated.
- Can still abbreviate to `--yes` on the command line.
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
- `/settings` now includes the same announcement lines that would print at launch.
- Sanity checks the `--editor-model` on launch now, same as main and weak models.
- Added `--skip-sanity-check-repo` switch to speedup launch in large repos.
- Bugfix so architect mode handles Control-C properly.
- Repo-map is deterministic now, with improved caching logic.
- Improved commit message prompt.
- Aider wrote 77% of the code in this release.
### Aider v0.58.1
- Fixed bug where cache warming pings caused subsequent user messages to trigger a tight loop of LLM requests.
### Aider v0.58.0
- [Use a pair of Architect/Editor models for improved coding](https://aider.chat/2024/09/26/architect.html)
- Use a strong reasoning model like o1-preview as your Architect.
- Use a cheaper, faster model like gpt-4o as your Editor.
- New `--o1-preview` and `--o1-mini` shortcuts.
- Support for new Gemini 002 models.
- Better support for Qwen 2.5 models.
- Many confirmation questions can be skipped for the rest of the session with "(D)on't ask again" response.
- Autocomplete for `/read-only` supports the entire filesystem.
- New settings for completion menu colors.
- New `/copy` command to copy the last LLM response to the clipboard.
- Renamed `/clipboard` to `/paste`.
- Will now follow HTTP redirects when scraping urls.
- New `--voice-format` switch to send voice audio as wav/mp3/webm, by @mbailey.
- ModelSettings takes `extra_params` dict to specify any extras to pass to `litellm.completion()`.
- Support for cursor shapes when in vim mode.
- Numerous bug fixes.
- Aider wrote 53% of the code in this release.
### Aider v0.57.1
- Fixed dependency conflict between aider-chat[help] and [playwright].
### Aider v0.57.0
- Support for OpenAI o1 models:
- o1-preview now works well with diff edit format.
- o1-preview with diff now matches SOTA leaderboard result with whole edit format.
- `aider --model o1-mini`
- `aider --model o1-preview`
- On Windows, `/run` correctly uses PowerShell or cmd.exe.
- Support for new 08-2024 Cohere models, by @jalammar.
- Can now recursively add directories with `/read-only`.
- User input prompts now fall back to simple `input()` if `--no-pretty` or a Windows console is not available.
- Improved sanity check of git repo on startup.
- Improvements to prompt cache chunking strategy.
- Removed "No changes made to git tracked files".
- Numerous bug fixes for corner case crashes.
- Updated all dependency versions.
- Aider wrote 70% of the code in this release.
### Aider v0.56.0
- Enables prompt caching for Sonnet via OpenRouter by @fry69
- Enables 8k output tokens for Sonnet via VertexAI and DeepSeek V2.5.
- New `/report` command to open your browser with a pre-populated GitHub Issue.
- New `--chat-language` switch to set the spoken language.
- Now `--[no-]suggest-shell-commands` controls both prompting for and offering to execute shell commands.
- Check key imports on launch, provide helpful error message if dependencies aren't available.
- Renamed `--models` to `--list-models` by @fry69.
- Numerous bug fixes for corner case crashes.
- Aider wrote 56% of the code in this release.
### Aider v0.55.0
- Only print the pip command when self updating on Windows, without running it.
- Converted many error messages to warning messages.
- Added `--tool-warning-color` setting.
- Blanket catch and handle git errors in any `/command`.
- Catch and handle glob errors in `/add`, errors writing files.
- Disabled built in linter for typescript.
- Catch and handle terminals which don't support pretty output.
- Catch and handle playwright and pandoc errors.
- Catch `/voice` transcription exceptions, show the WAV file so the user can recover it.
- Aider wrote 53% of the code in this release.
### Aider v0.54.12
- Switched to `vX.Y.Z.dev` version naming.
### Aider v0.54.11
- Improved printed pip command output on Windows.
### Aider v0.54.10
- Bugfix to test command in platform info.
### Aider v0.54.9
- Include important devops files in the repomap.
- Print quoted pip install commands to the user.
- Adopt setuptools_scm to provide dev versions with git hashes.
- Share active test and lint commands with the LLM.
- Catch and handle most errors creating new files, reading existing files.
- Catch and handle most git errors.
- Added --verbose debug output for shell commands.
### Aider v0.54.8
- Startup QOL improvements:
- Sanity check the git repo and exit gracefully on problems.
- Pause for confirmation after model sanity check to allow user to review warnings.
- Bug fix for shell commands on Windows.
- Do not fuzzy match filenames when LLM is creating a new file, by @ozapinq
- Numerous corner case bug fixes submitted via new crash report -> GitHub Issue feature.
- Crash reports now include python version, OS, etc.
### Aider v0.54.7
- Offer to submit a GitHub issue pre-filled with uncaught exception info.
- Bugfix for infinite output.
### Aider v0.54.6
- New `/settings` command to show active settings.
- Only show cache warming status update if `--verbose`.
### Aider v0.54.5
- Bugfix for shell commands on Windows.
- Refuse to make git repo in $HOME, warn user.
- Don't ask again in current session about a file the user has said not to add to the chat.
- Added `--update` as an alias for `--upgrade`.
### Aider v0.54.4
- Bugfix to completions for `/model` command.
- Bugfix: revert home dir special case.
### Aider v0.54.3
- Dependency `watchdog<5` for docker image.
### Aider v0.54.2
- When users launch aider in their home dir, help them find/create a repo in a subdir.
- Added missing `pexpect` dependency.
### Aider v0.54.0
- Added model settings for `gemini/gemini-1.5-pro-exp-0827` and `gemini/gemini-1.5-flash-exp-0827`.
- Shell and `/run` commands can now be interactive in environments where a pty is available.
- Optionally share output of suggested shell commands back to the LLM.
- New `--[no-]suggest-shell-commands` switch to configure shell commands.
- Performance improvements for autocomplete in large/mono repos.
- New `--upgrade` switch to install latest version of aider from pypi.
- Bugfix to `--show-prompt`.
- Disabled automatic reply to the LLM on `/undo` for all models.
- Removed pager from `/web` output.
- Aider wrote 64% of the code in this release.
### Aider v0.53.0
- [Keep your prompt cache from expiring](https://aider.chat/docs/usage/caching.html#preventing-cache-expiration) with `--cache-keepalive-pings`.
- Pings the API every 5min to keep the cache warm.
- You can now bulk accept/reject a series of add url and run shell confirmations.
- Improved matching of filenames from S/R blocks with files in chat.
- Stronger prompting for Sonnet to make edits in code chat mode.
- Stronger prompting for the LLM to specify full file paths.
- Improved shell command prompting.
- Weak model now uses `extra_headers`, to support Anthropic beta features.
- New `--install-main-branch` to update to the latest dev version of aider.
- Improved error messages on attempt to add not-git subdir to chat.
- Show model metadata info on `--verbose`.
- Improved warnings when LLMs env variables aren't set.
- Bugfix to windows filenames which contain `\_`.
- Aider wrote 59% of the code in this release.
### Aider v0.52.1
- Bugfix for NameError when applying edits.
### Aider v0.52.0
@@ -536,7 +863,7 @@ cog.out(text)

### Aider v0.14.0

- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/Aider-AI/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9

@@ -581,7 +908,7 @@ cog.out(text)

- Added `/git` command to run git from inside aider chats.
- Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
- Create a `.gitignore` with `.aider*` to prevent users from accidentally adding aider files to git.
- Check pypi for newer versions and notify user.
- Updated keyboard interrupt logic so that 2 ^C in 2 seconds always forces aider to exit.
- Provide GPT with detailed error if it makes a bad edit block, ask for a retry.

View file

@@ -24,7 +24,7 @@ exclude:
aux_links: aux_links:
"GitHub": "GitHub":
- "https://github.com/paul-gauthier/aider" - "https://github.com/Aider-AI/aider"
"Discord": "Discord":
- "https://discord.gg/Tv2uQnR88V" - "https://discord.gg/Tv2uQnR88V"
"Blog": "Blog":
@@ -32,13 +32,17 @@ aux_links:
nav_external_links: nav_external_links:
- title: "GitHub" - title: "GitHub"
url: "https://github.com/paul-gauthier/aider" url: "https://github.com/Aider-AI/aider"
- title: "Discord" - title: "Discord"
url: "https://discord.gg/Tv2uQnR88V" url: "https://discord.gg/Tv2uQnR88V"
repository: paul-gauthier/aider repository: Aider-AI/aider
callouts: callouts:
tip: tip:
title: Tip title: Tip
color: green color: green
note:
title: Note
color: yellow

View file

@@ -0,0 +1,492 @@
- dirname: 2024-09-25-21-17-19--architect-sonnet-sonnet-diff
test_cases: 133
model: claude-3.5-sonnet
editor_model: claude-3.5-sonnet
editor_edit_format: diff
edit_format: architect
commit_hash: c18d6a8-dirty
pass_rate_1: 62.4
pass_rate_2: 80.5
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 183
lazy_comments: 6
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 25.1
total_cost: 4.9502
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: claude-3.5-sonnet
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
pass_rate_2: 77.4
percent_cases_well_formed: 99.2
error_outputs: 23
released: 2024-06-20
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --sonnet
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
total_cost: 3.6346
- dirname: 2024-09-25-21-25-01--architect-o1mini-4o-jr-diff
test_cases: 133
model: o1-mini
editor_model: gpt-4o
editor_edit_format: diff
edit_format: architect
commit_hash: 3f682ed-dirty, 25e833b
pass_rate_1: 51.1
pass_rate_2: 70.7
percent_cases_well_formed: 100.0
error_outputs: 12
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 214
lazy_comments: 6
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 23.7
total_cost: 9.3158
- dirname: 2024-09-26-15-05-58--architect-o1mini-deep-jr-whole
test_cases: 133
model: o1-mini
edit_format: architect
commit_hash: 1676653-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 51.9
pass_rate_2: 71.4
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 199
lazy_comments: 11
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-mini
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 48.2
total_cost: 5.6069
- dirname: 2024-09-25-21-33-40--architect-4o-4o-jr-diff
test_cases: 133
model: gpt-4o
editor_model: gpt-4o
editor_edit_format: diff
edit_format: architect
commit_hash: 9f3cd92
pass_rate_1: 56.4
pass_rate_2: 75.2
percent_cases_well_formed: 100.0
error_outputs: 13
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 207
lazy_comments: 8
syntax_errors: 1
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model gpt-4o
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 18.2
total_cost: 6.0918
- dirname: 2024-09-21-16-45-11--o1-preview-flex-sr-markers
test_cases: 133
model: o1-preview
edit_format: diff
commit_hash: 5493654-dirty
pass_rate_1: 57.9
pass_rate_2: 79.7
percent_cases_well_formed: 93.2
error_outputs: 11
num_malformed_responses: 11
num_with_malformed_responses: 9
user_asks: 3
lazy_comments: 0
syntax_errors: 10
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 80.9
total_cost: 63.9190
- dirname: 2024-09-25-21-39-05--architect-o1preview-4o-jr-diff
test_cases: 133
model: o1-preview
editor_model: gpt-4o
editor_edit_format: diff
edit_format: architect
commit_hash: 9f3cd92
pass_rate_1: 63.2
pass_rate_2: 80.5
percent_cases_well_formed: 100.0
error_outputs: 23
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 191
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 42.3
total_cost: 39.3766
- dirname: 2024-09-25-21-52-42--architect-o1preview-sonnet-jr-diff
test_cases: 133
model: o1-preview
editor_model: claude-3.5-sonnet
editor_edit_format: diff
edit_format: architect
commit_hash: 9f3cd92
pass_rate_1: 60.9
pass_rate_2: 82.7
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 180
lazy_comments: 3
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 44.9
total_cost: 37.6192
- dirname: 2024-09-21-16-40-56--o1-mini-flex-sr-markers
test_cases: 36
model: o1-mini
edit_format: diff
commit_hash: 5493654
pass_rate_1: 50.0
pass_rate_2: 61.1
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 3
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model o1-mini
date: 2024-09-21
versions: 0.56.1.dev
seconds_per_case: 26.7
total_cost: 2.4226
- dirname: 2024-09-25-23-12-14--architect-o1mini-deep-jr-diff
test_cases: 133
model: o1-mini
edit_format: architect
commit_hash: 9f3cd92-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 48.9
pass_rate_2: 69.2
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 202
lazy_comments: 12
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-mini
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 52.2
total_cost: 5.7927
- dirname: 2024-09-25-23-18-16--architect-o1preview-deep-jr-diff
test_cases: 133
model: o1-preview
edit_format: architect
commit_hash: 9f3cd92-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 64.7
pass_rate_2: 80.5
percent_cases_well_formed: 100.0
error_outputs: 5
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 180
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 73.2
total_cost: 35.7887
- dirname: 2024-09-25-23-30-36--architect-o1preview-deep-jr-whole
test_cases: 133
model: o1-preview
edit_format: architect
commit_hash: 9f3cd92-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 63.9
pass_rate_2: 85.0
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 181
lazy_comments: 12
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-preview
date: 2024-09-25
versions: 0.57.2.dev
seconds_per_case: 67.4
total_cost: 35.3152
- dirname: 2024-09-26-15-15-17--architect-sonnet-deep-jr-whole
test_cases: 133
model: claude-3.5-sonnet
edit_format: architect
commit_hash: bc1559f-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 61.7
pass_rate_2: 78.9
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 184
lazy_comments: 5
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 37.2
total_cost: 2.1510
- dirname: 2024-09-26-15-33-28--costs-gpt4o-diff
test_cases: 133
model: gpt-4o
edit_format: diff
commit_hash: 89aa385-dirty
pass_rate_1: 55.6
pass_rate_2: 71.4
percent_cases_well_formed: 97.7
error_outputs: 5
num_malformed_responses: 5
num_with_malformed_responses: 3
user_asks: 10
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model gpt-4o
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 9.7
total_cost: 3.8088
- dirname: 2024-09-26-15-41-08--architect-4o-deep-jr-whole
test_cases: 133
model: gpt-4o
edit_format: architect
commit_hash: 89aa385-dirty
editor_model: deepseek
editor_edit_format: whole
pass_rate_1: 60.9
pass_rate_2: 73.7
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 187
lazy_comments: 12
syntax_errors: 5
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model gpt-4o
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 38.0
total_cost: 2.4737
- dirname: 2024-09-26-15-54-08--architect-4o-deep-jr-diff
test_cases: 133
model: gpt-4o
edit_format: architect
commit_hash: 89aa385-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 57.1
pass_rate_2: 74.4
percent_cases_well_formed: 100.0
error_outputs: 4
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 192
lazy_comments: 6
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gpt-4o
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 44.0
total_cost: 2.5498
- dirname: 2024-09-26-16-06-39--architect-sonnet-deep-jr-diff
test_cases: 133
model: claude-3.5-sonnet
edit_format: architect
commit_hash: 89aa385-dirty
editor_model: deepseek
editor_edit_format: diff
pass_rate_1: 61.7
pass_rate_2: 78.9
percent_cases_well_formed: 100.0
error_outputs: 2
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 184
lazy_comments: 2
syntax_errors: 9
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-09-26
versions: 0.57.2.dev
seconds_per_case: 43.2
total_cost: 2.1488
- dirname: 2024-09-27-18-15-32--architect-4omini-4omini
test_cases: 133
model: gpt-4o-mini
edit_format: architect
commit_hash: 0bd8058-dirty
editor_model: gpt-4o-mini
editor_edit_format: whole
pass_rate_1: 43.6
pass_rate_2: 60.2
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 208
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model gpt-4o-mini
date: 2024-09-27
versions: 0.57.2.dev
seconds_per_case: 21.0
total_cost: 0.1527
- dirname: 2024-07-18-18-57-46--gpt-4o-mini-whole
test_cases: 133
model: gpt-4o-mini
edit_format: whole
commit_hash: d31eef3-dirty
pass_rate_1: 40.6
pass_rate_2: 55.6
released: 2024-07-18
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gpt-4o-mini
date: 2024-07-18
versions: 0.44.1-dev
seconds_per_case: 7.8
total_cost: 0.0916
- dirname: 2024-09-29-22-35-36--architect-o1preview-o1mini-whole
test_cases: 133
model: o1-preview
edit_format: architect
commit_hash: 53ca83b
editor_model: o1-mini
editor_edit_format: whole
pass_rate_1: 65.4
pass_rate_2: 85.0
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 179
lazy_comments: 4
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-preview
date: 2024-09-29
versions: 0.58.1.dev
seconds_per_case: 39.7
total_cost: 36.2078
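The records above are plain YAML, one mapping per benchmark run, so they are easy to load and rank programmatically. A minimal sketch of that kind of analysis, using values copied from two of the entries above (in practice the whole file would be parsed, e.g. with PyYAML's `yaml.safe_load`, rather than hand-transcribed like this):

```python
# Two benchmark records transcribed from the YAML entries above.
# pass_rate_2 is the final pass rate; total_cost is the run cost in USD.
records = [
    {"model": "o1-preview (architect, o1-mini editor)",
     "pass_rate_2": 85.0, "total_cost": 36.2078},
    {"model": "claude-3.5-sonnet (diff)",
     "pass_rate_2": 77.4, "total_cost": 3.6346},
]

# Rank runs by final pass rate, best first.
ranked = sorted(records, key=lambda r: r["pass_rate_2"], reverse=True)
for r in ranked:
    print(f'{r["model"]}: {r["pass_rate_2"]}% at ${r["total_cost"]:.2f}')
```

This surfaces the trade-off visible in the data: the architect runs score higher but can cost an order of magnitude more than a single-model diff run.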

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@@ -0,0 +1,186 @@
- dirname: 2024-07-18-18-57-46--gpt-4o-mini-whole
test_cases: 133
model: gpt-4o-mini (whole)
edit_format: whole
commit_hash: d31eef3-dirty
pass_rate_1: 40.6
pass_rate_2: 55.6
released: 2024-07-18
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model gpt-4o-mini
date: 2024-07-18
versions: 0.44.1-dev
seconds_per_case: 7.8
total_cost: 0.0916
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: claude-3.5-sonnet (diff)
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
pass_rate_2: 77.4
percent_cases_well_formed: 99.2
error_outputs: 23
released: 2024-06-20
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --sonnet
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
total_cost: 3.6346
- dirname: 2024-08-06-18-28-39--gpt-4o-2024-08-06-diff-again
test_cases: 133
model: gpt-4o-2024-08-06 (diff)
edit_format: diff
commit_hash: ed9ed89
pass_rate_1: 57.1
pass_rate_2: 71.4
percent_cases_well_formed: 98.5
error_outputs: 18
num_malformed_responses: 2
num_with_malformed_responses: 2
user_asks: 10
lazy_comments: 0
syntax_errors: 6
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 5
released: 2024-08-06
command: aider --model openai/gpt-4o-2024-08-06
date: 2024-08-06
versions: 0.48.1-dev
seconds_per_case: 6.5
total_cost: 0.0000
- dirname: 2024-09-12-19-57-35--o1-mini-whole
test_cases: 133
model: o1-mini (whole)
edit_format: whole
commit_hash: 36fa773-dirty, 291b456
pass_rate_1: 49.6
pass_rate_2: 70.7
percent_cases_well_formed: 90.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 17
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 103.0
total_cost: 5.3725
- dirname: 2024-09-12-20-56-22--o1-mini-diff
test_cases: 133
model: o1-mini (diff)
edit_format: diff
commit_hash: 4598a37-dirty, 291b456, 752e823-dirty
pass_rate_1: 45.1
pass_rate_2: 62.4
percent_cases_well_formed: 85.7
error_outputs: 26
num_malformed_responses: 26
num_with_malformed_responses: 19
user_asks: 2
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model o1-mini --edit-format diff
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 177.7
total_cost: 11.1071
- dirname: 2024-09-05-21-26-49--sonnet-whole-sep5
test_cases: 133
model: claude-3.5-sonnet (whole)
edit_format: whole
commit_hash: 8cfdcbd
pass_rate_1: 55.6
pass_rate_2: 75.2
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet --edit-format whole
date: 2024-09-05
versions: 0.55.1.dev
seconds_per_case: 15.2
total_cost: 2.3502
- dirname: 2024-09-12-22-44-14--o1-preview-diff
test_cases: 133
model: o1-preview (diff)
edit_format: diff
commit_hash: 72f52bd
pass_rate_1: 56.4
pass_rate_2: 75.2
percent_cases_well_formed: 84.2
error_outputs: 27
num_malformed_responses: 27
num_with_malformed_responses: 21
user_asks: 8
lazy_comments: 0
syntax_errors: 7
indentation_errors: 3
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model o1-preview
date: 2024-09-12
versions: 0.56.1.dev
seconds_per_case: 95.8
total_cost: 71.7927
- dirname: 2024-09-13-02-13-59--o1-preview-whole
test_cases: 133
model: o1-preview (whole)
edit_format: whole
commit_hash: 72f52bd-dirty
pass_rate_1: 58.6
pass_rate_2: 79.7
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model o1-preview
date: 2024-09-13
versions: 0.56.1.dev
seconds_per_case: 47.4
total_cost: 38.0612

View file

@@ -0,0 +1,322 @@
- dirname: 2024-11-09-11-09-15--Qwen2.5-Coder-32B-Instruct
test_cases: 133
model: "HuggingFace via GLHF: BF16"
released: 2024-11-12
edit_format: diff
commit_hash: ec9982a
pass_rate_1: 59.4
pass_rate_2: 71.4
percent_cases_well_formed: 94.7
error_outputs: 17
num_malformed_responses: 17
num_with_malformed_responses: 7
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct --openai-api-base https://glhf.chat/api/openai/v1
date: 2024-11-09
versions: 0.59.2.dev
seconds_per_case: 22.5
total_cost: 0.0000
- dirname: 2024-11-22-18-56-13--ollama-qwen2.5-coder:32b-instruct-fp16
test_cases: 132
model: "Ollama: fp16"
edit_format: diff
commit_hash: f06452c-dirty, 6a0a97c-dirty, 4e9ae16-dirty, 5506d0f-dirty
pass_rate_1: 58.3
pass_rate_2: 71.4
percent_cases_well_formed: 90.2
error_outputs: 27
num_malformed_responses: 26
num_with_malformed_responses: 13
user_asks: 2
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model ollama/qwen2.5-coder:32b-instruct-fp16
date: 2024-11-22
versions: 0.64.2.dev
seconds_per_case: 119.6
total_cost: 0.0000
- dirname: 2024-11-22-14-53-26--hyperbolic-qwen25coder32binstruct
test_cases: 133
model: "Hyperbolic: BF16"
edit_format: diff
commit_hash: f9ef161, 17aef7b-dirty
pass_rate_1: 57.9
pass_rate_2: 69.2
percent_cases_well_formed: 91.7
error_outputs: 30
num_malformed_responses: 29
num_with_malformed_responses: 11
user_asks: 9
lazy_comments: 0
syntax_errors: 4
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openai/Qwen/Qwen2.5-Coder-32B-Instruct --openai-api-base https://api.hyperbolic.xyz/v1/
date: 2024-11-22
versions: 0.64.2.dev
seconds_per_case: 33.2
total_cost: 0.0000
- dirname: 2024-11-22-17-53-35--qwen25-coder-32b-Instruct-4bit
test_cases: 133
model: "mlx-community: 4bit"
edit_format: diff
commit_hash: a16dcab-dirty
pass_rate_1: 60.2
pass_rate_2: 72.2
percent_cases_well_formed: 88.7
error_outputs: 31
num_malformed_responses: 30
num_with_malformed_responses: 15
user_asks: 6
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
test_timeouts: 0
command: aider --model openai/mlx-community/Qwen2.5-Coder-32B-Instruct-4bit
date: 2024-11-23
versions: 0.64.2.dev
seconds_per_case: 53.4
total_cost: 0.0000
- dirname: 2024-11-23-15-07-20--qwen25-coder-32b-Instruct-8bit
test_cases: 133
model: "mlx-community: 8bit"
edit_format: diff
commit_hash: a16dcab-dirty
pass_rate_1: 59.4
pass_rate_2: 72.2
percent_cases_well_formed: 92.5
error_outputs: 20
num_malformed_responses: 15
num_with_malformed_responses: 10
user_asks: 7
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 5
test_timeouts: 2
command: aider --model openai/mlx-community/Qwen2.5-Coder-32B-Instruct-8bit
date: 2024-11-23
versions: 0.64.2.dev
seconds_per_case: 98.4
total_cost: 0.0000
- dirname: 2024-11-24-22-18-18--or-all-or-fixed-blank-messages2
test_cases: 133
model: "OpenRouter: multiple"
edit_format: diff
commit_hash: 0c59d32
pass_rate_1: 57.1
pass_rate_2: 67.7
percent_cases_well_formed: 95.5
error_outputs: 56
num_malformed_responses: 10
num_with_malformed_responses: 6
user_asks: 14
lazy_comments: 0
syntax_errors: 6
indentation_errors: 0
exhausted_context_windows: 3
test_timeouts: 1
command: aider --model openrouter/qwen/qwen-2.5-coder-32b-instruct
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 21.2
total_cost: 0.1420
- dirname: 2024-11-23-21-08-53--ollama-qwen2.5-coder:32b-instruct-q4_K_M-8kctx
test_cases: 133
model: "Ollama: q4_K_M"
edit_format: diff
commit_hash: baa1335-dirty, e63df83-dirty, ff8c1aa-dirty
pass_rate_1: 54.9
pass_rate_2: 66.9
percent_cases_well_formed: 94.0
error_outputs: 21
num_malformed_responses: 21
num_with_malformed_responses: 8
user_asks: 5
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model ollama/qwen2.5-coder:32b-instruct-q4_K_M
date: 2024-11-23
versions: 0.64.2.dev
seconds_per_case: 35.7
total_cost: 0.0000
- dirname: 2024-11-24-02-23-32--deepinfra-qwen-diff
test_cases: 133
model: "Deepinfra: BF16"
edit_format: diff
commit_hash: bb78e2f
pass_rate_1: 58.6
pass_rate_2: 72.2
percent_cases_well_formed: 94.7
error_outputs: 15
num_malformed_responses: 13
num_with_malformed_responses: 7
user_asks: 3
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 2
test_timeouts: 3
command: aider --model deepinfra/Qwen/Qwen2.5-Coder-32B-Instruct
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 17.5
total_cost: 0.0000
- dirname: 2024-11-24-04-12-58--fireworks-qwen-diff
test_cases: 133
model: "Fireworks: unknown"
edit_format: diff
commit_hash: 757eac0
pass_rate_1: 57.9
pass_rate_2: 72.2
percent_cases_well_formed: 94.0
error_outputs: 23
num_malformed_responses: 19
num_with_malformed_responses: 8
user_asks: 8
lazy_comments: 0
syntax_errors: 6
indentation_errors: 0
exhausted_context_windows: 4
test_timeouts: 1
command: aider --model fireworks_ai/accounts/fireworks/models/qwen2p5-coder-32b-instruct
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 10.4
total_cost: 0.5759
- dirname: 2024-11-24-02-04-59--ollama-qwen2.5-coder:32b-instruct-q2_K-8kctx
test_cases: 133
model: "Ollama: q2_K"
edit_format: diff
commit_hash: 757eac0, bb78e2f, 8d0ba40-dirty, 1d09e96
pass_rate_1: 48.9
pass_rate_2: 61.7
percent_cases_well_formed: 91.7
error_outputs: 32
num_malformed_responses: 32
num_with_malformed_responses: 11
user_asks: 8
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model ollama/qwen2.5-coder:32b-instruct-q2_K
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 97.8
total_cost: 0.0000
- dirname: 2024-11-24-14-56-49--qwen25-32b-or-fireworks
test_cases: 133
model: "Fireworks via OpenRouter: unknown"
edit_format: diff
commit_hash: c2f184f
pass_rate_1: 55.6
pass_rate_2: 67.7
percent_cases_well_formed: 94.0
error_outputs: 39
num_malformed_responses: 24
num_with_malformed_responses: 8
user_asks: 13
lazy_comments: 0
syntax_errors: 1
indentation_errors: 1
exhausted_context_windows: 7
test_timeouts: 4
command: aider --model openrouter/qwen/qwen-2.5-coder-32b-instruct
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 16.1
total_cost: 0.1391
- dirname: 2024-11-24-22-03-19--or-hyperbolic-or-fixed-blank-messages2
test_cases: 133
model: "Hyperbolic via OpenRouter: BF16"
edit_format: diff
commit_hash: 0c59d32
pass_rate_1: 55.6
pass_rate_2: 68.4
percent_cases_well_formed: 89.5
error_outputs: 28
num_malformed_responses: 24
num_with_malformed_responses: 14
user_asks: 29
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 4
test_timeouts: 1
command: aider --model openrouter/qwen/qwen-2.5-coder-32b-instruct
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 41.5
total_cost: 0.1402
- dirname: 2024-11-24-15-00-50--qwen25-32b-or-deepinfra
test_cases: 133
model: "Deepinfra via OpenRouter: BF16"
edit_format: diff
commit_hash: c2f184f
pass_rate_1: 57.1
pass_rate_2: 69.9
percent_cases_well_formed: 89.5
error_outputs: 35
num_malformed_responses: 31
num_with_malformed_responses: 14
user_asks: 11
lazy_comments: 0
syntax_errors: 1
indentation_errors: 1
exhausted_context_windows: 4
test_timeouts: 1
command: aider --model openrouter/qwen/qwen-2.5-coder-32b-instruct
date: 2024-11-24
versions: 0.64.2.dev
seconds_per_case: 28.5
total_cost: 0.1390
- dirname: 2024-11-26-03-15-06--ollama-qwen2.5-coder:32b-instruct-fp16-2kctx
test_cases: 132
model: "Ollama: fp16, 2k ctx"
edit_format: diff
commit_hash: 68be6c5-dirty, 554d274, 2ff3a23, 2ff3a23-dirty, 61759f9, dd48b74, 3ebd47d-dirty
pass_rate_1: 43.2
pass_rate_2: 51.9
percent_cases_well_formed: 46.2
error_outputs: 171
num_malformed_responses: 165
num_with_malformed_responses: 71
user_asks: 97
lazy_comments: 2
syntax_errors: 4
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: "aider --model ollama/qwen2.5-coder:32b-instruct-fp16 # num_ctx: 2048"
date: 2024-11-26
versions: 0.64.2.dev,0.65.1.dev
seconds_per_case: 188.6
total_cost: 0.0000

View file

@@ -145,7 +145,7 @@
- dirname: 2024-07-01-18-30-33--refac-claude-3.5-sonnet-diff-not-lazy - dirname: 2024-07-01-18-30-33--refac-claude-3.5-sonnet-diff-not-lazy
test_cases: 89 test_cases: 89
model: claude-3.5-sonnet (diff) model: claude-3.5-sonnet-20240620
edit_format: diff edit_format: diff
commit_hash: 7396e38-dirty commit_hash: 7396e38-dirty
pass_rate_1: 64.0 pass_rate_1: 64.0
@@ -167,7 +167,7 @@
- dirname: 2024-07-24-07-49-39--refac-deepseek-coder-v2-0724 - dirname: 2024-07-24-07-49-39--refac-deepseek-coder-v2-0724
test_cases: 89 test_cases: 89
model: DeepSeek Coder V2 0724 model: DeepSeek Coder V2 0724 (deprecated)
edit_format: diff edit_format: diff
commit_hash: bb6e597 commit_hash: bb6e597
pass_rate_1: 32.6 pass_rate_1: 32.6
@@ -209,3 +209,90 @@
seconds_per_case: 16.9 seconds_per_case: 16.9
total_cost: 4.0873 total_cost: 4.0873
- dirname: 2024-09-05-15-19-05--refac-deepseek-v2.5-no-shell
test_cases: 89
model: DeepSeek Chat V2.5
edit_format: diff
commit_hash: 1279c86, 1279c86-dirty
pass_rate_1: 31.5
percent_cases_well_formed: 67.4
error_outputs: 90
num_malformed_responses: 88
num_with_malformed_responses: 29
user_asks: 8
lazy_comments: 7
syntax_errors: 0
indentation_errors: 6
exhausted_context_windows: 2
test_timeouts: 0
command: aider --deepseek
date: 2024-09-05
versions: 0.55.1.dev
seconds_per_case: 225.4
total_cost: 1.0338
- dirname: 2024-10-22-19-57-27--refac-openrouter-sonnet-1022
test_cases: 89
model: claude-3-5-sonnet-20241022
edit_format: diff
commit_hash: 4a3e6ef
pass_rate_1: 92.1
percent_cases_well_formed: 91.0
error_outputs: 13
num_malformed_responses: 12
num_with_malformed_responses: 8
user_asks: 14
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --sonnet
date: 2024-10-22
versions: 0.60.1.dev
seconds_per_case: 32.5
total_cost: 8.4644
- dirname: 2024-10-22-20-03-10--refac-o1mini
test_cases: 89
model: o1-mini
edit_format: diff
commit_hash: 4a3e6ef-dirty
pass_rate_1: 44.9
percent_cases_well_formed: 29.2
error_outputs: 151
num_malformed_responses: 150
num_with_malformed_responses: 63
user_asks: 28
lazy_comments: 2
syntax_errors: 5
indentation_errors: 4
exhausted_context_windows: 1
test_timeouts: 0
command: aider --model o1-mini
date: 2024-10-22
versions: 0.60.1.dev
seconds_per_case: 115.3
total_cost: 29.0492
- dirname: 2024-10-22-20-26-36--refac-o1preview
test_cases: 89
model: o1-preview
edit_format: diff
commit_hash: 4a3e6ef-dirty
pass_rate_1: 75.3
percent_cases_well_formed: 57.3
error_outputs: 75
num_malformed_responses: 74
num_with_malformed_responses: 38
user_asks: 19
lazy_comments: 2
syntax_errors: 2
indentation_errors: 3
exhausted_context_windows: 1
test_timeouts: 0
command: aider --model o1-preview
date: 2024-10-22
versions: 0.60.1.dev
seconds_per_case: 231.7
total_cost: 120.9850

View file

@@ -0,0 +1,459 @@
- dirname: 2024-06-20-15-16-41--claude-3.5-sonnet-diff
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 068609e-dirty
pass_rate_1: 57.9
pass_rate_2: 74.4
percent_cases_well_formed: 97.0
error_outputs: 48
num_malformed_responses: 11
num_with_malformed_responses: 4
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-20
versions: 0.38.1-dev
seconds_per_case: 21.6
total_cost: 0.0000
- dirname: 2024-06-24-12-48-43--claude-3.5-sonnet-udiff
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: udiff
commit_hash: 7be08c7
pass_rate_1: 62.4
pass_rate_2: 74.4
percent_cases_well_formed: 100.0
error_outputs: 10
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 10
lazy_comments: 0
syntax_errors: 1
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 14.3
total_cost: 0.0000
- dirname: 2024-06-24-17-44-31--claude-3.5-sonnet-diff-less-chatty
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 0d484e5
pass_rate_1: 57.9
pass_rate_2: 74.4
percent_cases_well_formed: 99.2
error_outputs: 14
num_malformed_responses: 3
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 16.0
total_cost: 0.0000
- dirname: 2024-06-24-17-50-46--claude-3.5-sonnet-diff-less-chatty2
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 3015495
pass_rate_1: 59.4
pass_rate_2: 76.7
percent_cases_well_formed: 99.2
error_outputs: 5
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 15.7
total_cost: 0.0000
- dirname: 2024-06-24-17-56-40--claude-3.5-sonnet-diff-less-chatty-sys-examples
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 3015495-dirty
pass_rate_1: 58.6
pass_rate_2: 75.9
percent_cases_well_formed: 100.0
error_outputs: 2
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 15.9
total_cost: 0.0000
- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 35f21b5
pass_rate_1: 57.1
pass_rate_2: 77.4
percent_cases_well_formed: 99.2
error_outputs: 23
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 17.6
total_cost: 3.6346
- dirname: 2024-07-06-19-39-59--claude-3.5-sonnet-diff-platform
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: e47c2a9-dirty
pass_rate_1: 57.9
pass_rate_2: 78.2
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-06
versions: 0.42.1-dev
seconds_per_case: 14.6
total_cost: 3.5616
- dirname: 2024-07-24-17-11-07--claude-3.5-sonnet-diff-july24
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 859a13e
pass_rate_1: 59.4
pass_rate_2: 78.2
percent_cases_well_formed: 99.2
error_outputs: 6
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-24
versions: 0.45.2-dev
seconds_per_case: 16.9
total_cost: 3.4981
- dirname: 2024-07-28-20-23-42--claude-3.5-sonnet-diff-no-reminder
test_cases: 94
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: e799e89-dirty
pass_rate_1: 59.6
pass_rate_2: 83.0
percent_cases_well_formed: 98.9
error_outputs: 12
num_malformed_responses: 2
num_with_malformed_responses: 1
user_asks: 2
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-07-28
versions: 0.45.2-dev
seconds_per_case: 15.7
total_cost: 2.4340
- dirname: 2024-08-14-00-46-09--claude-3.5-sonnet-diff-no-ipynb-again
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 139f799
pass_rate_1: 57.9
pass_rate_2: 75.9
percent_cases_well_formed: 98.5
error_outputs: 22
num_malformed_responses: 5
num_with_malformed_responses: 2
user_asks: 249
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-14
versions: 0.50.1-dev
seconds_per_case: 18.0
total_cost: 3.7058
- dirname: 2024-06-21-00-07-01--claude-3.5-sonnet-do-over
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: fb26174-dirty
pass_rate_1: 59.4
pass_rate_2: 80.5
percent_cases_well_formed: 99.2
error_outputs: 20
num_malformed_responses: 4
num_with_malformed_responses: 1
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-21
versions: 0.39.1-dev
seconds_per_case: 18.3
total_cost: 0.0000
- dirname: 2024-06-21-00-18-25--claude-3.5-sonnet-do-over2
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: fb26174-dirty
pass_rate_1: 58.6
pass_rate_2: 77.4
percent_cases_well_formed: 98.5
error_outputs: 22
num_malformed_responses: 4
num_with_malformed_responses: 2
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-21
versions: 0.39.1-dev
seconds_per_case: 17.3
total_cost: 0.0000
- dirname: 2024-06-24-00-09-40--claude-3.5-sonnet-chatty
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: b44c246-dirty
pass_rate_1: 59.4
pass_rate_2: 75.2
percent_cases_well_formed: 98.5
error_outputs: 21
num_malformed_responses: 5
num_with_malformed_responses: 2
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 15.7
total_cost: 0.0000
- dirname: 2024-06-24-00-33-35--claude-3.5-sonnet-chatty-do-over
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: bc1dfa3
pass_rate_1: 58.6
pass_rate_2: 76.7
percent_cases_well_formed: 97.7
error_outputs: 26
num_malformed_responses: 6
num_with_malformed_responses: 3
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-24
versions: 0.39.1-dev
seconds_per_case: 16.4
total_cost: 0.0000
- dirname: 2024-08-18-19-57-30--claude-3.5-sonnet-aug18
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 5099a5c
pass_rate_1: 54.9
pass_rate_2: 78.9
percent_cases_well_formed: 97.7
error_outputs: 47
num_malformed_responses: 11
num_with_malformed_responses: 3
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-18
versions: 0.50.2-dev
seconds_per_case: 22.3
total_cost: 3.9008
- dirname: 2024-08-18-20-23-50--claude-3.5-sonnet-aug18-cache-prompts
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 53db8cf-dirty
pass_rate_1: 56.4
pass_rate_2: 78.9
percent_cases_well_formed: 97.7
error_outputs: 16
num_malformed_responses: 4
num_with_malformed_responses: 3
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-18
versions: 0.50.2-dev
seconds_per_case: 21.1
total_cost: 3.6918
- dirname: 2024-08-18-23-11-04--claude-3.5-sonnet-aug18-cache-prompts-cold
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 53db8cf-dirty
pass_rate_1: 56.4
pass_rate_2: 78.2
percent_cases_well_formed: 97.0
error_outputs: 30
num_malformed_responses: 7
num_with_malformed_responses: 4
user_asks: 1
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-18
versions: 0.50.2-dev
seconds_per_case: 21.8
total_cost: 3.7858
- dirname: 2024-08-21-01-07-39--sonnet-diff-cache
test_cases: 133
model: claude-3-5-sonnet-20240620
edit_format: diff
commit_hash: e12157b-dirty
pass_rate_1: 57.1
pass_rate_2: 82.0
percent_cases_well_formed: 98.5
error_outputs: 12
num_malformed_responses: 2
num_with_malformed_responses: 2
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model claude-3-5-sonnet-20240620
date: 2024-08-21
versions: 0.51.2-dev
seconds_per_case: 14.5
total_cost: 3.1795
- dirname: 2024-08-21-00-50-49--shell-cmds-sonnet-user-remind
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 919ea05
pass_rate_1: 63.2
pass_rate_2: 79.7
percent_cases_well_formed: 98.5
error_outputs: 18
num_malformed_responses: 4
num_with_malformed_responses: 2
user_asks: 26
lazy_comments: 0
syntax_errors: 0
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-21
versions: 0.51.2-dev
seconds_per_case: 16.3
total_cost: 3.4738
- dirname: 2024-08-21-00-55-30--shell-cmds-sonnet-no-user-remind
test_cases: 133
model: openrouter/anthropic/claude-3.5-sonnet
edit_format: diff
commit_hash: 5c7707a
pass_rate_1: 63.9
pass_rate_2: 80.5
percent_cases_well_formed: 97.7
error_outputs: 51
num_malformed_responses: 12
num_with_malformed_responses: 3
user_asks: 24
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-08-21
versions: 0.51.2-dev
seconds_per_case: 17.7
total_cost: 3.8990
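Every record above shares the same schema, so downstream tooling can treat the file as a plain list of mappings. A minimal sketch of summarizing such records in Python; the sample list below is a hand-copied subset of three entries from the data above, not the full dataset:

```python
# Summarize benchmark records like the YAML entries above.
# Field names match the records; only a three-entry subset is shown.
records = [
    {"dirname": "2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue",
     "pass_rate_2": 77.4, "percent_cases_well_formed": 99.2, "total_cost": 3.6346},
    {"dirname": "2024-08-21-01-07-39--sonnet-diff-cache",
     "pass_rate_2": 82.0, "percent_cases_well_formed": 98.5, "total_cost": 3.1795},
    {"dirname": "2024-08-21-00-55-30--shell-cmds-sonnet-no-user-remind",
     "pass_rate_2": 80.5, "percent_cases_well_formed": 97.7, "total_cost": 3.8990},
]

# Best run by final pass rate, and the average across the subset
best = max(records, key=lambda r: r["pass_rate_2"])
avg = sum(r["pass_rate_2"] for r in records) / len(records)
print(best["dirname"], best["pass_rate_2"], round(avg, 1))
```

In the real file these records are loaded from YAML; the subset here just keeps the sketch self-contained.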

View file

@@ -0,0 +1,97 @@
document.addEventListener('DOMContentLoaded', function () {
var ctx = document.getElementById('editChart').getContext('2d');
const HIGHLIGHT_MODEL = 'no no no no';
var leaderboardData = {
labels: [],
datasets: [{
label: 'Percent completed correctly',
data: [],
backgroundColor: function(context) {
const label = context.chart.data.labels[context.dataIndex] || '';
return (label && label.includes(HIGHLIGHT_MODEL)) ? 'rgba(255, 99, 132, 0.2)' : 'rgba(54, 162, 235, 0.2)';
},
borderColor: function(context) {
const label = context.chart.data.labels[context.dataIndex] || '';
return (label && label.includes(HIGHLIGHT_MODEL)) ? 'rgba(255, 99, 132, 1)' : 'rgba(54, 162, 235, 1)';
},
},
borderWidth: 1
}]
};
var allData = [];
{% for row in edit_sorted %}
allData.push({
model: '{{ row.model }}',
pass_rate_2: {{ row.pass_rate_2 }},
percent_cases_well_formed: {{ row.percent_cases_well_formed }}
});
{% endfor %}
function updateChart() {
var selectedRows = document.querySelectorAll('tr.selected');
var showAll = selectedRows.length === 0;
leaderboardData.labels = [];
leaderboardData.datasets[0].data = [];
allData.forEach(function(row, index) {
var rowElement = document.getElementById('edit-row-' + index);
if (showAll) {
rowElement.classList.remove('selected');
}
if (showAll || rowElement.classList.contains('selected')) {
leaderboardData.labels.push(row.model);
leaderboardData.datasets[0].data.push(row.pass_rate_2);
}
});
leaderboardChart.update();
}
var tableBody = document.querySelector('table tbody');
allData.forEach(function(row, index) {
var tr = tableBody.children[index];
tr.id = 'edit-row-' + index;
tr.style.cursor = 'pointer';
tr.onclick = function() {
this.classList.toggle('selected');
updateChart();
};
});
var leaderboardChart = new Chart(ctx, {
type: 'bar',
data: leaderboardData,
options: {
scales: {
y: {
beginAtZero: true
}
}
}
});
updateChart();
// Add search functionality for edit table
document.getElementById('editSearchInput').addEventListener('keyup', function() {
var searchWords = this.value.toLowerCase().split(' ').filter(word => word.length > 0);
var tableBody = document.querySelector('table:first-of-type tbody');
var rows = tableBody.getElementsByTagName('tr');
leaderboardData.labels = [];
leaderboardData.datasets[0].data = [];
for (var i = 0; i < rows.length; i++) {
var rowText = rows[i].textContent;
if (searchWords.every(word => rowText.toLowerCase().includes(word))) {
rows[i].style.display = '';
leaderboardData.labels.push(allData[i].model);
leaderboardData.datasets[0].data.push(allData[i].pass_rate_2);
} else {
rows[i].style.display = 'none';
}
}
leaderboardChart.update();
});
});
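The `keyup` handler above implements an AND-of-words filter: a row stays visible only if every whitespace-separated search word appears somewhere in its text, case-insensitively. The same rule sketched in Python, with hypothetical row strings for illustration:

```python
def matches(row_text: str, query: str) -> bool:
    # Lowercase and split the query into words, then require every
    # word to appear somewhere in the row text (same rule as the JS above).
    words = [w for w in query.lower().split(" ") if w]
    text = row_text.lower()
    return all(w in text for w in words)

# Hypothetical row strings, for illustration only
rows = ["claude-3.5-sonnet diff 77.4", "gpt-4o whole 72.9"]
visible = [r for r in rows if matches(r, "sonnet diff")]
```

An empty query keeps every row, matching the JS behavior where filtering out empty words leaves nothing to require.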

View file

@@ -2,7 +2,7 @@
 You can get started quickly like this:
 ```
-python -m pip install aider-chat
+python -m pip install -U aider-chat
 # Change directory into a git repo
 cd /to/your/git/repo

View file

@@ -1,5 +1,5 @@
 If you need more help, please check our
-[GitHub issues](https://github.com/paul-gauthier/aider/issues)
+[GitHub issues](https://github.com/Aider-AI/aider/issues)
 and file a new issue if your problem isn't discussed.
 Or drop into our
 [Discord](https://discord.gg/Tv2uQnR88V)

View file

View file

@@ -0,0 +1,170 @@
<canvas id="{{ include.chart_id }}" width="800" height="450" style="margin-top: 20px"></canvas>
<script>
document.addEventListener('DOMContentLoaded', function () {
var ctx = document.getElementById('{{ include.chart_id }}').getContext('2d');
var leaderboardData = {
labels: [],
datasets: [{
label: 'Percent completed correctly',
data: [],
backgroundColor: [],
borderColor: [],
borderWidth: 1
}]
};
var allData = [];
{% for row in include.data %}
allData.push({
model: '{{ row.model }}',
pass_rate: {{ row[include.pass_rate_key] }},
percent_cases_well_formed: {{ row.percent_cases_well_formed }},
edit_format: '{{ row.edit_format }}'
});
{% endfor %}
function updateChart() {
var selectedRows = document.querySelectorAll('tr.selected');
var showAll = selectedRows.length === 0;
leaderboardData.labels = [];
leaderboardData.datasets[0].data = [];
leaderboardData.datasets[0].backgroundColor = [];
leaderboardData.datasets[0].borderColor = [];
allData.forEach(function(row, index) {
var rowElement = document.getElementById('{{ include.row_prefix }}-' + index);
if (showAll) {
rowElement.classList.remove('selected');
}
if (showAll || rowElement.classList.contains('selected')) {
leaderboardData.labels.push(row.model);
leaderboardData.datasets[0].data.push(row.pass_rate);
switch (row.edit_format) {
case 'whole':
leaderboardData.datasets[0].backgroundColor.push('rgba(255, 99, 132, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(255, 99, 132, 1)');
break;
case 'diff':
leaderboardData.datasets[0].backgroundColor.push('rgba(54, 162, 235, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(54, 162, 235, 1)');
break;
case 'udiff':
leaderboardData.datasets[0].backgroundColor.push('rgba(75, 192, 192, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(75, 192, 192, 1)');
break;
case 'diff-fenced':
leaderboardData.datasets[0].backgroundColor.push('rgba(153, 102, 255, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(153, 102, 255, 1)');
break;
default:
leaderboardData.datasets[0].backgroundColor.push('rgba(201, 203, 207, 0.2)');
leaderboardData.datasets[0].borderColor.push('rgba(201, 203, 207, 1)');
}
}
});
// Apply legend filtering
var meta = leaderboardChart.getDatasetMeta(0);
meta.data.forEach(function(bar, index) {
if (leaderboardData.labels.includes(allData[index].model)) {
bar.hidden = (allData[index].edit_format === 'whole' && meta.data[0].hidden) ||
(allData[index].edit_format !== 'whole' && meta.data[1].hidden);
} else {
bar.hidden = true;
}
});
leaderboardChart.update();
}
var tableBody = document.querySelector('table tbody');
allData.forEach(function(row, index) {
var tr = tableBody.children[index];
tr.id = '{{ include.row_prefix }}-' + index;
tr.style.cursor = 'pointer';
tr.onclick = function() {
this.classList.toggle('selected');
updateChart();
};
});
var leaderboardChart = new Chart(ctx, {
type: 'bar',
data: leaderboardData,
options: {
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Correct Exercises (%)'
}
},
x: {
ticks: {
autoSkip: false,
maxRotation: 90,
minRotation: 0
}
}
},
plugins: {
legend: {
display: true,
position: 'top',
labels: {
generateLabels: function(chart) {
var uniqueFormats = [...new Set(allData.map(item => item.edit_format))];
return uniqueFormats.map(format => {
var color;
switch (format) {
case 'whole':
color = { fill: 'rgba(255, 99, 132, 0.2)', stroke: 'rgba(255, 99, 132, 1)' };
break;
case 'diff':
color = { fill: 'rgba(54, 162, 235, 0.2)', stroke: 'rgba(54, 162, 235, 1)' };
break;
case 'udiff':
color = { fill: 'rgba(75, 192, 192, 0.2)', stroke: 'rgba(75, 192, 192, 1)' };
break;
case 'diff-fenced':
color = { fill: 'rgba(153, 102, 255, 0.2)', stroke: 'rgba(153, 102, 255, 1)' };
break;
default:
color = { fill: 'rgba(201, 203, 207, 0.2)', stroke: 'rgba(201, 203, 207, 1)' };
}
return {
text: format,
fillStyle: color.fill,
strokeStyle: color.stroke,
lineWidth: 1,
hidden: false
};
});
}
},
onClick: function(e, legendItem, legend) {
var ci = legend.chart;
var clickedFormat = legendItem.text;
legendItem.hidden = !legendItem.hidden;
ci.data.datasets[0].data.forEach(function(dataPoint, i) {
var meta = ci.getDatasetMeta(0);
if (allData[i].edit_format === clickedFormat) {
meta.data[i].hidden = legendItem.hidden;
}
});
ci.update();
}
}
}
}
});
updateChart();
});
</script>
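The format-to-color switch above is repeated in both `updateChart` and `generateLabels`; the mapping itself is just a small lookup table. A table-driven sketch of the same mapping, with the colors copied from the switch cases above (an illustration, not a proposed patch):

```python
# edit_format -> (fill, stroke), copied from the switch statements above
FORMAT_COLORS = {
    "whole":       ("rgba(255, 99, 132, 0.2)",  "rgba(255, 99, 132, 1)"),
    "diff":        ("rgba(54, 162, 235, 0.2)",  "rgba(54, 162, 235, 1)"),
    "udiff":       ("rgba(75, 192, 192, 0.2)",  "rgba(75, 192, 192, 1)"),
    "diff-fenced": ("rgba(153, 102, 255, 0.2)", "rgba(153, 102, 255, 1)"),
}
# Neutral gray used by the default: branch above
DEFAULT_COLORS = ("rgba(201, 203, 207, 0.2)", "rgba(201, 203, 207, 1)")

def colors_for(edit_format: str):
    """Return (fill, stroke) for an edit format, falling back to gray."""
    return FORMAT_COLORS.get(edit_format, DEFAULT_COLORS)
```

A lookup table like this keeps the two call sites from drifting apart when a new edit format is added.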

View file

@@ -1,4 +1,19 @@
 You can send long, multi-line messages in the chat in a few ways:
 - Paste a multi-line message directly into the chat.
 - Enter `{` alone on the first line to start a multiline message and `}` alone on the last line to end it.
+- Or, start with `{tag` (where "tag" is any sequence of letters/numbers) and end with `tag}`. This is useful when you need to include closing braces `}` in your message.
 - Use Meta-ENTER to start a new line without sending the message (Esc+ENTER in some environments).
+- Use `/paste` to paste text from the clipboard into the chat.
+- Use the `/editor` command to open your editor to create the next chat message. See [editor configuration docs](/docs/config/editor.html) for more info.
+Example with a tag:
+```
+{python
+def hello():
+    print("Hello}") # Note: contains a brace
+python}
+```
+{: .note }
+People often ask for SHIFT-ENTER to be a soft-newline.
+Unfortunately there is no portable way to detect that keystroke in terminals.
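The `{tag` / `tag}` rule above can be sketched as a small parser. This is only an illustration of the rule as documented, not aider's actual input-handling code:

```python
import re

def opening_tag(line: str):
    """Return the closing delimiter for a multiline opener, or None.

    "{" alone is closed by "}" alone; "{tag" (letters/digits) is
    closed by "tag}", per the docs above.
    """
    if line == "{":
        return "}"
    m = re.fullmatch(r"\{([A-Za-z0-9]+)", line)
    return m.group(1) + "}" if m else None

def collect(lines):
    """Join a tag-delimited multiline message into one string."""
    closer = opening_tag(lines[0])
    if closer is None:
        raise ValueError("not a multiline opener")
    body = []
    for line in lines[1:]:
        if line == closer:
            return "\n".join(body)
        body.append(line)
    raise ValueError("unterminated multiline message")

# The example from the docs: the "}" inside the string does not end
# the message, because only the exact line "python}" closes it.
msg = collect(["{python", "def hello():", '    print("Hello}")', "python}"])
```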

View file

@@ -1,7 +1,7 @@
 <footer class="site-footer">
 Aider is AI pair programming in your terminal.
 Aider is on
-<a href="https://github.com/paul-gauthier/aider">GitHub</a>
+<a href="https://github.com/Aider-AI/aider">GitHub</a>
 and
 <a href="https://discord.gg/Tv2uQnR88V">Discord</a>.
 </footer>

View file

@@ -0,0 +1,95 @@
document.addEventListener('DOMContentLoaded', function () {
var ctx = document.getElementById('quantChart').getContext('2d');
var allData = [];
{% for row in site.data.quant %}
allData.push({
model: '{{ row.model }}',
pass_rate_2: {{ row.pass_rate_2 }}
});
{% endfor %}
// Sort data by pass_rate_2 in descending order
allData.sort((a, b) => b.pass_rate_2 - a.pass_rate_2);
var chart;
function updateChart(filterText) {
var filteredData = allData.filter(row =>
row.model.toLowerCase().includes(filterText.toLowerCase())
);
var chartData = {
labels: filteredData.map(row => row.model),
datasets: [{
label: 'Percent completed correctly',
data: filteredData.map(row => row.pass_rate_2),
backgroundColor: 'rgba(54, 162, 235, 0.2)',
borderColor: 'rgba(54, 162, 235, 1)',
borderWidth: 1
}]
};
if (chart) {
chart.data = chartData;
chart.update();
} else {
chart = new Chart(ctx, {
type: 'bar',
data: chartData,
options: {
plugins: {
legend: {
display: false
},
title: {
display: true,
text: 'Aider code editing benchmark',
font: {
size: 16
}
}
},
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Percent completed correctly',
font: {
size: 14
}
},
ticks: {
font: {
size: 16
}
}
},
x: {
ticks: {
font: {
size: 16
}
},
title: {
display: true,
text: 'Provider: quantization',
font: {
size: 14
}
}
}
}
}
});
}
}
// Initial chart render
updateChart('');
// Connect search input to chart filtering
document.getElementById('quantSearchInput').addEventListener('keyup', function() {
updateChart(this.value);
});
});

View file

@@ -0,0 +1,90 @@
document.addEventListener('DOMContentLoaded', function () {
var ctx = document.getElementById('refacChart').getContext('2d');
var leaderboardData = {
labels: [],
datasets: [{
label: 'Percent completed correctly',
data: [],
backgroundColor: 'rgba(54, 162, 235, 0.2)',
borderColor: 'rgba(54, 162, 235, 1)',
borderWidth: 1
}]
};
var allData = [];
{% for row in refac_sorted %}
allData.push({
model: '{{ row.model }}',
pass_rate_1: {{ row.pass_rate_1 }},
percent_cases_well_formed: {{ row.percent_cases_well_formed }}
});
{% endfor %}
function updateChart() {
var selectedRows = document.querySelectorAll('tr.selected');
var showAll = selectedRows.length === 0;
leaderboardData.labels = [];
leaderboardData.datasets[0].data = [];
allData.forEach(function(row, index) {
var rowElement = document.getElementById('refac-row-' + index);
if (showAll) {
rowElement.classList.remove('selected');
}
if (showAll || rowElement.classList.contains('selected')) {
leaderboardData.labels.push(row.model);
leaderboardData.datasets[0].data.push(row.pass_rate_1);
}
});
leaderboardChart.update();
}
var tableBody = document.querySelectorAll('table tbody')[1];
allData.forEach(function(row, index) {
var tr = tableBody.children[index];
tr.id = 'refac-row-' + index;
tr.style.cursor = 'pointer';
tr.onclick = function() {
this.classList.toggle('selected');
updateChart();
};
});
var leaderboardChart = new Chart(ctx, {
type: 'bar',
data: leaderboardData,
options: {
scales: {
y: {
beginAtZero: true
}
}
}
});
updateChart();
// Add search functionality for refactoring table
document.getElementById('refacSearchInput').addEventListener('keyup', function() {
var searchWords = this.value.toLowerCase().split(' ').filter(word => word.length > 0);
var tableBody = document.querySelectorAll('table tbody')[1];
var rows = tableBody.getElementsByTagName('tr');
leaderboardData.labels = [];
leaderboardData.datasets[0].data = [];
for (var i = 0; i < rows.length; i++) {
var rowText = rows[i].textContent;
if (searchWords.every(word => rowText.toLowerCase().includes(word))) {
rows[i].style.display = '';
leaderboardData.labels.push(allData[i].model);
leaderboardData.datasets[0].data.push(allData[i].pass_rate_1);
} else {
rows[i].style.display = 'none';
}
}
leaderboardChart.update();
});
});

View file

@@ -0,0 +1,9 @@
To use aider with pipx on replit, you can run these commands in the replit shell:
```
pip install pipx
pipx run aider-chat ...normal aider args...
```
If you install aider with pipx on replit and try to run it as just `aider`, it will crash with a missing `libstdc++.so.6` library.

View file

@@ -110,9 +110,9 @@ source code, by including the critical lines of code for each definition.
 Here's a
 sample of the map of the aider repo, just showing the maps of
-[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
+[base_coder.py](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
 and
-[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
+[commands.py](https://github.com/Aider-AI/aider/blob/main/aider/commands.py)
 :
 ```
@@ -188,7 +188,7 @@ It specifically uses the
 [py-tree-sitter-languages](https://github.com/grantjenks/py-tree-sitter-languages)
 python module,
 which provides simple, pip-installable binary wheels for
-[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
+[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
 Tree-sitter parses source code into an Abstract Syntax Tree (AST) based
 on the syntax of the programming language.
@@ -209,7 +209,7 @@ that aider originally used.
 Switching from ctags to tree-sitter provides a bunch of benefits:
 - The map is richer, showing full function call signatures and other details straight from the source files.
-- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install aider-chat`.
+- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install -U aider-chat`.
 - We remove the requirement for users to manually install `universal-ctags` via some external tool or package manager (brew, apt, choco, etc).
 - Tree-sitter integration is a key enabler for future work and capabilities for aider.
@@ -245,7 +245,7 @@ just install [aider](https://aider.chat/docs/install.html).
 ## Credits
 Aider uses
-[modified versions of the tags.scm files](https://github.com/paul-gauthier/aider/tree/main/aider/queries)
+[modified versions of the tags.scm files](https://github.com/Aider-AI/aider/tree/main/aider/queries)
 from these
 open source tree-sitter language implementations:

View file

@@ -23,14 +23,14 @@ making it the best available model for pair programming with AI.
 To use Claude 3 Opus with aider:
 ```
-python -m pip install aider-chat
+python -m pip install -U aider-chat
 export ANTHROPIC_API_KEY=sk-...
 aider --opus
 ```
 ## Aider's code editing benchmark
-[Aider](https://github.com/paul-gauthier/aider)
+[Aider](https://github.com/Aider-AI/aider)
 is an open source command line chat tool that lets you
 pair program with AI on code in your local git repo.

View file

@@ -52,7 +52,7 @@ def some_complex_method(foo, bar):
     # ... implement method here ...
 ```
-Aider uses a ["laziness" benchmark suite](https://github.com/paul-gauthier/refactor-benchmark)
+Aider uses a ["laziness" benchmark suite](https://github.com/Aider-AI/refactor-benchmark)
 which is designed to both provoke and quantify lazy coding.
 It consists of
 89 python refactoring tasks

View file

@@ -46,7 +46,7 @@ It also supports [connecting to almost any LLM](https://aider.chat/docs/llms.htm
 Use the `--browser` switch to launch the browser version of aider:
 ```
-python -m pip install aider-chat
+python -m pip install -U aider-chat
 export OPENAI_API_KEY=<key> # Mac/Linux
 setx OPENAI_API_KEY <key> # Windows, restart shell after setx

View file

@@ -15,7 +15,7 @@ nav_exclude: true
 I recently wanted to draw a graph showing how LLM code editing skill has been
 changing over time as new models have been released by OpenAI, Anthropic and others.
 I have all the
-[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
+[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
 [aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
 Below is the aider chat transcript, which shows:

View file

@@ -25,7 +25,7 @@ This increases the ability of the LLM to understand the problem and
 make the correct changes to resolve it.
 Aider ships with basic linters built with tree-sitter that support
-[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
+[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
 These built in linters will detect syntax errors and other fatal problems with the code.
 You can also configure aider to use your preferred linters.

View file

@@ -76,7 +76,7 @@ The held out "acceptance tests" were *only* used
 after benchmarking to compute statistics on which problems aider
 correctly resolved.
-The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/paul-gauthier/aider-swe-bench).
+The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/Aider-AI/aider-swe-bench).
 The benchmarking process was similar to how a developer might use aider to
 resolve a GitHub issue:

View file

@@ -12,8 +12,12 @@ nav_exclude: true
 [![self assembly](/assets/self-assembly.jpg)](https://aider.chat/assets/self-assembly.jpg)
+{: .note }
+This article is quite outdated. For current statistics, see
+[aider's release history](/HISTORY.html).
 The
-[aider git repo](https://github.com/paul-gauthier/aider)
+[aider git repo](https://github.com/Aider-AI/aider)
 currently contains about 4K commits and 14K lines of code.
 Aider made 15% of the commits, inserting 4.8K and deleting 1.5K lines of code.

View file

@@ -64,7 +64,7 @@ with the problem statement
 submitted as the opening chat message from "the user".
 - After that aider ran as normal, except all of aider's
 suggestions were always accepted without user approval.
-- A [simple harness](https://github.com/paul-gauthier/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
+- A [simple harness](https://github.com/Aider-AI/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
 Plausibly correct means that aider reported that it had successfully edited the repo
 without causing syntax errors or breaking any *pre-existing* tests.
 - If the solution from aider with GPT-4o wasn't plausible, the harness launched aider to try again from scratch using Claude 3 Opus.
@@ -90,7 +90,7 @@ For a detailed discussion of the benchmark
 methodology, see the
 [article about aider's SWE Bench Lite results](https://aider.chat/2024/05/22/swe-bench-lite.html).
 Also, the
-[aider SWE Bench repository on GitHub](https://github.com/paul-gauthier/aider-swe-bench)
+[aider SWE Bench repository on GitHub](https://github.com/Aider-AI/aider-swe-bench)
 contains the harness and statistics code used for the benchmarks.
 The benchmarking process was similar to how a developer might use aider to

View file

@ -37,8 +37,8 @@ Users who tested Sonnet with a preview of
[aider's latest release](https://aider.chat/HISTORY.html#aider-v0410) [aider's latest release](https://aider.chat/HISTORY.html#aider-v0410)
were thrilled: were thrilled:
- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2200338971) - *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/Aider-AI/aider/issues/705#issuecomment-2200338971)
- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2196026656) - *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/Aider-AI/aider/issues/705#issuecomment-2196026656)
- *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143) - *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143)
## Hitting the 4k token output limit
@@ -116,7 +116,7 @@ for more details, but
you can get started quickly with aider and Sonnet like this:
```
$ python -m pip install -U aider-chat
$ export ANTHROPIC_API_KEY=<key> # Mac/Linux
$ setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx
```


@@ -30,7 +30,7 @@ included for scale.
You can code with all of these models using aider like this:
```
$ python -m pip install -U aider-chat
# Change directory into a git repo to work on
$ cd /to/your/git/repo
```


@@ -0,0 +1,145 @@
---
title: Sonnet seems as good as ever
excerpt: Sonnet's score on the aider code editing benchmark has been stable since it launched.
highlight_image: /assets/sonnet-seems-fine.jpg
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# Sonnet seems as good as ever
Recently there has been a lot of speculation that Sonnet has been
dumbed down or nerfed, or is otherwise performing worse.
Sonnet seems as good as ever, when performing the
[aider code editing benchmark](/docs/benchmarks.html#the-benchmark)
via the API.
Below is a graph showing the performance of Claude 3.5 Sonnet over time.
It shows every clean, comparable benchmark run performed since Sonnet launched.
Benchmarks were performed for various reasons, usually
to evaluate the effects of small changes to aider's system prompts.
The graph shows variance, but no indication of a noteworthy
degradation.
There is always some variance in benchmark results, typically +/- 2%
between runs with identical prompts.
It's worth noting that these results would not capture any changes
made to how the Anthropic web chat uses Sonnet.
<div class="chart-container" style="position: relative; height:400px; width:100%">
<canvas id="sonnetPerformanceChart"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/moment@2.29.4/moment.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-moment@1.0.1/dist/chartjs-adapter-moment.min.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function() {
var ctx = document.getElementById('sonnetPerformanceChart').getContext('2d');
var sonnetData = {{ site.data.sonnet-fine | jsonify }};
var chartData = sonnetData.map(item => ({
x: moment(item.date).toDate(),
y1: item.pass_rate_1,
y2: item.pass_rate_2
})).sort((a, b) => a.x - b.x);
new Chart(ctx, {
type: 'scatter',
data: {
datasets: [{
label: 'Pass Rate 1',
data: chartData.map(item => ({ x: item.x, y: item.y1 })),
backgroundColor: 'rgb(75, 192, 192)',
pointRadius: 5,
pointHoverRadius: 7
}, {
label: 'Pass Rate 2',
data: chartData.map(item => ({ x: item.x, y: item.y2 })),
backgroundColor: 'rgb(255, 99, 132)',
pointRadius: 5,
pointHoverRadius: 7
}]
},
options: {
responsive: true,
maintainAspectRatio: false,
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Pass Rate (%)',
font: {
size: 14
}
},
ticks: {
font: {
size: 12
}
}
},
x: {
type: 'time',
time: {
unit: 'day'
},
title: {
display: true,
text: 'Date',
font: {
size: 14
}
},
ticks: {
font: {
size: 12
}
}
}
},
plugins: {
title: {
display: true,
text: 'Claude 3.5 Sonnet Performance Over Time',
font: {
size: 18
}
},
legend: {
labels: {
font: {
size: 14
}
}
},
tooltip: {
callbacks: {
label: function(context) {
let label = context.dataset.label || '';
if (label) {
label += ': ';
}
if (context.parsed.y !== null) {
label += context.parsed.y.toFixed(1) + '%';
}
return label;
}
}
}
}
}
});
});
</script>
> This graph shows the performance of Claude 3.5 Sonnet on
> [aider's code editing benchmark](/docs/benchmarks.html#the-benchmark)
> over time. 'Pass Rate 1' represents the initial success rate, while 'Pass Rate 2' shows the success rate after a second attempt with a chance to fix testing errors.
> The
> [aider LLM code editing leaderboard](https://aider.chat/docs/leaderboards/)
> ranks models based on Pass Rate 2.


@@ -0,0 +1,116 @@
---
title: o1-preview is SOTA on the aider leaderboard
excerpt: Preliminary benchmark results for the new OpenAI o1 models.
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# OpenAI o1-preview is SOTA on the aider leaderboard
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
{% assign edit_sorted = site.data.o1_results | sort: 'pass_rate_2' | reverse %}
{% include leaderboard_graph.html
chart_id="editChart"
data=edit_sorted
row_prefix="edit-row"
pass_rate_key="pass_rate_2"
%}
## o1-preview
OpenAI o1-preview scored 79.7% on aider's code editing benchmark,
a state-of-the-art result.
It achieved this result with the
["whole" edit format](/docs/leaderboards/#notes-on-the-edit-format),
where the LLM returns a full copy of the source code file with changes.
It is much more practical to use aider's
["diff" edit format](/docs/leaderboards/#notes-on-the-edit-format),
which allows the LLM to return search/replace blocks to
efficiently edit the source code.
This saves significant time and token costs.
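For illustration, a search/replace block in this diff format looks roughly like the sketch below (the file path and code here are made up):
```
demo/app.py
<<<<<<< SEARCH
def greet():
    print("hello")
=======
def greet(name):
    print(f"hello, {name}")
>>>>>>> REPLACE
```
Aider locates the SEARCH text in the file and swaps in the REPLACE text, so the model never has to re-emit the rest of the file.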
Using the diff edit format, the o1-preview model had a strong
benchmark score of 75.2%.
This likely places o1-preview between Sonnet and GPT-4o for practical use,
but at significantly higher cost.
## o1-mini
OpenAI o1-mini is priced similarly to GPT-4o and Claude 3.5 Sonnet,
but scored below those models.
It also works best with the whole edit format.
## Future work
The o1-preview model had trouble conforming to aider's diff edit format.
The o1-mini model had trouble conforming to both the whole and diff edit formats.
Aider is extremely permissive and tries hard to accept anything close
to the correct formats.
It is surprising that such strong models had trouble with
the syntactic requirements of simple text output formats.
It seems likely that aider could optimize its prompts and edit formats to
better harness the o1 models.
## Using aider with o1
OpenAI's new o1 models are supported in v0.57.0 of aider:
```
aider --model o1-mini
aider --model o1-preview
```
{: .note }
> These are initial benchmark results for the o1 models,
> based on aider v0.56.1-dev.
> See the [aider leaderboards](/docs/leaderboards/) for up-to-date results
> based on the latest aider releases.
<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
<thead style="background-color: #f2f2f2;">
<tr>
<th style="padding: 8px; text-align: left;">Model</th>
<th style="padding: 8px; text-align: center;">Percent completed correctly</th>
<th style="padding: 8px; text-align: center;">Percent using correct edit format</th>
<th style="padding: 8px; text-align: left;">Command</th>
<th style="padding: 8px; text-align: center;">Edit format</th>
</tr>
</thead>
<tbody>
{% for row in edit_sorted %}
<tr style="border-bottom: 1px solid #ddd;">
<td style="padding: 8px;">{{ row.model }}</td>
<td style="padding: 8px; text-align: center;">{{ row.pass_rate_2 }}%</td>
<td style="padding: 8px; text-align: center;">{{ row.percent_cases_well_formed }}%</td>
<td style="padding: 8px;"><code>{{ row.command }}</code></td>
<td style="padding: 8px; text-align: center;">{{ row.edit_format }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<style>
tr.selected {
color: #0056b3;
}
table {
table-layout: fixed;
}
td, th {
word-wrap: break-word;
overflow-wrap: break-word;
}
td:nth-child(3), td:nth-child(4) {
font-size: 12px;
}
</style>


@@ -0,0 +1,418 @@
---
title: Separating code reasoning and editing
excerpt: An Architect model describes how to solve the coding problem, and an Editor model translates that into file edits. This Architect/Editor approach produces SOTA benchmark results.
highlight_image: /assets/architect.jpg
draft: false
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# Separating code reasoning and editing
Aider now has experimental support for using two models to complete each coding task:
- An Architect model is asked to describe how to solve the coding problem.
- An Editor model is given the Architect's solution and asked to produce specific code editing instructions to apply those changes to existing source files.
Splitting up "code reasoning" and "code editing" in this manner
has produced SOTA results on
[aider's code editing benchmark](/docs/benchmarks.html#the-benchmark).
Using o1-preview as the Architect with either DeepSeek or o1-mini as the
Editor produced the SOTA score of 85%.
Using the Architect/Editor approach
also significantly improved the benchmark scores of many
models, compared to their previous "solo" baseline scores (striped bars).
<style>
.shaded td {
background-color: #f2f2f2;
border-top: 1px solid #ccc;
}
.table-container {
max-width: 100%;
overflow-x: auto;
}
.responsive-table {
border-collapse: separate;
border-spacing: 0;
width: 100%;
font-size: 16px;
border: 1px solid #ddd;
}
.responsive-table th, .responsive-table td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
word-break: break-word;
}
.responsive-table th {
background-color: #e2e2e2;
}
.responsive-table th:first-child,
.responsive-table td:first-child {
border-left: 1px solid #ddd;
}
.responsive-table th:last-child,
.responsive-table td:last-child {
border-right: 1px solid #ddd;
}
@media screen and (max-width: 600px) {
.responsive-table {
font-size: 12px;
}
.responsive-table th, .responsive-table td {
padding: 4px;
}
}
</style>
<style>
#passRateChart {
max-width: 100%;
height: auto !important;
}
</style>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-annotation@1.0.2"></script>
{% assign sorted_data = site.data.architect | sort: "pass_rate_2" | reverse %}
<canvas id="passRateChart" width="400" height="250"></canvas>
<script>
document.addEventListener("DOMContentLoaded", function() {
var ctx = document.getElementById('passRateChart').getContext('2d');
// Function to determine aspect ratio and base font size based on screen width
function getChartSettings() {
if (window.innerWidth < 600) {
return { aspectRatio: 1, baseFontSize: 8 }; // Tallest (square) chart for narrow screens
} else if (window.innerWidth < 800) {
return { aspectRatio: 1.2, baseFontSize: 10 }; // Medium screens
} else {
return { aspectRatio: 1.4, baseFontSize: 12 }; // Widest chart for large screens
}
}
var chartSettings = getChartSettings();
var baseFontSize = chartSettings.baseFontSize;
var labels = [];
var data = [];
var colorMapping = {
"claude-3.5-sonnet": "rgba(75, 192, 192, 0.2)",
"gpt-4o": "rgba(255, 99, 132, 0.2)",
"o1-preview": "rgba(54, 162, 235, 0.2)",
"o1-mini": "rgba(255, 206, 86, 0.2)",
"gpt-4o-mini": "rgba(153, 102, 255, 0.2)"
};
var borderColorMapping = {
"claude-3.5-sonnet": "rgba(75, 192, 192, 1)",
"gpt-4o": "rgba(255, 99, 132, 1)",
"o1-preview": "rgba(54, 162, 235, 1)",
"o1-mini": "rgba(255, 206, 86, 1)",
"gpt-4o-mini": "rgba(153, 102, 255, 1)"
};
var backgroundColors = [];
var borderColors = [];
var patterns = {};
for (var key in colorMapping) {
patterns[key] = ctx.createPattern(createStripePattern(colorMapping[key]), 'repeat');
}
{% assign grouped_data = sorted_data | group_by: "model" %}
{% for group in grouped_data %}
{% for item in group.items %}
if ("{{ item.editor_model }}" == "") {
labels.push("Baseline");
} else {
labels.push("{{ item.editor_model }}/{{ item.editor_edit_format | default: item.edit_format }}");
}
data.push({{ item.pass_rate_2 }});
if ("{{ item.editor_model }}" == "") {
backgroundColors.push(patterns["{{ item.model }}"]);
} else {
backgroundColors.push(colorMapping["{{ item.model }}"]);
}
borderColors.push(borderColorMapping["{{ item.model }}"]);
{% endfor %}
{% endfor %}
labels.reverse();
data.reverse();
backgroundColors.reverse();
borderColors.reverse();
var chart = new Chart(ctx, {
type: 'bar',
data: {
labels: labels,
datasets: [{
label: 'Pass Rate',
data: data,
backgroundColor: backgroundColors,
borderColor: borderColors,
borderWidth: 1
}]
},
options: {
responsive: true,
maintainAspectRatio: true,
aspectRatio: chartSettings.aspectRatio,
scales: {
y: {
beginAtZero: true,
title: {
display: true,
text: 'Pass Rate (%)',
font: {
size: baseFontSize + 6
}
},
ticks: {
font: {
size: baseFontSize
}
}
},
x: {
title: {
display: true,
text: 'Editor model and edit format',
font: {
size: baseFontSize + 6
}
},
ticks: {
font: {
size: baseFontSize + 4
},
maxRotation: 90, // Allow full rotation if needed
minRotation: 45 // Start rotating at 45 degrees to fit more labels
}
}
},
plugins: {
annotation: {
annotations: {
line1: {
type: 'line',
yMin: 79.7,
yMax: 79.7,
borderColor: 'rgba(255, 99, 132, 0.8)',
borderWidth: 2,
borderDash: [6, 6],
label: {
content: 'Previous SOTA',
enabled: true,
position: 'start',
xAdjust: 10,
font: {
size: baseFontSize
}
}
}
}
},
legend: {
display: true,
title: {
display: true,
text: 'Architect model',
font: {
size: baseFontSize + 2,
weight: 'bold'
}
},
labels: {
font: {
size: baseFontSize + 4
},
generateLabels: function(chart) {
var colorMapping = {
"o1-preview": "rgba(54, 162, 235, 0.2)",
"claude-3.5-sonnet": "rgba(75, 192, 192, 0.2)",
"gpt-4o": "rgba(255, 99, 132, 0.2)",
"o1-mini": "rgba(255, 206, 86, 0.2)",
"gpt-4o-mini": "rgba(153, 102, 255, 0.2)"
};
return Object.keys(colorMapping).reverse().map(function(key) {
return {
text: key,
fillStyle: colorMapping[key],
strokeStyle: colorMapping[key].replace('0.2', '1'),
lineWidth: 1
};
});
}
}
}
}
}
});
// Update aspect ratio and font sizes on window resize
window.addEventListener('resize', function() {
var newSettings = getChartSettings();
chart.options.aspectRatio = newSettings.aspectRatio;
baseFontSize = newSettings.baseFontSize;
// Update font sizes
chart.options.scales.y.title.font.size = baseFontSize + 6;
chart.options.scales.y.ticks.font.size = baseFontSize;
chart.options.scales.x.title.font.size = baseFontSize + 6;
chart.options.scales.x.ticks.font.size = baseFontSize + 4;
chart.options.plugins.annotation.annotations.line1.label.font.size = baseFontSize;
chart.options.plugins.legend.title.font.size = baseFontSize + 2;
chart.options.plugins.legend.labels.font.size = baseFontSize + 4;
chart.update();
});
});
function createStripePattern(baseColor) {
var canvas = document.createElement('canvas');
canvas.width = 10;
canvas.height = 10;
var ctx = canvas.getContext('2d');
ctx.fillStyle = baseColor;
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.strokeStyle = 'rgba(0, 0, 0, 0.1)';
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(10, 10);
ctx.stroke();
return canvas;
}
</script>
## Motivation
This approach was motivated by the release of OpenAI's o1 models.
They are strong at reasoning, but often fail to output properly formatted
code editing instructions.
It helps to instead let them describe the solution
however they prefer and then pass that output to a more traditional LLM.
This second Editor LLM can then interpret the solution description and
produce the code editing instructions needed to update
the existing source code.
This approach has recently become attractive for aider due to
rapid improvements in the speed and costs of frontier models.
In particular, chaining older LLMs would have been quite slow and
incompatible with aider's goal of providing an interactive,
pair programming AI coding experience.
## Code reasoning and code editing
Normally aider asks the model to solve a coding problem in one prompt,
asking the LLM to explain the solution and return
a well formatted series of file edits.
All of [aider's editing formats](/docs/more/edit-formats.html)
require the LLM to return source code edits in a specific text
format, so that aider can process the edits and apply them to the local source files.
Because this all happens in a single prompt/response round trip to the LLM,
the model has to split its attention between
solving the coding problem and conforming to the edit format.
The Architect/Editor approach splits this into two inference steps, possibly
using two different LLMs:
1. Solve the coding problem (Architect).
2. Turn the proposed solution into a series of well formed code edits (Editor).
The Architect/Editor approach allows the Architect to focus on solving the coding problem
and *describe the solution however comes naturally to it*.
Similarly, the Editor can focus all of its attention on properly formatting the edits
without needing to reason much about how to solve the coding problem.
We can assign the Architect and Editor roles to LLMs which are well suited to their needs.
Strong reasoning models like o1-preview make excellent Architects, while
the Editor role can be assigned to an appropriate model based on cost, speed
and code editing skill.
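The two-step flow can be sketched in a few lines of Python (a minimal illustration, not aider's actual implementation; `complete` is a hypothetical stand-in for any chat-completion API):

```python
def architect_editor(task, complete, architect="o1-preview", editor="deepseek"):
    # Step 1: the Architect describes a solution however it prefers,
    # with no constraints on the output format.
    plan = complete(architect, f"Describe how to solve this coding task:\n{task}")
    # Step 2: the Editor's only job is translating that free-form plan
    # into well formed file edits.
    edits = complete(editor, f"Turn this plan into file edits:\n{plan}")
    return edits

# Toy usage with a canned responder standing in for real API calls:
fake = lambda model, prompt: f"[{model}] {prompt.splitlines()[0]}"
print(architect_editor("rename foo to bar", fake))
# prints: [deepseek] Turn this plan into file edits:
```

Because the two steps are independent, any pair of models can be plugged into the two roles.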
## Results
The graph above and the table below show the
[aider's code editing benchmark](/docs/benchmarks.html#the-benchmark)
score for various combinations of Architect and Editor models.
Some noteworthy observations:
- Pairing o1-preview as Architect with either Deepseek or o1-mini as Editor sets a SOTA significantly above the previous best score. This result is obtained with the "whole" editing format, requiring the Editor to output a full updated copy of each edited source file. Both of these steps are therefore quite slow, so probably not practical for interactive use with aider.
- Pairing OpenAI's o1-preview with Anthropic's Sonnet as the Editor produces the second best result. This is an entirely practical configuration for users able to work with both providers.
- Pairing many models with themselves in the Architect/Editor configuration can provide
significant benefits.
Sonnet, GPT-4o and GPT-4o-mini all scored higher when used as an Architect/Editor pair.
- Deepseek is surprisingly effective as an Editor model. It seems remarkably capable at turning proposed coding solutions into new, updated versions of the source files. Using the efficient "diff" editing format, Deepseek helps all the Architect models except for Sonnet.
## Try it!
The development version of aider
has built-in defaults to support Architect/Editor coding with
o1-preview, o1-mini, GPT-4o and Claude 3.5 Sonnet.
Run aider with `--architect` or get started quickly like this:
```
pip install -U aider-chat
# Change directory into a git repo
cd /to/your/git/repo
# Work with Claude 3.5 Sonnet as the Architect and Editor
export ANTHROPIC_API_KEY=your-key-goes-here
aider --sonnet --architect
# Work with OpenAI models, using gpt-4o as the Editor
export OPENAI_API_KEY=your-key-goes-here
aider --4o --architect
aider --o1-mini --architect
aider --o1-preview --architect
```
## More info
Aider has a number of "chat modes", and "architect" is available as a new chat mode.
The `--architect` switch is a shortcut for `--chat-mode architect`.
For more details, see documentation on
[aider's chat modes](/docs/usage/modes.html).
## Full results
Below are the benchmark results using various models as the Architect, paired with
various models as the Editor.
Each section includes a "baseline" result,
where the model works
by itself in aider's normal "code" editing mode
(not as part of an Architect/Editor configuration).
This "solo" baseline represents the performance previously available when using
this model with aider.
<div class="table-container">
<table class="responsive-table">
<thead>
<tr>
<th>Architect</th>
<th>Editor</th>
<th>Edit Format</th>
<th>Pass Rate</th>
</tr>
</thead>
<tbody>
{% for group in grouped_data %}
{% assign group_class = forloop.index | modulo: 2 | plus: 1 %}
{% for item in group.items %}
<tr class="{% if group_class == 1 %}shaded{% endif %}">
<td>{{ item.model }}</td>
<td>{% if item.editor_model %}{{ item.editor_model }}{% else %}<b>Baseline</b>{% endif %}</td>
<td style="text-align: center;">{{ item.editor_edit_format | default: item.edit_format }}</td>
<td style="text-align: right;">{{ item.pass_rate_2 }}%</td>
</tr>
{% endfor %}
{% endfor %}
</tbody>
</table>
</div>


@@ -0,0 +1,178 @@
---
title: Details matter with open source models
excerpt: Open source LLMs are becoming very powerful, but pay attention to how you (or your provider) are serving the model. It can affect code editing skill.
highlight_image: /assets/quantization.jpg
draft: false
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# Details matter with open source models
{: .no_toc }
Open source models like Qwen 2.5 Coder 32B Instruct are performing very well on
aider's code editing benchmark, rivaling closed source frontier models.
But pay attention to how your model is being served and quantized,
as it can impact code editing skill.
Open source models are often available at a variety of quantizations,
and can be served with different token limits.
These details matter when working with code.
The graph and table below compare different versions of the Qwen 2.5 Coder 32B Instruct model,
served both locally and from a variety of cloud providers.
- The [HuggingFace BF16 weights](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) served via [glhf.chat](https://glhf.chat).
- [4bit and 8bit quants for mlx](https://t.co/cwX3DYX35D).
- The results from [OpenRouter's mix of providers](https://openrouter.ai/qwen/qwen-2.5-coder-32b-instruct/providers) which serve the model with different levels of quantization.
- Results from individual providers served via OpenRouter and directly to their own APIs.
- Ollama locally serving different quantizations from the [Ollama model library](https://ollama.com/library/qwen2.5-coder:32b-instruct-q4_K_M).
This benchmarking effort highlighted a number of pitfalls and details which
can have a significant impact on the model's ability to correctly edit code:
- **Quantization** -- Open source models are often available at dozens of different quantizations.
Most seem to only modestly decrease code editing skill, but stronger quantizations
do have a real impact.
- **Context window** -- Cloud providers can decide how large a context window to accept,
and they often choose differently. Ollama defaults to a tiny 2k context window,
and silently discards data that exceeds it. Such a small window has
catastrophic effects on performance.
- **Output token limits** -- Open source models are often served with wildly
differing output token limits. This has a direct impact on how much code the
model can write or edit in a response.
- **Buggy cloud providers** -- Between Qwen 2.5 Coder 32B Instruct
and DeepSeek V2.5, there were
multiple cloud providers with broken or buggy API endpoints.
They seemed
to be returning results that differed from what the advertised
quantization and context sizes would predict.
The harm caused to the code editing benchmark varied from serious
to catastrophic.
The best versions of the model rival GPT-4o, while the worst performing
quantization is more like the older GPT-4 Turbo.
Even an excellent fp16 quantization falls to GPT-3.5 Turbo levels of performance
if run with Ollama's default 2k context window.
### Sections
{: .no_toc }
- TOC
{:toc}
## Benchmark results
<canvas id="quantChart" width="800" height="600" style="margin: 20px 0"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
{% include quant-chart.js %}
</script>
<input type="text" id="quantSearchInput" placeholder="Search..." style="width: 100%; max-width: 800px; margin: 10px auto; padding: 8px; display: block; border: 1px solid #ddd; border-radius: 4px;">
<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
<thead style="background-color: #f2f2f2;">
<tr>
<th style="padding: 8px; text-align: left;">Model</th>
<th style="padding: 8px; text-align: center;">Percent completed correctly</th>
<th style="padding: 8px; text-align: center;">Percent using correct edit format</th>
<th style="padding: 8px; text-align: left;">Command</th>
<th style="padding: 8px; text-align: center;">Edit format</th>
</tr>
</thead>
<tbody>
{% assign quant_sorted = site.data.quant | sort: 'pass_rate_2' | reverse %}
{% for row in quant_sorted %}
<tr style="border-bottom: 1px solid #ddd;">
<td style="padding: 8px;">{{ row.model }}</td>
<td style="padding: 8px; text-align: center;">{{ row.pass_rate_2 }}%</td>
<td style="padding: 8px; text-align: center;">{{ row.percent_cases_well_formed }}%</td>
<td style="padding: 8px;"><code>{{ row.command }}</code></td>
<td style="padding: 8px; text-align: center;">{{ row.edit_format }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<style>
tr.selected {
color: #0056b3;
}
table {
table-layout: fixed;
}
td, th {
word-wrap: break-word;
overflow-wrap: break-word;
}
td:nth-child(3), td:nth-child(4) {
font-size: 12px;
}
</style>
<script>
document.getElementById('quantSearchInput').addEventListener('keyup', function() {
var input = this.value.toLowerCase();
var rows = document.querySelectorAll('tbody tr');
rows.forEach(function(row) {
var text = row.textContent.toLowerCase();
if(text.includes(input)) {
row.style.display = '';
row.classList.add('selected');
} else {
row.style.display = 'none';
row.classList.remove('selected');
}
});
});
</script>
## Setting Ollama's context window size
[Ollama uses a 2k context window by default](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size),
which is very small for working with aider.
Unlike most other LLM servers, Ollama does not throw an error if you submit
a request that exceeds the context window.
Instead, it just silently truncates the request by discarding the "oldest" messages
in the chat to make it fit within the context window.
Except for the single 2k context result,
all of the Ollama results above were collected with at least an 8k context window.
An 8k window is large enough to attempt all the coding problems in the benchmark.
Aider sets Ollama's context window to 8k by default, starting in aider v0.65.0.
You can change the Ollama server's context window with a
[`.aider.model.settings.yml` file](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
like this:
```
- name: ollama/qwen2.5-coder:32b-instruct-fp16
extra_params:
num_ctx: 8192
```
## Choosing providers with OpenRouter
OpenRouter allows you to ignore specific providers in your
[preferences](https://openrouter.ai/settings/preferences).
This can be used to limit your OpenRouter requests to be
served by only your preferred providers.
## Notes
This article went through many revisions as I received feedback from
numerous members of the community.
Here are some of the noteworthy learnings and changes:
- The first version of this article included incorrect Ollama models.
- Earlier Ollama results used the too small default 2k context window,
artificially harming the benchmark results.
- The benchmark results appear to have uncovered a problem in the way
OpenRouter was communicating with Hyperbolic.
They fixed the issue 11/24/24, shortly after it was pointed out.
