Merge branch 'main' into help

Paul Gauthier 2024-07-04 13:26:04 -03:00
commit b3eb1dea49
68 changed files with 1211 additions and 710 deletions


@@ -69,3 +69,17 @@ jobs:
       - name: Deploy to GitHub Pages
         id: deployment
         uses: actions/deploy-pages@v2
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install linkchecker
+        run: |
+          python -m pip install --upgrade pip
+          pip install linkchecker
+      - name: Run linkchecker
+        run: |
+          linkchecker https://aider.chat
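The new workflow step runs linkchecker against the deployed site to catch broken links after each Pages deploy. As a rough, hypothetical sketch of the first stage of what such a tool does (link extraction; linkchecker itself then also fetches each target and verifies its HTTP status), Python's standard library is enough:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags, the starting point for a link checker."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record every non-empty href on an <a> tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical sample page; a real checker would fetch and crawl the site.
page = '<a href="https://aider.chat/docs/">docs</a> <a href="/blog/">blog</a>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # → ['https://aider.chat/docs/', '/blog/']
```

A real run would then resolve relative links against the base URL and issue a request per target, which is what the `linkchecker https://aider.chat` CI step does.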


@@ -30,11 +30,6 @@ jobs:
         with:
           python-version: ${{ matrix.python-version }}
-      - name: Install universal ctags
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y universal-ctags
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip


@@ -30,10 +30,6 @@ jobs:
         with:
           python-version: ${{ matrix.python-version }}
-      - name: Install universal ctags
-        run: |
-          choco install universal-ctags
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip


@@ -1,17 +1,39 @@
 # Release history
-### v0.40.5
+### Aider v0.42.0
+- Performance release:
+  - 5X faster launch!
+  - Faster auto-complete in large git repos (users report ~100X speedup)!
+### Aider v0.41.0
+- [Allow Claude 3.5 Sonnet to stream back >4k tokens!](https://aider.chat/2024/07/01/sonnet-not-lazy.html)
+  - It is the first model capable of writing such large coherent, useful code edits.
+  - Do large refactors or generate multiple files of new code in one go.
+- Aider now uses `claude-3-5-sonnet-20240620` by default if `ANTHROPIC_API_KEY` is set in the environment.
+- [Enabled image support](https://aider.chat/docs/images-urls.html) for 3.5 Sonnet and for GPT-4o & 3.5 Sonnet via OpenRouter (by @yamitzky).
+- Added `--attribute-commit-message` to prefix aider's commit messages with "aider:".
+- Fixed regression in quality of one-line commit messages.
+- Automatically retry on Anthropic `overloaded_error`.
+- Bumped dependency versions.
+### Aider v0.40.6
+- Fixed `/undo` so it works regardless of `--attribute` settings.
+### Aider v0.40.5
 - Bump versions to pickup latest litellm to fix streaming issue with Gemini
 - https://github.com/BerriAI/litellm/issues/4408
-### v0.40.1
+### Aider v0.40.1
 - Improved context awareness of repomap.
 - Restored proper `--help` functionality.
-### v0.40.0
+### Aider v0.40.0
 - Improved prompting to discourage Sonnet from wasting tokens emitting unchanging code (#705).
 - Improved error info for token limit errors.
@@ -20,14 +42,14 @@
 - Improved invocation of flake8 linter for python code.
-### v0.39.0
+### Aider v0.39.0
 - Use `--sonnet` for Claude 3.5 Sonnet, which is the top model on [aider's LLM code editing leaderboard](https://aider.chat/docs/leaderboards/#claude-35-sonnet-takes-the-top-spot).
 - All `AIDER_xxx` environment variables can now be set in `.env` (by @jpshack-at-palomar).
 - Use `--llm-history-file` to log raw messages sent to the LLM (by @daniel-vainsencher).
 - Commit messages are no longer prefixed with "aider:". Instead the git author and committer names have "(aider)" added.
-### v0.38.0
+### Aider v0.38.0
 - Use `--vim` for [vim keybindings](https://aider.chat/docs/commands.html#vi) in the chat.
 - [Add LLM metadata](https://aider.chat/docs/llms/warnings.html#specifying-context-window-size-and-token-costs) via `.aider.models.json` file (by @caseymcc).
@@ -38,7 +60,7 @@
 - Documentation updates, moved into website/ subdir.
 - Moved tests/ into aider/tests/.
-### v0.37.0
+### Aider v0.37.0
 - Repo map is now optimized based on text of chat history as well as files added to chat.
 - Improved prompts when no files have been added to chat to solicit LLM file suggestions.
@@ -49,7 +71,7 @@
 - Detect supported audio sample rates for `/voice`.
 - Other small bug fixes.
-### v0.36.0
+### Aider v0.36.0
 - [Aider can now lint your code and fix any errors](https://aider.chat/2024/05/22/linting.html).
 - Aider automatically lints and fixes after every LLM edit.
@@ -62,7 +84,7 @@
 - Aider will automatically attempt to fix any test failures.
-### v0.35.0
+### Aider v0.35.0
 - Aider now uses GPT-4o by default.
 - GPT-4o tops the [aider LLM code editing leaderboard](https://aider.chat/docs/leaderboards/) at 72.9%, versus 68.4% for Opus.
@@ -71,7 +93,7 @@
 - Improved reflection feedback to LLMs using the diff edit format.
 - Improved retries on `httpx` errors.
-### v0.34.0
+### Aider v0.34.0
 - Updated prompting to use more natural phrasing about files, the git repo, etc. Removed reliance on read-write/read-only terminology.
 - Refactored prompting to unify some phrasing across edit formats.
@@ -81,11 +103,11 @@
 - Bugfix: catch and retry on all litellm exceptions.
-### v0.33.0
+### Aider v0.33.0
 - Added native support for [Deepseek models](https://aider.chat/docs/llms.html#deepseek) using `DEEPSEEK_API_KEY` and `deepseek/deepseek-chat`, etc rather than as a generic OpenAI compatible API.
-### v0.32.0
+### Aider v0.32.0
 - [Aider LLM code editing leaderboards](https://aider.chat/docs/leaderboards/) that rank popular models according to their ability to edit code.
 - Leaderboards include GPT-3.5/4 Turbo, Opus, Sonnet, Gemini 1.5 Pro, Llama 3, Deepseek Coder & Command-R+.
@@ -94,31 +116,31 @@
 - Improved retry handling on errors from model APIs.
 - Benchmark outputs results in YAML, compatible with leaderboard.
-### v0.31.0
+### Aider v0.31.0
 - [Aider is now also AI pair programming in your browser!](https://aider.chat/2024/05/02/browser.html) Use the `--browser` switch to launch an experimental browser based version of aider.
 - Switch models during the chat with `/model <name>` and search the list of available models with `/models <query>`.
-### v0.30.1
+### Aider v0.30.1
 - Adding missing `google-generativeai` dependency
-### v0.30.0
+### Aider v0.30.0
 - Added [Gemini 1.5 Pro](https://aider.chat/docs/llms.html#free-models) as a recommended free model.
 - Allow repo map for "whole" edit format.
 - Added `--models <MODEL-NAME>` to search the available models.
 - Added `--no-show-model-warnings` to silence model warnings.
-### v0.29.2
+### Aider v0.29.2
 - Improved [model warnings](https://aider.chat/docs/llms.html#model-warnings) for unknown or unfamiliar models
-### v0.29.1
+### Aider v0.29.1
 - Added better support for groq/llama3-70b-8192
-### v0.29.0
+### Aider v0.29.0
 - Added support for [directly connecting to Anthropic, Cohere, Gemini and many other LLM providers](https://aider.chat/docs/llms.html).
 - Added `--weak-model <model-name>` which allows you to specify which model to use for commit messages and chat history summarization.
@@ -132,32 +154,32 @@
 - Fixed crash when operating in a repo in a detached HEAD state.
 - Fix: Use the same default model in CLI and python scripting.
-### v0.28.0
+### Aider v0.28.0
 - Added support for new `gpt-4-turbo-2024-04-09` and `gpt-4-turbo` models.
 - Benchmarked at 61.7% on Exercism benchmark, comparable to `gpt-4-0613` and worse than the `gpt-4-preview-XXXX` models. See [recent Exercism benchmark results](https://aider.chat/2024/03/08/claude-3.html).
 - Benchmarked at 34.1% on the refactoring/laziness benchmark, significantly worse than the `gpt-4-preview-XXXX` models. See [recent refactor benchmark results](https://aider.chat/2024/01/25/benchmarks-0125.html).
 - Aider continues to default to `gpt-4-1106-preview` as it performs best on both benchmarks, and significantly better on the refactoring/laziness benchmark.
-### v0.27.0
+### Aider v0.27.0
 - Improved repomap support for typescript, by @ryanfreckleton.
 - Bugfix: Only /undo the files which were part of the last commit, don't stomp other dirty files
 - Bugfix: Show clear error message when OpenAI API key is not set.
 - Bugfix: Catch error for obscure languages without tags.scm file.
-### v0.26.1
+### Aider v0.26.1
 - Fixed bug affecting parsing of git config in some environments.
-### v0.26.0
+### Aider v0.26.0
 - Use GPT-4 Turbo by default.
 - Added `-3` and `-4` switches to use GPT 3.5 or GPT-4 (non-Turbo).
 - Bug fix to avoid reflecting local git errors back to GPT.
 - Improved logic for opening git repo on launch.
-### v0.25.0
+### Aider v0.25.0
 - Issue a warning if user adds too much code to the chat.
 - https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat
@@ -167,18 +189,18 @@
 - Show the user a FAQ link if edits fail to apply.
 - Made past articles part of https://aider.chat/blog/
-### v0.24.1
+### Aider v0.24.1
 - Fixed bug with cost computations when --no-stream in effect
-### v0.24.0
+### Aider v0.24.0
 - New `/web <url>` command which scrapes the url, turns it into fairly clean markdown and adds it to the chat.
 - Updated all OpenAI model names, pricing info
 - Default GPT 3.5 model is now `gpt-3.5-turbo-0125`.
 - Bugfix to the `!` alias for `/run`.
-### v0.23.0
+### Aider v0.23.0
 - Added support for `--model gpt-4-0125-preview` and OpenAI's alias `--model gpt-4-turbo-preview`. The `--4turbo` switch remains an alias for `--model gpt-4-1106-preview` at this time.
 - New `/test` command that runs a command and adds the output to the chat on non-zero exit status.
@@ -188,25 +210,25 @@
 - Added `--openrouter` as a shortcut for `--openai-api-base https://openrouter.ai/api/v1`
 - Fixed bug preventing use of env vars `OPENAI_API_BASE, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_API_DEPLOYMENT_ID`.
-### v0.22.0
+### Aider v0.22.0
 - Improvements for unified diff editing format.
 - Added ! as an alias for /run.
 - Autocomplete for /add and /drop now properly quotes filenames with spaces.
 - The /undo command asks GPT not to just retry reverted edit.
-### v0.21.1
+### Aider v0.21.1
 - Bugfix for unified diff editing format.
 - Added --4turbo and --4 aliases for --4-turbo.
-### v0.21.0
+### Aider v0.21.0
 - Support for python 3.12.
 - Improvements to unified diff editing format.
 - New `--check-update` arg to check if updates are available and exit with status code.
-### v0.20.0
+### Aider v0.20.0
 - Add images to the chat to automatically use GPT-4 Vision, by @joshuavial
@ -237,7 +259,7 @@
- Fixed bug where in-chat files were marked as both read-only and ready-write, sometimes confusing GPT. - Fixed bug where in-chat files were marked as both read-only and ready-write, sometimes confusing GPT.
- Fixed bug to properly handle repos with submodules. - Fixed bug to properly handle repos with submodules.
### v0.17.0 ### Aider v0.17.0
- Support for OpenAI's new 11/06 models: - Support for OpenAI's new 11/06 models:
- gpt-4-1106-preview with 128k context window - gpt-4-1106-preview with 128k context window
@ -249,19 +271,19 @@
- Fixed crash bug when `/add` used on file matching `.gitignore` - Fixed crash bug when `/add` used on file matching `.gitignore`
- Fixed misc bugs to catch and handle unicode decoding errors. - Fixed misc bugs to catch and handle unicode decoding errors.
### v0.16.3 ### Aider v0.16.3
- Fixed repo-map support for C#. - Fixed repo-map support for C#.
### v0.16.2 ### Aider v0.16.2
- Fixed docker image. - Fixed docker image.
### v0.16.1 ### Aider v0.16.1
- Updated tree-sitter dependencies to streamline the pip install process - Updated tree-sitter dependencies to streamline the pip install process
### v0.16.0 ### Aider v0.16.0
- [Improved repository map using tree-sitter](https://aider.chat/docs/repomap.html) - [Improved repository map using tree-sitter](https://aider.chat/docs/repomap.html)
- Switched from "edit block" to "search/replace block", which reduced malformed edit blocks. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 66.2%, no regression. - Switched from "edit block" to "search/replace block", which reduced malformed edit blocks. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 66.2%, no regression.
@ -269,21 +291,21 @@
- Bugfix to properly handle malformed `/add` wildcards. - Bugfix to properly handle malformed `/add` wildcards.
### v0.15.0 ### Aider v0.15.0
- Added support for `.aiderignore` file, which instructs aider to ignore parts of the git repo. - Added support for `.aiderignore` file, which instructs aider to ignore parts of the git repo.
- New `--commit` cmd line arg, which just commits all pending changes with a sensible commit message generated by gpt-3.5. - New `--commit` cmd line arg, which just commits all pending changes with a sensible commit message generated by gpt-3.5.
- Added universal ctags and multiple architectures to the [aider docker image](https://aider.chat/docs/docker.html) - Added universal ctags and multiple architectures to the [aider docker image](https://aider.chat/docs/install/docker.html)
- `/run` and `/git` now accept full shell commands, like: `/run (cd subdir; ls)` - `/run` and `/git` now accept full shell commands, like: `/run (cd subdir; ls)`
- Restored missing `--encoding` cmd line switch. - Restored missing `--encoding` cmd line switch.
### v0.14.2 ### Aider v0.14.2
- Easily [run aider from a docker image](https://aider.chat/docs/docker.html) - Easily [run aider from a docker image](https://aider.chat/docs/install/docker.html)
- Fixed bug with chat history summarization. - Fixed bug with chat history summarization.
- Fixed bug if `soundfile` package not available. - Fixed bug if `soundfile` package not available.
### v0.14.1 ### Aider v0.14.1
- /add and /drop handle absolute filenames and quoted filenames - /add and /drop handle absolute filenames and quoted filenames
- /add checks to be sure files are within the git repo (or root) - /add checks to be sure files are within the git repo (or root)
@ -291,14 +313,14 @@
- Fixed /add bug in when aider launched in repo subdir - Fixed /add bug in when aider launched in repo subdir
- Show models supported by api/key if requested model isn't available - Show models supported by api/key if requested model isn't available
### v0.14.0 ### Aider v0.14.0
- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial - [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark) - Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9 - Aider now requires Python >= 3.9
### v0.13.0 ### Aider v0.13.0
- [Only git commit dirty files that GPT tries to edit](https://aider.chat/docs/faq.html#how-did-v0130-change-git-usage) - [Only git commit dirty files that GPT tries to edit](https://aider.chat/docs/faq.html#how-did-v0130-change-git-usage)
- Send chat history as prompt/context for Whisper voice transcription - Send chat history as prompt/context for Whisper voice transcription
@ -306,14 +328,14 @@
- Late-bind importing `sounddevice`, as it was slowing down aider startup - Late-bind importing `sounddevice`, as it was slowing down aider startup
- Improved --foo/--no-foo switch handling for command line and yml config settings - Improved --foo/--no-foo switch handling for command line and yml config settings
### v0.12.0 ### Aider v0.12.0
- [Voice-to-code](https://aider.chat/docs/voice.html) support, which allows you to code with your voice. - [Voice-to-code](https://aider.chat/docs/voice.html) support, which allows you to code with your voice.
- Fixed bug where /diff was causing crash. - Fixed bug where /diff was causing crash.
- Improved prompting for gpt-4, refactor of editblock coder. - Improved prompting for gpt-4, refactor of editblock coder.
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 63.2% for gpt-4/diff, no regression. - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 63.2% for gpt-4/diff, no regression.
### v0.11.1 ### Aider v0.11.1
- Added a progress bar when initially creating a repo map. - Added a progress bar when initially creating a repo map.
- Fixed bad commit message when adding new file to empty repo. - Fixed bad commit message when adding new file to empty repo.
@@ -322,7 +344,7 @@
 - Fixed /commit bug from repo refactor, added test coverage.
 - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.4% for gpt-3.5/whole (no regression).
-### v0.11.0
+### Aider v0.11.0
 - Automatically summarize chat history to avoid exhausting context window.
 - More detail on dollar costs when running with `--no-stream`
@@ -330,12 +352,12 @@
 - Defend against GPT-3.5 or non-OpenAI models suggesting filenames surrounded by asterisks.
 - Refactored GitRepo code out of the Coder class.
-### v0.10.1
+### Aider v0.10.1
 - /add and /drop always use paths relative to the git root
 - Encourage GPT to use language like "add files to the chat" to ask users for permission to edit them.
-### v0.10.0
+### Aider v0.10.0
 - Added `/git` command to run git from inside aider chats.
 - Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
@@ -347,7 +369,7 @@
 - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 64.7% for gpt-4/diff (no regression)
-### v0.9.0
+### Aider v0.9.0
 - Support for the OpenAI models in [Azure](https://aider.chat/docs/faq.html#azure)
 - Added `--show-repo-map`
@@ -356,7 +378,7 @@
 - Bugfix: recognize and add files in subdirectories mentioned by user or GPT
 - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.8% for gpt-3.5-turbo/whole (no regression)
-### v0.8.3
+### Aider v0.8.3
 - Added `--dark-mode` and `--light-mode` to select colors optimized for terminal background
 - Install docs link to [NeoVim plugin](https://github.com/joshuavial/aider.nvim) by @joshuavial
@@ -367,11 +389,11 @@
 - Bugfix/improvement to /add and /drop to recurse selected directories
 - Bugfix for live diff output when using "whole" edit format
-### v0.8.2
+### Aider v0.8.2
 - Disabled general availability of gpt-4 (it's rolling out, not 100% available yet)
-### v0.8.1
+### Aider v0.8.1
 - Ask to create a git repo if none found, to better track GPT's code changes
 - Glob wildcards are now supported in `/add` and `/drop` commands
@@ -383,7 +405,7 @@
 - Bugfix for chats with multiple files
 - Bugfix in editblock coder prompt
-### v0.8.0
+### Aider v0.8.0
 - [Benchmark comparing code editing in GPT-3.5 and GPT-4](https://aider.chat/docs/benchmarks.html)
 - Improved Windows support:
@@ -396,15 +418,15 @@
 - Added `--code-theme` switch to control the pygments styling of code blocks (by @kwmiebach)
 - Better status messages explaining the reason when ctags is disabled
-### v0.7.2:
+### Aider v0.7.2:
 - Fixed a bug to allow aider to edit files that contain triple backtick fences.
-### v0.7.1:
+### Aider v0.7.1:
 - Fixed a bug in the display of streaming diffs in GPT-3.5 chats
-### v0.7.0:
+### Aider v0.7.0:
 - Graceful handling of context window exhaustion, including helpful tips.
 - Added `--message` to give GPT that one instruction and then exit after it replies and any edits are performed.
@@ -418,13 +440,13 @@
 - Initial experiments show that using functions makes 3.5 less competent at coding.
 - Limit automatic retries when GPT returns a malformed edit response.
-### v0.6.2
+### Aider v0.6.2
 * Support for `gpt-3.5-turbo-16k`, and all OpenAI chat models
 * Improved ability to correct when gpt-4 omits leading whitespace in code edits
 * Added `--openai-api-base` to support API proxies, etc.
-### v0.5.0
+### Aider v0.5.0
 - Added support for `gpt-3.5-turbo` and `gpt-4-32k`.
 - Added `--map-tokens` to set a token budget for the repo map, along with a PageRank based algorithm for prioritizing which files and identifiers to include in the map.


@@ -31,6 +31,7 @@ and works best with GPT-4o, Claude 3.5 Sonnet, Claude 3 Opus and DeepSeek Coder
 # Because this page is rendered by GitHub as the repo README
 cog.out(open("website/_includes/get-started.md").read())
 ]]]-->
 You can get started quickly like this:
 ```
@@ -39,18 +40,13 @@ $ pip install aider-chat
 # Change directory into a git repo
 $ cd /to/your/git/repo
+# Work with Claude 3.5 Sonnet on your repo
+$ export ANTHROPIC_API_KEY=your-key-goes-here
+$ aider
 # Work with GPT-4o on your repo
 $ export OPENAI_API_KEY=your-key-goes-here
 $ aider
-# Or, work with Anthropic's models
-$ export ANTHROPIC_API_KEY=your-key-goes-here
-# Claude 3 Opus
-$ aider --opus
-# Claude 3.5 Sonnet
-$ aider --sonnet
 ```
 <!--[[[end]]]-->
@ -79,8 +75,8 @@ and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
- Edit files in your editor while chatting with aider, - Edit files in your editor while chatting with aider,
and it will always use the latest version. and it will always use the latest version.
Pair program with AI. Pair program with AI.
- Add images to the chat (GPT-4o, GPT-4 Turbo, etc). - [Add images to the chat](https://aider.chat/docs/images-urls.html) (GPT-4o, Claude 3.5 Sonnet, etc).
- Add URLs to the chat and aider will read their content. - [Add URLs to the chat](https://aider.chat/docs/images-urls.html) and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html). - [Code with your voice](https://aider.chat/docs/voice.html).
@ -125,4 +121,5 @@ projects like django, scikitlearn, matplotlib, etc.
- *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470) - *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.* -- [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548) - *After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.* -- [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs) - *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life.* -- [maledorak](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *Best agent for actual dev work in existing codebases.* -- [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20) - *Best agent for actual dev work in existing codebases.* -- [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)


@ -1 +1 @@
__version__ = "0.40.6-dev" __version__ = "0.42.1-dev"


@ -6,7 +6,7 @@ import sys
import configargparse import configargparse
from aider import __version__, models from aider import __version__
from aider.args_formatter import ( from aider.args_formatter import (
DotEnvFormatter, DotEnvFormatter,
MarkdownHelpFormatter, MarkdownHelpFormatter,
@ -25,16 +25,9 @@ def get_parser(default_config_files, git_root):
description="aider is GPT powered coding in your terminal", description="aider is GPT powered coding in your terminal",
add_config_file_help=True, add_config_file_help=True,
default_config_files=default_config_files, default_config_files=default_config_files,
config_file_parser_class=configargparse.YAMLConfigFileParser,
auto_env_var_prefix="AIDER_", auto_env_var_prefix="AIDER_",
) )
group = parser.add_argument_group("Main") group = parser.add_argument_group("Main")
group.add_argument(
"--llm-history-file",
metavar="LLM_HISTORY_FILE",
default=None,
help="Log the conversation with the LLM to this file (for example, .aider.llm.history)",
)
group.add_argument( group.add_argument(
"files", metavar="FILE", nargs="*", help="files to edit with an LLM (optional)" "files", metavar="FILE", nargs="*", help="files to edit with an LLM (optional)"
) )
@ -50,12 +43,11 @@ def get_parser(default_config_files, git_root):
env_var="ANTHROPIC_API_KEY", env_var="ANTHROPIC_API_KEY",
help="Specify the Anthropic API key", help="Specify the Anthropic API key",
) )
default_model = models.DEFAULT_MODEL_NAME
group.add_argument( group.add_argument(
"--model", "--model",
metavar="MODEL", metavar="MODEL",
default=default_model, default=None,
help=f"Specify the model to use for the main chat (default: {default_model})", help="Specify the model to use for the main chat",
) )
opus_model = "claude-3-opus-20240229" opus_model = "claude-3-opus-20240229"
group.add_argument( group.add_argument(
@ -150,13 +142,13 @@ def get_parser(default_config_files, git_root):
group.add_argument( group.add_argument(
"--model-settings-file", "--model-settings-file",
metavar="MODEL_SETTINGS_FILE", metavar="MODEL_SETTINGS_FILE",
default=None, default=".aider.model.settings.yml",
help="Specify a file with aider model settings for unknown models", help="Specify a file with aider model settings for unknown models",
) )
group.add_argument( group.add_argument(
"--model-metadata-file", "--model-metadata-file",
metavar="MODEL_METADATA_FILE", metavar="MODEL_METADATA_FILE",
default=None, default=".aider.model.metadata.json",
help="Specify a file with context window and costs for unknown models", help="Specify a file with context window and costs for unknown models",
) )
group.add_argument( group.add_argument(
@ -236,6 +228,12 @@ def get_parser(default_config_files, git_root):
default=False, default=False,
help="Restore the previous chat history messages (default: False)", help="Restore the previous chat history messages (default: False)",
) )
group.add_argument(
"--llm-history-file",
metavar="LLM_HISTORY_FILE",
default=None,
help="Log the conversation with the LLM to this file (for example, .aider.llm.history)",
)
########## ##########
group = parser.add_argument_group("Output Settings") group = parser.add_argument_group("Output Settings")
@ -345,6 +343,12 @@ def get_parser(default_config_files, git_root):
default=True, default=True,
help="Attribute aider commits in the git committer name (default: True)", help="Attribute aider commits in the git committer name (default: True)",
) )
group.add_argument(
"--attribute-commit-message",
action=argparse.BooleanOptionalAction,
default=False,
help="Prefix commit messages with 'aider: ' (default: False)",
)
group.add_argument( group.add_argument(
"--dry-run", "--dry-run",
action=argparse.BooleanOptionalAction, action=argparse.BooleanOptionalAction,
@ -381,7 +385,6 @@ def get_parser(default_config_files, git_root):
) )
group.add_argument( group.add_argument(
"--test-cmd", "--test-cmd",
action="append",
help="Specify command to run tests", help="Specify command to run tests",
default=[], default=[],
) )
@ -459,6 +462,12 @@ def get_parser(default_config_files, git_root):
help="Print the system prompts and exit (debug)", help="Print the system prompts and exit (debug)",
default=False, default=False,
) )
group.add_argument(
"--exit",
action="store_true",
help="Do all startup activities then exit before accepting user input (debug)",
default=False,
)
group.add_argument( group.add_argument(
"--message", "--message",
"--msg", "--msg",


@ -2,6 +2,7 @@
import hashlib import hashlib
import json import json
import mimetypes
import os import os
import re import re
import sys import sys
@ -13,8 +14,6 @@ from json.decoder import JSONDecodeError
from pathlib import Path from pathlib import Path
import git import git
import openai
from jsonschema import Draft7Validator
from rich.console import Console, Text from rich.console import Console, Text
from rich.markdown import Markdown from rich.markdown import Markdown
@ -23,7 +22,7 @@ from aider.commands import Commands
from aider.history import ChatSummary from aider.history import ChatSummary
from aider.io import InputOutput from aider.io import InputOutput
from aider.linter import Linter from aider.linter import Linter
from aider.litellm import litellm from aider.llm import litellm
from aider.mdstream import MarkdownStream from aider.mdstream import MarkdownStream
from aider.repo import GitRepo from aider.repo import GitRepo
from aider.repomap import RepoMap from aider.repomap import RepoMap
@ -37,7 +36,7 @@ class MissingAPIKeyError(ValueError):
pass pass
class ExhaustedContextWindow(Exception): class FinishReasonLength(Exception):
pass pass
@ -67,6 +66,7 @@ class Coder:
test_cmd = None test_cmd = None
lint_outcome = None lint_outcome = None
test_outcome = None test_outcome = None
multi_response_content = ""
@classmethod @classmethod
def create( def create(
@ -221,6 +221,7 @@ class Coder:
test_cmd=None, test_cmd=None,
attribute_author=True, attribute_author=True,
attribute_committer=True, attribute_committer=True,
attribute_commit_message=False,
): ):
if not fnames: if not fnames:
fnames = [] fnames = []
@ -280,6 +281,7 @@ class Coder:
models=main_model.commit_message_models(), models=main_model.commit_message_models(),
attribute_author=attribute_author, attribute_author=attribute_author,
attribute_committer=attribute_committer, attribute_committer=attribute_committer,
attribute_commit_message=attribute_commit_message,
) )
self.root = self.repo.root self.root = self.repo.root
except FileNotFoundError: except FileNotFoundError:
@ -344,6 +346,8 @@ class Coder:
# validate the functions jsonschema # validate the functions jsonschema
if self.functions: if self.functions:
from jsonschema import Draft7Validator
for function in self.functions: for function in self.functions:
Draft7Validator.check_schema(function) Draft7Validator.check_schema(function)
@ -572,10 +576,12 @@ class Coder:
image_messages = [] image_messages = []
for fname, content in self.get_abs_fnames_content(): for fname, content in self.get_abs_fnames_content():
if is_image_file(fname): if is_image_file(fname):
image_url = f"data:image/{Path(fname).suffix.lstrip('.')};base64,{content}" mime_type, _ = mimetypes.guess_type(fname)
image_messages.append( if mime_type and mime_type.startswith("image/"):
{"type": "image_url", "image_url": {"url": image_url, "detail": "high"}} image_url = f"data:{mime_type};base64,{content}"
) image_messages.append(
{"type": "image_url", "image_url": {"url": image_url, "detail": "high"}}
)
if not image_messages: if not image_messages:
return None return None
@ -805,33 +811,56 @@ class Coder:
messages = self.format_messages() messages = self.format_messages()
self.io.log_llm_history("TO LLM", format_messages(messages))
if self.verbose: if self.verbose:
utils.show_messages(messages, functions=self.functions) utils.show_messages(messages, functions=self.functions)
self.multi_response_content = ""
if self.show_pretty() and self.stream:
mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme)
self.mdstream = MarkdownStream(mdargs=mdargs)
else:
self.mdstream = None
exhausted = False exhausted = False
interrupted = False interrupted = False
try: try:
yield from self.send(messages, functions=self.functions) while True:
except KeyboardInterrupt: try:
interrupted = True yield from self.send(messages, functions=self.functions)
except ExhaustedContextWindow: break
exhausted = True except KeyboardInterrupt:
except litellm.exceptions.BadRequestError as err: interrupted = True
if "ContextWindowExceededError" in err.message: break
exhausted = True except litellm.ContextWindowExceededError:
else: # The input is overflowing the context window!
self.io.tool_error(f"BadRequestError: {err}") exhausted = True
return break
except openai.BadRequestError as err: except litellm.exceptions.BadRequestError as br_err:
if "maximum context length" in str(err): self.io.tool_error(f"BadRequestError: {br_err}")
exhausted = True return
else: except FinishReasonLength:
raise err # We hit the 4k output limit!
except Exception as err: if not self.main_model.can_prefill:
self.io.tool_error(f"Unexpected error: {err}") exhausted = True
return break
self.multi_response_content = self.get_multi_response_content()
if messages[-1]["role"] == "assistant":
messages[-1]["content"] = self.multi_response_content
else:
messages.append(dict(role="assistant", content=self.multi_response_content))
except Exception as err:
self.io.tool_error(f"Unexpected error: {err}")
traceback.print_exc()
return
finally:
if self.mdstream:
self.live_incremental_response(True)
self.mdstream = None
self.partial_response_content = self.get_multi_response_content(True)
self.multi_response_content = ""
if exhausted: if exhausted:
self.show_exhausted_error() self.show_exhausted_error()
@ -851,8 +880,6 @@ class Coder:
self.io.tool_output() self.io.tool_output()
self.io.log_llm_history("LLM RESPONSE", format_content("ASSISTANT", content))
if interrupted: if interrupted:
content += "\n^C KeyboardInterrupt" content += "\n^C KeyboardInterrupt"
self.cur_messages += [dict(role="assistant", content=content)] self.cur_messages += [dict(role="assistant", content=content)]
@ -1045,6 +1072,8 @@ class Coder:
self.partial_response_content = "" self.partial_response_content = ""
self.partial_response_function_call = dict() self.partial_response_function_call = dict()
self.io.log_llm_history("TO LLM", format_messages(messages))
interrupted = False interrupted = False
try: try:
hash_object, completion = send_with_retries( hash_object, completion = send_with_retries(
@ -1060,6 +1089,11 @@ class Coder:
self.keyboard_interrupt() self.keyboard_interrupt()
interrupted = True interrupted = True
finally: finally:
self.io.log_llm_history(
"LLM RESPONSE",
format_content("ASSISTANT", self.partial_response_content),
)
if self.partial_response_content: if self.partial_response_content:
self.io.ai_output(self.partial_response_content) self.io.ai_output(self.partial_response_content)
elif self.partial_response_function_call: elif self.partial_response_function_call:
@ -1101,7 +1135,7 @@ class Coder:
if show_func_err and show_content_err: if show_func_err and show_content_err:
self.io.tool_error(show_func_err) self.io.tool_error(show_func_err)
self.io.tool_error(show_content_err) self.io.tool_error(show_content_err)
raise Exception("No data found in openai response!") raise Exception("No data found in LLM response!")
tokens = None tokens = None
if hasattr(completion, "usage") and completion.usage is not None: if hasattr(completion, "usage") and completion.usage is not None:
@ -1129,61 +1163,62 @@ class Coder:
if tokens is not None: if tokens is not None:
self.io.tool_output(tokens) self.io.tool_output(tokens)
if (
hasattr(completion.choices[0], "finish_reason")
and completion.choices[0].finish_reason == "length"
):
raise FinishReasonLength()
def show_send_output_stream(self, completion): def show_send_output_stream(self, completion):
if self.show_pretty(): for chunk in completion:
mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme) if len(chunk.choices) == 0:
mdstream = MarkdownStream(mdargs=mdargs) continue
else:
mdstream = None
try: if (
for chunk in completion: hasattr(chunk.choices[0], "finish_reason")
if len(chunk.choices) == 0: and chunk.choices[0].finish_reason == "length"
continue ):
raise FinishReasonLength()
if ( try:
hasattr(chunk.choices[0], "finish_reason") func = chunk.choices[0].delta.function_call
and chunk.choices[0].finish_reason == "length" # dump(func)
): for k, v in func.items():
raise ExhaustedContextWindow() if k in self.partial_response_function_call:
self.partial_response_function_call[k] += v
else:
self.partial_response_function_call[k] = v
except AttributeError:
pass
try: try:
func = chunk.choices[0].delta.function_call text = chunk.choices[0].delta.content
# dump(func) if text:
for k, v in func.items(): self.partial_response_content += text
if k in self.partial_response_function_call: except AttributeError:
self.partial_response_function_call[k] += v text = None
else:
self.partial_response_function_call[k] = v
except AttributeError:
pass
try: if self.show_pretty():
text = chunk.choices[0].delta.content self.live_incremental_response(False)
if text: elif text:
self.partial_response_content += text sys.stdout.write(text)
except AttributeError: sys.stdout.flush()
text = None yield text
if self.show_pretty(): def live_incremental_response(self, final):
self.live_incremental_response(mdstream, False)
elif text:
sys.stdout.write(text)
sys.stdout.flush()
yield text
finally:
if mdstream:
self.live_incremental_response(mdstream, True)
def live_incremental_response(self, mdstream, final):
show_resp = self.render_incremental_response(final) show_resp = self.render_incremental_response(final)
if not show_resp: self.mdstream.update(show_resp, final=final)
return
mdstream.update(show_resp, final=final)
def render_incremental_response(self, final): def render_incremental_response(self, final):
return self.partial_response_content return self.get_multi_response_content()
def get_multi_response_content(self, final=False):
cur = self.multi_response_content
new = self.partial_response_content
if new.rstrip() != new and not final:
new = new.rstrip()
return cur + new
def get_rel_fname(self, fname): def get_rel_fname(self, fname):
return os.path.relpath(fname, self.root) return os.path.relpath(fname, self.root)
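The `get_multi_response_content` change above joins the accumulated response with the latest partial chunk. Written as a free function purely for illustration (not the method itself), the joining rule is:

```python
def join_multi_response(cur, new, final=False):
    # Trim trailing whitespace from intermediate chunks so resumed
    # responses concatenate cleanly; keep it verbatim on the final pass.
    if new.rstrip() != new and not final:
        new = new.rstrip()
    return cur + new
```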
@ -1192,13 +1227,19 @@ class Coder:
files = [self.get_rel_fname(fname) for fname in self.abs_fnames] files = [self.get_rel_fname(fname) for fname in self.abs_fnames]
return sorted(set(files)) return sorted(set(files))
def is_file_safe(self, fname):
try:
return Path(self.abs_root_path(fname)).is_file()
except OSError:
return
def get_all_relative_files(self): def get_all_relative_files(self):
if self.repo: if self.repo:
files = self.repo.get_tracked_files() files = self.repo.get_tracked_files()
else: else:
files = self.get_inchat_relative_files() files = self.get_inchat_relative_files()
files = [fname for fname in files if Path(self.abs_root_path(fname)).is_file()] files = [fname for fname in files if self.is_file_safe(fname)]
return sorted(set(files)) return sorted(set(files))
def get_all_abs_files(self): def get_all_abs_files(self):
@ -1405,8 +1446,8 @@ class Coder:
return context return context
def auto_commit(self, edited): def auto_commit(self, edited):
# context = self.get_context_from_history(self.cur_messages) context = self.get_context_from_history(self.cur_messages)
res = self.repo.commit(fnames=edited, aider_edits=True) res = self.repo.commit(fnames=edited, context=context, aider_edits=True)
if res: if res:
commit_hash, commit_message = res commit_hash, commit_message = res
self.last_aider_commit_hash = commit_hash self.last_aider_commit_hash = commit_hash


@ -1,6 +1,6 @@
from pathlib import Path
from aider import diffs from aider import diffs
from pathlib import Path
from ..dump import dump # noqa: F401 from ..dump import dump # noqa: F401
from .base_coder import Coder from .base_coder import Coder
@ -26,10 +26,10 @@ class WholeFileCoder(Coder):
try: try:
return self.get_edits(mode="diff") return self.get_edits(mode="diff")
except ValueError: except ValueError:
return self.partial_response_content return self.get_multi_response_content()
def get_edits(self, mode="update"): def get_edits(self, mode="update"):
content = self.partial_response_content content = self.get_multi_response_content()
chat_files = self.get_inchat_relative_files() chat_files = self.get_inchat_relative_files()


@ -5,11 +5,9 @@ import sys
from pathlib import Path from pathlib import Path
import git import git
import openai
from prompt_toolkit.completion import Completion
from aider import models, prompts, voice from aider import models, prompts, voice
from aider.litellm import litellm from aider.llm import litellm
from aider.scrape import Scraper from aider.scrape import Scraper
from aider.utils import is_image_file from aider.utils import is_image_file
@ -42,11 +40,9 @@ class Commands:
models.sanity_check_models(self.io, model) models.sanity_check_models(self.io, model)
raise SwitchModel(model) raise SwitchModel(model)
def completions_model(self, partial): def completions_model(self):
models = litellm.model_cost.keys() models = litellm.model_cost.keys()
for model in models: return models
if partial.lower() in model.lower():
yield Completion(model, start_position=-len(partial))
def cmd_models(self, args): def cmd_models(self, args):
"Search the list of available models" "Search the list of available models"
@ -83,21 +79,25 @@ class Commands:
def is_command(self, inp): def is_command(self, inp):
return inp[0] in "/!" return inp[0] in "/!"
def get_completions(self, cmd):
assert cmd.startswith("/")
cmd = cmd[1:]
fun = getattr(self, f"completions_{cmd}", None)
if not fun:
return []
return sorted(fun())
def get_commands(self): def get_commands(self):
commands = [] commands = []
for attr in dir(self): for attr in dir(self):
if attr.startswith("cmd_"): if not attr.startswith("cmd_"):
commands.append("/" + attr[4:]) continue
cmd = attr[4:]
commands.append("/" + cmd)
return commands return commands
def get_command_completions(self, cmd_name, partial):
cmd_completions_method_name = f"completions_{cmd_name}"
cmd_completions_method = getattr(self, cmd_completions_method_name, None)
if cmd_completions_method:
for completion in cmd_completions_method(partial):
yield completion
def do_run(self, cmd_name, args): def do_run(self, cmd_name, args):
cmd_method_name = f"cmd_{cmd_name}" cmd_method_name = f"cmd_{cmd_name}"
cmd_method = getattr(self, cmd_method_name, None) cmd_method = getattr(self, cmd_method_name, None)
@ -331,10 +331,7 @@ class Commands:
return return
last_commit = self.coder.repo.repo.head.commit last_commit = self.coder.repo.repo.head.commit
if ( if last_commit.hexsha[:7] != self.coder.last_aider_commit_hash:
not last_commit.author.name.endswith(" (aider)")
or last_commit.hexsha[:7] != self.coder.last_aider_commit_hash
):
self.io.tool_error("The last commit was not made by aider in this chat session.") self.io.tool_error("The last commit was not made by aider in this chat session.")
self.io.tool_error( self.io.tool_error(
"You could try `/git reset --hard HEAD^` but be aware that this is a destructive" "You could try `/git reset --hard HEAD^` but be aware that this is a destructive"
@ -381,12 +378,11 @@ class Commands:
fname = f'"{fname}"' fname = f'"{fname}"'
return fname return fname
def completions_add(self, partial): def completions_add(self):
files = set(self.coder.get_all_relative_files()) files = set(self.coder.get_all_relative_files())
files = files - set(self.coder.get_inchat_relative_files()) files = files - set(self.coder.get_inchat_relative_files())
for fname in files: files = [self.quote_fname(fn) for fn in files]
if partial.lower() in fname.lower(): return files
yield Completion(self.quote_fname(fname), start_position=-len(partial))
def glob_filtered_to_repo(self, pattern): def glob_filtered_to_repo(self, pattern):
try: try:
@ -487,12 +483,10 @@ class Commands:
reply = prompts.added_files.format(fnames=", ".join(added_fnames)) reply = prompts.added_files.format(fnames=", ".join(added_fnames))
return reply return reply
def completions_drop(self, partial): def completions_drop(self):
files = self.coder.get_inchat_relative_files() files = self.coder.get_inchat_relative_files()
files = [self.quote_fname(fn) for fn in files]
for fname in files: return files
if partial.lower() in fname.lower():
yield Completion(self.quote_fname(fname), start_position=-len(partial))
def cmd_drop(self, args=""): def cmd_drop(self, args=""):
"Remove files from the chat session to free up context space" "Remove files from the chat session to free up context space"
@ -616,14 +610,14 @@ class Commands:
self.io.tool_output("\nNo files in chat or git repo.") self.io.tool_output("\nNo files in chat or git repo.")
return return
if chat_files: if other_files:
self.io.tool_output("Files in chat:\n") self.io.tool_output("Repo files not in the chat:\n")
for file in chat_files: for file in other_files:
self.io.tool_output(f" {file}") self.io.tool_output(f" {file}")
if other_files: if chat_files:
self.io.tool_output("\nRepo files not in the chat:\n") self.io.tool_output("\nFiles in chat:\n")
for file in other_files: for file in chat_files:
self.io.tool_output(f" {file}") self.io.tool_output(f" {file}")
def cmd_help(self, args): def cmd_help(self, args):
@ -688,7 +682,7 @@ class Commands:
try: try:
text = self.voice.record_and_transcribe(history, language=self.voice_language) text = self.voice.record_and_transcribe(history, language=self.voice_language)
except openai.OpenAIError as err: except litellm.OpenAIError as err:
self.io.tool_error(f"Unable to use OpenAI whisper model: {err}") self.io.tool_error(f"Unable to use OpenAI whisper model: {err}")
return return


@ -17,9 +17,10 @@ from aider.scrape import Scraper
class CaptureIO(InputOutput): class CaptureIO(InputOutput):
lines = [] lines = []
def tool_output(self, msg): def tool_output(self, msg, log_only=False):
self.lines.append(msg) if not log_only:
super().tool_output(msg) self.lines.append(msg)
super().tool_output(msg, log_only=log_only)
def tool_error(self, msg): def tool_error(self, msg):
self.lines.append(msg) self.lines.append(msg)


@ -61,7 +61,11 @@ class ChatSummary:
sized.reverse() sized.reverse()
keep = [] keep = []
total = 0 total = 0
model_max_input_tokens = self.model.info.get("max_input_tokens", 4096) - 512
# These sometimes come set with value = None
model_max_input_tokens = self.model.info.get("max_input_tokens") or 4096
model_max_input_tokens -= 512
for i in range(split_index): for i in range(split_index):
total += sized[i][0] total += sized[i][0]
if total > model_max_input_tokens: if total > model_max_input_tokens:
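The guard added above handles model metadata where `max_input_tokens` is present but explicitly `None`. A compact illustration of why `or` is used instead of a `.get()` default:

```python
info = {"max_input_tokens": None}  # metadata sometimes carries an explicit None

# `or` replaces both a missing key and a None value with the 4096 fallback;
# 512 tokens are then reserved as headroom for the reply.
model_max_input_tokens = (info.get("max_input_tokens") or 4096) - 512
```

With `info.get("max_input_tokens", 4096)` alone, the explicit `None` would slip through and crash the subtraction.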


@ -23,7 +23,6 @@ from .utils import is_image_file
class AutoCompleter(Completer): class AutoCompleter(Completer):
def __init__(self, root, rel_fnames, addable_rel_fnames, commands, encoding): def __init__(self, root, rel_fnames, addable_rel_fnames, commands, encoding):
self.commands = commands
self.addable_rel_fnames = addable_rel_fnames self.addable_rel_fnames = addable_rel_fnames
self.rel_fnames = rel_fnames self.rel_fnames = rel_fnames
self.encoding = encoding self.encoding = encoding
@ -37,6 +36,11 @@ class AutoCompleter(Completer):
self.words = set() self.words = set()
self.commands = commands
self.command_completions = dict()
if commands:
self.command_names = self.commands.get_commands()
for rel_fname in addable_rel_fnames: for rel_fname in addable_rel_fnames:
self.words.add(rel_fname) self.words.add(rel_fname)
@ -64,16 +68,31 @@ class AutoCompleter(Completer):
if text[0] == "/": if text[0] == "/":
if len(words) == 1 and not text[-1].isspace(): if len(words) == 1 and not text[-1].isspace():
candidates = self.commands.get_commands() partial = words[0]
candidates = [(cmd, cmd) for cmd in candidates] candidates = self.command_names
else: for cmd in candidates:
for completion in self.commands.get_command_completions(words[0][1:], words[-1]): if cmd.startswith(partial):
yield completion yield Completion(cmd, start_position=-len(partial))
return elif len(words) > 1 and not text[-1].isspace():
else: cmd = words[0]
candidates = self.words partial = words[-1]
candidates.update(set(self.fname_to_rel_fnames))
candidates = [(word, f"`{word}`") for word in candidates] if cmd not in self.command_names:
return
if cmd not in self.command_completions:
candidates = self.commands.get_completions(cmd)
self.command_completions[cmd] = candidates
else:
candidates = self.command_completions[cmd]
for word in candidates:
if partial in word:
yield Completion(word, start_position=-len(partial))
return
candidates = self.words
candidates.update(set(self.fname_to_rel_fnames))
candidates = [(word, f"`{word}`") for word in candidates]
last_word = words[-1] last_word = words[-1]
for word_match, word_insert in candidates: for word_match, word_insert in candidates:
@ -277,8 +296,8 @@ class InputOutput:
def log_llm_history(self, role, content): def log_llm_history(self, role, content):
if not self.llm_history_file: if not self.llm_history_file:
return return
timestamp = datetime.now().isoformat(timespec='seconds') timestamp = datetime.now().isoformat(timespec="seconds")
with open(self.llm_history_file, 'a', encoding=self.encoding) as log_file: with open(self.llm_history_file, "a", encoding=self.encoding) as log_file:
log_file.write(f"{role.upper()} {timestamp}\n") log_file.write(f"{role.upper()} {timestamp}\n")
log_file.write(content + "\n") log_file.write(content + "\n")


@ -1,14 +0,0 @@
import os
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="pydantic")
os.environ["OR_SITE_URL"] = "http://aider.chat"
os.environ["OR_APP_NAME"] = "Aider"
import litellm # noqa: E402
litellm.suppress_debug_info = True
litellm.set_verbose = False
__all__ = [litellm]

aider/llm.py (new file, 29 lines)

@ -0,0 +1,29 @@
import importlib
import os
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="pydantic")
os.environ["OR_SITE_URL"] = "http://aider.chat"
os.environ["OR_APP_NAME"] = "Aider"
# `import litellm` takes 1.5 seconds, defer it!
class LazyLiteLLM:
def __init__(self):
self._lazy_module = None
def __getattr__(self, name):
if self._lazy_module is None:
self._lazy_module = importlib.import_module("litellm")
self._lazy_module.suppress_debug_info = True
self._lazy_module.set_verbose = False
return getattr(self._lazy_module, name)
litellm = LazyLiteLLM()
__all__ = [litellm]
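The same deferred-import pattern can be sketched in isolation, here wrapping the stdlib `json` module purely for illustration:

```python
import importlib

class LazyModule:
    """Stand-in for the LazyLiteLLM proxy: import on first attribute access."""

    def __init__(self, name):
        self._name = name
        self._mod = None

    def __getattr__(self, attr):
        if self._mod is None:
            # Pay the import cost only when the module is actually used.
            self._mod = importlib.import_module(self._name)
        return getattr(self._mod, attr)

lazy_json = LazyModule("json")
data = lazy_json.loads('{"a": 1}')  # the import happens here, not above
```

Because `__getattr__` only fires for attributes missing from the instance, the proxy adds no overhead after the first access.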


@ -2,20 +2,19 @@ import configparser
import os import os
import re import re
import sys import sys
import threading
from pathlib import Path from pathlib import Path
import git import git
import httpx
from dotenv import load_dotenv from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode from prompt_toolkit.enums import EditingMode
from streamlit.web import cli
from aider import __version__, models, utils from aider import __version__, models, utils
from aider.args import get_parser from aider.args import get_parser
from aider.coders import Coder from aider.coders import Coder
from aider.commands import SwitchModel from aider.commands import SwitchModel
from aider.io import InputOutput from aider.io import InputOutput
from aider.litellm import litellm # noqa: F401; properly init litellm on launch from aider.llm import litellm # noqa: F401; properly init litellm on launch
from aider.repo import GitRepo from aider.repo import GitRepo
from aider.versioncheck import check_version from aider.versioncheck import check_version
@ -150,6 +149,8 @@ def scrub_sensitive_info(args, text):
def launch_gui(args): def launch_gui(args):
from streamlit.web import cli
from aider import gui from aider import gui
print() print()
@ -222,6 +223,14 @@ def generate_search_path_list(default_fname, git_root, command_line_file):
if command_line_file: if command_line_file:
files.append(command_line_file) files.append(command_line_file)
files.append(default_file.resolve()) files.append(default_file.resolve())
files = [Path(fn).resolve() for fn in files]
files.reverse()
uniq = []
for fn in files:
if fn not in uniq:
uniq.append(fn)
uniq.reverse()
files = uniq
files = list(map(str, files)) files = list(map(str, files))
files = list(dict.fromkeys(files)) files = list(dict.fromkeys(files))
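The de-duplication above ultimately relies on `dict.fromkeys` preserving insertion order (guaranteed since Python 3.7), so the first occurrence of each path wins. The paths below are hypothetical:

```python
paths = [
    "/repo/.aider.conf.yml",
    "/home/user/.aider.conf.yml",
    "/repo/.aider.conf.yml",  # duplicate: the later occurrence is dropped
]
unique = list(dict.fromkeys(paths))
```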
@ -230,7 +239,7 @@ def generate_search_path_list(default_fname, git_root, command_line_file):
def register_models(git_root, model_settings_fname, io): def register_models(git_root, model_settings_fname, io):
model_settings_files = generate_search_path_list( model_settings_files = generate_search_path_list(
".aider.models.yml", git_root, model_settings_fname ".aider.model.settings.yml", git_root, model_settings_fname
) )
try: try:
@ -248,17 +257,17 @@ def register_models(git_root, model_settings_fname, io):
def register_litellm_models(git_root, model_metadata_fname, io): def register_litellm_models(git_root, model_metadata_fname, io):
model_metatdata_files = generate_search_path_list( model_metatdata_files = generate_search_path_list(
".aider.litellm.models.json", git_root, model_metadata_fname ".aider.model.metadata.json", git_root, model_metadata_fname
) )
try: try:
model_metadata_files_loaded = models.register_litellm_models(model_metatdata_files) model_metadata_files_loaded = models.register_litellm_models(model_metatdata_files)
if len(model_metadata_files_loaded) > 0: if len(model_metadata_files_loaded) > 0:
io.tool_output(f"Loaded {len(model_metadata_files_loaded)} litellm model file(s)") io.tool_output(f"Loaded {len(model_metadata_files_loaded)} model metadata file(s)")
for model_metadata_file in model_metadata_files_loaded: for model_metadata_file in model_metadata_files_loaded:
io.tool_output(f" - {model_metadata_file}") io.tool_output(f" - {model_metadata_file}")
except Exception as e: except Exception as e:
io.tool_error(f"Error loading litellm models: {e}") io.tool_error(f"Error loading model metadata models: {e}")
return 1 return 1
@@ -292,6 +301,8 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
args = parser.parse_args(argv) args = parser.parse_args(argv)
if not args.verify_ssl: if not args.verify_ssl:
import httpx
litellm.client_session = httpx.Client(verify=False) litellm.client_session = httpx.Client(verify=False)
if args.gui and not return_coder: if args.gui and not return_coder:
@@ -403,6 +414,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
register_models(git_root, args.model_settings_file, io) register_models(git_root, args.model_settings_file, io)
register_litellm_models(git_root, args.model_metadata_file, io) register_litellm_models(git_root, args.model_metadata_file, io)
if not args.model:
args.model = "gpt-4o"
if os.environ.get("ANTHROPIC_API_KEY"):
args.model = "claude-3-5-sonnet-20240620"
main_model = models.Model(args.model, weak_model=args.weak_model) main_model = models.Model(args.model, weak_model=args.weak_model)
lint_cmds = parse_lint_cmds(args.lint_cmd, io) lint_cmds = parse_lint_cmds(args.lint_cmd, io)
@@ -441,6 +457,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
test_cmd=args.test_cmd, test_cmd=args.test_cmd,
attribute_author=args.attribute_author, attribute_author=args.attribute_author,
attribute_committer=args.attribute_committer, attribute_committer=args.attribute_committer,
attribute_commit_message=args.attribute_commit_message,
) )
except ValueError as err: except ValueError as err:
@@ -528,6 +545,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
return 1 return 1
return return
if args.exit:
return
threading.Thread(target=load_slow_imports).start()
while True: while True:
try: try:
coder.run() coder.run()
@@ -537,6 +559,20 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
coder.show_announcements() coder.show_announcements()
def load_slow_imports():
# These imports are deferred in various ways to
# improve startup time.
# This func is called in a thread to load them in the background
# while we wait for the user to type their first message.
try:
import httpx # noqa: F401
import litellm # noqa: F401
import networkx # noqa: F401
import numpy # noqa: F401
except Exception:
pass
if __name__ == "__main__": if __name__ == "__main__":
status = main() status = main()
sys.exit(status) sys.exit(status)
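The new `load_slow_imports` helper warms the import cache in a background thread while the user types their first message, which is part of the "5X faster launch" work. A minimal stdlib-only sketch of the pattern (the module names here are cheap stand-ins for `httpx`, `litellm`, etc.):

```python
import threading

def load_slow_imports():
    # Pre-import heavyweight modules in the background so interactive
    # startup is not blocked; errors are swallowed because the real
    # import sites will surface them when the modules are actually used.
    try:
        import decimal  # noqa: F401  (stand-in for a heavy dependency)
        import statistics  # noqa: F401
    except Exception:
        pass

t = threading.Thread(target=load_slow_imports, daemon=True)
t.start()
t.join()  # aider fires and forgets; joined here only for the demo
```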

View file

@@ -1,9 +1,11 @@
import difflib import difflib
import importlib
import json import json
import math import math
import os import os
import sys import sys
from dataclasses import dataclass, fields from dataclasses import dataclass, fields
from pathlib import Path
from typing import Optional from typing import Optional
import yaml import yaml
@@ -11,10 +13,48 @@ from PIL import Image
from aider import urls from aider import urls
from aider.dump import dump # noqa: F401 from aider.dump import dump # noqa: F401
from aider.litellm import litellm from aider.llm import litellm
DEFAULT_MODEL_NAME = "gpt-4o" DEFAULT_MODEL_NAME = "gpt-4o"
OPENAI_MODELS = """
gpt-4
gpt-4o
gpt-4o-2024-05-13
gpt-4-turbo-preview
gpt-4-0314
gpt-4-0613
gpt-4-32k
gpt-4-32k-0314
gpt-4-32k-0613
gpt-4-turbo
gpt-4-turbo-2024-04-09
gpt-4-1106-preview
gpt-4-0125-preview
gpt-4-vision-preview
gpt-4-1106-vision-preview
gpt-3.5-turbo
gpt-3.5-turbo-0301
gpt-3.5-turbo-0613
gpt-3.5-turbo-1106
gpt-3.5-turbo-0125
gpt-3.5-turbo-16k
gpt-3.5-turbo-16k-0613
"""
OPENAI_MODELS = [ln.strip() for ln in OPENAI_MODELS.splitlines() if ln.strip()]
ANTHROPIC_MODELS = """
claude-2
claude-2.1
claude-3-haiku-20240307
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-5-sonnet-20240620
"""
ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.strip()]
@dataclass @dataclass
class ModelSettings: class ModelSettings:
@@ -27,6 +67,7 @@ class ModelSettings:
lazy: bool = False lazy: bool = False
reminder_as_sys_msg: bool = False reminder_as_sys_msg: bool = False
examples_as_sys_msg: bool = False examples_as_sys_msg: bool = False
can_prefill: bool = False
# https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo # https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
@@ -166,6 +207,7 @@ MODEL_SETTINGS = [
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
send_undo_reply=True, send_undo_reply=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"openrouter/anthropic/claude-3-opus", "openrouter/anthropic/claude-3-opus",
@@ -173,11 +215,13 @@ MODEL_SETTINGS = [
weak_model_name="openrouter/anthropic/claude-3-haiku", weak_model_name="openrouter/anthropic/claude-3-haiku",
use_repo_map=True, use_repo_map=True,
send_undo_reply=True, send_undo_reply=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"claude-3-sonnet-20240229", "claude-3-sonnet-20240229",
"whole", "whole",
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"claude-3-5-sonnet-20240620", "claude-3-5-sonnet-20240620",
@@ -185,6 +229,8 @@ MODEL_SETTINGS = [
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
accepts_images=True,
), ),
ModelSettings( ModelSettings(
"anthropic/claude-3-5-sonnet-20240620", "anthropic/claude-3-5-sonnet-20240620",
@@ -192,6 +238,7 @@ MODEL_SETTINGS = [
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"openrouter/anthropic/claude-3.5-sonnet", "openrouter/anthropic/claude-3.5-sonnet",
@@ -199,6 +246,8 @@ MODEL_SETTINGS = [
weak_model_name="openrouter/anthropic/claude-3-haiku-20240307", weak_model_name="openrouter/anthropic/claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
accepts_images=True,
), ),
# Vertex AI Claude models # Vertex AI Claude models
ModelSettings( ModelSettings(
@@ -206,6 +255,9 @@ MODEL_SETTINGS = [
"diff", "diff",
weak_model_name="vertex_ai/claude-3-haiku@20240307", weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True,
can_prefill=True,
accepts_images=True,
), ),
ModelSettings( ModelSettings(
"vertex_ai/claude-3-opus@20240229", "vertex_ai/claude-3-opus@20240229",
@@ -213,11 +265,13 @@ MODEL_SETTINGS = [
weak_model_name="vertex_ai/claude-3-haiku@20240307", weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True, use_repo_map=True,
send_undo_reply=True, send_undo_reply=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"vertex_ai/claude-3-sonnet@20240229", "vertex_ai/claude-3-sonnet@20240229",
"whole", "whole",
weak_model_name="vertex_ai/claude-3-haiku@20240307", weak_model_name="vertex_ai/claude-3-haiku@20240307",
can_prefill=True,
), ),
# Cohere # Cohere
ModelSettings( ModelSettings(
@@ -282,6 +336,16 @@ MODEL_SETTINGS = [
examples_as_sys_msg=True, examples_as_sys_msg=True,
reminder_as_sys_msg=True, reminder_as_sys_msg=True,
), ),
ModelSettings(
"openrouter/openai/gpt-4o",
"diff",
weak_model_name="openrouter/openai/gpt-3.5-turbo",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
),
] ]
@@ -303,32 +367,17 @@ class Model:
def __init__(self, model, weak_model=None): def __init__(self, model, weak_model=None):
self.name = model self.name = model
# Do we have the model_info? self.info = self.get_model_info(model)
try:
self.info = litellm.get_model_info(model)
except Exception:
self.info = dict()
if not self.info and "gpt-4o" in self.name:
self.info = {
"max_tokens": 4096,
"max_input_tokens": 128000,
"max_output_tokens": 4096,
"input_cost_per_token": 5e-06,
"output_cost_per_token": 1.5e-5,
"litellm_provider": "openai",
"mode": "chat",
"supports_function_calling": True,
"supports_parallel_function_calling": True,
"supports_vision": True,
}
# Are all needed keys/params available? # Are all needed keys/params available?
res = self.validate_environment() res = self.validate_environment()
self.missing_keys = res.get("missing_keys") self.missing_keys = res.get("missing_keys")
self.keys_in_environment = res.get("keys_in_environment") self.keys_in_environment = res.get("keys_in_environment")
if self.info.get("max_input_tokens", 0) < 32 * 1024: max_input_tokens = self.info.get("max_input_tokens")
if not max_input_tokens:
max_input_tokens = 0
if max_input_tokens < 32 * 1024:
self.max_chat_history_tokens = 1024 self.max_chat_history_tokens = 1024
else: else:
self.max_chat_history_tokens = 2 * 1024 self.max_chat_history_tokens = 2 * 1024
@@ -339,6 +388,24 @@ class Model:
else: else:
self.get_weak_model(weak_model) self.get_weak_model(weak_model)
def get_model_info(self, model):
# Try and do this quickly, without triggering the litellm import
spec = importlib.util.find_spec("litellm")
if spec:
origin = Path(spec.origin)
fname = origin.parent / "model_prices_and_context_window_backup.json"
if fname.exists():
data = json.loads(fname.read_text())
info = data.get(model)
if info:
return info
# Do it the slow way...
try:
return litellm.get_model_info(model)
except Exception:
return dict()
def configure_model_settings(self, model): def configure_model_settings(self, model):
for ms in MODEL_SETTINGS: for ms in MODEL_SETTINGS:
# direct match, or match "provider/<model>" # direct match, or match "provider/<model>"
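The new `get_model_info` fast path reads litellm's bundled `model_prices_and_context_window_backup.json` directly, using `importlib.util.find_spec` to locate the installed package *without* importing it (another launch-time win). The locate-without-import trick, sketched generically with the package and file names as parameters:

```python
import importlib.util
import json
from pathlib import Path

def cheap_model_info(model, package="litellm",
                     fname="model_prices_and_context_window_backup.json"):
    # Find the installed package's directory without importing it,
    # then read a bundled metadata file if it exists.
    spec = importlib.util.find_spec(package)
    if not spec or not spec.origin:
        return {}
    path = Path(spec.origin).parent / fname
    if not path.exists():
        return {}
    return json.loads(path.read_text()).get(model) or {}
```

If the fast path comes up empty, the real code falls back to `litellm.get_model_info`, paying the import cost only then.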
@@ -372,6 +439,15 @@ class Model:
if "gpt-3.5" in model or "gpt-4" in model: if "gpt-3.5" in model or "gpt-4" in model:
self.reminder_as_sys_msg = True self.reminder_as_sys_msg = True
if "anthropic" in model:
self.can_prefill = True
if "3.5-sonnet" in model or "3-5-sonnet" in model:
self.edit_format = "diff"
self.use_repo_map = True
self.examples_as_sys_msg = True
self.can_prefill = True
# use the defaults # use the defaults
if self.edit_format == "diff": if self.edit_format == "diff":
self.use_repo_map = True self.use_repo_map = True
@@ -455,7 +531,25 @@ class Model:
with Image.open(fname) as img: with Image.open(fname) as img:
return img.size return img.size
def fast_validate_environment(self):
"""Fast path for common models. Avoids forcing litellm import."""
model = self.name
if model in OPENAI_MODELS:
var = "OPENAI_API_KEY"
elif model in ANTHROPIC_MODELS:
var = "ANTHROPIC_API_KEY"
else:
return
if os.environ.get(var):
return dict(keys_in_environment=[var], missing_keys=[])
def validate_environment(self): def validate_environment(self):
res = self.fast_validate_environment()
if res:
return res
# https://github.com/BerriAI/litellm/issues/3190 # https://github.com/BerriAI/litellm/issues/3190
model = self.name model = self.name
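`fast_validate_environment` checks the obvious API-key variable for well-known models and only falls back to litellm's slower `validate_environment` when the model is unrecognized. A condensed sketch of the fast path (the model-to-variable table here is abbreviated):

```python
import os

MODEL_KEY_VARS = {
    "gpt-4o": "OPENAI_API_KEY",
    "claude-3-5-sonnet-20240620": "ANTHROPIC_API_KEY",
}

os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")  # demo only

def fast_validate_environment(model):
    # Return a result for well-known models; None signals
    # "fall back to the slow, library-backed check".
    var = MODEL_KEY_VARS.get(model)
    if var and os.environ.get(var):
        return {"keys_in_environment": [var], "missing_keys": []}
    return None
```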

View file

@@ -25,12 +25,14 @@ class GitRepo:
models=None, models=None,
attribute_author=True, attribute_author=True,
attribute_committer=True, attribute_committer=True,
attribute_commit_message=False,
): ):
self.io = io self.io = io
self.models = models self.models = models
self.attribute_author = attribute_author self.attribute_author = attribute_author
self.attribute_committer = attribute_committer self.attribute_committer = attribute_committer
self.attribute_commit_message = attribute_commit_message
if git_dname: if git_dname:
check_fnames = [git_dname] check_fnames = [git_dname]
@@ -84,12 +86,15 @@ class GitRepo:
else: else:
commit_message = self.get_commit_message(diffs, context) commit_message = self.get_commit_message(diffs, context)
if aider_edits and self.attribute_commit_message:
commit_message = "aider: " + commit_message
if not commit_message: if not commit_message:
commit_message = "(no commit message provided)" commit_message = "(no commit message provided)"
full_commit_message = commit_message full_commit_message = commit_message
if context: # if context:
full_commit_message += "\n\n# Aider chat conversation:\n\n" + context # full_commit_message += "\n\n# Aider chat conversation:\n\n" + context
cmd = ["-m", full_commit_message, "--no-verify"] cmd = ["-m", full_commit_message, "--no-verify"]
if fnames: if fnames:
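With the new `attribute_commit_message` flag, commits containing aider's edits get an `aider: ` prefix on the commit message itself, making them easy to spot in `git log`. The message-building logic, pulled out as a sketch (function name is illustrative):

```python
def build_commit_message(commit_message, aider_edits,
                         attribute_commit_message=False):
    # Prefix aider-authored commits when the option is enabled,
    # then fall back to a placeholder if no message was generated.
    if aider_edits and attribute_commit_message:
        commit_message = "aider: " + commit_message
    if not commit_message:
        commit_message = "(no commit message provided)"
    return commit_message
```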

View file

@@ -8,7 +8,6 @@ from collections import Counter, defaultdict, namedtuple
from importlib import resources from importlib import resources
from pathlib import Path from pathlib import Path
import networkx as nx
from diskcache import Cache from diskcache import Cache
from grep_ast import TreeContext, filename_to_lang from grep_ast import TreeContext, filename_to_lang
from pygments.lexers import guess_lexer_for_filename from pygments.lexers import guess_lexer_for_filename
@@ -71,7 +70,7 @@ class RepoMap:
max_map_tokens = self.max_map_tokens max_map_tokens = self.max_map_tokens
# With no files in the chat, give a bigger view of the entire repo # With no files in the chat, give a bigger view of the entire repo
MUL = 16 MUL = 8
padding = 4096 padding = 4096
if max_map_tokens and self.max_context_window: if max_map_tokens and self.max_context_window:
target = min(max_map_tokens * MUL, self.max_context_window - padding) target = min(max_map_tokens * MUL, self.max_context_window - padding)
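The no-files-in-chat multiplier drops from 16 to 8, halving how far the repo map can expand. The budget calculation, extracted as a sketch (helper name is hypothetical):

```python
def map_token_budget(max_map_tokens, max_context_window,
                     mul=8, padding=4096):
    # With no files in the chat, widen the repo-map token budget,
    # but never past the context window minus some padding.
    if max_map_tokens and max_context_window:
        return min(max_map_tokens * mul, max_context_window - padding)
    return max_map_tokens * mul
```

So a 1024-token map budget expands to 8192 tokens, unless the context window caps it first.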
@@ -230,6 +229,8 @@ class RepoMap:
) )
def get_ranked_tags(self, chat_fnames, other_fnames, mentioned_fnames, mentioned_idents): def get_ranked_tags(self, chat_fnames, other_fnames, mentioned_fnames, mentioned_idents):
import networkx as nx
defines = defaultdict(set) defines = defaultdict(set)
references = defaultdict(list) references = defaultdict(list)
definitions = defaultdict(set) definitions = defaultdict(set)

View file

@@ -3,10 +3,8 @@
import re import re
import sys import sys
import httpx
import playwright import playwright
import pypandoc import pypandoc
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright from playwright.sync_api import sync_playwright
from aider import __version__, urls from aider import __version__, urls
@@ -59,7 +57,6 @@ class Scraper:
self.try_pandoc() self.try_pandoc()
content = self.html_to_markdown(content) content = self.html_to_markdown(content)
# content = html_to_text(content)
return content return content
@@ -94,12 +91,12 @@ class Scraper:
if self.playwright_available is not None: if self.playwright_available is not None:
return return
with sync_playwright() as p: try:
try: with sync_playwright() as p:
p.chromium.launch() p.chromium.launch()
self.playwright_available = True self.playwright_available = True
except Exception: except Exception:
self.playwright_available = False self.playwright_available = False
def get_playwright_instructions(self): def get_playwright_instructions(self):
if self.playwright_available in (True, None): if self.playwright_available in (True, None):
@@ -111,6 +108,8 @@ class Scraper:
return PLAYWRIGHT_INFO return PLAYWRIGHT_INFO
def scrape_with_httpx(self, url): def scrape_with_httpx(self, url):
import httpx
headers = {"User-Agent": f"Mozilla./5.0 ({aider_user_agent})"} headers = {"User-Agent": f"Mozilla./5.0 ({aider_user_agent})"}
try: try:
with httpx.Client(headers=headers) as client: with httpx.Client(headers=headers) as client:
@@ -138,6 +137,8 @@ class Scraper:
self.pandoc_available = True self.pandoc_available = True
def html_to_markdown(self, page_source): def html_to_markdown(self, page_source):
from bs4 import BeautifulSoup
soup = BeautifulSoup(page_source, "html.parser") soup = BeautifulSoup(page_source, "html.parser")
soup = slimdown_html(soup) soup = slimdown_html(soup)
page_source = str(soup) page_source = str(soup)
@@ -173,24 +174,6 @@ def slimdown_html(soup):
return soup return soup
# Adapted from AutoGPT, MIT License
#
# https://github.com/Significant-Gravitas/AutoGPT/blob/fe0923ba6c9abb42ac4df79da580e8a4391e0418/autogpts/autogpt/autogpt/commands/web_selenium.py#L173
def html_to_text(page_source: str) -> str:
soup = BeautifulSoup(page_source, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = "\n".join(chunk for chunk in chunks if chunk)
return text
def main(url): def main(url):
scraper = Scraper() scraper = Scraper()
content = scraper.scrape(url) content = scraper.scrape(url)

View file

@@ -2,11 +2,9 @@ import hashlib
import json import json
import backoff import backoff
import httpx
import openai
from aider.dump import dump # noqa: F401 from aider.dump import dump # noqa: F401
from aider.litellm import litellm from aider.llm import litellm
# from diskcache import Cache # from diskcache import Cache
@@ -16,39 +14,51 @@ CACHE = None
# CACHE = Cache(CACHE_PATH) # CACHE = Cache(CACHE_PATH)
def should_giveup(e): def lazy_litellm_retry_decorator(func):
if not hasattr(e, "status_code"): def wrapper(*args, **kwargs):
return False import httpx
if type(e) in ( def should_giveup(e):
httpx.ConnectError, if not hasattr(e, "status_code"):
httpx.RemoteProtocolError, return False
httpx.ReadTimeout,
):
return False
return not litellm._should_retry(e.status_code) if type(e) in (
httpx.ConnectError,
httpx.RemoteProtocolError,
httpx.ReadTimeout,
):
return False
return not litellm._should_retry(e.status_code)
decorated_func = backoff.on_exception(
backoff.expo,
(
httpx.ConnectError,
httpx.RemoteProtocolError,
httpx.ReadTimeout,
litellm.exceptions.APIConnectionError,
litellm.exceptions.APIError,
litellm.exceptions.RateLimitError,
litellm.exceptions.ServiceUnavailableError,
litellm.exceptions.Timeout,
litellm.llms.anthropic.AnthropicError,
),
giveup=should_giveup,
max_time=60,
on_backoff=lambda details: print(
f"{details.get('exception','Exception')}\nRetry in {details['wait']:.1f} seconds."
),
)(func)
return decorated_func(*args, **kwargs)
return wrapper
@backoff.on_exception( @lazy_litellm_retry_decorator
backoff.expo,
(
httpx.ConnectError,
httpx.RemoteProtocolError,
httpx.ReadTimeout,
litellm.exceptions.APIConnectionError,
litellm.exceptions.APIError,
litellm.exceptions.RateLimitError,
litellm.exceptions.ServiceUnavailableError,
litellm.exceptions.Timeout,
),
giveup=should_giveup,
max_time=60,
on_backoff=lambda details: print(
f"{details.get('exception','Exception')}\nRetry in {details['wait']:.1f} seconds."
),
)
def send_with_retries(model_name, messages, functions, stream, temperature=0): def send_with_retries(model_name, messages, functions, stream, temperature=0):
from aider.llm import litellm
kwargs = dict( kwargs = dict(
model=model_name, model=model_name,
messages=messages, messages=messages,
@@ -85,5 +95,5 @@ def simple_send_with_retries(model_name, messages):
stream=False, stream=False,
) )
return response.choices[0].message.content return response.choices[0].message.content
except (AttributeError, openai.BadRequestError): except (AttributeError, litellm.exceptions.BadRequestError):
return return
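The retry logic moves inside `lazy_litellm_retry_decorator` so that `httpx` and `litellm` are only imported when a request is actually sent, not at module load. A stdlib-only stand-in for the same lazy retry-with-exponential-backoff shape (the real code uses the `backoff` library and litellm's exception types):

```python
import time

def lazy_retry(func, retries=3, base_delay=0.01):
    # Retry transient failures with exponential backoff; in the real
    # decorator, heavy imports live inside wrapper() so module load
    # stays cheap.
    def wrapper(*args, **kwargs):
        for attempt in range(retries):
            try:
                return func(*args, **kwargs)
            except ConnectionError:
                if attempt == retries - 1:
                    raise
                time.sleep(base_delay * 2**attempt)
    return wrapper

calls = {"n": 0}

@lazy_retry
def flaky():
    # Fails twice, then succeeds -- exercises the retry loop.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"
```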

View file

@@ -1,16 +1,15 @@
import tempfile import tempfile
import unittest import unittest
from pathlib import Path from pathlib import Path
from unittest.mock import MagicMock, patch from unittest.mock import MagicMock
import git import git
import openai
from aider.coders import Coder from aider.coders import Coder
from aider.dump import dump # noqa: F401 from aider.dump import dump # noqa: F401
from aider.io import InputOutput from aider.io import InputOutput
from aider.models import Model from aider.models import Model
from aider.utils import ChdirTemporaryDirectory, GitTemporaryDirectory from aider.utils import GitTemporaryDirectory
class TestCoder(unittest.TestCase): class TestCoder(unittest.TestCase):
@@ -220,7 +219,7 @@ class TestCoder(unittest.TestCase):
files = [file1, file2] files = [file1, file2]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files) coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files, pretty=False)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = "ok" coder.partial_response_content = "ok"
@@ -247,7 +246,7 @@ class TestCoder(unittest.TestCase):
files = [file1, file2] files = [file1, file2]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files) coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files, pretty=False)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = "ok" coder.partial_response_content = "ok"
@@ -330,25 +329,6 @@ class TestCoder(unittest.TestCase):
# both files should still be here # both files should still be here
self.assertEqual(len(coder.abs_fnames), 2) self.assertEqual(len(coder.abs_fnames), 2)
def test_run_with_invalid_request_error(self):
with ChdirTemporaryDirectory():
# Mock the IO object
mock_io = MagicMock()
# Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, None, mock_io)
# Call the run method and assert that InvalidRequestError is raised
with self.assertRaises(openai.BadRequestError):
with patch("litellm.completion") as Mock:
Mock.side_effect = openai.BadRequestError(
message="Invalid request",
response=MagicMock(),
body=None,
)
coder.run(with_message="hi")
def test_new_file_edit_one_commit(self): def test_new_file_edit_one_commit(self):
"""A new file shouldn't get pre-committed before the GPT edit commit""" """A new file shouldn't get pre-committed before the GPT edit commit"""
with GitTemporaryDirectory(): with GitTemporaryDirectory():
@@ -357,7 +337,7 @@ class TestCoder(unittest.TestCase):
fname = Path("file.txt") fname = Path("file.txt")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)]) coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)], pretty=False)
self.assertTrue(fname.exists()) self.assertTrue(fname.exists())
@@ -414,7 +394,9 @@ new
fname1.write_text("ONE\n") fname1.write_text("ONE\n")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname1), str(fname2)]) coder = Coder.create(
self.GPT35, "diff", io=io, fnames=[str(fname1), str(fname2)], pretty=False
)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -467,7 +449,7 @@ TWO
fname2.write_text("OTHER\n") fname2.write_text("OTHER\n")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)]) coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)], pretty=False)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -545,7 +527,7 @@ three
repo.git.commit("-m", "initial") repo.git.commit("-m", "initial")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)]) coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)], pretty=False)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""

View file

@@ -523,8 +523,6 @@ class TestCommands(TestCase):
other_path.write_text("other content") other_path.write_text("other content")
repo.git.add(str(other_path)) repo.git.add(str(other_path))
os.environ["GIT_AUTHOR_NAME"] = "Foo (aider)"
# Create and commit a file # Create and commit a file
filename = "test_file.txt" filename = "test_file.txt"
file_path = Path(repo_dir) / filename file_path = Path(repo_dir) / filename
@@ -536,8 +534,6 @@ class TestCommands(TestCase):
repo.git.add(filename) repo.git.add(filename)
repo.git.commit("-m", "second commit") repo.git.commit("-m", "second commit")
del os.environ["GIT_AUTHOR_NAME"]
# Store the commit hash # Store the commit hash
last_commit_hash = repo.head.commit.hexsha[:7] last_commit_hash = repo.head.commit.hexsha[:7]
coder.last_aider_commit_hash = last_commit_hash coder.last_aider_commit_hash = last_commit_hash

View file

@@ -297,7 +297,7 @@ These changes replace the `subprocess.run` patches with `subprocess.check_output
files = [file1] files = [file1]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, "diff", io=InputOutput(), fnames=files) coder = Coder.create(self.GPT35, "diff", io=InputOutput(), fnames=files, pretty=False)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -340,6 +340,7 @@ new
io=InputOutput(dry_run=True), io=InputOutput(dry_run=True),
fnames=files, fnames=files,
dry_run=True, dry_run=True,
pretty=False,
) )
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):

View file

@@ -3,7 +3,7 @@ from unittest.mock import MagicMock, patch
import httpx import httpx
from aider.litellm import litellm from aider.llm import litellm
from aider.sendchat import send_with_retries from aider.sendchat import send_with_retries

View file

@@ -288,7 +288,9 @@ after b
files = [file1] files = [file1]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, "whole", io=InputOutput(), fnames=files) coder = Coder.create(
self.GPT35, "whole", io=InputOutput(), fnames=files, stream=False, pretty=False
)
# no trailing newline so the response content below doesn't add ANOTHER newline # no trailing newline so the response content below doesn't add ANOTHER newline
new_content = "new\ntwo\nthree" new_content = "new\ntwo\nthree"

View file

@@ -1,12 +1,20 @@
import sys import sys
import time
from pathlib import Path
import packaging.version import packaging.version
import requests
import aider import aider
def check_version(print_cmd): def check_version(print_cmd):
fname = Path.home() / ".aider/versioncheck"
day = 60 * 60 * 24
if fname.exists() and time.time() - fname.stat().st_mtime < day:
return
import requests
try: try:
response = requests.get("https://pypi.org/pypi/aider-chat/json") response = requests.get("https://pypi.org/pypi/aider-chat/json")
data = response.json() data = response.json()
@@ -27,6 +35,9 @@ def check_version(print_cmd):
else: else:
print_cmd(f"{py} -m pip install --upgrade aider-chat") print_cmd(f"{py} -m pip install --upgrade aider-chat")
if not fname.parent.exists():
fname.parent.mkdir()
fname.touch()
return is_update_available return is_update_available
except Exception as err: except Exception as err:
print_cmd(f"Error checking pypi for new version: {err}") print_cmd(f"Error checking pypi for new version: {err}")
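`check_version` now short-circuits when it already ran in the last 24 hours, using the mtime of a marker file under `~/.aider/` as the record. The throttle in isolation (marker path here is a temp dir for the demo):

```python
import tempfile
import time
from pathlib import Path

def ran_recently(marker: Path, max_age=60 * 60 * 24):
    # True when the marker file was touched within max_age seconds,
    # mirroring the once-a-day throttle on the pypi version check.
    return marker.exists() and time.time() - marker.stat().st_mtime < max_age

marker = Path(tempfile.mkdtemp()) / "versioncheck"
assert not ran_recently(marker)  # no marker yet -> do the check
marker.touch()                   # record that the check ran
assert ran_recently(marker)      # within a day -> skip next time
```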

View file

@@ -1,11 +1,10 @@
import math
import os import os
import queue import queue
import tempfile import tempfile
import time import time
import numpy as np from aider.llm import litellm
from aider.litellm import litellm
try: try:
import soundfile as sf import soundfile as sf
@@ -41,6 +40,8 @@ class Voice:
def callback(self, indata, frames, time, status): def callback(self, indata, frames, time, status):
"""This is called (from a separate thread) for each audio block.""" """This is called (from a separate thread) for each audio block."""
import numpy as np
rms = np.sqrt(np.mean(indata**2)) rms = np.sqrt(np.mean(indata**2))
self.max_rms = max(self.max_rms, rms) self.max_rms = max(self.max_rms, rms)
self.min_rms = min(self.min_rms, rms) self.min_rms = min(self.min_rms, rms)
@@ -55,7 +56,7 @@ class Voice:
def get_prompt(self): def get_prompt(self):
num = 10 num = 10
if np.isnan(self.pct) or self.pct < self.threshold: if math.isnan(self.pct) or self.pct < self.threshold:
cnt = 0 cnt = 0
else: else:
cnt = int(self.pct * 10) cnt = int(self.pct * 10)
@@ -78,7 +79,7 @@ class Voice:
filename = tempfile.mktemp(suffix=".wav") filename = tempfile.mktemp(suffix=".wav")
try: try:
sample_rate = int(self.sd.query_devices(None, 'input')['default_samplerate']) sample_rate = int(self.sd.query_devices(None, "input")["default_samplerate"])
except (TypeError, ValueError): except (TypeError, ValueError):
sample_rate = 16000 # fallback to 16kHz if unable to query device sample_rate = 16000 # fallback to 16kHz if unable to query device

View file

@@ -1,5 +1,5 @@
# #
# pip-compile --output-file=dev-requirements.txt dev-requirements.in # pip-compile --output-file=dev-requirements.txt dev-requirements.in --upgrade
# #
pytest pytest
pip-tools pip-tools

View file

@@ -10,7 +10,7 @@ babel==2.15.0
# via sphinx # via sphinx
build==1.2.1 build==1.2.1
# via pip-tools # via pip-tools
certifi==2024.2.2 certifi==2024.6.2
# via requests # via requests
cfgv==3.4.0 cfgv==3.4.0
# via pre-commit # via pre-commit
@@ -36,9 +36,9 @@ docutils==0.20.1
# via # via
# sphinx # sphinx
# sphinx-rtd-theme # sphinx-rtd-theme
filelock==3.14.0 filelock==3.15.4
# via virtualenv # via virtualenv
fonttools==4.51.0 fonttools==4.53.0
# via matplotlib # via matplotlib
identify==2.5.36 identify==2.5.36
# via pre-commit # via pre-commit
@@ -54,7 +54,7 @@ jinja2==3.1.4
# via sphinx # via sphinx
kiwisolver==1.4.5 kiwisolver==1.4.5
# via matplotlib # via matplotlib
lox==0.11.0 lox==0.12.0
# via -r dev-requirements.in # via -r dev-requirements.in
markdown-it-py==3.0.0 markdown-it-py==3.0.0
# via rich # via rich
@@ -66,14 +66,14 @@ mdurl==0.1.2
# via markdown-it-py # via markdown-it-py
multiprocess==0.70.16 multiprocess==0.70.16
# via pathos # via pathos
nodeenv==1.8.0 nodeenv==1.9.1
# via pre-commit # via pre-commit
numpy==1.26.4 numpy==2.0.0
# via # via
# contourpy # contourpy
# matplotlib # matplotlib
# pandas # pandas
packaging==24.0 packaging==24.1
# via # via
# build # build
# matplotlib # matplotlib
@@ -107,7 +107,7 @@ pyproject-hooks==1.1.0
# via # via
# build # build
# pip-tools # pip-tools
pytest==8.2.1 pytest==8.2.2
# via -r dev-requirements.in # via -r dev-requirements.in
python-dateutil==2.9.0.post0 python-dateutil==2.9.0.post0
# via # via
@@ -117,7 +117,7 @@ pytz==2024.1
# via pandas # via pandas
pyyaml==6.0.1 pyyaml==6.0.1
# via pre-commit # via pre-commit
requests==2.32.0 requests==2.32.3
# via sphinx # via sphinx
rich==13.7.1 rich==13.7.1
# via typer # via typer
@@ -149,13 +149,13 @@ sphinxcontrib-serializinghtml==1.1.10
# via sphinx # via sphinx
typer==0.12.3 typer==0.12.3
# via -r dev-requirements.in # via -r dev-requirements.in
typing-extensions==4.11.0 typing-extensions==4.12.2
# via typer # via typer
tzdata==2024.1 tzdata==2024.1
# via pandas # via pandas
urllib3==2.2.1 urllib3==2.2.2
# via requests # via requests
virtualenv==20.26.2 virtualenv==20.26.3
# via pre-commit # via pre-commit
wheel==0.43.0 wheel==0.43.0
# via pip-tools # via pip-tools

View file

@@ -1,5 +1,5 @@
# #
# pip-compile requirements.in # pip-compile requirements.in --upgrade
# #
configargparse configargparse
GitPython GitPython

View file

@@ -62,7 +62,7 @@ frozenlist==1.4.1
# via # via
# aiohttp # aiohttp
# aiosignal # aiosignal
fsspec==2024.6.0 fsspec==2024.6.1
# via huggingface-hub # via huggingface-hub
gitdb==4.0.11 gitdb==4.0.11
# via gitpython # via gitpython
@@ -70,14 +70,14 @@ gitpython==3.1.43
# via # via
# -r requirements.in # -r requirements.in
# streamlit # streamlit
google-ai-generativelanguage==0.6.5 google-ai-generativelanguage==0.6.6
# via google-generativeai # via google-generativeai
google-api-core[grpc]==2.19.1 google-api-core[grpc]==2.19.1
# via # via
# google-ai-generativelanguage # google-ai-generativelanguage
# google-api-python-client # google-api-python-client
# google-generativeai # google-generativeai
google-api-python-client==2.134.0 google-api-python-client==2.135.0
# via google-generativeai # via google-generativeai
google-auth==2.30.0 google-auth==2.30.0
# via # via
@@ -88,7 +88,7 @@ google-auth==2.30.0
# google-generativeai # google-generativeai
google-auth-httplib2==0.2.0 google-auth-httplib2==0.2.0
# via google-api-python-client # via google-api-python-client
google-generativeai==0.7.0 google-generativeai==0.7.1
# via -r requirements.in # via -r requirements.in
googleapis-common-protos==1.63.2 googleapis-common-protos==1.63.2
# via # via
@@ -139,7 +139,7 @@ jsonschema==4.22.0
# altair # altair
jsonschema-specifications==2023.12.1 jsonschema-specifications==2023.12.1
# via jsonschema # via jsonschema
litellm==1.40.26 litellm==1.41.0
# via -r requirements.in # via -r requirements.in
markdown-it-py==3.0.0 markdown-it-py==3.0.0
# via rich # via rich
@@ -164,7 +164,7 @@ numpy==2.0.0
# pydeck # pydeck
# scipy # scipy
# streamlit # streamlit
openai==1.35.3 openai==1.35.7
# via # via
# -r requirements.in # -r requirements.in
# litellm # litellm

View file

@@ -6,7 +6,8 @@ docker run \
-v "$PWD/website:/site" \ -v "$PWD/website:/site" \
-p 4000:4000 \ -p 4000:4000 \
-e HISTFILE=/site/.bash_history \ -e HISTFILE=/site/.bash_history \
--entrypoint /bin/bash \
-it \ -it \
my-jekyll-site my-jekyll-site
# --entrypoint /bin/bash \
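With `--entrypoint /bin/bash` moved into a comment, the container now starts with the image's default entrypoint (the Jekyll server), while the comment preserves the debugging variant. A sketch of the two invocations, reusing the flags and the `my-jekyll-site` image name from the script above:

```shell
# serve the site via the image's default entrypoint
docker run \
    -v "$PWD/website:/site" \
    -p 4000:4000 \
    -e HISTFILE=/site/.bash_history \
    -it \
    my-jekyll-site

# for debugging, override the entrypoint to get an interactive shell instead
docker run \
    -v "$PWD/website:/site" \
    -p 4000:4000 \
    -e HISTFILE=/site/.bash_history \
    --entrypoint /bin/bash \
    -it \
    my-jekyll-site
```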

View file

@@ -14,8 +14,8 @@ cog $ARG \
README.md \ README.md \
website/index.md \ website/index.md \
website/HISTORY.md \ website/HISTORY.md \
website/docs/dotenv.md \
website/docs/commands.md \ website/docs/commands.md \
website/docs/languages.md \ website/docs/languages.md \
website/docs/options.md \ website/docs/config/dotenv.md \
website/docs/aider_conf.md website/docs/config/options.md \
website/docs/config/aider_conf.md

View file

@@ -12,17 +12,39 @@ cog.out(text)
# Release history # Release history
### v0.40.5 ### Aider v0.42.0
- Performance release:
- 5X faster launch!
- Faster auto-complete in large git repos (users report ~100X speedup)!
### Aider v0.41.0
- [Allow Claude 3.5 Sonnet to stream back >4k tokens!](https://aider.chat/2024/07/01/sonnet-not-lazy.html)
- It is the first model capable of writing such large, coherent, useful code edits.
- Do large refactors or generate multiple files of new code in one go.
- Aider now uses `claude-3-5-sonnet-20240620` by default if `ANTHROPIC_API_KEY` is set in the environment.
- [Enabled image support](https://aider.chat/docs/images-urls.html) for 3.5 Sonnet and for GPT-4o & 3.5 Sonnet via OpenRouter (by @yamitzky).
- Added `--attribute-commit-message` to prefix aider's commit messages with "aider:".
- Fixed regression in quality of one-line commit messages.
- Automatically retry on Anthropic `overloaded_error`.
- Bumped dependency versions.
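The default-model behavior described in the v0.41.0 notes can be sketched as a tiny shell helper; `pick_default_model` is purely illustrative (not part of aider's CLI), and the `gpt-4o` fallback comes from the v0.35.0 entry further down:

```shell
# illustrative only: mirrors "use claude-3-5-sonnet-20240620 if ANTHROPIC_API_KEY is set"
pick_default_model() {
    if [ -n "$ANTHROPIC_API_KEY" ]; then
        echo "claude-3-5-sonnet-20240620"
    else
        echo "gpt-4o"   # aider's default model since v0.35.0
    fi
}

ANTHROPIC_API_KEY="sk-ant-example"
pick_default_model   # prints claude-3-5-sonnet-20240620
```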
### Aider v0.40.6
- Fixed `/undo` so it works regardless of `--attribute` settings.
### Aider v0.40.5
- Bump versions to pick up latest litellm to fix streaming issue with Gemini - Bump versions to pick up latest litellm to fix streaming issue with Gemini
- https://github.com/BerriAI/litellm/issues/4408 - https://github.com/BerriAI/litellm/issues/4408
### v0.40.1 ### Aider v0.40.1
- Improved context awareness of repomap. - Improved context awareness of repomap.
- Restored proper `--help` functionality. - Restored proper `--help` functionality.
### v0.40.0 ### Aider v0.40.0
- Improved prompting to discourage Sonnet from wasting tokens emitting unchanging code (#705). - Improved prompting to discourage Sonnet from wasting tokens emitting unchanging code (#705).
- Improved error info for token limit errors. - Improved error info for token limit errors.
@@ -31,14 +53,14 @@ cog.out(text)
- Improved invocation of flake8 linter for python code. - Improved invocation of flake8 linter for python code.
### v0.39.0 ### Aider v0.39.0
- Use `--sonnet` for Claude 3.5 Sonnet, which is the top model on [aider's LLM code editing leaderboard](https://aider.chat/docs/leaderboards/#claude-35-sonnet-takes-the-top-spot). - Use `--sonnet` for Claude 3.5 Sonnet, which is the top model on [aider's LLM code editing leaderboard](https://aider.chat/docs/leaderboards/#claude-35-sonnet-takes-the-top-spot).
- All `AIDER_xxx` environment variables can now be set in `.env` (by @jpshack-at-palomar). - All `AIDER_xxx` environment variables can now be set in `.env` (by @jpshack-at-palomar).
- Use `--llm-history-file` to log raw messages sent to the LLM (by @daniel-vainsencher). - Use `--llm-history-file` to log raw messages sent to the LLM (by @daniel-vainsencher).
- Commit messages are no longer prefixed with "aider:". Instead the git author and committer names have "(aider)" added. - Commit messages are no longer prefixed with "aider:". Instead the git author and committer names have "(aider)" added.
### v0.38.0 ### Aider v0.38.0
- Use `--vim` for [vim keybindings](https://aider.chat/docs/commands.html#vi) in the chat. - Use `--vim` for [vim keybindings](https://aider.chat/docs/commands.html#vi) in the chat.
- [Add LLM metadata](https://aider.chat/docs/llms/warnings.html#specifying-context-window-size-and-token-costs) via `.aider.models.json` file (by @caseymcc). - [Add LLM metadata](https://aider.chat/docs/llms/warnings.html#specifying-context-window-size-and-token-costs) via `.aider.models.json` file (by @caseymcc).
@@ -49,7 +71,7 @@ cog.out(text)
- Documentation updates, moved into website/ subdir. - Documentation updates, moved into website/ subdir.
- Moved tests/ into aider/tests/. - Moved tests/ into aider/tests/.
### v0.37.0 ### Aider v0.37.0
- Repo map is now optimized based on text of chat history as well as files added to chat. - Repo map is now optimized based on text of chat history as well as files added to chat.
- Improved prompts when no files have been added to chat to solicit LLM file suggestions. - Improved prompts when no files have been added to chat to solicit LLM file suggestions.
@@ -60,7 +82,7 @@ cog.out(text)
- Detect supported audio sample rates for `/voice`. - Detect supported audio sample rates for `/voice`.
- Other small bug fixes. - Other small bug fixes.
### v0.36.0 ### Aider v0.36.0
- [Aider can now lint your code and fix any errors](https://aider.chat/2024/05/22/linting.html). - [Aider can now lint your code and fix any errors](https://aider.chat/2024/05/22/linting.html).
- Aider automatically lints and fixes after every LLM edit. - Aider automatically lints and fixes after every LLM edit.
@@ -73,7 +95,7 @@ cog.out(text)
- Aider will automatically attempt to fix any test failures. - Aider will automatically attempt to fix any test failures.
### v0.35.0 ### Aider v0.35.0
- Aider now uses GPT-4o by default. - Aider now uses GPT-4o by default.
- GPT-4o tops the [aider LLM code editing leaderboard](https://aider.chat/docs/leaderboards/) at 72.9%, versus 68.4% for Opus. - GPT-4o tops the [aider LLM code editing leaderboard](https://aider.chat/docs/leaderboards/) at 72.9%, versus 68.4% for Opus.
@@ -82,7 +104,7 @@ cog.out(text)
- Improved reflection feedback to LLMs using the diff edit format. - Improved reflection feedback to LLMs using the diff edit format.
- Improved retries on `httpx` errors. - Improved retries on `httpx` errors.
### v0.34.0 ### Aider v0.34.0
- Updated prompting to use more natural phrasing about files, the git repo, etc. Removed reliance on read-write/read-only terminology. - Updated prompting to use more natural phrasing about files, the git repo, etc. Removed reliance on read-write/read-only terminology.
- Refactored prompting to unify some phrasing across edit formats. - Refactored prompting to unify some phrasing across edit formats.
@@ -92,11 +114,11 @@ cog.out(text)
- Bugfix: catch and retry on all litellm exceptions. - Bugfix: catch and retry on all litellm exceptions.
### v0.33.0 ### Aider v0.33.0
- Added native support for [Deepseek models](https://aider.chat/docs/llms.html#deepseek) using `DEEPSEEK_API_KEY` and `deepseek/deepseek-chat`, etc rather than as a generic OpenAI compatible API. - Added native support for [Deepseek models](https://aider.chat/docs/llms.html#deepseek) using `DEEPSEEK_API_KEY` and `deepseek/deepseek-chat`, etc rather than as a generic OpenAI compatible API.
### v0.32.0 ### Aider v0.32.0
- [Aider LLM code editing leaderboards](https://aider.chat/docs/leaderboards/) that rank popular models according to their ability to edit code. - [Aider LLM code editing leaderboards](https://aider.chat/docs/leaderboards/) that rank popular models according to their ability to edit code.
- Leaderboards include GPT-3.5/4 Turbo, Opus, Sonnet, Gemini 1.5 Pro, Llama 3, Deepseek Coder & Command-R+. - Leaderboards include GPT-3.5/4 Turbo, Opus, Sonnet, Gemini 1.5 Pro, Llama 3, Deepseek Coder & Command-R+.
@@ -105,31 +127,31 @@ cog.out(text)
- Improved retry handling on errors from model APIs. - Improved retry handling on errors from model APIs.
- Benchmark outputs results in YAML, compatible with leaderboard. - Benchmark outputs results in YAML, compatible with leaderboard.
### v0.31.0 ### Aider v0.31.0
- [Aider is now also AI pair programming in your browser!](https://aider.chat/2024/05/02/browser.html) Use the `--browser` switch to launch an experimental browser-based version of aider. - [Aider is now also AI pair programming in your browser!](https://aider.chat/2024/05/02/browser.html) Use the `--browser` switch to launch an experimental browser-based version of aider.
- Switch models during the chat with `/model <name>` and search the list of available models with `/models <query>`. - Switch models during the chat with `/model <name>` and search the list of available models with `/models <query>`.
### v0.30.1 ### Aider v0.30.1
- Added missing `google-generativeai` dependency - Added missing `google-generativeai` dependency
### v0.30.0 ### Aider v0.30.0
- Added [Gemini 1.5 Pro](https://aider.chat/docs/llms.html#free-models) as a recommended free model. - Added [Gemini 1.5 Pro](https://aider.chat/docs/llms.html#free-models) as a recommended free model.
- Allow repo map for "whole" edit format. - Allow repo map for "whole" edit format.
- Added `--models <MODEL-NAME>` to search the available models. - Added `--models <MODEL-NAME>` to search the available models.
- Added `--no-show-model-warnings` to silence model warnings. - Added `--no-show-model-warnings` to silence model warnings.
### v0.29.2 ### Aider v0.29.2
- Improved [model warnings](https://aider.chat/docs/llms.html#model-warnings) for unknown or unfamiliar models - Improved [model warnings](https://aider.chat/docs/llms.html#model-warnings) for unknown or unfamiliar models
### v0.29.1 ### Aider v0.29.1
- Added better support for groq/llama3-70b-8192 - Added better support for groq/llama3-70b-8192
### v0.29.0 ### Aider v0.29.0
- Added support for [directly connecting to Anthropic, Cohere, Gemini and many other LLM providers](https://aider.chat/docs/llms.html). - Added support for [directly connecting to Anthropic, Cohere, Gemini and many other LLM providers](https://aider.chat/docs/llms.html).
- Added `--weak-model <model-name>` which allows you to specify which model to use for commit messages and chat history summarization. - Added `--weak-model <model-name>` which allows you to specify which model to use for commit messages and chat history summarization.
@@ -143,32 +165,32 @@ cog.out(text)
- Fixed crash when operating in a repo in a detached HEAD state. - Fixed crash when operating in a repo in a detached HEAD state.
- Fix: Use the same default model in CLI and python scripting. - Fix: Use the same default model in CLI and python scripting.
### v0.28.0 ### Aider v0.28.0
- Added support for new `gpt-4-turbo-2024-04-09` and `gpt-4-turbo` models. - Added support for new `gpt-4-turbo-2024-04-09` and `gpt-4-turbo` models.
- Benchmarked at 61.7% on Exercism benchmark, comparable to `gpt-4-0613` and worse than the `gpt-4-preview-XXXX` models. See [recent Exercism benchmark results](https://aider.chat/2024/03/08/claude-3.html). - Benchmarked at 61.7% on Exercism benchmark, comparable to `gpt-4-0613` and worse than the `gpt-4-preview-XXXX` models. See [recent Exercism benchmark results](https://aider.chat/2024/03/08/claude-3.html).
- Benchmarked at 34.1% on the refactoring/laziness benchmark, significantly worse than the `gpt-4-preview-XXXX` models. See [recent refactor benchmark results](https://aider.chat/2024/01/25/benchmarks-0125.html). - Benchmarked at 34.1% on the refactoring/laziness benchmark, significantly worse than the `gpt-4-preview-XXXX` models. See [recent refactor benchmark results](https://aider.chat/2024/01/25/benchmarks-0125.html).
- Aider continues to default to `gpt-4-1106-preview` as it performs best on both benchmarks, and significantly better on the refactoring/laziness benchmark. - Aider continues to default to `gpt-4-1106-preview` as it performs best on both benchmarks, and significantly better on the refactoring/laziness benchmark.
### v0.27.0 ### Aider v0.27.0
- Improved repomap support for typescript, by @ryanfreckleton. - Improved repomap support for typescript, by @ryanfreckleton.
- Bugfix: Only /undo the files which were part of the last commit, don't stomp other dirty files - Bugfix: Only /undo the files which were part of the last commit, don't stomp other dirty files
- Bugfix: Show clear error message when OpenAI API key is not set. - Bugfix: Show clear error message when OpenAI API key is not set.
- Bugfix: Catch error for obscure languages without tags.scm file. - Bugfix: Catch error for obscure languages without tags.scm file.
### v0.26.1 ### Aider v0.26.1
- Fixed bug affecting parsing of git config in some environments. - Fixed bug affecting parsing of git config in some environments.
### v0.26.0 ### Aider v0.26.0
- Use GPT-4 Turbo by default. - Use GPT-4 Turbo by default.
- Added `-3` and `-4` switches to use GPT 3.5 or GPT-4 (non-Turbo). - Added `-3` and `-4` switches to use GPT 3.5 or GPT-4 (non-Turbo).
- Bug fix to avoid reflecting local git errors back to GPT. - Bug fix to avoid reflecting local git errors back to GPT.
- Improved logic for opening git repo on launch. - Improved logic for opening git repo on launch.
### v0.25.0 ### Aider v0.25.0
- Issue a warning if user adds too much code to the chat. - Issue a warning if user adds too much code to the chat.
- https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat - https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat
@@ -178,18 +200,18 @@ cog.out(text)
- Show the user a FAQ link if edits fail to apply. - Show the user a FAQ link if edits fail to apply.
- Made past articles part of https://aider.chat/blog/ - Made past articles part of https://aider.chat/blog/
### v0.24.1 ### Aider v0.24.1
- Fixed bug with cost computations when --no-stream is in effect - Fixed bug with cost computations when --no-stream is in effect
### v0.24.0 ### Aider v0.24.0
- New `/web <url>` command which scrapes the url, turns it into fairly clean markdown and adds it to the chat. - New `/web <url>` command which scrapes the url, turns it into fairly clean markdown and adds it to the chat.
- Updated all OpenAI model names, pricing info - Updated all OpenAI model names, pricing info
- Default GPT 3.5 model is now `gpt-3.5-turbo-0125`. - Default GPT 3.5 model is now `gpt-3.5-turbo-0125`.
- Bugfix to the `!` alias for `/run`. - Bugfix to the `!` alias for `/run`.
### v0.23.0 ### Aider v0.23.0
- Added support for `--model gpt-4-0125-preview` and OpenAI's alias `--model gpt-4-turbo-preview`. The `--4turbo` switch remains an alias for `--model gpt-4-1106-preview` at this time. - Added support for `--model gpt-4-0125-preview` and OpenAI's alias `--model gpt-4-turbo-preview`. The `--4turbo` switch remains an alias for `--model gpt-4-1106-preview` at this time.
- New `/test` command that runs a command and adds the output to the chat on non-zero exit status. - New `/test` command that runs a command and adds the output to the chat on non-zero exit status.
@@ -199,25 +221,25 @@ cog.out(text)
- Added `--openrouter` as a shortcut for `--openai-api-base https://openrouter.ai/api/v1` - Added `--openrouter` as a shortcut for `--openai-api-base https://openrouter.ai/api/v1`
- Fixed bug preventing use of env vars `OPENAI_API_BASE, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_API_DEPLOYMENT_ID`. - Fixed bug preventing use of env vars `OPENAI_API_BASE, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_API_DEPLOYMENT_ID`.
### v0.22.0 ### Aider v0.22.0
- Improvements for unified diff editing format. - Improvements for unified diff editing format.
- Added ! as an alias for /run. - Added ! as an alias for /run.
- Autocomplete for /add and /drop now properly quotes filenames with spaces. - Autocomplete for /add and /drop now properly quotes filenames with spaces.
- The /undo command asks GPT not to just retry reverted edit. - The /undo command asks GPT not to just retry reverted edit.
### v0.21.1 ### Aider v0.21.1
- Bugfix for unified diff editing format. - Bugfix for unified diff editing format.
- Added --4turbo and --4 aliases for --4-turbo. - Added --4turbo and --4 aliases for --4-turbo.
### v0.21.0 ### Aider v0.21.0
- Support for python 3.12. - Support for python 3.12.
- Improvements to unified diff editing format. - Improvements to unified diff editing format.
- New `--check-update` arg to check if updates are available and exit with status code. - New `--check-update` arg to check if updates are available and exit with status code.
### v0.20.0 ### Aider v0.20.0
- Add images to the chat to automatically use GPT-4 Vision, by @joshuavial - Add images to the chat to automatically use GPT-4 Vision, by @joshuavial
@@ -225,22 +247,22 @@ cog.out(text)
- Improved unicode encoding for `/run` command output, by @ctoth - Improved unicode encoding for `/run` command output, by @ctoth
- Prevent false auto-commits on Windows, by @ctoth - Prevent false auto-commits on Windows, by @ctoth
### v0.19.1 ### Aider v0.19.1
- Removed stray debug output. - Removed stray debug output.
### v0.19.0 ### Aider v0.19.0
- [Significantly reduced "lazy" coding from GPT-4 Turbo due to new unified diff edit format](https://aider.chat/docs/unified-diffs.html) - [Significantly reduced "lazy" coding from GPT-4 Turbo due to new unified diff edit format](https://aider.chat/docs/unified-diffs.html)
- Score improves from 20% to 61% on new "laziness benchmark". - Score improves from 20% to 61% on new "laziness benchmark".
- Aider now uses unified diffs by default for `gpt-4-1106-preview`. - Aider now uses unified diffs by default for `gpt-4-1106-preview`.
- New `--4-turbo` command line switch as a shortcut for `--model gpt-4-1106-preview`. - New `--4-turbo` command line switch as a shortcut for `--model gpt-4-1106-preview`.
### v0.18.1 ### Aider v0.18.1
- Upgraded to new openai python client v1.3.7. - Upgraded to new openai python client v1.3.7.
### v0.18.0 ### Aider v0.18.0
- Improved prompting for both GPT-4 and GPT-4 Turbo. - Improved prompting for both GPT-4 and GPT-4 Turbo.
- Far fewer edit errors from GPT-4 Turbo (`gpt-4-1106-preview`). - Far fewer edit errors from GPT-4 Turbo (`gpt-4-1106-preview`).
@@ -248,7 +270,7 @@ cog.out(text)
- Fixed bug where in-chat files were marked as both read-only and read-write, sometimes confusing GPT. - Fixed bug where in-chat files were marked as both read-only and read-write, sometimes confusing GPT.
- Fixed bug to properly handle repos with submodules. - Fixed bug to properly handle repos with submodules.
### v0.17.0 ### Aider v0.17.0
- Support for OpenAI's new 11/06 models: - Support for OpenAI's new 11/06 models:
- gpt-4-1106-preview with 128k context window - gpt-4-1106-preview with 128k context window
@@ -260,19 +282,19 @@ cog.out(text)
- Fixed crash bug when `/add` used on file matching `.gitignore` - Fixed crash bug when `/add` used on file matching `.gitignore`
- Fixed misc bugs to catch and handle unicode decoding errors. - Fixed misc bugs to catch and handle unicode decoding errors.
### v0.16.3 ### Aider v0.16.3
- Fixed repo-map support for C#. - Fixed repo-map support for C#.
### v0.16.2 ### Aider v0.16.2
- Fixed docker image. - Fixed docker image.
### v0.16.1 ### Aider v0.16.1
- Updated tree-sitter dependencies to streamline the pip install process - Updated tree-sitter dependencies to streamline the pip install process
### v0.16.0 ### Aider v0.16.0
- [Improved repository map using tree-sitter](https://aider.chat/docs/repomap.html) - [Improved repository map using tree-sitter](https://aider.chat/docs/repomap.html)
- Switched from "edit block" to "search/replace block", which reduced malformed edit blocks. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 66.2%, no regression. - Switched from "edit block" to "search/replace block", which reduced malformed edit blocks. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 66.2%, no regression.
@@ -280,21 +302,21 @@ cog.out(text)
- Bugfix to properly handle malformed `/add` wildcards. - Bugfix to properly handle malformed `/add` wildcards.
### v0.15.0 ### Aider v0.15.0
- Added support for `.aiderignore` file, which instructs aider to ignore parts of the git repo. - Added support for `.aiderignore` file, which instructs aider to ignore parts of the git repo.
- New `--commit` cmd line arg, which just commits all pending changes with a sensible commit message generated by gpt-3.5. - New `--commit` cmd line arg, which just commits all pending changes with a sensible commit message generated by gpt-3.5.
- Added universal ctags and multiple architectures to the [aider docker image](https://aider.chat/docs/docker.html) - Added universal ctags and multiple architectures to the [aider docker image](https://aider.chat/docs/install/docker.html)
- `/run` and `/git` now accept full shell commands, like: `/run (cd subdir; ls)` - `/run` and `/git` now accept full shell commands, like: `/run (cd subdir; ls)`
- Restored missing `--encoding` cmd line switch. - Restored missing `--encoding` cmd line switch.
### v0.14.2 ### Aider v0.14.2
- Easily [run aider from a docker image](https://aider.chat/docs/docker.html) - Easily [run aider from a docker image](https://aider.chat/docs/install/docker.html)
- Fixed bug with chat history summarization. - Fixed bug with chat history summarization.
- Fixed bug if `soundfile` package not available. - Fixed bug if `soundfile` package not available.
### v0.14.1 ### Aider v0.14.1
- /add and /drop handle absolute filenames and quoted filenames - /add and /drop handle absolute filenames and quoted filenames
- /add checks to be sure files are within the git repo (or root) - /add checks to be sure files are within the git repo (or root)
@@ -302,14 +324,14 @@ cog.out(text)
- Fixed /add bug when aider launched in repo subdir - Fixed /add bug when aider launched in repo subdir
- Show models supported by api/key if requested model isn't available - Show models supported by api/key if requested model isn't available
### v0.14.0 ### Aider v0.14.0
- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial - [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark) - Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9 - Aider now requires Python >= 3.9
### v0.13.0 ### Aider v0.13.0
- [Only git commit dirty files that GPT tries to edit](https://aider.chat/docs/faq.html#how-did-v0130-change-git-usage) - [Only git commit dirty files that GPT tries to edit](https://aider.chat/docs/faq.html#how-did-v0130-change-git-usage)
- Send chat history as prompt/context for Whisper voice transcription - Send chat history as prompt/context for Whisper voice transcription
@@ -317,14 +339,14 @@ cog.out(text)
- Late-bind importing `sounddevice`, as it was slowing down aider startup - Late-bind importing `sounddevice`, as it was slowing down aider startup
- Improved --foo/--no-foo switch handling for command line and yml config settings - Improved --foo/--no-foo switch handling for command line and yml config settings
### v0.12.0 ### Aider v0.12.0
- [Voice-to-code](https://aider.chat/docs/voice.html) support, which allows you to code with your voice. - [Voice-to-code](https://aider.chat/docs/voice.html) support, which allows you to code with your voice.
- Fixed bug where /diff was causing crash. - Fixed bug where /diff was causing crash.
- Improved prompting for gpt-4, refactor of editblock coder. - Improved prompting for gpt-4, refactor of editblock coder.
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 63.2% for gpt-4/diff, no regression. - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 63.2% for gpt-4/diff, no regression.
### v0.11.1 ### Aider v0.11.1
- Added a progress bar when initially creating a repo map. - Added a progress bar when initially creating a repo map.
- Fixed bad commit message when adding new file to empty repo. - Fixed bad commit message when adding new file to empty repo.
@@ -333,7 +355,7 @@ cog.out(text)
- Fixed /commit bug from repo refactor, added test coverage. - Fixed /commit bug from repo refactor, added test coverage.
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.4% for gpt-3.5/whole (no regression). - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.4% for gpt-3.5/whole (no regression).
### v0.11.0 ### Aider v0.11.0
- Automatically summarize chat history to avoid exhausting context window. - Automatically summarize chat history to avoid exhausting context window.
- More detail on dollar costs when running with `--no-stream` - More detail on dollar costs when running with `--no-stream`
@@ -341,12 +363,12 @@ cog.out(text)
- Defend against GPT-3.5 or non-OpenAI models suggesting filenames surrounded by asterisks. - Defend against GPT-3.5 or non-OpenAI models suggesting filenames surrounded by asterisks.
- Refactored GitRepo code out of the Coder class. - Refactored GitRepo code out of the Coder class.
### v0.10.1 ### Aider v0.10.1
- /add and /drop always use paths relative to the git root - /add and /drop always use paths relative to the git root
- Encourage GPT to use language like "add files to the chat" to ask users for permission to edit them. - Encourage GPT to use language like "add files to the chat" to ask users for permission to edit them.
### v0.10.0 ### Aider v0.10.0
- Added `/git` command to run git from inside aider chats. - Added `/git` command to run git from inside aider chats.
- Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages. - Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
@@ -358,7 +380,7 @@ cog.out(text)
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 64.7% for gpt-4/diff (no regression) - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 64.7% for gpt-4/diff (no regression)
### v0.9.0 ### Aider v0.9.0
- Support for the OpenAI models in [Azure](https://aider.chat/docs/faq.html#azure) - Support for the OpenAI models in [Azure](https://aider.chat/docs/faq.html#azure)
- Added `--show-repo-map` - Added `--show-repo-map`
@@ -367,7 +389,7 @@ cog.out(text)
- Bugfix: recognize and add files in subdirectories mentioned by user or GPT - Bugfix: recognize and add files in subdirectories mentioned by user or GPT
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.8% for gpt-3.5-turbo/whole (no regression) - [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.8% for gpt-3.5-turbo/whole (no regression)
### v0.8.3 ### Aider v0.8.3
- Added `--dark-mode` and `--light-mode` to select colors optimized for terminal background - Added `--dark-mode` and `--light-mode` to select colors optimized for terminal background
- Install docs link to [NeoVim plugin](https://github.com/joshuavial/aider.nvim) by @joshuavial - Install docs link to [NeoVim plugin](https://github.com/joshuavial/aider.nvim) by @joshuavial
@@ -378,11 +400,11 @@ cog.out(text)
- Bugfix/improvement to /add and /drop to recurse selected directories - Bugfix/improvement to /add and /drop to recurse selected directories
- Bugfix for live diff output when using "whole" edit format - Bugfix for live diff output when using "whole" edit format
### v0.8.2 ### Aider v0.8.2
- Disabled general availability of gpt-4 (it's rolling out, not 100% available yet) - Disabled general availability of gpt-4 (it's rolling out, not 100% available yet)
### v0.8.1 ### Aider v0.8.1
- Ask to create a git repo if none found, to better track GPT's code changes - Ask to create a git repo if none found, to better track GPT's code changes
- Glob wildcards are now supported in `/add` and `/drop` commands - Glob wildcards are now supported in `/add` and `/drop` commands
@@ -394,7 +416,7 @@ cog.out(text)
- Bugfix for chats with multiple files - Bugfix for chats with multiple files
- Bugfix in editblock coder prompt - Bugfix in editblock coder prompt
### v0.8.0 ### Aider v0.8.0
- [Benchmark comparing code editing in GPT-3.5 and GPT-4](https://aider.chat/docs/benchmarks.html) - [Benchmark comparing code editing in GPT-3.5 and GPT-4](https://aider.chat/docs/benchmarks.html)
- Improved Windows support: - Improved Windows support:
@@ -407,15 +429,15 @@ cog.out(text)
- Added `--code-theme` switch to control the pygments styling of code blocks (by @kwmiebach) - Added `--code-theme` switch to control the pygments styling of code blocks (by @kwmiebach)
- Better status messages explaining the reason when ctags is disabled - Better status messages explaining the reason when ctags is disabled
### v0.7.2: ### Aider v0.7.2:
- Fixed a bug to allow aider to edit files that contain triple backtick fences. - Fixed a bug to allow aider to edit files that contain triple backtick fences.
### v0.7.1: ### Aider v0.7.1:
- Fixed a bug in the display of streaming diffs in GPT-3.5 chats - Fixed a bug in the display of streaming diffs in GPT-3.5 chats
### v0.7.0: ### Aider v0.7.0:
- Graceful handling of context window exhaustion, including helpful tips. - Graceful handling of context window exhaustion, including helpful tips.
- Added `--message` to give GPT that one instruction and then exit after it replies and any edits are performed. - Added `--message` to give GPT that one instruction and then exit after it replies and any edits are performed.
@@ -429,13 +451,13 @@ cog.out(text)
- Initial experiments show that using functions makes 3.5 less competent at coding. - Initial experiments show that using functions makes 3.5 less competent at coding.
- Limit automatic retries when GPT returns a malformed edit response. - Limit automatic retries when GPT returns a malformed edit response.
### v0.6.2 ### Aider v0.6.2
* Support for `gpt-3.5-turbo-16k`, and all OpenAI chat models * Support for `gpt-3.5-turbo-16k`, and all OpenAI chat models
* Improved ability to correct when gpt-4 omits leading whitespace in code edits * Improved ability to correct when gpt-4 omits leading whitespace in code edits
* Added `--openai-api-base` to support API proxies, etc. * Added `--openai-api-base` to support API proxies, etc.
### v0.5.0 ### Aider v0.5.0
- Added support for `gpt-3.5-turbo` and `gpt-4-32k`. - Added support for `gpt-3.5-turbo` and `gpt-4-32k`.
- Added `--map-tokens` to set a token budget for the repo map, along with a PageRank based algorithm for prioritizing which files and identifiers to include in the map. - Added `--map-tokens` to set a token budget for the repo map, along with a PageRank based algorithm for prioritizing which files and identifiers to include in the map.
@ -28,7 +28,7 @@ aux_links:
"Discord": "Discord":
- "https://discord.gg/Tv2uQnR88V" - "https://discord.gg/Tv2uQnR88V"
"Blog": "Blog":
- "/blog" - "/blog/"
nav_external_links: nav_external_links:
- title: "GitHub" - title: "GitHub"
@ -590,52 +590,29 @@
seconds_per_case: 280.6 seconds_per_case: 280.6
total_cost: 0.0000 total_cost: 0.0000
- dirname: 2024-06-20-15-09-26--claude-3.5-sonnet-whole - dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
test_cases: 133 test_cases: 133
model: claude-3.5-sonnet (whole) model: claude-3.5-sonnet
edit_format: whole edit_format: diff
commit_hash: 068609e commit_hash: 35f21b5
pass_rate_1: 61.7 pass_rate_1: 57.1
pass_rate_2: 78.2 pass_rate_2: 77.4
percent_cases_well_formed: 100.0 percent_cases_well_formed: 99.2
error_outputs: 4 error_outputs: 23
num_malformed_responses: 0 num_malformed_responses: 4
num_with_malformed_responses: 0 num_with_malformed_responses: 1
user_asks: 2 user_asks: 2
lazy_comments: 0 lazy_comments: 0
syntax_errors: 0 syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet --edit-format whole
date: 2024-06-20
versions: 0.38.1-dev
seconds_per_case: 15.4
total_cost: 0.0000
- dirname: 2024-06-20-15-16-41--claude-3.5-sonnet-diff
test_cases: 133
model: claude-3.5-sonnet (diff)
edit_format: diff
commit_hash: 068609e-dirty
pass_rate_1: 57.9
pass_rate_2: 74.4
percent_cases_well_formed: 97.0
error_outputs: 48
num_malformed_responses: 11
num_with_malformed_responses: 4
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0 indentation_errors: 0
exhausted_context_windows: 0 exhausted_context_windows: 0
test_timeouts: 1 test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet command: aider --sonnet
date: 2024-06-20 date: 2024-07-04
versions: 0.38.1-dev versions: 0.42.1-dev
seconds_per_case: 21.6 seconds_per_case: 17.6
total_cost: 0.0000 total_cost: 3.6346
- dirname: 2024-06-17-14-45-54--deepseek-coder2-whole - dirname: 2024-06-17-14-45-54--deepseek-coder2-whole
test_cases: 133 test_cases: 133
model: DeepSeek Coder V2 (whole) model: DeepSeek Coder V2 (whole)
@ -681,4 +658,27 @@
versions: 0.39.1-dev versions: 0.39.1-dev
seconds_per_case: 30.2 seconds_per_case: 30.2
total_cost: 0.0857 total_cost: 0.0857
- dirname: 2024-07-01-21-41-48--haiku-whole
test_cases: 133
model: claude-3-haiku-20240307
edit_format: whole
commit_hash: 75f506d
pass_rate_1: 40.6
pass_rate_2: 47.4
percent_cases_well_formed: 100.0
error_outputs: 6
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model claude-3-haiku-20240307
date: 2024-07-01
versions: 0.41.1-dev
seconds_per_case: 7.1
total_cost: 0.1946
@ -143,25 +143,48 @@
seconds_per_case: 67.8 seconds_per_case: 67.8
total_cost: 20.4889 total_cost: 20.4889
- dirname: 2024-07-01-18-30-33--refac-claude-3.5-sonnet-diff-not-lazy
- dirname: 2024-06-20-16-39-18--refac-claude-3.5-sonnet-diff
test_cases: 89 test_cases: 89
model: claude-3.5-sonnet (diff) model: claude-3.5-sonnet (diff)
edit_format: diff edit_format: diff
commit_hash: e5e07f9 commit_hash: 7396e38-dirty
pass_rate_1: 55.1 pass_rate_1: 64.0
percent_cases_well_formed: 70.8 percent_cases_well_formed: 76.4
error_outputs: 240 error_outputs: 176
num_malformed_responses: 54 num_malformed_responses: 39
num_with_malformed_responses: 26 num_with_malformed_responses: 21
user_asks: 10 user_asks: 11
lazy_comments: 2 lazy_comments: 2
syntax_errors: 0 syntax_errors: 4
indentation_errors: 3 indentation_errors: 0
exhausted_context_windows: 0 exhausted_context_windows: 0
test_timeouts: 0 test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet command: aider --sonnet
date: 2024-06-20 date: 2024-07-01
versions: 0.38.1-dev versions: 0.40.7-dev
seconds_per_case: 51.9 seconds_per_case: 42.8
total_cost: 0.0000 total_cost: 11.5242
- dirname: 2024-07-04-15-06-43--refac-deepseek-coder2-128k
test_cases: 89
model: DeepSeek Coder V2 (128k context)
edit_format: diff
commit_hash: 08868fd
pass_rate_1: 38.2
percent_cases_well_formed: 73.0
error_outputs: 393
num_malformed_responses: 89
num_with_malformed_responses: 24
user_asks: 4
lazy_comments: 2
syntax_errors: 1
indentation_errors: 5
exhausted_context_windows: 3
test_timeouts: 0
command: aider --model deepseek/deepseek-coder
date: 2024-07-04
versions: 0.42.1-dev
seconds_per_case: 82.9
total_cost: 0.2601
@ -1,3 +1,4 @@
You can get started quickly like this: You can get started quickly like this:
``` ```
@ -6,16 +7,11 @@ $ pip install aider-chat
# Change directory into a git repo # Change directory into a git repo
$ cd /to/your/git/repo $ cd /to/your/git/repo
# Work with Claude 3.5 Sonnet on your repo
$ export ANTHROPIC_API_KEY=your-key-goes-here
$ aider
# Work with GPT-4o on your repo # Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here $ export OPENAI_API_KEY=your-key-goes-here
$ aider $ aider
# Or, work with Anthropic's models
$ export ANTHROPIC_API_KEY=your-key-goes-here
# Claude 3 Opus
$ aider --opus
# Claude 3.5 Sonnet
$ aider --sonnet
``` ```
@ -14,6 +14,9 @@ for that model.
Aider will use an unlimited context window and assume the model is free, Aider will use an unlimited context window and assume the model is free,
so this is not usually a significant problem. so this is not usually a significant problem.
See the docs on
[configuring advanced model settings](/docs/config/adv-model-settings.html)
for details on how to remove this warning.
## Did you mean? ## Did you mean?
@ -6,4 +6,4 @@ have their keys and settings
specified in environment variables. specified in environment variables.
This can be done in your shell, This can be done in your shell,
or by using a or by using a
[`.env` file](/docs/dotenv.html). [`.env` file](/docs/config/dotenv.html).
@ -0,0 +1,126 @@
---
title: Sonnet is the opposite of lazy
excerpt: Claude 3.5 Sonnet can easily write more good code than fits in one 4k token API response.
highlight_image: /assets/sonnet-not-lazy.jpg
nav_exclude: true
---
[![sonnet is the opposite of lazy](/assets/sonnet-not-lazy.jpg)](https://aider.chat/assets/sonnet-not-lazy.jpg)
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}
# Sonnet is the opposite of lazy
Claude 3.5 Sonnet represents a step change
in AI coding.
It is incredibly industrious, diligent and hard working.
Unexpectedly,
this presented a challenge:
Sonnet
was often writing so much code that
it was hitting the 4k output token limit,
truncating its coding in mid-stream.
Aider now works
around this 4k limit and allows Sonnet to produce
as much code as it wants.
The result is surprisingly powerful.
Sonnet's score on
[aider's refactoring benchmark](https://aider.chat/docs/leaderboards/#code-refactoring-leaderboard)
jumped from 55.1% up to 64.0%.
This moved Sonnet into second place, ahead of GPT-4o and
behind only Opus.
Users who tested Sonnet with a preview of
[aider's latest release](https://aider.chat/HISTORY.html#aider-v0410)
were thrilled:
- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2200338971)
- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2196026656)
- *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143)
## Hitting the 4k token output limit
All LLMs have various token limits, the most familiar being their
context window size.
But they also have a limit on how many tokens they can output
in response to a single request.
Sonnet and the majority of other
models are limited to returning 4k tokens.
Sonnet's amazing work ethic caused it to
regularly hit this 4k output token
limit for a few reasons:
1. Sonnet is capable of outputting a very large amount of correct,
complete new code in one response.
2. Similarly, Sonnet can specify long sequences of edits in one go,
like changing a majority of lines while refactoring a large file.
3. Sonnet tends to quote large chunks of a
file when performing SEARCH & REPLACE edits.
Beyond token limits, this is very wasteful.
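To make the cost concrete, here is a minimal Python sketch of a SEARCH & REPLACE style edit. The helper below is illustrative only, not aider's actual edit engine:

```python
# Minimal sketch of applying a SEARCH & REPLACE style edit.
# This helper is illustrative, not aider's actual edit engine.

def apply_search_replace(text: str, search: str, replace: str) -> str:
    """Replace exactly one occurrence of `search` with `replace`."""
    if text.count(search) != 1:
        raise ValueError("search block must match exactly once")
    return text.replace(search, replace, 1)

source = (
    "def greet():\n"
    "    print('hello')\n"
    "\n"
    "def farewell():\n"
    "    print('bye')\n"
)

# A surgical edit quotes only the lines that change...
patched = apply_search_replace(
    source,
    "    print('hello')\n",
    "    print('hello, world')\n",
)

# ...whereas quoting nearly the whole file in the SEARCH block spends
# tokens on every unchanged line for the same one-line change.
assert "hello, world" in patched
```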
## Good problems
Problems (1) and (2) are "good problems"
in the sense that Sonnet is
able to write more high quality code than any other model!
We just don't want it to be interrupted prematurely
by the 4k output limit.
Aider now allows Sonnet to return code in multiple 4k token
responses.
Aider seamlessly combines them so that Sonnet can return arbitrarily
long responses.
This gets all the upsides of Sonnet's prolific coding skills,
without being constrained by the 4k output token limit.
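A rough sketch of the mechanism, with the model call stubbed out; real code would inspect the provider's stop reason (e.g. `stop_reason == "max_tokens"`) rather than this toy `truncated` flag:

```python
# Sketch: stitch one long reply together from several capped responses.
# `make_stub_model` fakes an API client; a real implementation would
# check the provider's stop reason to detect a truncated response.

def make_stub_model(chunks):
    """Fake model that returns one chunk of its reply per call."""
    state = {"i": 0}

    def model(messages):
        i = state["i"]
        state["i"] += 1
        truncated = i < len(chunks) - 1  # every call but the last is cut off
        return chunks[i], truncated

    return model

def get_full_response(model, user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    parts = []
    while True:
        text, truncated = model(messages)
        parts.append(text)
        if not truncated:
            break
        # Feed the partial reply back and ask the model to carry on
        # exactly where it stopped.
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": "Continue where you left off."})
    return "".join(parts)

model = make_stub_model(["def big_refactor():\n", "    return 'done'\n"])
print(get_full_response(model, "Write lots of code"))
# prints the stitched two-line function
```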
## Wasting tokens
Problem (3) is more complicated, as Sonnet isn't just
being stopped early -- it's actually wasting a lot
of tokens, time and money.
Faced with a few small changes spread far apart in
a source file,
Sonnet would often prefer to do one giant SEARCH/REPLACE
operation of almost the entire file.
It would be far faster and less expensive to instead
do a few surgical edits.
Aider now prompts Sonnet to discourage these long-winded
SEARCH/REPLACE operations
and promotes much more concise edits.
## Aider with Sonnet
[The latest release of aider](https://aider.chat/HISTORY.html#aider-v0410)
has specialized support for Claude 3.5 Sonnet:
- Aider allows Sonnet to produce as much code as it wants,
by automatically and seamlessly spreading the response
out over a sequence of 4k token API responses.
- Aider carefully prompts Sonnet to be concise when proposing
code edits.
This reduces Sonnet's tendency to waste time, tokens and money
returning large chunks of unchanging code.
- Aider now uses Claude 3.5 Sonnet by default if the `ANTHROPIC_API_KEY` is set in the environment.
See
[aider's install instructions](https://aider.chat/docs/install.html)
for more details, but
you can get started quickly with aider and Sonnet like this:
```
$ pip install aider-chat
$ export ANTHROPIC_API_KEY=<key> # Mac/Linux
$ setx ANTHROPIC_API_KEY <key> # Windows
$ aider
```
@ -77,6 +77,7 @@
color: #32FF32; color: #32FF32;
border-top: 1px solid #32FF32; border-top: 1px solid #32FF32;
padding-top: 10px; padding-top: 10px;
text-transform: none;
} }
.chat-transcript h4::before { .chat-transcript h4::before {
@ -13,17 +13,14 @@
####### #######
# Main: # Main:
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
## Specify the OpenAI API key ## Specify the OpenAI API key
#openai-api-key: #openai-api-key:
## Specify the Anthropic API key ## Specify the Anthropic API key
#anthropic-api-key: #anthropic-api-key:
## Specify the model to use for the main chat (default: gpt-4o) ## Specify the model to use for the main chat
#model: gpt-4o #model:
## Use claude-3-opus-20240229 model for the main chat ## Use claude-3-opus-20240229 model for the main chat
#opus: false #opus: false
@ -65,10 +62,10 @@
#openai-organization-id: #openai-organization-id:
## Specify a file with aider model settings for unknown models ## Specify a file with aider model settings for unknown models
#model-settings-file: #model-settings-file: .aider.model.settings.yml
## Specify a file with context window and costs for unknown models ## Specify a file with context window and costs for unknown models
#model-metadata-file: #model-metadata-file: .aider.model.metadata.json
## Verify the SSL cert when connecting to models (default: True) ## Verify the SSL cert when connecting to models (default: True)
#verify-ssl: true #verify-ssl: true
@ -103,6 +100,9 @@
## Restore the previous chat history messages (default: False) ## Restore the previous chat history messages (default: False)
#restore-chat-history: false #restore-chat-history: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
################## ##################
# Output Settings: # Output Settings:
@ -160,6 +160,9 @@
## Attribute aider commits in the git committer name (default: True) ## Attribute aider commits in the git committer name (default: True)
#attribute-committer: true #attribute-committer: true
## Prefix commit messages with 'aider: ' (default: False)
#attribute-commit-message: false
## Perform a dry run without modifying files (default: False) ## Perform a dry run without modifying files (default: False)
#dry-run: false #dry-run: false
@ -220,6 +223,9 @@
## Print the system prompts and exit (debug) ## Print the system prompts and exit (debug)
#show-prompts: false #show-prompts: false
## Do all startup activities then exit before accepting user input (debug)
#exit: false
## Specify a single message to send the LLM, process reply then exit (disables chat mode) ## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#message: #message:
@ -21,17 +21,14 @@
####### #######
# Main: # Main:
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#AIDER_LLM_HISTORY_FILE=
## Specify the OpenAI API key ## Specify the OpenAI API key
#OPENAI_API_KEY= #OPENAI_API_KEY=
## Specify the Anthropic API key ## Specify the Anthropic API key
#ANTHROPIC_API_KEY= #ANTHROPIC_API_KEY=
## Specify the model to use for the main chat (default: gpt-4o) ## Specify the model to use for the main chat
#AIDER_MODEL=gpt-4o #AIDER_MODEL=
## Use claude-3-opus-20240229 model for the main chat ## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS= #AIDER_OPUS=
@ -73,10 +70,10 @@
#OPENAI_ORGANIZATION_ID= #OPENAI_ORGANIZATION_ID=
## Specify a file with aider model settings for unknown models ## Specify a file with aider model settings for unknown models
#AIDER_MODEL_SETTINGS_FILE= #AIDER_MODEL_SETTINGS_FILE=.aider.model.settings.yml
## Specify a file with context window and costs for unknown models ## Specify a file with context window and costs for unknown models
#AIDER_MODEL_METADATA_FILE= #AIDER_MODEL_METADATA_FILE=.aider.model.metadata.json
## Verify the SSL cert when connecting to models (default: True) ## Verify the SSL cert when connecting to models (default: True)
#AIDER_VERIFY_SSL=true #AIDER_VERIFY_SSL=true
@ -111,6 +108,9 @@
## Restore the previous chat history messages (default: False) ## Restore the previous chat history messages (default: False)
#AIDER_RESTORE_CHAT_HISTORY=false #AIDER_RESTORE_CHAT_HISTORY=false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#AIDER_LLM_HISTORY_FILE=
################## ##################
# Output Settings: # Output Settings:
@ -168,6 +168,9 @@
## Attribute aider commits in the git committer name (default: True) ## Attribute aider commits in the git committer name (default: True)
#AIDER_ATTRIBUTE_COMMITTER=true #AIDER_ATTRIBUTE_COMMITTER=true
## Prefix commit messages with 'aider: ' (default: False)
#AIDER_ATTRIBUTE_COMMIT_MESSAGE=false
## Perform a dry run without modifying files (default: False) ## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false #AIDER_DRY_RUN=false
@ -225,6 +228,9 @@
## Print the system prompts and exit (debug) ## Print the system prompts and exit (debug)
#AIDER_SHOW_PROMPTS=false #AIDER_SHOW_PROMPTS=false
## Do all startup activities then exit before accepting user input (debug)
#AIDER_EXIT=false
## Specify a single message to send the LLM, process reply then exit (disables chat mode) ## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#AIDER_MESSAGE= #AIDER_MESSAGE=
@ -11,24 +11,31 @@ command line switches.
Most options can also be set in an `.aider.conf.yml` file Most options can also be set in an `.aider.conf.yml` file
which can be placed in your home directory or at the root of which can be placed in your home directory or at the root of
your git repo. your git repo.
Or via environment variables like `AIDER_xxx`, Or by setting environment variables like `AIDER_xxx`
as noted in the [options reference](options.html). either in your shell or a `.env` file.
Here are 3 equivalent ways of setting an option. First, via a command line switch: Here are 4 equivalent ways of setting an option.
With a command line switch:
``` ```
$ aider --dark-mode $ aider --dark-mode
``` ```
Or, via an env variable: Using a `.aider.conf.yml` file:
```
export AIDER_DARK_MODE=true
```
Or in the `.aider.conf.yml` file:
```yaml ```yaml
dark-mode: true dark-mode: true
``` ```
By setting an environment variable:
```
export AIDER_DARK_MODE=true
```
Using an `.env` file:
```
AIDER_DARK_MODE=true
```
@ -0,0 +1,86 @@
---
parent: Configuration
nav_order: 950
description: Configuring advanced settings for LLMs.
---
# Advanced model settings
## Context window size and token costs
In most cases, you can safely ignore aider's warning about unknown context
window size and model costs.
But, you can register context window limits and costs for models that aren't known
to aider. Create a `.aider.model.metadata.json` file in one of these locations:
- Your home directory.
- The root of your git repo.
- The current directory where you launch aider.
- Or specify a file with the `--model-metadata-file <filename>` switch.
If the files above exist, they will be loaded in that order.
Files loaded last will take priority.
The json file should be a dictionary with an entry for each model, as follows:
```
{
"deepseek-chat": {
"max_tokens": 4096,
"max_input_tokens": 32000,
"max_output_tokens": 4096,
"input_cost_per_token": 0.00000014,
"output_cost_per_token": 0.00000028,
"litellm_provider": "deepseek",
"mode": "chat"
}
}
```
See
[litellm's model_prices_and_context_window.json file](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json) for more examples.
## Model settings
Aider has a number of settings that control how it works with
different models.
These model settings are pre-configured for most popular models.
But it can sometimes be helpful to override them or add settings for
a model that aider doesn't know about.
To do that,
create a `.aider.model.settings.yml` file in one of these locations:
- Your home directory.
- The root of your git repo.
- The current directory where you launch aider.
- Or specify a file with the `--model-settings-file <filename>` switch.
If the files above exist, they will be loaded in that order.
Files loaded last will take priority.
The yaml file should be a list of dictionary objects, one for each model, as follows:
```
- name: "gpt-3.5-turbo"
edit_format: "whole"
weak_model_name: "gpt-3.5-turbo"
use_repo_map: false
send_undo_reply: false
accepts_images: false
lazy: false
reminder_as_sys_msg: true
examples_as_sys_msg: false
- name: "gpt-4-turbo-2024-04-09"
edit_format: "udiff"
weak_model_name: "gpt-3.5-turbo"
use_repo_map: true
send_undo_reply: true
accepts_images: true
lazy: true
reminder_as_sys_msg: true
examples_as_sys_msg: false
```
@ -41,17 +41,14 @@ cog.outl("```")
####### #######
# Main: # Main:
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
## Specify the OpenAI API key ## Specify the OpenAI API key
#openai-api-key: #openai-api-key:
## Specify the Anthropic API key ## Specify the Anthropic API key
#anthropic-api-key: #anthropic-api-key:
## Specify the model to use for the main chat (default: gpt-4o) ## Specify the model to use for the main chat
#model: gpt-4o #model:
## Use claude-3-opus-20240229 model for the main chat ## Use claude-3-opus-20240229 model for the main chat
#opus: false #opus: false
@ -93,10 +90,10 @@ cog.outl("```")
#openai-organization-id: #openai-organization-id:
## Specify a file with aider model settings for unknown models ## Specify a file with aider model settings for unknown models
#model-settings-file: #model-settings-file: .aider.model.settings.yml
## Specify a file with context window and costs for unknown models ## Specify a file with context window and costs for unknown models
#model-metadata-file: #model-metadata-file: .aider.model.metadata.json
## Verify the SSL cert when connecting to models (default: True) ## Verify the SSL cert when connecting to models (default: True)
#verify-ssl: true #verify-ssl: true
@ -131,6 +128,9 @@ cog.outl("```")
## Restore the previous chat history messages (default: False) ## Restore the previous chat history messages (default: False)
#restore-chat-history: false #restore-chat-history: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
################## ##################
# Output Settings: # Output Settings:
@ -188,6 +188,9 @@ cog.outl("```")
## Attribute aider commits in the git committer name (default: True) ## Attribute aider commits in the git committer name (default: True)
#attribute-committer: true #attribute-committer: true
## Prefix commit messages with 'aider: ' (default: False)
#attribute-commit-message: false
## Perform a dry run without modifying files (default: False) ## Perform a dry run without modifying files (default: False)
#dry-run: false #dry-run: false
@ -248,6 +251,9 @@ cog.outl("```")
## Print the system prompts and exit (debug) ## Print the system prompts and exit (debug)
#show-prompts: false #show-prompts: false
## Do all startup activities then exit before accepting user input (debug)
#exit: false
## Specify a single message to send the LLM, process reply then exit (disables chat mode) ## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#message: #message:
@ -54,17 +54,14 @@ cog.outl("```")
####### #######
# Main: # Main:
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#AIDER_LLM_HISTORY_FILE=
## Specify the OpenAI API key ## Specify the OpenAI API key
#OPENAI_API_KEY= #OPENAI_API_KEY=
## Specify the Anthropic API key ## Specify the Anthropic API key
#ANTHROPIC_API_KEY= #ANTHROPIC_API_KEY=
## Specify the model to use for the main chat (default: gpt-4o) ## Specify the model to use for the main chat
#AIDER_MODEL=gpt-4o #AIDER_MODEL=
## Use claude-3-opus-20240229 model for the main chat ## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS= #AIDER_OPUS=
@ -106,10 +103,10 @@ cog.outl("```")
#OPENAI_ORGANIZATION_ID= #OPENAI_ORGANIZATION_ID=
## Specify a file with aider model settings for unknown models ## Specify a file with aider model settings for unknown models
#AIDER_MODEL_SETTINGS_FILE= #AIDER_MODEL_SETTINGS_FILE=.aider.model.settings.yml
## Specify a file with context window and costs for unknown models ## Specify a file with context window and costs for unknown models
#AIDER_MODEL_METADATA_FILE= #AIDER_MODEL_METADATA_FILE=.aider.model.metadata.json
## Verify the SSL cert when connecting to models (default: True) ## Verify the SSL cert when connecting to models (default: True)
#AIDER_VERIFY_SSL=true #AIDER_VERIFY_SSL=true
@ -144,6 +141,9 @@ cog.outl("```")
## Restore the previous chat history messages (default: False) ## Restore the previous chat history messages (default: False)
#AIDER_RESTORE_CHAT_HISTORY=false #AIDER_RESTORE_CHAT_HISTORY=false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#AIDER_LLM_HISTORY_FILE=
################## ##################
# Output Settings: # Output Settings:
@ -201,6 +201,9 @@ cog.outl("```")
## Attribute aider commits in the git committer name (default: True) ## Attribute aider commits in the git committer name (default: True)
#AIDER_ATTRIBUTE_COMMITTER=true #AIDER_ATTRIBUTE_COMMITTER=true
## Prefix commit messages with 'aider: ' (default: False)
#AIDER_ATTRIBUTE_COMMIT_MESSAGE=false
## Perform a dry run without modifying files (default: False) ## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false #AIDER_DRY_RUN=false
@ -258,6 +261,9 @@ cog.outl("```")
## Print the system prompts and exit (debug) ## Print the system prompts and exit (debug)
#AIDER_SHOW_PROMPTS=false #AIDER_SHOW_PROMPTS=false
## Do all startup activities then exit before accepting user input (debug)
#AIDER_EXIT=false
## Specify a single message to send the LLM, process reply then exit (disables chat mode) ## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#AIDER_MESSAGE= #AIDER_MESSAGE=
@ -20,35 +20,35 @@ from aider.args import get_md_help
cog.out(get_md_help()) cog.out(get_md_help())
]]]--> ]]]-->
``` ```
usage: aider [-h] [--llm-history-file] [--openai-api-key] usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
[--anthropic-api-key] [--model] [--opus] [--sonnet] [--opus] [--sonnet] [--4] [--4o] [--4-turbo]
[--4] [--4o] [--4-turbo] [--35turbo] [--models] [--35turbo] [--models] [--openai-api-base]
[--openai-api-base] [--openai-api-type] [--openai-api-type] [--openai-api-version]
[--openai-api-version] [--openai-api-deployment-id] [--openai-api-deployment-id] [--openai-organization-id]
[--openai-organization-id] [--model-settings-file] [--model-settings-file] [--model-metadata-file]
[--model-metadata-file]
[--verify-ssl | --no-verify-ssl] [--edit-format] [--verify-ssl | --no-verify-ssl] [--edit-format]
[--weak-model] [--weak-model]
[--show-model-warnings | --no-show-model-warnings] [--show-model-warnings | --no-show-model-warnings]
[--map-tokens] [--max-chat-history-tokens] [--env-file] [--map-tokens] [--max-chat-history-tokens] [--env-file]
[--input-history-file] [--chat-history-file] [--input-history-file] [--chat-history-file]
[--restore-chat-history | --no-restore-chat-history] [--restore-chat-history | --no-restore-chat-history]
[--dark-mode] [--light-mode] [--pretty | --no-pretty] [--llm-history-file] [--dark-mode] [--light-mode]
[--stream | --no-stream] [--user-input-color] [--pretty | --no-pretty] [--stream | --no-stream]
[--tool-output-color] [--tool-error-color] [--user-input-color] [--tool-output-color]
[--assistant-output-color] [--code-theme] [--tool-error-color] [--assistant-output-color]
[--show-diffs] [--git | --no-git] [--code-theme] [--show-diffs] [--git | --no-git]
[--gitignore | --no-gitignore] [--aiderignore] [--gitignore | --no-gitignore] [--aiderignore]
[--auto-commits | --no-auto-commits] [--auto-commits | --no-auto-commits]
[--dirty-commits | --no-dirty-commits] [--dirty-commits | --no-dirty-commits]
[--attribute-author | --no-attribute-author] [--attribute-author | --no-attribute-author]
[--attribute-committer | --no-attribute-committer] [--attribute-committer | --no-attribute-committer]
[--attribute-commit-message | --no-attribute-commit-message]
[--dry-run | --no-dry-run] [--commit] [--lint] [--dry-run | --no-dry-run] [--commit] [--lint]
[--lint-cmd] [--auto-lint | --no-auto-lint] [--lint-cmd] [--auto-lint | --no-auto-lint]
[--test-cmd] [--auto-test | --no-auto-test] [--test] [--test-cmd] [--auto-test | --no-auto-test] [--test]
[--vim] [--voice-language] [--version] [--check-update] [--vim] [--voice-language] [--version] [--check-update]
[--skip-check-update] [--apply] [--yes] [-v] [--skip-check-update] [--apply] [--yes] [-v]
[--show-repo-map] [--show-prompts] [--message] [--show-repo-map] [--show-prompts] [--exit] [--message]
[--message-file] [--encoding] [-c] [--gui] [--message-file] [--encoding] [-c] [--gui]
``` ```
@ -63,10 +63,6 @@ Aliases:
## Main: ## Main:
### `--llm-history-file LLM_HISTORY_FILE`
Log the conversation with the LLM to this file (for example, .aider.llm.history)
Environment variable: `AIDER_LLM_HISTORY_FILE`
### `--openai-api-key OPENAI_API_KEY` ### `--openai-api-key OPENAI_API_KEY`
Specify the OpenAI API key Specify the OpenAI API key
Environment variable: `OPENAI_API_KEY` Environment variable: `OPENAI_API_KEY`
@ -76,8 +72,7 @@ Specify the Anthropic API key
Environment variable: `ANTHROPIC_API_KEY` Environment variable: `ANTHROPIC_API_KEY`
### `--model MODEL` ### `--model MODEL`
Specify the model to use for the main chat (default: gpt-4o) Specify the model to use for the main chat
Default: gpt-4o
Environment variable: `AIDER_MODEL` Environment variable: `AIDER_MODEL`
### `--opus` ### `--opus`
@ -140,10 +135,12 @@ Environment variable: `OPENAI_ORGANIZATION_ID`
### `--model-settings-file MODEL_SETTINGS_FILE` ### `--model-settings-file MODEL_SETTINGS_FILE`
Specify a file with aider model settings for unknown models Specify a file with aider model settings for unknown models
Default: .aider.model.settings.yml
Environment variable: `AIDER_MODEL_SETTINGS_FILE` Environment variable: `AIDER_MODEL_SETTINGS_FILE`
### `--model-metadata-file MODEL_METADATA_FILE` ### `--model-metadata-file MODEL_METADATA_FILE`
Specify a file with context window and costs for unknown models Specify a file with context window and costs for unknown models
Default: .aider.model.metadata.json
Environment variable: `AIDER_MODEL_METADATA_FILE` Environment variable: `AIDER_MODEL_METADATA_FILE`
### `--verify-ssl` ### `--verify-ssl`
@ -204,6 +201,10 @@ Aliases:
- `--restore-chat-history` - `--restore-chat-history`
- `--no-restore-chat-history` - `--no-restore-chat-history`
### `--llm-history-file LLM_HISTORY_FILE`
Log the conversation with the LLM to this file (for example, .aider.llm.history)
Environment variable: `AIDER_LLM_HISTORY_FILE`
## Output Settings: ## Output Settings:
### `--dark-mode` ### `--dark-mode`
@ -316,6 +317,14 @@ Aliases:
- `--attribute-committer` - `--attribute-committer`
- `--no-attribute-committer` - `--no-attribute-committer`
### `--attribute-commit-message`
Prefix commit messages with 'aider: ' (default: False)
Default: False
Environment variable: `AIDER_ATTRIBUTE_COMMIT_MESSAGE`
Aliases:
- `--attribute-commit-message`
- `--no-attribute-commit-message`
### `--dry-run` ### `--dry-run`
Perform a dry run without modifying files (default: False) Perform a dry run without modifying files (default: False)
Default: False Default: False
@ -349,7 +358,7 @@ Aliases:
- `--auto-lint` - `--auto-lint`
- `--no-auto-lint` - `--no-auto-lint`
### `--test-cmd` ### `--test-cmd VALUE`
Specify command to run tests Specify command to run tests
Default: [] Default: []
Environment variable: `AIDER_TEST_CMD` Environment variable: `AIDER_TEST_CMD`
@ -418,6 +427,11 @@ Print the system prompts and exit (debug)
Default: False Default: False
Environment variable: `AIDER_SHOW_PROMPTS` Environment variable: `AIDER_SHOW_PROMPTS`
### `--exit`
Do all startup activities then exit before accepting user input (debug)
Default: False
Environment variable: `AIDER_EXIT`
### `--message COMMAND` ### `--message COMMAND`
Specify a single message to send the LLM, process reply then exit (disables chat mode) Specify a single message to send the LLM, process reply then exit (disables chat mode)
Environment variable: `AIDER_MESSAGE` Environment variable: `AIDER_MESSAGE`
@ -44,3 +44,6 @@ Aider marks commits that it either authored or committed.
You can use `--no-attribute-author` and `--no-attribute-committer` to disable You can use `--no-attribute-author` and `--no-attribute-committer` to disable
modification of the git author and committer name fields. modification of the git author and committer name fields.
Additionally, you can use `--attribute-commit-message` to prefix commit messages with 'aider: '.
This option is disabled by default, but can be useful for easily identifying commits made by aider.
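For example, with the prefix in place you can filter aider's commits out of the history. The snippet below builds a throwaway repo just to illustrate; the `aider: ` prefix matches the option's documented behavior:

```shell
# Demo: find commits made with --attribute-commit-message enabled,
# which prefixes their messages with 'aider: '.
# A throwaway repo is created here purely for illustration.
tmp=$(mktemp -d) && cd "$tmp" && git init -q

git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "aider: add login form"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix typo by hand"

# Only the aider-attributed commit matches:
git log --oneline --grep='^aider: '
```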
@ -0,0 +1,42 @@
---
parent: Usage
nav_order: 700
description: Add images and URLs to the aider coding chat.
---
# Images & URLs
You can add images and URLs to the aider chat.
## Images
Aider supports working with image files for many vision-capable models
like GPT-4o and Claude 3.5 Sonnet.
Adding images to a chat can be helpful in many situations:
- Add screenshots of web pages or UIs that you want aider to build or modify.
- Show aider a mockup of a UI you want to build.
- Screenshot an error message that is otherwise hard to copy & paste as text.
- Etc.
You can add images to the chat just like you would
add any other file:
- Use `/add <image-filename>` from within the chat
- Launch aider with image filenames on the command line: `aider <image-filename>` along with any other command line arguments you need.
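For example, a mockup and the file to modify can be added together at launch (a sketch; the filenames are hypothetical):

```
aider mockup.png index.html
```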
## URLs
Aider can scrape the text from URLs and add it to the chat.
This can be helpful to:
- Include documentation pages for less popular APIs.
- Include the latest docs for libraries or packages that are newer than the model's training cutoff date.
- Etc.
To add URLs to the chat:
- Use `/web <url>`
- Just paste the URL into the chat and aider will ask if you want to add it.
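For example, from within the chat (a sketch; the URL is hypothetical):

```
/web https://example.com/docs/api-reference
```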

View file

@@ -24,6 +24,14 @@ Note that this is different than being a "ChatGPT Plus" subscriber. Note that this is different than being a "ChatGPT Plus" subscriber.
To work with Anthropic's models like Claude 3 Opus you need a paid To work with Anthropic's models like Claude 3 Opus you need a paid
[Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api). [Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api).
## Manage your python environment
Using a Python
[virtual environment](https://docs.python.org/3/library/venv.html)
is recommended.
Or, you could consider
[installing aider using pipx](/docs/install/pipx.html).
## Windows install ## Windows install
``` ```

View file

@@ -12,20 +12,10 @@ The steps below are completely optional. The steps below are completely optional.
{:toc} {:toc}
## Store your api key ## Store your api keys
You can place your api key in an environment variable:
* `export OPENAI_API_KEY=sk-...` on Linux or Mac
* `setx OPENAI_API_KEY sk-...` in Windows PowerShell
Or you can create a `.aider.conf.yml` file in your home directory.
Put a line in it like this to specify your api key:
```
openai-api-key: sk-...
```
You can place your [api keys in a `.env` file](/docs/config/dotenv.html)
and they will be loaded automatically whenever you run aider.
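For example, a `.env` file in your home directory or git repo root might look like this (a sketch; the key values are placeholders):

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```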
## Enable Playwright ## Enable Playwright

View file

@@ -16,20 +16,6 @@ While [aider can connect to almost any LLM](/docs/llms.html), While [aider can connect to almost any LLM](/docs/llms.html),
it works best with models that score well on the benchmarks. it works best with models that score well on the benchmarks.
## Claude 3.5 Sonnet takes the top spot
Claude 3.5 Sonnet is now the top ranked model on aider's code editing leaderboard.
DeepSeek Coder V2 only spent 4 days in the top spot.
The new Sonnet came in 3rd on aider's refactoring leaderboard, behind GPT-4o and Opus.
Sonnet ranked #1 when using the "whole" editing format,
but it also scored very well with
aider's "diff" editing format.
This format allows it to return code changes as diffs -- saving time and token costs,
and making it practical to work with larger source files.
As such, aider uses "diff" by default with this new Sonnet model.
## Code editing leaderboard ## Code editing leaderboard
[Aider's code editing benchmark](/docs/benchmarks.html#the-benchmark) asks the LLM to edit python source files to complete 133 small coding exercises. This benchmark measures the LLM's coding ability, but also whether it can consistently emit code edits in the format specified in the system prompt. [Aider's code editing benchmark](/docs/benchmarks.html#the-benchmark) asks the LLM to edit python source files to complete 133 small coding exercises. This benchmark measures the LLM's coding ability, but also whether it can consistently emit code edits in the format specified in the system prompt.

View file

@@ -27,7 +27,6 @@ Aider works best with these models, which are skilled at editing code: Aider works best with these models, which are skilled at editing code:
Aider works with a number of **free** API providers: Aider works with a number of **free** API providers:
- The [DeepSeek Coder V2](/docs/llms/deepseek.html) model gets the top score on aider's code editing benchmark. DeepSeek has been offering 5M free tokens of API usage.
- Google's [Gemini 1.5 Pro](/docs/llms/gemini.html) works with aider, with - Google's [Gemini 1.5 Pro](/docs/llms/gemini.html) works with aider, with
code editing capabilities similar to GPT-3.5. code editing capabilities similar to GPT-3.5.
- You can use [Llama 3 70B on Groq](/docs/llms/groq.html) which is comparable to GPT-3.5 in code editing performance. - You can use [Llama 3 70B on Groq](/docs/llms/groq.html) which is comparable to GPT-3.5 in code editing performance.

View file

@@ -19,12 +19,12 @@ pip install aider-chat pip install aider-chat
export ANTHROPIC_API_KEY=<key> # Mac/Linux export ANTHROPIC_API_KEY=<key> # Mac/Linux
setx ANTHROPIC_API_KEY <key> # Windows setx ANTHROPIC_API_KEY <key> # Windows
# Aider uses Claude 3.5 Sonnet by default (or use --sonnet)
aider
# Claude 3 Opus # Claude 3 Opus
aider --opus aider --opus
# Claude 3.5 Sonnet
aider --sonnet
# List models available from Anthropic # List models available from Anthropic
aider --models anthropic/ aider --models anthropic/
``` ```

View file

@@ -7,7 +7,6 @@ nav_order: 500 nav_order: 500
Aider can connect to the DeepSeek.com API. Aider can connect to the DeepSeek.com API.
The DeepSeek Coder V2 model gets the top score on aider's code editing benchmark. The DeepSeek Coder V2 model gets the top score on aider's code editing benchmark.
DeepSeek appears to grant 5M tokens of free API usage to new accounts.
``` ```
pip install aider-chat pip install aider-chat

View file

@@ -19,7 +19,7 @@ pip install aider-chat pip install aider-chat
export OPENAI_API_KEY=<key> # Mac/Linux export OPENAI_API_KEY=<key> # Mac/Linux
setx OPENAI_API_KEY <key> # Windows setx OPENAI_API_KEY <key> # Windows
# GPT-4o is the best model, used by default # Aider uses gpt-4o by default (or use --4o)
aider aider
# GPT-4 Turbo (1106) # GPT-4 Turbo (1106)

View file

@@ -8,70 +8,3 @@ nav_order: 900 nav_order: 900
{% include model-warnings.md %} {% include model-warnings.md %}
## Adding settings for missing models
You can register model settings used by aider for unknown models.
Create a `.aider.models.yml` file in one of these locations:
- Your home directory.
- The root of your git repo.
- The current directory where you launch aider.
- Or specify a specific file with the `--model-settings-file <filename>` switch.
If the files above exist, they will be loaded in that order.
Files loaded last will take priority.
The yaml file should be a list of dictionary objects, one for each model, as follows:
```
- name: "gpt-3.5-turbo"
edit_format: "whole"
weak_model_name: "gpt-3.5-turbo"
use_repo_map: false
send_undo_reply: false
accepts_images: false
lazy: false
reminder_as_sys_msg: true
examples_as_sys_msg: false
- name: "gpt-4-turbo-2024-04-09"
edit_format: "udiff"
weak_model_name: "gpt-3.5-turbo"
use_repo_map: true
send_undo_reply: true
accepts_images: true
lazy: true
reminder_as_sys_msg: true
examples_as_sys_msg: false
```
## Specifying context window size and token costs
You can register context window limits and costs for models that aren't known
to aider. Create a `.aider.litellm.models.json` file in one of these locations:
- Your home directory.
- The root of your git repo.
- The current directory where you launch aider.
- Or specify a specific file with the `--model-metadata-file <filename>` switch.
If the files above exist, they will be loaded in that order.
Files loaded last will take priority.
The json file should be a dictionary with an entry for each model, as follows:
```
{
"deepseek-chat": {
"max_tokens": 4096,
"max_input_tokens": 32000,
"max_output_tokens": 4096,
"input_cost_per_token": 0.00000014,
"output_cost_per_token": 0.00000028,
"litellm_provider": "deepseek",
"mode": "chat"
}
}
```
See
[litellm's model_prices_and_context_window.json file](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json) for more examples.
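A quick way to sanity-check such a file before launching aider is to confirm it parses as valid JSON, e.g. with Python's standard `json` module (a sketch; the entry mirrors the example above):

```
import json

# The same illustrative entry as above, parsed to confirm it is valid JSON
metadata = json.loads("""
{
  "deepseek-chat": {
    "max_tokens": 4096,
    "max_input_tokens": 32000,
    "max_output_tokens": 4096,
    "input_cost_per_token": 0.00000014,
    "output_cost_per_token": 0.00000028,
    "litellm_provider": "deepseek",
    "mode": "chat"
  }
}
""")

entry = metadata["deepseek-chat"]
print(entry["max_input_tokens"])
```

If `json.loads` raises a `JSONDecodeError`, the file has a syntax problem (a trailing comma is a common culprit).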

View file

@@ -86,8 +86,13 @@ files which have dependencies. files which have dependencies.
Aider optimizes the repo map by Aider optimizes the repo map by
selecting the most important parts of the codebase selecting the most important parts of the codebase
which will which will
fit into the token budget assigned by the user fit into the active token budget.
(via the `--map-tokens` switch, which defaults to 1k tokens).
The token budget is
influenced by the `--map-tokens` switch, which defaults to 1k tokens.
Aider adjusts the size of the repo map dynamically based on the state of the chat. It will usually stay within that setting's value. But it does expand the repo map
significantly at times, especially when no files have been added to the chat and aider needs to understand the entire repo as well as possible.
The sample map shown above doesn't contain *every* class, method and function from those The sample map shown above doesn't contain *every* class, method and function from those
files. files.
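The `--map-tokens` budget mentioned above can be raised when launching aider (a sketch; the value is illustrative):

```
aider --map-tokens 2048
```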

View file

@@ -80,7 +80,7 @@ See the See the
[Coder.create() and Coder.__init__() methods](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py) [Coder.create() and Coder.__init__() methods](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
for all the supported arguments. for all the supported arguments.
It can also be helpful to set the equivalend of `--yes` by doing this: It can also be helpful to set the equivalent of `--yes` by doing this:
``` ```
from aider.io import InputOutput from aider.io import InputOutput

View file

@@ -21,8 +21,8 @@ In these cases, here are some things you might try. In these cases, here are some things you might try.
## Use a capable model ## Use a capable model
If possible try using GPT-4o or Opus, as they are the strongest and most If possible try using GPT-4o, Claude 3.5 Sonnet or Claude 3 Opus,
capable models. as they are the strongest and most capable models.
Weaker models Weaker models
are more prone to are more prone to

View file

@@ -9,6 +9,7 @@ description: Intro and tutorial videos made by aider users. description: Intro and tutorial videos made by aider users.
Here are a few tutorial videos made by aider users: Here are a few tutorial videos made by aider users:
- [Aider tips and Example use](https://www.youtube.com/watch?v=OsChkvGGDgw) -- techfren - [Aider tips and Example use](https://www.youtube.com/watch?v=OsChkvGGDgw) -- techfren
- [Generate application with just one prompt using Aider](https://www.youtube.com/watch?v=Y-_0VkMUiPc&t=78s) -- AICodeKing
- [Aider : the production ready AI coding assistant you've been waiting for](https://www.youtube.com/watch?v=zddJofosJuM) -- Learn Code With JV - [Aider : the production ready AI coding assistant you've been waiting for](https://www.youtube.com/watch?v=zddJofosJuM) -- Learn Code With JV
- [Holy Grail: FREE Coding Assistant That Can Build From EXISTING CODE BASE](https://www.youtube.com/watch?v=df8afeb1FY8) -- Matthew Berman - [Holy Grail: FREE Coding Assistant That Can Build From EXISTING CODE BASE](https://www.youtube.com/watch?v=df8afeb1FY8) -- Matthew Berman
- [Aider: This AI Coder Can Create AND Update Git Codebases](https://www.youtube.com/watch?v=EqLyFT78Sig) -- Ian Wootten - [Aider: This AI Coder Can Create AND Update Git Codebases](https://www.youtube.com/watch?v=EqLyFT78Sig) -- Ian Wootten

View file

@@ -5,7 +5,8 @@ highlight_image: /assets/benchmarks-udiff.jpg highlight_image: /assets/benchmarks-udiff.jpg
nav_exclude: true nav_exclude: true
--- ---
{% if page.date %} {% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p> <p class="post-date">{{ page.date | date: "%B %d, %Y" }}, by Paul Gauthier
</p>
{% endif %} {% endif %}
# Unified diffs make GPT-4 Turbo 3X less lazy # Unified diffs make GPT-4 Turbo 3X less lazy

View file

@@ -37,26 +37,31 @@ Use /help to see in-chat commands, run with --help to see cmd line args Use /help to see in-chat commands, run with --help to see cmd line args
## Adding files ## Adding files
Just add the files that aider will need to *edit*. Add the files that aider will need to *edit*.
Don't add a bunch of extra files.
If you add too many files, the LLM can get overwhelmed If you add too many files, the LLM can get overwhelmed
and confused (and it costs more tokens). and confused (and it costs more tokens).
Aider will automatically Aider will automatically
pull in content from related files so that it can pull in content from related files so that it can
[understand the rest of your code base](https://aider.chat/docs/repomap.html). [understand the rest of your code base](https://aider.chat/docs/repomap.html).
You can also run aider without naming any files and use the in-chat You add files to the chat by naming them on the aider command line.
Or, you can use the in-chat
`/add` command to add files. `/add` command to add files.
Or you can skip adding files completely, and aider You can use aider without adding any files,
will try to figure out which files need to be edited based and it will try to figure out which files need to be edited based
on your requests. on your requests.
But you'll get the best results if you add the files that need
to be edited.
## LLMs ## LLMs
Aider uses GPT-4o by default, but you can Aider uses GPT-4o by default, but you can
[connect to many different LLMs](/docs/llms.html). [connect to many different LLMs](/docs/llms.html).
Claude 3 Opus is another model which works very well with aider, Claude 3.5 Sonnet also works very well with aider,
which you can use by running `aider --opus`. which you can use by running `aider --sonnet`.
You can run `aider --model XXX` to launch aider with You can run `aider --model XXX` to launch aider with
a specific model. a specific model.
@@ -68,8 +73,8 @@ Or, during your chat you can switch models with the in-chat Or, during your chat you can switch models with the in-chat
Ask aider to make changes to your code. Ask aider to make changes to your code.
It will show you some diffs of the changes it is making to It will show you some diffs of the changes it is making to
complete your request. complete your request.
Aider will git commit all of its changes, [Aider will git commit all of its changes](/docs/git.html),
so they are easy to track and undo. so they are easy to track and undo.
You can always use the `/undo` command to undo changes you don't You can always use the `/undo` command to undo AI changes that you don't
like. like.

View file

@@ -45,6 +45,7 @@ and works best with GPT-4o, Claude 3.5 Sonnet, Claude 3 Opus and DeepSeek Coder and works best with GPT-4o, Claude 3.5 Sonnet, Claude 3 Opus and DeepSeek Coder
# Because this page is rendered by GitHub as the repo README # Because this page is rendered by GitHub as the repo README
cog.out(open("website/_includes/get-started.md").read()) cog.out(open("website/_includes/get-started.md").read())
--> -->
You can get started quickly like this: You can get started quickly like this:
``` ```
@@ -53,18 +54,13 @@ $ pip install aider-chat $ pip install aider-chat
# Change directory into a git repo # Change directory into a git repo
$ cd /to/your/git/repo $ cd /to/your/git/repo
# Work with Claude 3.5 Sonnet on your repo
$ export ANTHROPIC_API_KEY=your-key-goes-here
$ aider
# Work with GPT-4o on your repo # Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here $ export OPENAI_API_KEY=your-key-goes-here
$ aider $ aider
# Or, work with Anthropic's models
$ export ANTHROPIC_API_KEY=your-key-goes-here
# Claude 3 Opus
$ aider --opus
# Claude 3.5 Sonnet
$ aider --sonnet
``` ```
<!-- NOOP --> <!-- NOOP -->
@@ -93,8 +89,8 @@ and can [connect to almost any LLM](https://aider.chat/docs/llms.html). and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
- Edit files in your editor while chatting with aider, - Edit files in your editor while chatting with aider,
and it will always use the latest version. and it will always use the latest version.
Pair program with AI. Pair program with AI.
- Add images to the chat (GPT-4o, GPT-4 Turbo, etc). - [Add images to the chat](https://aider.chat/docs/images-urls.html) (GPT-4o, Claude 3.5 Sonnet, etc).
- Add URLs to the chat and aider will read their content. - [Add URLs to the chat](https://aider.chat/docs/images-urls.html) and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html). - [Code with your voice](https://aider.chat/docs/voice.html).
@@ -139,5 +135,6 @@ projects like django, scikitlearn, matplotlib, etc. projects like django, scikitlearn, matplotlib, etc.
- *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470) - *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.* -- [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548) - *After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.* -- [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs) - *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life.* -- [maledorak](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *Best agent for actual dev work in existing codebases.* -- [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20) - *Best agent for actual dev work in existing codebases.* -- [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
<!--[[[end]]]--> <!--[[[end]]]-->