Merge branch 'main' into feature/litellm-mcp

Commit c1a5e8d0d5
Quinlan Jager, 2025-05-12 08:08:55 -07:00
76 changed files with 3366 additions and 1403 deletions


@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.9", "3.10", "3.11", "3.12"]
+ python-version: ["3.10", "3.11", "3.12"]
steps:
- name: Set up Python ${{ matrix.python-version }}

.github/workflows/pre-commit.yml (new file, 48 lines)

@@ -0,0 +1,48 @@
---
name: pre-commit
on:
pull_request:
push:
workflow_dispatch:
jobs:
pre-commit:
runs-on: ubuntu-latest
env:
RAW_LOG: pre-commit.log
CS_XML: pre-commit.xml
steps:
- run: sudo apt-get update && sudo apt-get install cppcheck uncrustify
if: false
- uses: actions/checkout@v4
- run: python -m pip install pre-commit
- uses: actions/cache/restore@v4
with:
path: ~/.cache/pre-commit/
key: pre-commit-4|${{ env.pythonLocation }}|${{ hashFiles('.pre-commit-config.yaml') }}
- name: Run pre-commit hooks
env:
SKIP: no-commit-to-branch
run: |
set -o pipefail
pre-commit gc
pre-commit run --show-diff-on-failure --color=always --all-files | tee ${RAW_LOG}
- name: Convert Raw Log to Checkstyle format (launch action)
uses: mdeweerd/logToCheckStyle@v2025.1.1
if: ${{ failure() }}
with:
in: ${{ env.RAW_LOG }}
# out: ${{ env.CS_XML }}
- uses: actions/cache/save@v4
if: ${{ ! cancelled() }}
with:
path: ~/.cache/pre-commit/
key: pre-commit-4|${{ env.pythonLocation }}|${{ hashFiles('.pre-commit-config.yaml') }}
- name: Provide log as artifact
uses: actions/upload-artifact@v4
if: ${{ ! cancelled() }}
with:
name: precommit-logs
path: |
${{ env.RAW_LOG }}
${{ env.CS_XML }}
retention-days: 2


@@ -25,7 +25,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.9", "3.10", "3.11", "3.12"]
+ python-version: ["3.10", "3.11", "3.12"]
steps:
- name: Check out repository


@@ -25,7 +25,7 @@ jobs:
runs-on: windows-latest
strategy:
matrix:
- python-version: ["3.9", "3.10", "3.11", "3.12"]
+ python-version: ["3.10", "3.11", "3.12"]
steps:
- name: Check out repository


@@ -15,7 +15,7 @@ jobs:
runs-on: windows-latest
strategy:
matrix:
- python-version: ["3.9", "3.10", "3.11", "3.12"]
+ python-version: ["3.10", "3.11", "3.12"]
defaults:
run:
shell: pwsh # Use PowerShell for all run steps


@@ -1,13 +1,23 @@
# Release history
### main branch
### Aider v0.83.1
- Improved user language detection by correctly normalizing hyphenated language codes (e.g., `en-US` to `en`) and enhancing the validation of locale results.
- Prevented Aider from instructing the LLM to reply in 'C' or 'POSIX' when these are detected as the system locale.
- Displayed a spinner with the model name when generating commit messages.
- Aider wrote 74% of the code in this release.
### Aider v0.83.0
- Added support for `qwen3-235b` models, including `openrouter/qwen/qwen3-235b-a22b`.
- Added support for `gemini-2.5-pro-preview-05-06` models.
- Added repomap support for OCaml and OCaml interface files, by Andrey Popp.
- Added support for `qwen3-235b` models.
- Added repo-map support for OCaml and OCaml interface files, by Andrey Popp.
- Added a spinner animation while waiting for the LLM to start streaming its response.
- Updated the spinner animation to a Knight Rider style.
- Introduced `--attribute-co-authored-by` option to add co-author trailer to commit messages, by Andrew Grigorev.
- Updated Gemini model aliases (e.g., `gemini`, `gemini-2.5-pro`) to point to the `05-06` preview versions.
- Marked Gemini 2.5 Pro preview models as `overeager` by default.
- Commit message prompt specifies the user's language.
- Updated the default weak model for Gemini 2.5 Pro models to `gemini/gemini-2.5-flash-preview-04-17`.
- Corrected `gemini-2.5-pro-exp-03-25` model settings to reflect its lack of support for `thinking_budget`.
- Ensured model-specific system prompt prefixes are placed on a new line before the main system prompt.
@@ -20,7 +30,18 @@
- The `aider scrape` command-line tool will now use Playwright for web scraping if it is available, by Jon Keys.
- Fixed linter command execution on Windows by adopting `oslex` for argument quoting, by Titusz Pan.
- Improved cross-platform display of shell commands by using `oslex` for robust argument quoting, by Titusz Pan.
- Aider wrote 46% of the code in this release.
- Improved `/ask` mode to instruct the LLM to elide unchanging code in its responses.
- Ensured web scraping in the GUI also respects Playwright availability and the `--disable-playwright` flag.
- Improved display of filenames in the prompt header using rich Text formatting.
- Enabled `reasoning_effort` for Gemini 2.5 Flash models.
- Added a `--shell-completions` argument to generate shell completion scripts (e.g., for bash, zsh).
- Explicit `--attribute-author` or `--attribute-committer` flags now override the default behavior when `--attribute-co-authored-by` is used, allowing finer control over commit attribution, by Andrew Grigorev.
- Fixed an issue where read-only status of files might not be preserved correctly by some commands (e.g. `/drop` after adding a read-only file).
- The `aider-args` utility (or `python -m aider.args`) now defaults to printing a sample YAML configuration if no arguments are provided.
- Displayed token count progress and the name of the file or identifier being processed during repo map updates.
- Extended the waiting spinner to also show for non-streaming responses and further enhanced its animation with console width clipping, cursor hiding, and a more continuous appearance.
- Dropped support for Python 3.9.
- Aider wrote 55% of the code in this release.
### Aider v0.82.3


@@ -33,7 +33,7 @@ src="https://img.shields.io/badge/📈%20Tokens%2Fweek-15B-3498db?style=flat-squ
<a href="https://openrouter.ai/#options-menu"><img alt="OpenRouter Ranking" title="Aider's ranking among applications on the OpenRouter platform"
src="https://img.shields.io/badge/🏆%20OpenRouter-Top%2020-9b59b6?style=flat-square&labelColor=555555"/></a>
<a href="https://aider.chat/HISTORY.html"><img alt="Singularity" title="Percentage of the new code in Aider's last release written by Aider itself"
- src="https://img.shields.io/badge/🔄%20Singularity-92%25-e74c3c?style=flat-square&labelColor=555555"/></a>
+ src="https://img.shields.io/badge/🔄%20Singularity-54%25-e74c3c?style=flat-square&labelColor=555555"/></a>
<!--[[[end]]]-->
</p>
@@ -135,43 +135,44 @@ See the [installation instructions](https://aider.chat/docs/install.html) and [u
### Community & Resources
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [GitHub Repository](https://github.com/Aider-AI/aider)
- - [Discord Community](https://discord.gg/Tv2uQnR88V)
+ - [Discord Community](https://discord.gg/Y7X7bhMQFV)
- [Blog](https://aider.chat/blog/)
## Kind Words From Users
- *"My life has changed... There's finally an AI coding tool that's good enough to keep up with me... Aider... It's going to rock your world."* — [Eric S. Raymond](https://x.com/esrtweet/status/1910809356381413593)
- *"The best free open source AI coding assistant."* — [IndyDevDan](https://youtu.be/YALpX8oOn78)
- *"The best AI coding assistant so far."* — [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
- *"Aider ... has easily quadrupled my coding productivity."* — [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
- *"It's a cool workflow... Aider's ergonomics are perfect for me."* — [qup](https://news.ycombinator.com/item?id=38185326)
- *"It's really like having your senior developer live right in your Git repo - truly amazing!"* — [rappster](https://github.com/Aider-AI/aider/issues/124)
- *"What an amazing tool. It's incredible."* — [valyagolev](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *"Aider is such an astounding thing!"* — [cgrothaus](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *"It was WAY faster than I would be getting off the ground and making the first few working versions."* — [Daniel Feldman](https://twitter.com/d_feldman/status/1662295077387923456)
- *"THANK YOU for Aider! It really feels like a glimpse into the future of coding."* — [derwiki](https://news.ycombinator.com/item?id=38205643)
- *"It's just amazing. It is freeing me to do things I felt were out my comfort zone before."* — [Dougie](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *"This project is stellar."* — [funkytaco](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *"Amazing project, definitely the best AI coding assistant I've used."* — [joshuavial](https://github.com/Aider-AI/aider/issues/84)
- *"I absolutely love using Aider ... It makes software development feel so much lighter as an experience."* — [principalideal0](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *"I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity."* — [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *"I am an aider addict. I'm getting so much more work done, but in less time."* — [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *"After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever."* — [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *"Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing."* — [Josh Dingus](https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548)
- *"Hands down, this is the best AI coding assistant tool so far."* — [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *"[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life."* — [maledorak](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *"Best agent for actual dev work in existing codebases."* — [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
- *"One of my favorite pieces of software. Blazing trails on new paradigms!"* — [Chris Wall](https://x.com/chris65536/status/1905053299251798432)
- *"Aider has been revolutionary for me and my work."* — [Starry Hope](https://x.com/starryhopeblog/status/1904985812137132056)
- *"Try aider! One of the best ways to vibe code."* — [Chris Wall](https://x.com/Chris65536/status/1905053418961391929)
- *"Aider is hands down the best. And it's free and opensource."* — [AriyaSavakaLurker](https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/)
- *"Aider is also my best friend."* — [jzn21](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/)
- *"Try Aider, it's worth it."* — [jorgejhms](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/)
- *"I like aider :)"* — [Chenwei Cui](https://x.com/ccui42/status/1904965344999145698)
- *"Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes to your codebase all while keeping the developer in control."* — [Reilly Sweetland](https://x.com/rsweetland/status/1904963807237259586)
- *"Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot."* - [autopoietist](https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101)
- *"Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone."* — [Joshua D Vander Hook](https://x.com/jodavaho/status/1911154899057795218)
- *"thanks to aider, i have started and finished three personal projects within the last two days"* — [joseph stalzyn](https://x.com/anitaheeder/status/1908338609645904160)
- *"Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words."* — [koleok](https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783)
- *"aider is really cool"* — [kache (@yacineMTB)](https://x.com/yacineMTB/status/1911224442430124387)
- *"My life has changed... Aider... It's going to rock your world."* — [Eric S. Raymond on X](https://x.com/esrtweet/status/1910809356381413593)
- *"The best free open source AI coding assistant."* — [IndyDevDan on YouTube](https://youtu.be/YALpX8oOn78)
- *"The best AI coding assistant so far."* — [Matthew Berman on YouTube](https://www.youtube.com/watch?v=df8afeb1FY8)
- *"Aider ... has easily quadrupled my coding productivity."* — [SOLAR_FIELDS on Hacker News](https://news.ycombinator.com/item?id=36212100)
- *"It's a cool workflow... Aider's ergonomics are perfect for me."* — [qup on Hacker News](https://news.ycombinator.com/item?id=38185326)
- *"It's really like having your senior developer live right in your Git repo - truly amazing!"* — [rappster on GitHub](https://github.com/Aider-AI/aider/issues/124)
- *"What an amazing tool. It's incredible."* — [valyagolev on GitHub](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *"Aider is such an astounding thing!"* — [cgrothaus on GitHub](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *"It was WAY faster than I would be getting off the ground and making the first few working versions."* — [Daniel Feldman on X](https://twitter.com/d_feldman/status/1662295077387923456)
- *"THANK YOU for Aider! It really feels like a glimpse into the future of coding."* — [derwiki on Hacker News](https://news.ycombinator.com/item?id=38205643)
- *"It's just amazing. It is freeing me to do things I felt were out my comfort zone before."* — [Dougie on Discord](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *"This project is stellar."* — [funkytaco on GitHub](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *"Amazing project, definitely the best AI coding assistant I've used."* — [joshuavial on GitHub](https://github.com/Aider-AI/aider/issues/84)
- *"I absolutely love using Aider ... It makes software development feel so much lighter as an experience."* — [principalideal0 on Discord](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *"I have been recovering from ... surgeries ... aider ... has allowed me to continue productivity."* — [codeninja on Reddit](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *"I am an aider addict. I'm getting so much more work done, but in less time."* — [dandandan on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *"Aider... blows everything else out of the water hands down, there's no competition whatsoever."* — [SystemSculpt on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *"Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing."* — [Josh Dingus on Discord](https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548)
- *"Hands down, this is the best AI coding assistant tool so far."* — [IndyDevDan on YouTube](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *"[Aider] changed my daily coding workflows. It's mind-blowing how ...(it)... can change your life."* — [maledorak on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *"Best agent for actual dev work in existing codebases."* — [Nick Dobos on X](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
- *"One of my favorite pieces of software. Blazing trails on new paradigms!"* — [Chris Wall on X](https://x.com/chris65536/status/1905053299251798432)
- *"Aider has been revolutionary for me and my work."* — [Starry Hope on X](https://x.com/starryhopeblog/status/1904985812137132056)
- *"Try aider! One of the best ways to vibe code."* — [Chris Wall on X](https://x.com/Chris65536/status/1905053418961391929)
- *"Aider is hands down the best. And it's free and opensource."* — [AriyaSavakaLurker on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/)
- *"Aider is also my best friend."* — [jzn21 on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/)
- *"Try Aider, it's worth it."* — [jorgejhms on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/)
- *"I like aider :)"* — [Chenwei Cui on X](https://x.com/ccui42/status/1904965344999145698)
- *"Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes ... while keeping the developer in control."* — [Reilly Sweetland on X](https://x.com/rsweetland/status/1904963807237259586)
- *"Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot."* - [autopoietist on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101)
- *"Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone."* — [Joshua D Vander Hook on X](https://x.com/jodavaho/status/1911154899057795218)
- *"thanks to aider, i have started and finished three personal projects within the last two days"* — [joseph stalzyn on X](https://x.com/anitaheeder/status/1908338609645904160)
- *"Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words."* — [koleok on Discord](https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783)
- *"Aider ... is the tool to benchmark against."* — [BeetleB on Hacker News](https://news.ycombinator.com/item?id=43930201)
- *"aider is really cool"* — [kache on X](https://x.com/yacineMTB/status/1911224442430124387)


@@ -1,6 +1,6 @@
from packaging import version
- __version__ = "0.82.4.dev"
+ __version__ = "0.83.2.dev"
safe_version = __version__
try:


@@ -6,6 +6,7 @@ import sys
from pathlib import Path
import configargparse
import shtab
from aider import __version__
from aider.args_formatter import (
@@ -39,10 +40,22 @@ def get_parser(default_config_files, git_root):
config_file_parser_class=configargparse.YAMLConfigFileParser,
auto_env_var_prefix="AIDER_",
)
# List of valid edit formats for argparse validation & shtab completion.
# Dynamically gather them from the registered coder classes so the list
# stays in sync if new formats are added.
from aider import coders as _aider_coders
edit_format_choices = sorted(
{
c.edit_format
for c in _aider_coders.__all__
if hasattr(c, "edit_format") and c.edit_format is not None
}
)
group = parser.add_argument_group("Main model")
group.add_argument(
"files", metavar="FILE", nargs="*", help="files to edit with an LLM (optional)"
)
).complete = shtab.FILE
group.add_argument(
"--model",
metavar="MODEL",
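The `edit_format_choices` hunk above gathers the valid formats dynamically from the registered coder classes. A minimal sketch of that pattern, using hypothetical stand-in classes rather than aider's real coders:

```python
# Stand-in coder classes (illustrative only; not aider's real registry).
class EditBlockCoder:
    edit_format = "diff"

class WholeFileCoder:
    edit_format = "whole"

class HelpCoder:
    edit_format = "help"

class AbstractCoder:
    edit_format = None  # excluded: no usable edit format

registered_coders = [EditBlockCoder, WholeFileCoder, HelpCoder, AbstractCoder]

# Collect each distinct edit_format, skipping classes without one,
# so the choices list stays in sync as new coders are registered.
edit_format_choices = sorted(
    {
        c.edit_format
        for c in registered_coders
        if getattr(c, "edit_format", None) is not None
    }
)
print(edit_format_choices)  # → ['diff', 'help', 'whole']
```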
@@ -109,13 +122,13 @@ def get_parser(default_config_files, git_root):
metavar="MODEL_SETTINGS_FILE",
default=".aider.model.settings.yml",
help="Specify a file with aider model settings for unknown models",
)
).complete = shtab.FILE
group.add_argument(
"--model-metadata-file",
metavar="MODEL_METADATA_FILE",
default=".aider.model.metadata.json",
help="Specify a file with context window and costs for unknown models",
)
).complete = shtab.FILE
group.add_argument(
"--alias",
action="append",
@@ -148,6 +161,7 @@ def get_parser(default_config_files, git_root):
"--edit-format",
"--chat-mode",
metavar="EDIT_FORMAT",
choices=edit_format_choices,
default=None,
help="Specify what edit format the LLM should use (default depends on model)",
)
@@ -182,6 +196,7 @@ def get_parser(default_config_files, git_root):
group.add_argument(
"--editor-edit-format",
metavar="EDITOR_EDIT_FORMAT",
choices=edit_format_choices,
default=None,
help="Specify the edit format for the editor model (default: depends on editor model)",
)
@@ -261,13 +276,13 @@ def get_parser(default_config_files, git_root):
metavar="INPUT_HISTORY_FILE",
default=default_input_history_file,
help=f"Specify the chat input history file (default: {default_input_history_file})",
)
).complete = shtab.FILE
group.add_argument(
"--chat-history-file",
metavar="CHAT_HISTORY_FILE",
default=default_chat_history_file,
help=f"Specify the chat history file (default: {default_chat_history_file})",
)
).complete = shtab.FILE
group.add_argument(
"--restore-chat-history",
action=argparse.BooleanOptionalAction,
@@ -279,7 +294,7 @@ def get_parser(default_config_files, git_root):
metavar="LLM_HISTORY_FILE",
default=None,
help="Log the conversation with the LLM to this file (for example, .aider.llm.history)",
)
).complete = shtab.FILE
##########
group = parser.add_argument_group("Output settings")
@@ -405,7 +420,7 @@ def get_parser(default_config_files, git_root):
type=lambda path_str: resolve_aiderignore_path(path_str, git_root),
default=default_aiderignore_file,
help="Specify the aider ignore file (default: .aiderignore in git root)",
)
).complete = shtab.FILE
group.add_argument(
"--subtree-only",
action="store_true",
@@ -551,7 +566,7 @@ def get_parser(default_config_files, git_root):
"--analytics-log",
metavar="ANALYTICS_LOG_FILE",
help="Specify a file to log analytics events",
)
).complete = shtab.FILE
group.add_argument(
"--analytics-disable",
action="store_true",
@@ -618,7 +633,7 @@ def get_parser(default_config_files, git_root):
"Specify a file containing the message to send the LLM, process reply, then exit"
" (disables chat mode)"
),
)
).complete = shtab.FILE
group.add_argument(
"--gui",
"--browser",
@@ -636,7 +651,7 @@ def get_parser(default_config_files, git_root):
"--apply",
metavar="FILE",
help="Apply the changes from the given file instead of running the chat (debug)",
)
).complete = shtab.FILE
group.add_argument(
"--apply-clipboard-edits",
action="store_true",
@@ -697,13 +712,13 @@ def get_parser(default_config_files, git_root):
action="append",
metavar="FILE",
help="specify a file to edit (can be used multiple times)",
)
).complete = shtab.FILE
group.add_argument(
"--read",
action="append",
metavar="FILE",
help="specify a read-only file (can be used multiple times)",
)
).complete = shtab.FILE
group.add_argument(
"--vim",
action="store_true",
@@ -733,7 +748,7 @@ def get_parser(default_config_files, git_root):
"--load",
metavar="LOAD_FILE",
help="Load and execute /commands from a file on launch",
)
).complete = shtab.FILE
group.add_argument(
"--encoding",
default="utf-8",
@@ -766,7 +781,7 @@ def get_parser(default_config_files, git_root):
"Specify the config file (default: search for .aider.conf.yml in git root, cwd"
" or home directory)"
),
)
).complete = shtab.FILE
# This is a duplicate of the argument in the preparser and is a no-op by this time of
# argument parsing, but it's here so that the help is displayed as expected.
group.add_argument(
@@ -774,7 +789,7 @@ def get_parser(default_config_files, git_root):
metavar="ENV_FILE",
default=default_env_file(git_root),
help="Specify the .env file to load (default: .env in git root)",
)
).complete = shtab.FILE
group.add_argument(
"--suggest-shell-commands",
action=argparse.BooleanOptionalAction,
@@ -822,6 +837,17 @@ def get_parser(default_config_files, git_root):
help="Specify which editor to use for the /editor command",
)
supported_shells_list = sorted(list(shtab.SUPPORTED_SHELLS))
group.add_argument(
"--shell-completions",
metavar="SHELL",
choices=supported_shells_list,
help=(
"Print shell completion script for the specified SHELL and exit. Supported shells:"
f" {', '.join(supported_shells_list)}. Example: aider --shell-completions bash"
),
)
##########
group = parser.add_argument_group("Deprecated model settings")
# Add deprecated model shortcut arguments
@@ -870,13 +896,34 @@ def get_sample_dotenv():
def main():
arg = sys.argv[1] if len(sys.argv[1:]) else None
if arg == "md":
print(get_md_help())
elif arg == "dotenv":
print(get_sample_dotenv())
if len(sys.argv) > 1:
command = sys.argv[1]
else:
command = "yaml" # Default to yaml if no command is given
if command == "md":
print(get_md_help())
elif command == "dotenv":
print(get_sample_dotenv())
elif command == "yaml":
print(get_sample_yaml())
elif command == "completion":
if len(sys.argv) > 2:
shell = sys.argv[2]
if shell not in shtab.SUPPORTED_SHELLS:
print(f"Error: Unsupported shell '{shell}'.", file=sys.stderr)
print(f"Supported shells are: {', '.join(shtab.SUPPORTED_SHELLS)}", file=sys.stderr)
sys.exit(1)
parser = get_parser([], None)
parser.prog = "aider" # Set the program name on the parser
print(shtab.complete(parser, shell=shell))
else:
print("Error: Please specify a shell for completion.", file=sys.stderr)
print(f"Usage: python {sys.argv[0]} completion <shell_name>", file=sys.stderr)
print(f"Supported shells are: {', '.join(shtab.SUPPORTED_SHELLS)}", file=sys.stderr)
sys.exit(1)
else:
# Any other unrecognized argument falls back to printing the sample YAML config
print(get_sample_yaml())
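The rewritten `main()` above dispatches on the first CLI argument, defaulting to the sample YAML config. A condensed, self-contained sketch of that dispatch logic (the shell list is abbreviated here; shtab's real `SUPPORTED_SHELLS` is the authoritative set):

```python
SUPPORTED_SHELLS = ("bash", "zsh", "tcsh")  # abbreviated stand-in for shtab's list

def dispatch(argv):
    """Mirror the dispatch above: default to 'yaml' with no subcommand,
    validate the shell name for 'completion', fall back to 'yaml' otherwise."""
    command = argv[1] if len(argv) > 1 else "yaml"
    if command in ("md", "dotenv", "yaml"):
        return command
    if command == "completion":
        if len(argv) > 2 and argv[2] in SUPPORTED_SHELLS:
            return f"completion:{argv[2]}"
        return "error"  # missing or unsupported shell
    return "yaml"  # unrecognized arguments fall back to the YAML sample

print(dispatch(["aider-args"]))                       # → yaml
print(dispatch(["aider-args", "completion", "bash"]))  # → completion:bash
```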


@@ -8,7 +8,7 @@ class AskPrompts(CoderPrompts):
Answer questions about the supplied code.
Always reply to the user in {language}.
Describe code changes however you like. Don't use SEARCH/REPLACE blocks!
If you need to describe code changes, do so *briefly*.
"""
example_messages = []


@@ -28,6 +28,7 @@ from pathlib import Path
from typing import List
from litellm import experimental_mcp_client
from rich.console import Console
from aider import __version__, models, prompts, urls, utils
from aider.analytics import Analytics
@@ -48,6 +49,7 @@ from aider.repo import ANY_GIT_ERROR, GitRepo
from aider.repomap import RepoMap
from aider.run_cmd import run_cmd
from aider.utils import format_content, format_messages, format_tokens, is_image_file
from aider.waiting import WaitingSpinner
from ..dump import dump # noqa: F401
from .chat_chunks import ChatChunks
@@ -590,6 +592,15 @@ class Coder:
return True
def _stop_waiting_spinner(self):
"""Stop and clear the waiting spinner if it is running."""
spinner = getattr(self, "waiting_spinner", None)
if spinner:
try:
spinner.stop()
finally:
self.waiting_spinner = None
def get_abs_fnames_content(self):
for fname in list(self.abs_fnames):
content = self.io.read_text(fname)
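The `_stop_waiting_spinner` helper above is written to be safe to call at any time: it tolerates a missing attribute and clears the reference even if `stop()` raises. A minimal sketch of that teardown pattern, with a dummy spinner standing in for `WaitingSpinner`:

```python
# DummySpinner is a stand-in for aider's WaitingSpinner.
class DummySpinner:
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

class Host:
    def stop_spinner(self):
        # Tolerate a host that never created a spinner.
        spinner = getattr(self, "waiting_spinner", None)
        if spinner:
            try:
                spinner.stop()
            finally:
                # Clear the reference even if stop() raised,
                # so repeated calls stay harmless.
                self.waiting_spinner = None

host = Host()
host.stop_spinner()  # no spinner attribute yet: safely does nothing

host.waiting_spinner = DummySpinner()
s = host.waiting_spinner
host.stop_spinner()
print(s.stopped, host.waiting_spinner)  # → True None
```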
@@ -982,6 +993,9 @@ class Coder:
return inp
def keyboard_interrupt(self):
# Ensure cursor is visible on exit
Console().show_cursor(True)
now = time.time()
thresh = 2 # seconds
@@ -1050,6 +1064,9 @@ class Coder:
if not lang_code:
return None
if lang_code.upper() in ("C", "POSIX"):
return None
# Probably already a language name
if (
len(lang_code) > 3
@@ -1080,7 +1097,8 @@ class Coder:
"ko": "Korean",
"ru": "Russian",
}
- return fallback.get(lang_code.split("_")[0].lower(), lang_code)
+ primary_lang_code = lang_code.replace("-", "_").split("_")[0].lower()
+ return fallback.get(primary_lang_code, lang_code)
def get_user_language(self):
"""
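The normalization change above reduces hyphenated codes like `en-US` to their primary tag and rejects `C`/`POSIX` locales. A self-contained sketch of that logic (the fallback table is abbreviated here; the real one in `base_coder.py` is longer):

```python
# Abbreviated stand-in for the fallback table in base_coder.py.
FALLBACK = {"en": "English", "fr": "French", "ko": "Korean"}

def normalize_language(lang_code):
    """Sketch of the normalization above: reject C/POSIX locales,
    reduce codes like 'en-US' or 'fr_FR.UTF-8' to their primary tag."""
    if not lang_code:
        return None
    if lang_code.upper() in ("C", "POSIX"):
        return None
    # Treat '-' and '_' as equivalent separators, keep only the primary tag.
    primary = lang_code.replace("-", "_").split("_")[0].lower()
    # Unknown tags pass through unchanged (they may already be a language name).
    return FALLBACK.get(primary, lang_code)

print(normalize_language("en-US"))  # → English
print(normalize_language("POSIX"))  # → None
```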
@@ -1091,6 +1109,7 @@ class Coder:
2. ``locale.getlocale()``
3. ``LANG`` / ``LANGUAGE`` / ``LC_ALL`` / ``LC_MESSAGES`` environment variables
"""
# Explicit override
if self.chat_language:
return self.normalize_language(self.chat_language)
@@ -1099,9 +1118,11 @@ class Coder:
try:
lang = locale.getlocale()[0]
if lang:
return self.normalize_language(lang)
lang = self.normalize_language(lang)
if lang:
return lang
except Exception:
pass # pragma: no cover
pass
# Environment variables
for env_var in ("LANG", "LANGUAGE", "LC_ALL", "LC_MESSAGES"):
@@ -1184,10 +1205,10 @@ class Coder:
)
rename_with_shell = ""
if self.chat_language:
language = self.chat_language
if user_lang: # user_lang is the result of self.get_user_language()
language = user_lang
else:
language = "the same language they are using"
language = "the same language they are using" # Default if no specific lang detected
if self.fence[0] == "`" * 4:
quad_backtick_reminder = (
@@ -1429,8 +1450,13 @@ class Coder:
utils.show_messages(messages, functions=self.functions)
self.multi_response_content = ""
if self.show_pretty() and self.stream:
self.mdstream = self.io.get_assistant_mdstream()
if self.show_pretty():
self.waiting_spinner = WaitingSpinner("Waiting for " + self.main_model.name)
self.waiting_spinner.start()
if self.stream:
self.mdstream = self.io.get_assistant_mdstream()
else:
self.mdstream = None
else:
self.mdstream = None
@@ -1503,6 +1529,9 @@ class Coder:
self.live_incremental_response(True)
self.mdstream = None
# Ensure any waiting spinner is stopped
self._stop_waiting_spinner()
self.partial_response_content = self.get_multi_response_content_in_progress(True)
self.remove_reasoning_content()
self.multi_response_content = ""
@@ -1994,6 +2023,9 @@ class Coder:
self.io.ai_output(json.dumps(args, indent=4))
def show_send_output(self, completion):
# Stop spinner once we have a response
self._stop_waiting_spinner()
if self.verbose:
print(completion)
@@ -2113,6 +2145,8 @@ class Coder:
except AttributeError:
pass
if received_content:
self._stop_waiting_spinner()
self.partial_response_content += text
if self.show_pretty():


@@ -5,5 +5,6 @@ from .editblock_fenced_prompts import EditBlockFencedPrompts
class EditBlockFencedCoder(EditBlockCoder):
"""A coder that uses fenced search/replace blocks for code modifications."""
edit_format = "diff-fenced"
gpt_prompts = EditBlockFencedPrompts()


@@ -5,6 +5,7 @@ from .help_prompts import HelpPrompts
class HelpCoder(Coder):
"""Interactive help and documentation about aider."""
edit_format = "help"
gpt_prompts = HelpPrompts()


@@ -22,4 +22,4 @@ Don't leave out any lines or the diff patch won't apply correctly.
To make a new file, show a diff from `--- /dev/null` to `+++ path/to/new/file.ext`.
{final_reminders}
""" # noqa
""" # noqa


@@ -47,6 +47,7 @@ class Commands:
parser=self.parser,
verbose=self.verbose,
editor=self.editor,
original_read_only_fnames=self.original_read_only_fnames,
)
def __init__(
@@ -1391,7 +1392,30 @@ class Commands:
"Print out the current settings"
settings = format_settings(self.parser, self.args)
announcements = "\n".join(self.coder.get_announcements())
# Build metadata for the active models (main, editor, weak)
model_sections = []
active_models = [
("Main model", self.coder.main_model),
("Editor model", getattr(self.coder.main_model, "editor_model", None)),
("Weak model", getattr(self.coder.main_model, "weak_model", None)),
]
for label, model in active_models:
if not model:
continue
info = getattr(model, "info", {}) or {}
if not info:
continue
model_sections.append(f"{label} ({model.name}):")
for k, v in sorted(info.items()):
model_sections.append(f" {k}: {v}")
model_sections.append("") # blank line between models
model_metadata = "\n".join(model_sections)
output = f"{announcements}\n{settings}"
if model_metadata:
output += "\n" + model_metadata
self.io.tool_output(output)
def completions_raw_load(self, document, complete_event):
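The `/settings` hunk above builds a metadata section per active model, skipping models that are absent or have no metadata. A self-contained sketch of that formatting loop, using `SimpleNamespace` stand-ins with hypothetical model names and values:

```python
from types import SimpleNamespace

# Stand-in models; the real code reads main/editor/weak models off the coder.
main_model = SimpleNamespace(
    name="gpt-4.1",
    info={"max_input_tokens": 1047576, "supports_vision": True},
)
weak_model = SimpleNamespace(name="gpt-4.1-mini", info={})

sections = []
for label, model in [("Main model", main_model), ("Weak model", weak_model)]:
    if not model:
        continue
    info = getattr(model, "info", {}) or {}
    if not info:
        continue  # models without metadata are skipped entirely
    sections.append(f"{label} ({model.name}):")
    for k, v in sorted(info.items()):
        sections.append(f"  {k}: {v}")
    sections.append("")  # blank line between models

print("\n".join(sections))
```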


@@ -1144,18 +1144,19 @@ class InputOutput:
ro_paths = []
for rel_path in read_only_files:
abs_path = os.path.abspath(os.path.join(self.root, rel_path))
ro_paths.append(abs_path if len(abs_path) < len(rel_path) else rel_path)
ro_paths.append(Text(abs_path if len(abs_path) < len(rel_path) else rel_path))
files_with_label = ["Readonly:"] + ro_paths
files_with_label = [Text("Readonly:")] + ro_paths
read_only_output = StringIO()
Console(file=read_only_output, force_terminal=False).print(Columns(files_with_label))
read_only_lines = read_only_output.getvalue().splitlines()
console.print(Columns(files_with_label))
if editable_files:
files_with_label = editable_files
text_editable_files = [Text(f) for f in editable_files]
files_with_label = text_editable_files
if read_only_files:
files_with_label = ["Editable:"] + editable_files
files_with_label = [Text("Editable:")] + text_editable_files
editable_output = StringIO()
Console(file=editable_output, force_terminal=False).print(Columns(files_with_label))
editable_lines = editable_output.getvalue().splitlines()


@@ -4,10 +4,10 @@ import subprocess
import sys
import traceback
import warnings
import oslex
from dataclasses import dataclass
from pathlib import Path
import oslex
from grep_ast import TreeContext, filename_to_lang
from grep_ast.tsl import get_parser # noqa: E402

View file

@@ -14,6 +14,7 @@ except ImportError:
git = None
import importlib_resources
import shtab
from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode
@@ -503,6 +504,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
# Parse again to include any arguments that might have been defined in .env
args = parser.parse_args(argv)
if args.shell_completions:
# Ensure parser.prog is set for shtab, though it should be by default
parser.prog = "aider"
print(shtab.complete(parser, shell=args.shell_completions))
sys.exit(0)
if git is None:
args.git = False
@@ -905,7 +912,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
commit_prompt=args.commit_prompt,
subtree_only=args.subtree_only,
git_commit_verify=args.git_commit_verify,
attribute_co_authored_by=args.attribute_co_authored_by, # Pass the arg
attribute_co_authored_by=args.attribute_co_authored_by, # Pass the arg
)
except FileNotFoundError:
pass

View file

@@ -115,9 +115,9 @@ class MarkdownStream:
else:
self.mdargs = dict()
# Initialize rich Live display with empty text
self.live = Live(Text(""), refresh_per_second=1.0 / self.min_delay)
self.live.start()
# Defer Live creation until the first update.
self.live = None
self._live_started = False
def _render_markdown_to_lines(self, text):
"""Render markdown text to a list of lines.
@@ -163,6 +163,12 @@
Markdown going to the console works better in terminal scrollback buffers.
The live window doesn't play nice with terminal scrollback.
"""
# On the first call, stop the spinner and start the Live renderer
if not getattr(self, "_live_started", False):
self.live = Live(Text(""), refresh_per_second=1.0 / self.min_delay)
self.live.start()
self._live_started = True
now = time.time()
# Throttle updates to maintain smooth rendering
if not final and now - self.when < self.min_delay:

View file

@@ -17,6 +17,7 @@ from PIL import Image
from aider.dump import dump # noqa: F401
from aider.llm import litellm
from aider.openrouter import OpenRouterModelManager
from aider.sendchat import ensure_alternating_roles, sanity_check_messages
from aider.utils import check_pip_install_extra
@@ -149,8 +150,13 @@ class ModelInfoManager:
self.verify_ssl = True
self._cache_loaded = False
# Manager for the cached OpenRouter model database
self.openrouter_manager = OpenRouterModelManager()
def set_verify_ssl(self, verify_ssl):
self.verify_ssl = verify_ssl
if hasattr(self, "openrouter_manager"):
self.openrouter_manager.set_verify_ssl(verify_ssl)
def _load_cache(self):
if self._cache_loaded:
@@ -232,6 +238,12 @@ class ModelInfoManager:
return litellm_info
if not cached_info and model.startswith("openrouter/"):
# First try using the locally cached OpenRouter model database
openrouter_info = self.openrouter_manager.get_model_info(model)
if openrouter_info:
return openrouter_info
# Fallback to legacy web-scraping if the API cache does not contain the model
openrouter_info = self.fetch_openrouter_model_info(model)
if openrouter_info:
return openrouter_info
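The lookup above is a fall-through chain: the litellm cache first, then the locally cached OpenRouter database, then the legacy web-scraping fallback. The pattern in isolation (names invented for illustration, not part of the commit):

```python
def first_hit(lookups, model):
    """Return the first non-empty result from an ordered list of metadata sources."""
    for lookup in lookups:
        info = lookup(model)
        if info:
            return info
    return {}


# Usage sketch: earlier (cheaper) sources shadow later, slower ones.
sources = [
    lambda m: {},                                  # litellm cache miss
    lambda m: {"litellm_provider": "openrouter"},  # cached OpenRouter DB hit
    lambda m: {"scraped": True},                   # scraper never reached
]
```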
@@ -526,6 +538,9 @@ class Model(ModelSettings):
if "qwen3" in model and "235b" in model:
self.edit_format = "diff"
self.use_repo_map = True
self.system_prompt_prefix = "/no_think"
self.use_temperature = 0.7
self.extra_params = {"top_p": 0.8, "top_k": 20, "min_p": 0.0}
return # <--
# use the defaults
@@ -910,6 +925,9 @@ class Model(ModelSettings):
messages = ensure_alternating_roles(messages)
retry_delay = 0.125
if self.verbose:
dump(messages)
while True:
try:
kwargs = {

128
aider/openrouter.py Normal file
View file

@@ -0,0 +1,128 @@
"""
OpenRouter model metadata caching and lookup.
This module keeps a local cached copy of the OpenRouter model list
(downloaded from ``https://openrouter.ai/api/v1/models``) and exposes a
helper class that returns metadata for a given model in a format compatible
with litellm's ``get_model_info``.
"""
from __future__ import annotations
import json
import time
from pathlib import Path
from typing import Dict
import requests
def _cost_per_token(val: str | None) -> float | None:
"""Convert a per-million price string to a per-token float."""
if val in (None, "", "0"):
return 0.0 if val == "0" else None
try:
return float(val) / 1_000_000
except Exception: # noqa: BLE001
return None
class OpenRouterModelManager:
MODELS_URL = "https://openrouter.ai/api/v1/models"
CACHE_TTL = 60 * 60 * 24 # 24 h
def __init__(self) -> None:
self.cache_dir = Path.home() / ".aider" / "caches"
self.cache_file = self.cache_dir / "openrouter_models.json"
self.content: Dict | None = None
self.verify_ssl: bool = True
self._cache_loaded = False
# ------------------------------------------------------------------ #
# Public API #
# ------------------------------------------------------------------ #
def set_verify_ssl(self, verify_ssl: bool) -> None:
"""Enable/disable SSL verification for API requests."""
self.verify_ssl = verify_ssl
def get_model_info(self, model: str) -> Dict:
"""
Return metadata for *model* or an empty ``dict`` when unknown.
``model`` should use the aider naming convention, e.g.
``openrouter/nousresearch/deephermes-3-mistral-24b-preview:free``.
"""
self._ensure_content()
if not self.content or "data" not in self.content:
return {}
route = self._strip_prefix(model)
# Consider both the exact id and id without any “:suffix”.
candidates = {route}
if ":" in route:
candidates.add(route.split(":", 1)[0])
record = next((item for item in self.content["data"] if item.get("id") in candidates), None)
if not record:
return {}
context_len = (
record.get("top_provider", {}).get("context_length")
or record.get("context_length")
or None
)
pricing = record.get("pricing", {})
return {
"max_input_tokens": context_len,
"max_tokens": context_len,
"max_output_tokens": context_len,
"input_cost_per_token": _cost_per_token(pricing.get("prompt")),
"output_cost_per_token": _cost_per_token(pricing.get("completion")),
"litellm_provider": "openrouter",
}
# ------------------------------------------------------------------ #
# Internal helpers #
# ------------------------------------------------------------------ #
def _strip_prefix(self, model: str) -> str:
return model[len("openrouter/") :] if model.startswith("openrouter/") else model
def _ensure_content(self) -> None:
self._load_cache()
if not self.content:
self._update_cache()
def _load_cache(self) -> None:
if self._cache_loaded:
return
try:
self.cache_dir.mkdir(parents=True, exist_ok=True)
if self.cache_file.exists():
cache_age = time.time() - self.cache_file.stat().st_mtime
if cache_age < self.CACHE_TTL:
try:
self.content = json.loads(self.cache_file.read_text())
except json.JSONDecodeError:
self.content = None
except OSError:
# Cache directory might be unwritable; ignore.
pass
self._cache_loaded = True
def _update_cache(self) -> None:
try:
response = requests.get(self.MODELS_URL, timeout=10, verify=self.verify_ssl)
if response.status_code == 200:
self.content = response.json()
try:
self.cache_file.write_text(json.dumps(self.content, indent=2))
except OSError:
pass  # Non-fatal if we can't write the cache
except Exception as ex: # noqa: BLE001
print(f"Failed to fetch OpenRouter model list: {ex}")
try:
self.cache_file.write_text("{}")
except OSError:
pass
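OpenRouter's API reports prices as per-million-token strings, which the module converts to litellm-style per-token floats, and it matches model ids both with and without a `:suffix` variant tag. A standalone sketch of those two conversions (reimplemented here for illustration, not imported from the module):

```python
def cost_per_token(val):
    """Convert a per-million price string to a per-token float; None if unparsable."""
    if val in (None, "", "0"):
        return 0.0 if val == "0" else None
    try:
        return float(val) / 1_000_000
    except (TypeError, ValueError):
        return None


def candidate_ids(model):
    """Strip the aider 'openrouter/' prefix and add the id without any ':suffix'."""
    route = model[len("openrouter/"):] if model.startswith("openrouter/") else model
    candidates = {route}
    if ":" in route:
        candidates.add(route.split(":", 1)[0])
    return candidates
```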

View file

@@ -13,11 +13,13 @@ Generate a one-line commit message for those changes.
The commit message should be structured as follows: <type>: <description>
Use these for <type>: fix, feat, build, chore, ci, docs, style, refactor, perf, test
Ensure the commit message:
Ensure the commit message:{language_instruction}
- Starts with the appropriate prefix.
- Is in the imperative mood (e.g., \"add feature\" not \"added feature\" or \"adding feature\").
- Does not exceed 72 characters.
Reply only with the one-line commit message, without any additional text, explanations, or line breaks.
Reply only with the one-line commit message, without any additional text, explanations, \
or line breaks.
"""

View file

@@ -21,6 +21,7 @@ import pathspec
from aider import prompts, utils
from .dump import dump # noqa: F401
from .waiting import WaitingSpinner
ANY_GIT_ERROR += [
OSError,
@@ -72,7 +73,7 @@ class GitRepo:
commit_prompt=None,
subtree_only=False,
git_commit_verify=True,
attribute_co_authored_by=False, # Added parameter
attribute_co_authored_by=False, # Added parameter
):
self.io = io
self.models = models
@@ -84,7 +85,7 @@
self.attribute_committer = attribute_committer
self.attribute_commit_message_author = attribute_commit_message_author
self.attribute_commit_message_committer = attribute_commit_message_committer
self.attribute_co_authored_by = attribute_co_authored_by # Assign from parameter
self.attribute_co_authored_by = attribute_co_authored_by # Assign from parameter
self.commit_prompt = commit_prompt
self.subtree_only = subtree_only
self.git_commit_verify = git_commit_verify
@@ -134,15 +135,16 @@
Args:
fnames (list, optional): List of filenames to commit. Defaults to None (commit all
dirty files).
context (str, optional): Context for generating the commit message. Defaults to None.
context (str, optional): Context for generating commit message. Defaults to None.
message (str, optional): Explicit commit message. Defaults to None (generate message).
aider_edits (bool, optional): Whether the changes were made by Aider. Defaults to False.
This affects attribution logic.
coder (Coder, optional): The Coder instance, used to access config and model info.
coder (Coder, optional): The Coder instance, used for config and model info.
Defaults to None.
Returns:
tuple(str, str) or None: The commit hash and commit message if successful, else None.
tuple(str, str) or None: The commit hash and commit message if successful,
else None.
Attribution Logic:
------------------
@@ -154,16 +156,16 @@
- Committer: The person who last applied the commit to the repository.
- aider_edits=True: Changes were generated by Aider (LLM).
- aider_edits=False: Commit is user-driven (e.g., /commit manually staged changes).
- Explicit Setting: A flag (--attribute-...) is set to True or False via command line
or config file.
- Explicit Setting: A flag (--attribute-...) is set to True or False
via command line or config file.
- Implicit Default: A flag is not explicitly set, defaulting to None in args, which is
interpreted as True unless overridden by other logic.
Flags:
- --attribute-author: Modify Author name to "User Name (aider)".
- --attribute-committer: Modify Committer name to "User Name (aider)".
- --attribute-co-authored-by: Add "Co-authored-by: aider (<model>) <noreply@aider.chat>"
trailer to the commit message.
- --attribute-co-authored-by: Add
"Co-authored-by: aider (<model>) <noreply@aider.chat>" trailer to commit message.
Behavior Summary:
@@ -171,8 +173,8 @@
- If --attribute-co-authored-by=True:
- Co-authored-by trailer IS ADDED.
- Author/Committer names are NOT modified by default (co-authored-by takes precedence).
- EXCEPTION: If --attribute-author/--attribute-committer is EXPLICITLY True,
the respective name IS modified (explicit overrides precedence).
- EXCEPTION: If --attribute-author/--attribute-committer is EXPLICITLY True, the
respective name IS modified (explicit overrides precedence).
- If --attribute-co-authored-by=False:
- Co-authored-by trailer is NOT added.
- Author/Committer names ARE modified by default (implicit True).
@@ -186,11 +188,15 @@
- EXCEPTION: If --attribute-committer is EXPLICITLY False, the name is NOT modified.
Resulting Scenarios:
- Standard AI edit (defaults): Co-authored-by=False -> Author=You(aider), Committer=You(aider)
- AI edit with Co-authored-by (default): Co-authored-by=True -> Author=You, Committer=You, Trailer added
- AI edit with Co-authored-by + Explicit Author: Co-authored-by=True, --attribute-author -> Author=You(aider), Committer=You, Trailer added
- Standard AI edit (defaults): Co-authored-by=False -> Author=You(aider),
Committer=You(aider)
- AI edit with Co-authored-by (default): Co-authored-by=True -> Author=You,
Committer=You, Trailer added
- AI edit with Co-authored-by + Explicit Author: Co-authored-by=True,
--attribute-author -> Author=You(aider), Committer=You, Trailer added
- User commit (defaults): aider_edits=False -> Author=You, Committer=You(aider)
- User commit with explicit no-committer: aider_edits=False, --no-attribute-committer -> Author=You, Committer=You
- User commit with explicit no-committer: aider_edits=False,
--no-attribute-committer -> Author=You, Committer=You
"""
if not fnames and not self.repo.is_dirty():
return
@@ -202,7 +208,10 @@
if message:
commit_message = message
else:
commit_message = self.get_commit_message(diffs, context)
user_language = None
if coder:
user_language = coder.get_user_language()
commit_message = self.get_commit_message(diffs, context, user_language)
# Retrieve attribute settings, prioritizing coder.args if available
if coder and hasattr(coder, "args"):
@@ -227,7 +236,6 @@
effective_author = True if attribute_author is None else attribute_author
effective_committer = True if attribute_committer is None else attribute_committer
# Determine commit message prefixing
prefix_commit_message = aider_edits and (
attribute_commit_message_author or attribute_commit_message_committer
@@ -245,20 +253,19 @@
# Determine if author/committer names should be modified
# Author modification applies only to aider edits.
# It's used if effective_author is True AND (co-authored-by is False OR author was explicitly set).
# It's used if effective_author is True AND
# (co-authored-by is False OR author was explicitly set).
use_attribute_author = (
aider_edits
and effective_author
and (not attribute_co_authored_by or author_explicit)
aider_edits and effective_author and (not attribute_co_authored_by or author_explicit)
)
# Committer modification applies regardless of aider_edits (based on tests).
# It's used if effective_committer is True AND (it's not an aider edit with co-authored-by OR committer was explicitly set).
# It's used if effective_committer is True AND
# (it's not an aider edit with co-authored-by OR committer was explicitly set).
use_attribute_committer = effective_committer and (
not (aider_edits and attribute_co_authored_by) or committer_explicit
)
if not commit_message:
commit_message = "(no commit message provided)"
@@ -291,7 +298,9 @@
with contextlib.ExitStack() as stack:
if use_attribute_committer:
stack.enter_context(
set_git_env("GIT_COMMITTER_NAME", committer_name, original_committer_name_env)
set_git_env(
"GIT_COMMITTER_NAME", committer_name, original_committer_name_env
)
)
if use_attribute_author:
stack.enter_context(
@@ -314,7 +323,7 @@
except (ValueError, OSError):
return self.repo.git_dir
def get_commit_message(self, diffs, context):
def get_commit_message(self, diffs, context, user_language=None):
diffs = "# Diffs:\n" + diffs
content = ""
@@ -323,6 +332,11 @@
content += diffs
system_content = self.commit_prompt or prompts.commit_system
language_instruction = ""
if user_language:
language_instruction = f"\n- Is written in {user_language}."
system_content = system_content.format(language_instruction=language_instruction)
messages = [
dict(role="system", content=system_content),
dict(role="user", content=content),
@@ -330,13 +344,15 @@
commit_message = None
for model in self.models:
num_tokens = model.token_count(messages)
max_tokens = model.info.get("max_input_tokens") or 0
if max_tokens and num_tokens > max_tokens:
continue
commit_message = model.simple_send_with_retries(messages)
if commit_message:
break
spinner_text = f"Generating commit message with {model.name}"
with WaitingSpinner(spinner_text):
num_tokens = model.token_count(messages)
max_tokens = model.info.get("max_input_tokens") or 0
if max_tokens and num_tokens > max_tokens:
continue
commit_message = model.simple_send_with_retries(messages)
if commit_message:
break # Found a model that could generate the message
if not commit_message:
self.io.tool_error("Failed to generate commit message!")

View file

@@ -19,7 +19,7 @@ from tqdm import tqdm
from aider.dump import dump
from aider.special import filter_important_files
from aider.utils import Spinner
from aider.waiting import Spinner
# tree_sitter is throwing a FutureWarning
warnings.simplefilter("ignore", category=FutureWarning)
@@ -35,6 +35,8 @@ CACHE_VERSION = 3
if USING_TSL_PACK:
CACHE_VERSION = 4
UPDATING_REPO_MAP_MESSAGE = "Updating repo map"
class RepoMap:
TAGS_CACHE_DIR = f".aider.tags.cache.v{CACHE_VERSION}"
@@ -380,7 +382,7 @@
if self.verbose:
self.io.tool_output(f"Processing {fname}")
if progress and not showing_bar:
progress()
progress(f"{UPDATING_REPO_MAP_MESSAGE}: {fname}")
try:
file_ok = Path(fname).is_file()
@@ -459,7 +461,7 @@
for ident in idents:
if progress:
progress()
progress(f"{UPDATING_REPO_MAP_MESSAGE}: {ident}")
definers = defines[ident]
@@ -512,7 +514,7 @@
ranked_definitions = defaultdict(float)
for src in G.nodes:
if progress:
progress()
progress(f"{UPDATING_REPO_MAP_MESSAGE}: {src}")
src_rank = ranked[src]
total_weight = sum(data["weight"] for _src, _dst, data in G.out_edges(src, data=True))
@@ -621,7 +623,7 @@
if not mentioned_idents:
mentioned_idents = set()
spin = Spinner("Updating repo map")
spin = Spinner(UPDATING_REPO_MAP_MESSAGE)
ranked_tags = self.get_ranked_tags(
chat_fnames,
@@ -655,7 +657,11 @@
while lower_bound <= upper_bound:
# dump(lower_bound, middle, upper_bound)
spin.step()
if middle > 1500:
show_tokens = f"{middle / 1000.0:.1f}K"
else:
show_tokens = str(middle)
spin.step(f"{UPDATING_REPO_MAP_MESSAGE}: {show_tokens} tokens")
tree = self.to_tree(ranked_tags[:middle], chat_rel_fnames)
num_tokens = self.token_count(tree)
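The spinner label switches to a compact "K" form once the candidate size passes 1500 tokens. The formatting rule in isolation (function name invented here):

```python
def format_tokens(middle):
    """Counts above 1500 are shown as e.g. '2.0K'; smaller counts verbatim."""
    if middle > 1500:
        return f"{middle / 1000.0:.1f}K"
    return str(middle)
```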

View file

@@ -312,7 +312,7 @@
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.5-pro-exp-03-25:free": {
"openrouter/google/gemini-2.5-pro-exp-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
@@ -461,4 +461,8 @@
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-pro-preview"
},
"together_ai/Qwen/Qwen3-235B-A22B-fp8-tput": {
"input_cost_per_token": 0.0000002,
"output_cost_per_token": 0.0000006,
}
}

View file

@@ -969,7 +969,7 @@
overeager: true
weak_model_name: gemini/gemini-2.5-flash-preview-04-17
- name: openrouter/google/gemini-2.5-pro-exp-03-25:free
- name: openrouter/google/gemini-2.5-pro-exp-03-25
edit_format: diff-fenced
overeager: true
use_repo_map: true
@@ -1411,4 +1411,28 @@
edit_format: diff-fenced
use_repo_map: true
weak_model_name: openrouter/google/gemini-2.0-flash-001
#- name: openrouter/qwen/qwen3-235b-a22b
# system_prompt_prefix: "/no_think"
# use_temperature: 0.7
# extra_params:
# max_tokens: 24000
# top_p: 0.8
# top_k: 20
# min_p: 0.0
# temperature: 0.7
# extra_body:
# provider:
# order: ["Together"]
#- name: together_ai/Qwen/Qwen3-235B-A22B-fp8-tput
# system_prompt_prefix: "/no_think"
# use_temperature: 0.7
# reasoning_tag: think
# extra_params:
# max_tokens: 24000
# top_p: 0.8
# top_k: 20
# min_p: 0.0
# temperature: 0.7

View file

@@ -1,14 +1,14 @@
import itertools
import os
import platform
import oslex
import subprocess
import sys
import tempfile
import time
from pathlib import Path
import oslex
from aider.dump import dump # noqa: F401
from aider.waiting import Spinner
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".webp", ".pdf"}
@@ -250,55 +250,6 @@ def run_install(cmd):
return False, output
class Spinner:
    unicode_spinner = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
ascii_spinner = ["|", "/", "-", "\\"]
def __init__(self, text):
self.text = text
self.start_time = time.time()
self.last_update = 0
self.visible = False
self.is_tty = sys.stdout.isatty()
self.tested = False
def test_charset(self):
if self.tested:
return
self.tested = True
# Try unicode first, fall back to ascii if needed
try:
# Test if we can print unicode characters
print(self.unicode_spinner[0], end="", flush=True)
print("\r", end="", flush=True)
self.spinner_chars = itertools.cycle(self.unicode_spinner)
except UnicodeEncodeError:
self.spinner_chars = itertools.cycle(self.ascii_spinner)
def step(self):
if not self.is_tty:
return
current_time = time.time()
if not self.visible and current_time - self.start_time >= 0.5:
self.visible = True
self._step()
elif self.visible and current_time - self.last_update >= 0.1:
self._step()
self.last_update = current_time
def _step(self):
if not self.visible:
return
self.test_charset()
print(f"\r{self.text} {next(self.spinner_chars)}\r{self.text} ", end="", flush=True)
def end(self):
if self.visible and self.is_tty:
print("\r" + " " * (len(self.text) + 3))
def find_common_root(abs_fnames):
try:
if len(abs_fnames) == 1:
@@ -385,15 +336,3 @@ def printable_shell_command(cmd_list):
str: Shell-escaped command string.
"""
return oslex.join(cmd_list)
def main():
spinner = Spinner("Running spinner...")
for _ in range(40): # 40 steps * 0.25 seconds = 10 seconds
time.sleep(0.25)
spinner.step()
spinner.end()
if __name__ == "__main__":
main()

221
aider/waiting.py Normal file
View file

@@ -0,0 +1,221 @@
#!/usr/bin/env python
"""
Thread-based, killable spinner utility.
Use it like:
from aider.waiting import WaitingSpinner
spinner = WaitingSpinner("Waiting for LLM")
spinner.start()
... # long task
spinner.stop()
"""
import sys
import threading
import time
from rich.console import Console
class Spinner:
"""
Minimal spinner that scans a single marker back and forth across a line.
The animation is pre-rendered into a list of frames. If the terminal
cannot display unicode the frames are converted to plain ASCII.
"""
last_frame_idx = 0 # Class variable to store the last frame index
def __init__(self, text: str, width: int = 7):
self.text = text
self.start_time = time.time()
self.last_update = 0.0
self.visible = False
self.is_tty = sys.stdout.isatty()
self.console = Console()
# Pre-render the animation frames using pure ASCII so they will
# always display, even on very limited terminals.
ascii_frames = [
"#= ", # C1 C2 space(8)
"=# ", # C2 C1 space(8)
" =# ", # space(1) C2 C1 space(7)
" =# ", # space(2) C2 C1 space(6)
" =# ", # space(3) C2 C1 space(5)
" =# ", # space(4) C2 C1 space(4)
" =# ", # space(5) C2 C1 space(3)
" =# ", # space(6) C2 C1 space(2)
" =# ", # space(7) C2 C1 space(1)
" =#", # space(8) C2 C1
" #=", # space(8) C1 C2
" #= ", # space(7) C1 C2 space(1)
" #= ", # space(6) C1 C2 space(2)
" #= ", # space(5) C1 C2 space(3)
" #= ", # space(4) C1 C2 space(4)
" #= ", # space(3) C1 C2 space(5)
" #= ", # space(2) C1 C2 space(6)
" #= ", # space(1) C1 C2 space(7)
]
self.unicode_palette = "░█"
xlate_from, xlate_to = ("=#", self.unicode_palette)
# If unicode is supported, swap the ASCII chars for nicer glyphs.
if self._supports_unicode():
translation_table = str.maketrans(xlate_from, xlate_to)
frames = [f.translate(translation_table) for f in ascii_frames]
self.scan_char = xlate_to[xlate_from.find("#")]
else:
frames = ascii_frames
self.scan_char = "#"
# Bounce the scanner back and forth.
self.frames = frames
self.frame_idx = Spinner.last_frame_idx # Initialize from class variable
self.width = len(frames[0]) - 2 # number of chars between the brackets
self.animation_len = len(frames[0])
self.last_display_len = 0 # Length of the last spinner line (frame + text)
def _supports_unicode(self) -> bool:
if not self.is_tty:
return False
try:
out = self.unicode_palette
out += "\b" * len(self.unicode_palette)
out += " " * len(self.unicode_palette)
out += "\b" * len(self.unicode_palette)
sys.stdout.write(out)
sys.stdout.flush()
return True
except UnicodeEncodeError:
return False
except Exception:
return False
def _next_frame(self) -> str:
frame = self.frames[self.frame_idx]
self.frame_idx = (self.frame_idx + 1) % len(self.frames)
Spinner.last_frame_idx = self.frame_idx # Update class variable
return frame
def step(self, text: str = None) -> None:
if text is not None:
self.text = text
if not self.is_tty:
return
now = time.time()
if not self.visible and now - self.start_time >= 0.5:
self.visible = True
self.last_update = 0.0
if self.is_tty:
self.console.show_cursor(False)
if not self.visible or now - self.last_update < 0.1:
return
self.last_update = now
frame_str = self._next_frame()
# Determine the maximum width for the spinner line
# Subtract 2 as requested, to leave a margin or prevent cursor wrapping issues
max_spinner_width = self.console.width - 2
if max_spinner_width < 0: # Handle extremely narrow terminals
max_spinner_width = 0
current_text_payload = f" {self.text}"
line_to_display = f"{frame_str}{current_text_payload}"
# Truncate the line if it's too long for the console width
if len(line_to_display) > max_spinner_width:
line_to_display = line_to_display[:max_spinner_width]
len_line_to_display = len(line_to_display)
# Calculate padding to clear any remnants from a longer previous line
padding_to_clear = " " * max(0, self.last_display_len - len_line_to_display)
# Write the spinner frame, text, and any necessary clearing spaces
sys.stdout.write(f"\r{line_to_display}{padding_to_clear}")
self.last_display_len = len_line_to_display
# Calculate number of backspaces to position cursor at the scanner character
scan_char_abs_pos = frame_str.find(self.scan_char)
# Total characters written to the line (frame + text + padding)
total_chars_written_on_line = len_line_to_display + len(padding_to_clear)
# num_backspaces will be non-positive if scan_char_abs_pos is beyond
# total_chars_written_on_line (e.g., if the scan char itself was truncated).
# In such cases, (effectively) 0 backspaces are written,
# and the cursor stays at the end of the line.
num_backspaces = total_chars_written_on_line - scan_char_abs_pos
sys.stdout.write("\b" * num_backspaces)
sys.stdout.flush()
def end(self) -> None:
if self.visible and self.is_tty:
clear_len = self.last_display_len # Use the length of the last displayed content
sys.stdout.write("\r" + " " * clear_len + "\r")
sys.stdout.flush()
self.console.show_cursor(True)
self.visible = False
class WaitingSpinner:
"""Background spinner that can be started/stopped safely."""
def __init__(self, text: str = "Waiting for LLM", delay: float = 0.15):
self.spinner = Spinner(text)
self.delay = delay
self._stop_event = threading.Event()
self._thread = threading.Thread(target=self._spin, daemon=True)
def _spin(self):
while not self._stop_event.is_set():
self.spinner.step()
time.sleep(self.delay)
self.spinner.end()
def start(self):
"""Start the spinner in a background thread."""
if not self._thread.is_alive():
self._thread.start()
def stop(self):
"""Request the spinner to stop and wait briefly for the thread to exit."""
self._stop_event.set()
if self._thread.is_alive():
self._thread.join(timeout=self.delay)
self.spinner.end()
# Allow use as a context-manager
def __enter__(self):
self.start()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
def main():
spinner = Spinner("Running spinner...")
try:
for _ in range(100):
time.sleep(0.15)
spinner.step()
print("Success!")
except KeyboardInterrupt:
print("\nInterrupted by user.")
finally:
spinner.end()
if __name__ == "__main__":
main()
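The spinner's visibility rules (appear only after a 0.5 s grace period, then repaint at most every 0.1 s) can be isolated as a pure function of timestamps. A sketch with invented names, simplified from the `step()` logic above:

```python
def should_render(visible, start_time, last_update, now):
    """Return (visible, render) for one spinner tick, given the timing rules."""
    if not visible:
        if now - start_time >= 0.5:
            return True, True  # first appearance after the grace period
        return False, False
    if now - last_update >= 0.1:
        return True, True      # throttled repaint
    return True, False
```

Callers would update `last_update = now` whenever `render` comes back true, so repaints never exceed ten per second.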

View file

@@ -24,14 +24,24 @@ cog.out(text)
]]]-->
### main branch
### Aider v0.83.1
- Improved user language detection by correctly normalizing hyphenated language codes (e.g., `en-US` to `en`) and enhancing the validation of locale results.
- Prevented Aider from instructing the LLM to reply in 'C' or 'POSIX' when these are detected as the system locale.
- Displayed a spinner with the model name when generating commit messages.
- Aider wrote 74% of the code in this release.
### Aider v0.83.0
- Added support for `qwen3-235b` models, including `openrouter/qwen/qwen3-235b-a22b`.
- Added support for `gemini-2.5-pro-preview-05-06` models.
- Added repomap support for OCaml and OCaml interface files, by Andrey Popp.
- Added support for `qwen3-235b` models.
- Added repo-map support for OCaml and OCaml interface files, by Andrey Popp.
- Added a spinner animation while waiting for the LLM to start streaming its response.
- Updated the spinner animation to a Knight Rider style.
- Introduced `--attribute-co-authored-by` option to add co-author trailer to commit messages, by Andrew Grigorev.
- Updated Gemini model aliases (e.g., `gemini`, `gemini-2.5-pro`) to point to the `05-06` preview versions.
- Marked Gemini 2.5 Pro preview models as `overeager` by default.
- Commit message prompt specifies the user's language.
- Updated the default weak model for Gemini 2.5 Pro models to `gemini/gemini-2.5-flash-preview-04-17`.
- Corrected `gemini-2.5-pro-exp-03-25` model settings to reflect its lack of support for `thinking_budget`.
- Ensured model-specific system prompt prefixes are placed on a new line before the main system prompt.
@@ -44,7 +54,18 @@ cog.out(text)
- The `aider scrape` command-line tool will now use Playwright for web scraping if it is available, by Jon Keys.
- Fixed linter command execution on Windows by adopting `oslex` for argument quoting, by Titusz Pan.
- Improved cross-platform display of shell commands by using `oslex` for robust argument quoting, by Titusz Pan.
- Aider wrote 46% of the code in this release.
- Improved `/ask` mode to instruct the LLM to elide unchanging code in its responses.
- Ensured web scraping in the GUI also respects Playwright availability and the `--disable-playwright` flag.
- Improved display of filenames in the prompt header using rich Text formatting.
- Enabled `reasoning_effort` for Gemini 2.5 Flash models.
- Added a `--shell-completions` argument to generate shell completion scripts (e.g., for bash, zsh).
- Explicit `--attribute-author` or `--attribute-committer` flags now override the default behavior when `--attribute-co-authored-by` is used, allowing finer control over commit attribution, by Andrew Grigorev.
- Fixed an issue where read-only status of files might not be preserved correctly by some commands (e.g. `/drop` after adding a read-only file).
- The `aider-args` utility (or `python -m aider.args`) now defaults to printing a sample YAML configuration if no arguments are provided.
- Displayed token count progress and the name of the file or identifier being processed during repo map updates.
- Extended the waiting spinner to also show for non-streaming responses and further enhanced its animation with console width clipping, cursor hiding, and a more continuous appearance.
- Dropped support for Python 3.9.
- Aider wrote 55% of the code in this release.
### Aider v0.82.3

View file

@@ -32,7 +32,7 @@ aux_links:
"GitHub":
- "https://github.com/Aider-AI/aider"
"Discord":
- "https://discord.gg/Tv2uQnR88V"
- "https://discord.gg/Y7X7bhMQFV"
"Blog":
- "/blog/"
@@ -40,7 +40,7 @@ nav_external_links:
- title: "GitHub"
url: "https://github.com/Aider-AI/aider"
- title: "Discord"
url: "https://discord.gg/Tv2uQnR88V"
url: "https://discord.gg/Y7X7bhMQFV"
repository: Aider-AI/aider

View file

@@ -4500,3 +4500,162 @@
Paul Gauthier (aider): 1567
start_tag: v0.81.0
total_lines: 1706
- aider_percentage: 54.32
aider_total: 1409
end_date: '2025-05-09'
end_tag: v0.83.0
file_counts:
.github/workflows/check_pypi_version.yml:
Paul Gauthier (aider): 1
.github/workflows/pre-commit.yml:
MDW: 48
.github/workflows/ubuntu-tests.yml:
Paul Gauthier (aider): 1
.github/workflows/windows-tests.yml:
Paul Gauthier (aider): 1
.github/workflows/windows_check_pypi_version.yml:
Paul Gauthier (aider): 1
aider/__init__.py:
Paul Gauthier: 1
aider/args.py:
Andrew Grigorev: 21
Andrew Grigorev (aider): 5
Paul Gauthier (aider): 38
aider/coders/__init__.py:
Paul Gauthier (aider): 2
aider/coders/base_coder.py:
Andrew Grigorev (aider): 2
Paul Gauthier: 60
Paul Gauthier (aider): 104
aider/coders/editblock_coder.py:
Paul Gauthier: 10
Paul Gauthier (aider): 7
zjy1412: 2
aider/coders/editblock_fenced_coder.py:
MDW: 1
aider/coders/help_coder.py:
MDW: 1
aider/coders/patch_coder.py:
Paul Gauthier (aider): 38
aider/coders/shell.py:
Paul Gauthier: 37
aider/coders/udiff_coder.py:
Paul Gauthier: 2
Paul Gauthier (aider): 9
aider/coders/udiff_simple.py:
Paul Gauthier (aider): 14
aider/commands.py:
Andrew Grigorev: 10
Paul Gauthier: 7
Paul Gauthier (aider): 1
aider/gui.py:
Jon Keys: 2
aider/io.py:
Kay Gosho: 1
Paul Gauthier (aider): 5
aider/linter.py:
Paul Gauthier: 1
Titusz Pan: 1
aider/main.py:
Paul Gauthier (aider): 9
aider/mdstream.py:
Paul Gauthier (aider): 11
aider/models.py:
Paul Gauthier: 4
Paul Gauthier (aider): 66
Stefan Hladnik: 4
Stefan Hladnik (aider): 41
aider/queries/tree-sitter-language-pack/ocaml_interface-tags.scm:
Andrey Popp: 98
aider/queries/tree-sitter-languages/ocaml_interface-tags.scm:
Andrey Popp: 98
aider/repo.py:
Andrew Grigorev: 115
Andrew Grigorev (aider): 21
Paul Gauthier: 6
Paul Gauthier (aider): 33
aider/repomap.py:
Paul Gauthier: 5
Paul Gauthier (aider): 6
aider/resources/model-settings.yml:
Paul Gauthier: 183
Paul Gauthier (aider): 175
cantalupo555: 1
aider/scrape.py:
Jon Keys: 12
aider/utils.py:
Paul Gauthier: 13
Paul Gauthier (aider): 131
Titusz Pan: 1
aider/waiting.py:
Paul Gauthier: 1
Paul Gauthier (aider): 54
aider/watch.py:
Paul Gauthier: 6
Paul Gauthier (aider): 7
aider/website/_includes/leaderboard_table.js:
Paul Gauthier: 2
Paul Gauthier (aider): 18
aider/website/docs/leaderboards/index.md:
Paul Gauthier: 1
Paul Gauthier (aider): 2
aider/website/index.html:
Paul Gauthier: 13
benchmark/benchmark.py:
Paul Gauthier: 3
Paul Gauthier (aider): 42
benchmark/docker.sh:
Paul Gauthier: 2
benchmark/refactor_tools.py:
MDW: 1
scripts/30k-image.py:
MDW: 1
scripts/clean_metadata.py:
Paul Gauthier (aider): 258
scripts/update-history.py:
Paul Gauthier: 2
Paul Gauthier (aider): 7
tests/basic/test_coder.py:
Paul Gauthier (aider): 3
tests/basic/test_commands.py:
Paul Gauthier: 2
Paul Gauthier (aider): 90
tests/basic/test_editblock.py:
Paul Gauthier: 10
zjy1412: 52
tests/basic/test_io.py:
Paul Gauthier (aider): 132
tests/basic/test_linter.py:
Paul Gauthier: 22
Titusz Pan: 10
tests/basic/test_repo.py:
Andrew Grigorev: 75
Andrew Grigorev (aider): 65
Paul Gauthier: 79
Paul Gauthier (aider): 6
tests/basic/test_repomap.py:
Andrey Popp: 7
tests/basic/test_watch.py:
MDW: 1
tests/fixtures/languages/ocaml_interface/test.mli:
Andrey Popp: 14
tests/scrape/test_playwright_disable.py:
Andrew Grigorev: 111
Paul Gauthier: 25
Paul Gauthier (aider): 3
grand_total:
Andrew Grigorev: 332
Andrew Grigorev (aider): 93
Andrey Popp: 217
Jon Keys: 14
Kay Gosho: 1
MDW: 53
Paul Gauthier: 497
Paul Gauthier (aider): 1275
Stefan Hladnik: 4
Stefan Hladnik (aider): 41
Titusz Pan: 12
cantalupo555: 1
zjy1412: 54
start_tag: v0.82.0
total_lines: 2594

View file

@ -1279,30 +1279,31 @@
seconds_per_case: 372.2
total_cost: 0.7603
- dirname: 2025-05-08-03-22-37--qwen3-235b-defaults
- dirname: 2025-05-09-17-02-02--qwen3-235b-a22b.unthink_16k_diff
test_cases: 225
model: Qwen3 235B A22B
model: Qwen3 235B A22B diff, no think, Alibaba API
edit_format: diff
commit_hash: aaacee5-dirty
pass_rate_1: 17.3
pass_rate_2: 49.8
pass_num_1: 39
pass_num_2: 112
percent_cases_well_formed: 91.6
error_outputs: 58
num_malformed_responses: 29
num_with_malformed_responses: 19
user_asks: 102
commit_hash: 91d7fbd-dirty
pass_rate_1: 28.9
pass_rate_2: 59.6
pass_num_1: 65
pass_num_2: 134
percent_cases_well_formed: 92.9
error_outputs: 22
num_malformed_responses: 22
num_with_malformed_responses: 16
user_asks: 111
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 0
completion_tokens: 0
prompt_tokens: 2816192
completion_tokens: 342062
test_timeouts: 1
total_tests: 225
command: aider --model openrouter/qwen/qwen3-235b-a22b
date: 2025-05-08
command: aider --model openai/qwen3-235b-a22b
date: 2025-05-09
versions: 0.82.4.dev
seconds_per_case: 428.1
total_cost: 1.8037
seconds_per_case: 45.4
total_cost: 0.0000

View file

@ -0,0 +1,272 @@
- dirname: 2025-05-08-03-20-24--qwen3-32b-default
test_cases: 225
model: Qwen3 32B diff on OpenRouter, all providers, default settings (thinking)
edit_format: diff
commit_hash: aaacee5-dirty, aeaf259
pass_rate_1: 14.2
pass_rate_2: 40.0
pass_num_1: 32
pass_num_2: 90
percent_cases_well_formed: 83.6
error_outputs: 119
num_malformed_responses: 50
num_with_malformed_responses: 37
user_asks: 97
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 12
prompt_tokens: 317591
completion_tokens: 120418
test_timeouts: 5
total_tests: 225
command: aider --model openrouter/qwen/qwen3-32b
date: 2025-05-08
versions: 0.82.4.dev
seconds_per_case: 372.2
total_cost: 0.7603
- dirname: 2025-05-08-03-22-37--qwen3-235b-defaults
test_cases: 225
model: Qwen3 235B A22B diff on OpenRouter, all providers, default settings (thinking)
edit_format: diff
commit_hash: aaacee5-dirty
pass_rate_1: 17.3
pass_rate_2: 49.8
pass_num_1: 39
pass_num_2: 112
percent_cases_well_formed: 91.6
error_outputs: 58
num_malformed_responses: 29
num_with_malformed_responses: 19
user_asks: 102
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 0
completion_tokens: 0
test_timeouts: 1
total_tests: 225
command: aider --model openrouter/qwen/qwen3-235b-a22b
date: 2025-05-08
versions: 0.82.4.dev
seconds_per_case: 428.1
total_cost: 1.8037
- dirname: 2025-05-08-17-39-14--qwen3-235b-or-together-only
test_cases: 225
model: Qwen3 235B A22B diff on OpenRouter only TogetherAI, recommended /no_think settings
edit_format: diff
commit_hash: 328584e
pass_rate_1: 28.0
pass_rate_2: 54.7
pass_num_1: 63
pass_num_2: 123
percent_cases_well_formed: 90.7
error_outputs: 39
num_malformed_responses: 32
num_with_malformed_responses: 21
user_asks: 106
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2816606
completion_tokens: 362346
test_timeouts: 2
total_tests: 225
command: aider --model openrouter/qwen/qwen3-235b-a22b
date: 2025-05-08
versions: 0.82.4.dev
seconds_per_case: 77.2
total_cost: 0.6399
- dirname: 2025-04-30-04-49-37--Qwen3-235B-A22B-whole-nothink
test_cases: 225
model: Qwen3-235B-A22B whole with VLLM, bfloat16, recommended /no_think settings
edit_format: whole
commit_hash: 0c383df-dirty
pass_rate_1: 28.0
pass_rate_2: 65.3
pass_num_1: 63
pass_num_2: 147
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 166
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 3
test_timeouts: 0
total_tests: 225
command: aider --model openai/Qwen3-235B-A22B
date: 2025-04-30
versions: 0.81.4.dev
seconds_per_case: 166.0
total_cost: 0.0000
- dirname: 2025-04-30-04-49-50--Qwen3-235B-A22B-diff-nothink
test_cases: 225
model: Qwen3-235B-A22B diff with VLLM, bfloat16, recommended /no_think settings
edit_format: diff
commit_hash: 0c383df-dirty
pass_rate_1: 29.8
pass_rate_2: 61.3
pass_num_1: 67
pass_num_2: 138
percent_cases_well_formed: 94.7
error_outputs: 25
num_malformed_responses: 25
num_with_malformed_responses: 12
user_asks: 97
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
total_tests: 225
command: aider --model openai/Qwen3-235B-A22B
date: 2025-04-30
versions: 0.81.4.dev
seconds_per_case: 158.2
total_cost: 0.0000
- dirname: 2025-04-30-04-08-41--Qwen3-32B-whole-nothink
test_cases: 225
model: Qwen3-32B whole with VLLM, bfloat16, recommended /no_think settings
edit_format: whole
commit_hash: 0c383df-dirty
pass_rate_1: 20.4
pass_rate_2: 45.8
pass_num_1: 46
pass_num_2: 103
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 94
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 3
test_timeouts: 5
total_tests: 225
command: aider --model openai/Qwen3-32B
date: 2025-04-30
versions: 0.81.4.dev
seconds_per_case: 48.1
total_cost: 0.0000
- dirname: 2025-04-30-04-08-51--Qwen3-32B-diff-nothink
test_cases: 225
model: Qwen3-32B diff with VLLM, bfloat16, recommended /no_think settings
edit_format: diff
commit_hash: 0c383df-dirty
pass_rate_1: 20.4
pass_rate_2: 41.3
pass_num_1: 46
pass_num_2: 93
percent_cases_well_formed: 94.2
error_outputs: 17
num_malformed_responses: 14
num_with_malformed_responses: 13
user_asks: 83
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 3
test_timeouts: 4
total_tests: 225
command: aider --model openai/Qwen3-32B
date: 2025-04-30
versions: 0.81.4.dev
seconds_per_case: 59.4
total_cost: 0.0000
- dirname: 2025-05-07-03-15-59--Qwen3-235B-A22B-Q5_K_M-whole-nothink
test_cases: 225
model: Qwen3-235B-A22B whole with llama.cpp, Q5_K_M (unsloth), recommended /no_think settings
edit_format: whole
commit_hash: 8159cbf
pass_rate_1: 27.1
pass_rate_2: 59.1
pass_num_1: 61
pass_num_2: 133
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 169
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
total_tests: 225
command: aider --model openai/Qwen3-235B-A22B-Q5_K_M
date: 2025-05-07
versions: 0.82.4.dev
seconds_per_case: 635.2
total_cost: 0.0000
- dirname: 2025-05-09-17-02-02--qwen3-235b-a22b.unthink_16k_diff
test_cases: 225
model: Qwen3 235B A22B diff, no think, via official Alibaba API
edit_format: diff
commit_hash: 91d7fbd-dirty
pass_rate_1: 28.9
pass_rate_2: 59.6
pass_num_1: 65
pass_num_2: 134
percent_cases_well_formed: 92.9
error_outputs: 22
num_malformed_responses: 22
num_with_malformed_responses: 16
user_asks: 111
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2816192
completion_tokens: 342062
test_timeouts: 1
total_tests: 225
command: aider --model openai/qwen3-235b-a22b
date: 2025-05-09
versions: 0.82.4.dev
seconds_per_case: 45.4
total_cost: 0.0000
- dirname: 2025-05-09-23-01-22--qwen3-235b-a22b.unthink_16k_whole
test_cases: 225
model: Qwen3 235B A22B whole, no think, via official Alibaba API
edit_format: whole
commit_hash: 425fb6d
pass_rate_1: 26.7
pass_rate_2: 61.8
pass_num_1: 60
pass_num_2: 139
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 175
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2768173
completion_tokens: 384000
test_timeouts: 1
total_tests: 225
command: aider --model openai/qwen3-235b-a22b
date: 2025-05-09
versions: 0.82.4.dev
seconds_per_case: 50.8
total_cost: 0.0000

View file

@ -2,7 +2,7 @@ If you need more help, please check our
[GitHub issues](https://github.com/Aider-AI/aider/issues)
and file a new issue if your problem isn't discussed.
Or drop into our
[Discord](https://discord.gg/Tv2uQnR88V)
[Discord](https://discord.gg/Y7X7bhMQFV)
to chat with us.
When reporting problems, it is very helpful if you can provide:

View file

@ -188,10 +188,15 @@ document.addEventListener('DOMContentLoaded', function() {
// Update the leaderboard title based on mode and selection
if (leaderboardTitle) {
if (currentMode === 'view' && selectedRows.size > 0) {
leaderboardTitle.textContent = filteredTitle;
// Check if a custom title is provided globally
if (typeof LEADERBOARD_CUSTOM_TITLE !== 'undefined' && LEADERBOARD_CUSTOM_TITLE) {
leaderboardTitle.textContent = LEADERBOARD_CUSTOM_TITLE;
} else {
leaderboardTitle.textContent = defaultTitle;
if (currentMode === 'view' && selectedRows.size > 0) {
leaderboardTitle.textContent = filteredTitle;
} else {
leaderboardTitle.textContent = defaultTitle;
}
}
}

View file

@ -3,5 +3,5 @@
Aider is on
<a href="https://github.com/Aider-AI/aider">GitHub</a>
and
<a href="https://discord.gg/Tv2uQnR88V">Discord</a>.
<a href="https://discord.gg/Y7X7bhMQFV">Discord</a>.
</footer>

View file

@ -0,0 +1,365 @@
---
layout: post
title: Qwen3 benchmark results
excerpt: "Benchmark results for Qwen3 models using the Aider polyglot coding benchmark."
highlight_image: /assets/2025-05-08-qwen3.jpg
date: 2025-05-08
---
# Qwen3 results on the aider polyglot benchmark
As [previously discussed when Qwen2.5 was released](/2024/11/21/quantization.html),
details matter when working with open source models for AI coding.
Proprietary models are served by their creators or trusted providers with stable inference settings.
Open source models are wonderful because anyone can serve them,
but API providers can use very different inference settings, quantizations, etc.
Below is a collection of aider polyglot benchmark results for the new Qwen3 models.
Results are presented using both "diff" and "whole"
[edit formats](https://aider.chat/docs/more/edit-formats.html),
with various model settings, against various API providers.
See details on the
[model settings](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
used after the results table.
{: .note }
This article is being updated as new results become available.
Also, some results were submitted by aider users and have not been verified.
<h2 id="leaderboard-title">Qwen3 results on the aider polyglot benchmark</h2>
<div id="controls-container" style="display: flex; align-items: center; width: 100%; max-width: 800px; margin: 10px auto; gap: 10px; box-sizing: border-box; padding: 0 5px; position: relative;">
<input type="text" id="editSearchInput" placeholder="Search..." style="flex-grow: 1; padding: 8px; border: 1px solid #ddd; border-radius: 4px;">
<div id="view-mode-toggle" style="display: inline-flex; border: 1px solid #ccc; border-radius: 4px;">
<button id="mode-view-btn" class="mode-button active" data-mode="view" style="padding: 8px 8px; border: none; border-radius: 3px 0 0 3px; cursor: pointer; font-size: 14px; line-height: 1.5; min-width: 50px;">View</button>
<button id="mode-select-btn" class="mode-button" data-mode="select" style="padding: 8px 8px; border: none; background-color: #f8f9fa; border-radius: 0; cursor: pointer; border-left: 1px solid #ccc; font-size: 14px; line-height: 1.5; min-width: 50px;">Select</button>
<button id="mode-detail-btn" class="mode-button" data-mode="detail" style="padding: 8px 8px; border: none; background-color: #f8f9fa; border-radius: 0 3px 3px 0; cursor: pointer; border-left: 1px solid #ccc; font-size: 14px; line-height: 1.5; min-width: 50px;">Detail</button>
</div>
<button id="close-controls-btn" style="width: 18px; height: 18px; padding: 0; border: 1px solid #ddd; border-radius: 50%; background-color: transparent; cursor: pointer; display: flex; align-items: center; justify-content: center; font-size: 12px; margin-left: 4px; color: #999;">×</button>
</div>
<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
<thead style="background-color: #f2f2f2;">
<tr>
<th style="padding: 8px; width: 40px; text-align: center; vertical-align: middle;">
<input type="checkbox" id="select-all-checkbox" style="display: none; cursor: pointer; vertical-align: middle;">
</th> <!-- Header checkbox added here -->
<th style="padding: 8px; text-align: left;">Model</th>
<th style="padding: 8px; text-align: center; width: 25%">Percent correct</th>
<th style="padding: 8px; text-align: center; width: 25%">Cost</th>
<th style="padding: 8px; text-align: left;" class="col-command">Command</th>
<th style="padding: 8px; text-align: center; width: 10%" class="col-conform">Correct edit format</th>
<th style="padding: 8px; text-align: left; width: 10%" class="col-edit-format">Edit Format</th>
</tr>
</thead>
<tbody>
{% assign max_cost = 0 %}
{% for row in site.data.qwen3_leaderboard %}
{% if row.total_cost > max_cost %}
{% assign max_cost = row.total_cost %}
{% endif %}
{% endfor %}
{% if max_cost == 0 %}{% assign max_cost = 1 %}{% endif %}
{% assign edit_sorted = site.data.qwen3_leaderboard | sort: 'pass_rate_2' | reverse %}
{% for row in edit_sorted %} {% comment %} Add loop index for unique IDs {% endcomment %}
{% assign row_index = forloop.index0 %}
<tr id="main-row-{{ row_index }}">
<td style="padding: 8px; text-align: center; vertical-align: middle;">
<button class="toggle-details" data-target="details-{{ row_index }}" style="background: none; border: none; cursor: pointer; font-size: 16px; padding: 0; vertical-align: middle;"></button>
<input type="checkbox" class="row-selector" data-row-index="{{ row_index }}" style="display: none; cursor: pointer; vertical-align: middle;">
</td>
<td style="padding: 8px;"><span>{{ row.model }}</span></td>
<td class="bar-cell">
<div class="bar-viz" style="width: {{ row.pass_rate_2 }}%; background-color: rgba(40, 167, 69, 0.3); border-right: 1px solid rgba(40, 167, 69, 0.5);"></div>
<span>{{ row.pass_rate_2 }}%</span>
</td>
<td class="bar-cell cost-bar-cell">
{% if row.total_cost > 0 %}
<div class="bar-viz cost-bar" data-cost="{{ row.total_cost }}" data-max-cost="{{ max_cost }}" style="width: 0%; background-color: rgba(13, 110, 253, 0.3); border-right: 1px solid rgba(13, 110, 253, 0.5);"></div>
{% endif %}
{% assign rounded_cost = row.total_cost | times: 1.0 | round: 2 %}
<span>{% if row.total_cost == 0 or rounded_cost == 0.00 %}{% else %}${{ rounded_cost }}{% endif %}</span>
</td>
<td style="padding: 8px;" class="col-command"><span><code>{{ row.command }}</code></span></td>
<td style="padding: 8px; text-align: center;" class="col-conform"><span>{{ row.percent_cases_well_formed }}%</span></td>
<td style="padding: 8px;" class="col-edit-format"><span>{{ row.edit_format }}</span></td>
</tr>
<tr class="details-row" id="details-{{ row_index }}" style="display: none; background-color: #f9f9f9;">
<td colspan="7" style="padding: 15px; border-bottom: 1px solid #ddd;">
<ul style="margin: 0; padding-left: 20px; list-style: none; border-bottom: 1px solid #ddd;">
{% for pair in row %}
{% if pair[1] != "" and pair[1] != nil %}
<li><strong>
{% if pair[0] == 'percent_cases_well_formed' %}
Percent cases well formed
{% else %}
{{ pair[0] | replace: '_', ' ' | capitalize }}
{% endif %}
:</strong>
{% if pair[0] == 'command' %}<code>{{ pair[1] }}</code>{% else %}{{ pair[1] }}{% endif %}
</li>
{% endif %}
{% endfor %}
</ul>
</td>
</tr>
{% endfor %}
</tbody>
</table>
<style>
#leaderboard-title {
margin-bottom: 20px; /* Add space below the title */
}
tr.selected {
color: #0056b3;
}
table {
table-layout: fixed;
}
thead {
border-top: 1px solid #ddd; /* Add top border to header */
}
td, th {
border: none; /* Remove internal cell borders */
word-wrap: break-word;
overflow-wrap: break-word;
vertical-align: middle; /* Ensure consistent vertical alignment */
}
tbody tr {
height: 50px; /* Set a minimum height for all data rows */
}
td.col-command { /* Command column */
font-size: 12px; /* Keep font size adjustment for command column if desired, or remove */
}
/* Hide new columns first on smaller screens */
@media screen and (max-width: 991px) {
th.col-conform, td.col-conform,
th.col-edit-format, td.col-edit-format {
display: none;
}
/* Increase width of Percent correct and Cost columns when others are hidden */
th:nth-child(3), td:nth-child(3), /* Percent correct */
th:nth-child(4), td:nth-child(4) { /* Cost */
width: 33% !important; /* Override inline style */
}
}
/* Hide command column on even smaller screens */
@media screen and (max-width: 767px) {
th.col-command, td.col-command { /* Command column */
display: none;
}
}
/* --- Control Styles --- */
#controls-container {
margin-bottom: 20px; /* Add some space below controls */
}
#editSearchInput, #view-mode-select {
padding: 8px 12px; /* Consistent padding */
border: 1px solid #ccc; /* Slightly softer border */
border-radius: 4px;
font-size: 14px; /* Match table font size */
height: 38px; /* Match height */
box-sizing: border-box; /* Include padding/border in height */
}
.bar-cell {
position: relative; /* Positioning context for the bar */
padding: 8px;
/* text-align: center; Removed */
overflow: hidden; /* Prevent bar from overflowing cell boundaries if needed */
}
.cost-bar-cell {
background-image: none; /* Remove default gradient for cost cells */
}
.percent-tick, .cost-tick {
position: absolute;
top: 50%;
transform: translateY(10px);
height: 8px; /* Short tick */
width: 1px;
background-color: rgba(170, 170, 170, 0.5);
z-index: 2; /* Above the bar but below the text */
}
.bar-viz {
position: absolute;
left: 0;
top: 50%; /* Position at the middle of the cell */
transform: translateY(-50%); /* Center the bar vertically */
z-index: 1; /* Above background, below ticks and text */
height: 36px;
border-radius: 0 2px 2px 0; /* Slightly rounded end corners */
/* Width and colors are set inline via style attribute */
}
/* Add a tooltip class for showing cost information on hover */
.cost-bar-cell:hover .bar-viz[style*="background-image"] {
animation: stripe-animation 2s linear infinite;
}
@keyframes stripe-animation {
0% { background-position: 0 0; }
100% { background-position: 20px 0; }
}
.bar-cell span {
position: absolute; /* Position relative to the cell */
left: 5px; /* Position slightly inside the left edge */
top: 50%; /* Center vertically */
transform: translateY(-50%); /* Adjust vertical centering */
z-index: 3; /* Ensure text is above everything else */
background-color: rgba(255, 255, 255, 0.7); /* Semi-transparent white background */
padding: 0 4px; /* Add padding around the text */
border-radius: 3px; /* Rounded corners for the text background */
font-size: 14px; /* Adjust font size for the numbers */
}
.toggle-details {
color: #888; /* Make toggle symbol more subtle */
transition: color 0.2s; /* Smooth transition on hover */
}
/* Style for selected rows */
tr.row-selected > td {
background-color: #e7f3ff; /* Example light blue highlight */
}
/* Ensure checkbox is vertically aligned if needed */
.row-selector {
vertical-align: middle;
}
/* Hide rows not matching the filter */
tr.hidden-by-mode {
display: none !important; /* Use important to override other display styles if necessary */
}
tr.hidden-by-search {
display: none !important;
}
/* --- Mode Toggle Button Styles --- */
#view-mode-toggle {
height: 38px; /* Match input height */
box-sizing: border-box;
flex-shrink: 0; /* Prevent toggle from shrinking on small screens */
}
.mode-button {
transition: background-color 0.2s ease-in-out, color 0.2s ease-in-out;
white-space: nowrap; /* Prevent text wrapping */
}
.mode-button:not(.active) {
background-color: #f8f9fa; /* Light grey background */
color: #495057; /* Dark grey text */
}
.mode-button:not(.active):hover {
background-color: #e2e6ea; /* Slightly darker grey on hover */
}
/* Style for highlighted rows in view mode */
tr.view-highlighted > td {
background-color: #fffef5; /* Very light yellow/cream */
/* Border moved to specific cell below */
}
/* Apply border and adjust padding ONLY for the first *visible* cell (Model name) in view mode */
tr.view-highlighted > td:nth-child(2) {
border-left: 4px solid #ffc107; /* Warning yellow border */
/* Original padding is 8px. Subtract border width. */
padding-left: 4px;
}
</style>
<script>
const LEADERBOARD_CUSTOM_TITLE = "Qwen3 results on the aider polyglot benchmark";
{% include leaderboard_table.js %}
</script>
## No think, via official Alibaba API
These results were obtained by running against `https://dashscope.aliyuncs.com/compatible-mode/v1`
with thinking disabled.
```bash
export OPENAI_API_BASE=https://dashscope.aliyuncs.com/compatible-mode/v1
export OPENAI_API_KEY=<key>
```
```yaml
- name: openai/qwen3-235b-a22b
use_temperature: 0.7
streaming: false
extra_params:
stream: false
max_tokens: 16384
top_p: 0.8
top_k: 20
temperature: 0.7
enable_thinking: false
extra_body:
enable_thinking: false
```
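At the wire level, these settings amount to an OpenAI-style chat-completions request with thinking disabled. A minimal Python sketch of the implied request body follows; the field names simply mirror the YAML above (`enable_thinking` is the DashScope-specific flag passed via `extra_body`), and the exact payload aider sends may differ:

```python
import json

# Sketch of the request body implied by the settings above.
# Field names mirror the YAML; "enable_thinking" is DashScope-specific.
payload = {
    "model": "qwen3-235b-a22b",
    "stream": False,           # streaming: false
    "max_tokens": 16384,
    "top_p": 0.8,
    "top_k": 20,
    "temperature": 0.7,
    "enable_thinking": False,  # sent via extra_body
    "messages": [{"role": "user", "content": "Say hello."}],
}
print(json.dumps(payload, indent=2))
```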
## OpenRouter only TogetherAI, recommended /no_think settings
These results were obtained with the
[recommended](https://huggingface.co/Qwen/Qwen3-235B-A22B#best-practices)
non-thinking model settings in `.aider.model.settings.yml`:
```yaml
- name: openrouter/qwen/qwen3-235b-a22b
system_prompt_prefix: "/no_think"
use_temperature: 0.7
extra_params:
max_tokens: 24000
top_p: 0.8
top_k: 20
min_p: 0.0
temperature: 0.7
extra_body:
provider:
order: ["Together"]
```
And then running aider:
```bash
aider --model openrouter/qwen/qwen3-235b-a22b
```
## OpenRouter, all providers, default settings (thinking)
These results were obtained by simply running aider as shown below, without any model-specific settings.
This should have enabled thinking, assuming upstream API providers honor that convention for Qwen3.
```bash
aider --model openrouter/qwen/qwen3-xxx
```
## VLLM, bfloat16, recommended /no_think
These [benchmark results were obtained by GitHub user AlongWY](https://github.com/Aider-AI/aider/pull/3908)
with the
[recommended](https://huggingface.co/Qwen/Qwen3-235B-A22B#best-practices)
non-thinking model settings in `.aider.model.settings.yml`:
```yaml
- name: openai/<model-name>
system_prompt_prefix: "/no_think"
use_temperature: 0.7
extra_params:
max_tokens: 24000
top_p: 0.8
top_k: 20
min_p: 0.0
temperature: 0.7
```
And then running aider:
```bash
aider --model openai/<model-name> --openai-api-base <url>
```

Binary image file added (221 KiB); diff not shown.

File diff suppressed because it is too large.

View file

@ -428,6 +428,9 @@
## Specify which editor to use for the /editor command
#editor: xxx
## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
#shell-completions: xxx
############################
# Deprecated model settings:

View file

@ -396,6 +396,9 @@
## Specify which editor to use for the /editor command
#AIDER_EDITOR=
## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
#AIDER_SHELL_COMPLETIONS=
############################
# Deprecated model settings:

View file

@ -1224,7 +1224,7 @@ cog.out("```\n")
max_tokens: 8192
caches_by_default: true
- name: openrouter/google/gemini-2.5-pro-exp-03-25:free
- name: openrouter/google/gemini-2.5-pro-exp-03-25
edit_format: diff-fenced
weak_model_name: openrouter/google/gemini-2.0-flash-exp:free
use_repo_map: true

View file

@ -482,6 +482,9 @@ cog.outl("```")
## Specify which editor to use for the /editor command
#editor: xxx
## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
#shell-completions: xxx
############################
# Deprecated model settings:

View file

@ -436,6 +436,9 @@ cog.outl("```")
## Specify which editor to use for the /editor command
#AIDER_EDITOR=
## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
#AIDER_SHELL_COMPLETIONS=
############################
# Deprecated model settings:

View file

@ -82,9 +82,10 @@ usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
[--multiline | --no-multiline]
[--notifications | --no-notifications]
[--notifications-command]
[--detect-urls | --no-detect-urls] [--editor] [--opus]
[--sonnet] [--haiku] [--4] [--4o] [--mini] [--4-turbo]
[--35turbo] [--deepseek] [--o1-mini] [--o1-preview]
[--detect-urls | --no-detect-urls] [--editor]
[--shell-completions] [--opus] [--sonnet] [--haiku]
[--4] [--4o] [--mini] [--4-turbo] [--35turbo]
[--deepseek] [--o1-mini] [--o1-preview]
```
@ -767,6 +768,10 @@ Aliases:
Specify which editor to use for the /editor command
Environment variable: `AIDER_EDITOR`
### `--shell-completions SHELL`
Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
Environment variable: `AIDER_SHELL_COMPLETIONS`
## Deprecated model settings:
### `--opus`

View file

@ -264,17 +264,9 @@ tr:hover { background-color: #f5f5f5; }
</style>
<table>
<tr><th>Model Name</th><th class='right'>Total Tokens</th><th class='right'>Percent</th></tr>
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>668,483</td><td class='right'>55.7%</td></tr>
<tr><td>gemini/gemini-2.5-pro-preview-03-25</td><td class='right'>442,019</td><td class='right'>36.9%</td></tr>
<tr><td>o3</td><td class='right'>52,666</td><td class='right'>4.4%</td></tr>
<tr><td>gemini/gemini-2.5-pro-preview-05-06</td><td class='right'>18,654</td><td class='right'>1.6%</td></tr>
<tr><td>openrouter/REDACTED</td><td class='right'>15,587</td><td class='right'>1.3%</td></tr>
<tr><td>gemini/REDACTED</td><td class='right'>1,989</td><td class='right'>0.2%</td></tr>
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>1,063,656</td><td class='right'>86.6%</td></tr>
<tr><td>o3</td><td class='right'>164,724</td><td class='right'>13.4%</td></tr>
</table>
{: .note :}
Some models show as REDACTED, because they are new or unpopular models.
Aider's analytics only records the names of "well known" LLMs.
<!--[[[end]]]-->
## How are the "aider wrote xx% of code" stats computed?

View file

@ -28,12 +28,6 @@ These one-liners will install aider, along with python 3.12 if needed.
They are based on the
[uv installers](https://docs.astral.sh/uv/getting-started/installation/).
#### Windows
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://aider.chat/install.ps1 | iex"
```
#### Mac & Linux
Use curl to download the script and execute it with sh:
@ -48,6 +42,12 @@ If your system doesn't have curl, you can use wget:
wget -qO- https://aider.chat/install.sh | sh
```
#### Windows
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://aider.chat/install.ps1 | iex"
```
## Install with uv
@ -55,7 +55,7 @@ You can install aider with uv:
```bash
python -m pip install uv # If you need to install uv
uv tool install --force --python python3.12 aider-chat@latest
uv tool install --force --python python3.12 --with pip aider-chat@latest
```
This will install uv using your existing python version 3.8-3.13,

View file

@ -285,6 +285,6 @@ mod_dates = [get_last_modified_date(file) for file in files]
latest_mod_date = max(mod_dates)
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
]]]-->
May 08, 2025.
May 09, 2025.
<!--[[[end]]]-->
</p>

View file

@ -9,8 +9,7 @@ nav_order: 800
All pricing information is the cost to run the benchmark at the time it was
run.
Providers change their pricing, and every benchmark run ends up with a slightly
different cost.
Providers change their pricing and sometimes introduce entirely novel pricing structures.
Pricing is provided on a *best efforts* basis, and may not always be current
or fully accurate.

View file

@ -0,0 +1,105 @@
---
parent: Connecting to LLMs
nav_order: 510
---
# GitHub Copilot
Aider can connect to GitHub Copilot's LLMs because Copilot exposes a standard **OpenAI-style**
endpoint at:
```
https://api.githubcopilot.com
```
First, install aider:
{% include install.md %}
---
## Configure your environment
```bash
# macOS/Linux
export OPENAI_API_BASE=https://api.githubcopilot.com
export OPENAI_API_KEY=<oauth_token>
# Windows (PowerShell)
setx OPENAI_API_BASE https://api.githubcopilot.com
setx OPENAI_API_KEY <oauth_token>
# …restart the shell after setx commands
```
---
### Where do I get the token?
The easiest path is to sign in to Copilot from any JetBrains IDE (PyCharm, GoLand, etc.).
After you authenticate, a file appears:
```
~/.config/github-copilot/apps.json
```
Copy the `oauth_token` value; that string is your `OPENAI_API_KEY`.
*Note:* tokens created by the Neovim **copilot.lua** plugin (old `hosts.json`) sometimes lack the
needed scopes. If you see “access to this endpoint is forbidden”, regenerate the token with a
JetBrains IDE or the VS Code Copilot extension.
---
## Discover available models
Copilot hosts many models (OpenAI, Anthropic, Google, etc).
List the models your subscription allows with:
```bash
curl -s https://api.githubcopilot.com/models \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "Copilot-Integration-Id: vscode-chat" | jq -r '.data[].id'
```
Each returned ID can be used with aider by **prefixing it with `openai/`**:
```bash
aider --model openai/gpt-4o
# or
aider --model openai/claude-3.7-sonnet-thought
```
---
## Quick start
```bash
# change into your project
cd /to/your/project
# talk to Copilot
aider --model openai/gpt-4o
```
---
## Optional config file (`~/.aider.conf.yml`)
```yaml
openai-api-base: https://api.githubcopilot.com
openai-api-key: "<oauth_token>"
model: openai/gpt-4o
weak-model: openai/gpt-4o-mini
show-model-warnings: false
```
---
## FAQ
* Calls made through aider are billed through your Copilot subscription
(aider will still print *estimated* costs).
* The Copilot docs explicitly allow third-party “agents” that hit this API, so aider is playing by
  the rules.
* Aider talks directly to the REST endpoint; no web-UI scraping or browser automation.

View file

@ -27,7 +27,7 @@ layout: none
<a href="#features">Features</a>
<a href="#getting-started">Getting Started</a>
<a href="/docs/">Documentation</a>
<a href="https://discord.gg/Tv2uQnR88V">Discord</a>
<a href="https://discord.gg/Y7X7bhMQFV">Discord</a>
<a href="https://github.com/Aider-AI/aider">GitHub</a>
</div>
</nav>
@ -85,7 +85,7 @@ cog.out(text)
</a>
<a href="/HISTORY.html" class="github-badge badge-coded" title="Percentage of the new code in Aider's last release written by Aider itself">
<span class="badge-label">🔄 Singularity</span>
<span class="badge-value">92%</span>
<span class="badge-value">54%</span>
</a>
<!--[[[end]]]-->
</div>
@ -269,173 +269,178 @@ cog.out(text)
<script>
const testimonials = [
{
text: "My life has changed... There's finally an AI coding tool that's good enough to keep up with me... Aider... It's going to rock your world.",
author: "Eric S. Raymond",
text: "My life has changed... Aider... It's going to rock your world.",
author: "Eric S. Raymond on X",
link: "https://x.com/esrtweet/status/1910809356381413593"
},
{
text: "The best free open source AI coding assistant.",
author: "IndyDevDan",
author: "IndyDevDan on YouTube",
link: "https://youtu.be/YALpX8oOn78"
},
{
text: "The best AI coding assistant so far.",
author: "Matthew Berman",
author: "Matthew Berman on YouTube",
link: "https://www.youtube.com/watch?v=df8afeb1FY8"
},
{
text: "Aider ... has easily quadrupled my coding productivity.",
author: "SOLAR_FIELDS",
author: "SOLAR_FIELDS on Hacker News",
link: "https://news.ycombinator.com/item?id=36212100"
},
{
text: "It's a cool workflow... Aider's ergonomics are perfect for me.",
author: "qup",
author: "qup on Hacker News",
link: "https://news.ycombinator.com/item?id=38185326"
},
{
text: "It's really like having your senior developer live right in your Git repo - truly amazing!",
author: "rappster",
author: "rappster on GitHub",
link: "https://github.com/Aider-AI/aider/issues/124"
},
{
text: "What an amazing tool. It's incredible.",
author: "valyagolev",
author: "valyagolev on GitHub",
link: "https://github.com/Aider-AI/aider/issues/6#issue-1722897858"
},
{
text: "Aider is such an astounding thing!",
author: "cgrothaus",
author: "cgrothaus on GitHub",
link: "https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700"
},
{
text: "It was WAY faster than I would be getting off the ground and making the first few working versions.",
author: "Daniel Feldman",
author: "Daniel Feldman on X",
link: "https://twitter.com/d_feldman/status/1662295077387923456"
},
{
text: "THANK YOU for Aider! It really feels like a glimpse into the future of coding.",
author: "derwiki",
author: "derwiki on Hacker News",
link: "https://news.ycombinator.com/item?id=38205643"
},
{
text: "It's just amazing. It is freeing me to do things I felt were out my comfort zone before.",
author: "Dougie",
author: "Dougie on Discord",
link: "https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656"
},
{
text: "This project is stellar.",
author: "funkytaco",
author: "funkytaco on GitHub",
link: "https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008"
},
{
text: "Amazing project, definitely the best AI coding assistant I've used.",
author: "joshuavial",
author: "joshuavial on GitHub",
link: "https://github.com/Aider-AI/aider/issues/84"
},
{
text: "I absolutely love using Aider ... It makes software development feel so much lighter as an experience.",
author: "principalideal0",
author: "principalideal0 on Discord",
link: "https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468"
},
{
text: "I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.",
author: "codeninja",
text: "I have been recovering from ... surgeries ... aider ... has allowed me to continue productivity.",
author: "codeninja on Reddit",
link: "https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG"
},
{
text: "I am an aider addict. I'm getting so much more work done, but in less time.",
author: "dandandan",
author: "dandandan on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470"
},
{
text: "After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.",
author: "SystemSculpt",
text: "Aider... blows everything else out of the water hands down, there's no competition whatsoever.",
author: "SystemSculpt on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548"
},
{
text: "Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing.",
author: "Josh Dingus",
author: "Josh Dingus on Discord",
link: "https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548"
},
{
text: "Hands down, this is the best AI coding assistant tool so far.",
author: "IndyDevDan",
author: "IndyDevDan on YouTube",
link: "https://www.youtube.com/watch?v=MPYFPvxfGZs"
},
{
text: "[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life.",
author: "maledorak",
text: "[Aider] changed my daily coding workflows. It's mind-blowing how ...(it)... can change your life.",
author: "maledorak on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264"
},
{
text: "Best agent for actual dev work in existing codebases.",
author: "Nick Dobos",
author: "Nick Dobos on X",
link: "https://twitter.com/NickADobos/status/1690408967963652097?s=20"
},
{
text: "One of my favorite pieces of software. Blazing trails on new paradigms!",
author: "Chris Wall",
author: "Chris Wall on X",
link: "https://x.com/chris65536/status/1905053299251798432"
},
{
text: "Aider has been revolutionary for me and my work.",
author: "Starry Hope",
author: "Starry Hope on X",
link: "https://x.com/starryhopeblog/status/1904985812137132056"
},
{
text: "Try aider! One of the best ways to vibe code.",
author: "Chris Wall",
author: "Chris Wall on X",
link: "https://x.com/Chris65536/status/1905053418961391929"
},
{
text: "Aider is hands down the best. And it's free and opensource.",
author: "AriyaSavakaLurker",
author: "AriyaSavakaLurker on Reddit",
link: "https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/"
},
{
text: "Aider is also my best friend.",
author: "jzn21",
author: "jzn21 on Reddit",
link: "https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/"
},
{
text: "Try Aider, it's worth it.",
author: "jorgejhms",
author: "jorgejhms on Reddit",
link: "https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/"
},
{
text: "I like aider :)",
author: "Chenwei Cui",
author: "Chenwei Cui on X",
link: "https://x.com/ccui42/status/1904965344999145698"
},
{
text: "Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes to your codebase all while keeping the developer in control.",
author: "Reilly Sweetland",
text: "Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes ... while keeping the developer in control.",
author: "Reilly Sweetland on X",
link: "https://x.com/rsweetland/status/1904963807237259586"
},
{
text: "Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot.",
author: "autopoietist",
author: "autopoietist on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101"
},
{
text: "Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone.",
author: "Joshua D Vander Hook",
author: "Joshua D Vander Hook on X",
link: "https://x.com/jodavaho/status/1911154899057795218"
},
{
text: "thanks to aider, i have started and finished three personal projects within the last two days",
author: "joseph stalzyn",
author: "joseph stalzyn on X",
link: "https://x.com/anitaheeder/status/1908338609645904160"
},
{
text: "Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words.",
author: "koleok",
author: "koleok on Discord",
link: "https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783"
},
{
text: "Aider ... is the tool to benchmark against.",
author: "BeetleB on Hacker News",
link: "https://news.ycombinator.com/item?id=43930201"
},
{
text: "aider is really cool",
author: "kache (@yacineMTB)",
author: "kache on X",
link: "https://x.com/yacineMTB/status/1911224442430124387"
}
];
@ -636,7 +641,7 @@ const testimonials = [
<ul class="info-list">
<li><a href="/docs/leaderboards/">LLM Leaderboards</a></li>
<li><a href="https://github.com/Aider-AI/aider">GitHub Repository</a></li>
<li><a href="https://discord.gg/Tv2uQnR88V">Discord Community</a></li>
<li><a href="https://discord.gg/Y7X7bhMQFV">Discord Community</a></li>
<li><a href="/blog/">Blog</a></li>
</ul>
</div>
@ -649,7 +654,7 @@ const testimonials = [
<div class="footer-links">
<a href="/docs/install.html">Documentation</a>
<a href="https://github.com/Aider-AI/aider">GitHub</a>
<a href="https://discord.gg/Tv2uQnR88V">Discord</a>
<a href="https://discord.gg/Y7X7bhMQFV">Discord</a>
<a href="/blog/">Blog</a>
</div>
</div>

View file

@ -425,7 +425,7 @@ function Invoke-Installer($artifacts, $platforms) {
Write-Information ""
Write-Information "Installing aider-chat..."
& "$dest_dir\uv.exe" tool install --force --python python3.12 aider-chat@latest
& "$dest_dir\uv.exe" tool install --force --python python3.12 --with pip aider-chat@latest
if (-not $NoModifyPath) {
Add-Ci-Path $dest_dir

View file

@ -1178,7 +1178,7 @@ install() {
say "Installing aider..."
say ""
# Install aider-chat using the newly installed uv
ensure "${_install_dir}/uv" tool install --force --python python3.12 aider-chat@latest
ensure "${_install_dir}/uv" tool install --force --python python3.12 --with pip aider-chat@latest
# Avoid modifying the users PATH if they are managing their PATH manually
case :$PATH:

View file

@ -784,7 +784,7 @@ def run_test_real(
instructions += prompts.instructions_addendum.format(file_list=file_list)
io = InputOutput(
pretty=True,
pretty=False,
yes=True,
chat_history_file=history_fname,
)

View file

@ -132,7 +132,7 @@ def find_non_self_methods(path):
with open(filename, "r") as file:
try:
node = ast.parse(file.read(), filename=filename)
except:
except: # noqa: E722
pass
checker = SelfUsageChecker()
checker.visit(node)

View file

@ -12,11 +12,10 @@ classifiers = [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python",
"Topic :: Software Development",
]
requires-python = ">=3.9,<3.13"
requires-python = ">=3.10,<3.13"
dynamic = ["dependencies", "optional-dependencies", "version"]
[project.urls]
@ -48,5 +47,5 @@ build-backend = "setuptools.build_meta"
write_to = "aider/_version.py"
[tool.codespell]
skip = "*.svg,Gemfile.lock"
skip = "*.svg,Gemfile.lock,tests/fixtures/*,aider/website/assets/*"
write-changes = true

View file

@ -260,7 +260,7 @@ multidict==6.4.3
# -c requirements/common-constraints.txt
# aiohttp
# yarl
networkx==3.2.1
networkx==3.4.2
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -431,7 +431,11 @@ rsa==4.9.1
# via
# -c requirements/common-constraints.txt
# google-auth
scipy==1.13.1
scipy==1.15.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
shtab==1.7.2
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in

View file

@ -160,7 +160,7 @@ googleapis-common-protos==1.70.0
# via
# google-api-core
# grpcio-status
greenlet==3.2.1
greenlet==3.2.2
# via
# playwright
# sqlalchemy
@ -258,7 +258,7 @@ markupsafe==3.0.2
# via jinja2
marshmallow==3.26.1
# via dataclasses-json
matplotlib==3.10.1
matplotlib==3.10.3
# via -r requirements/requirements-dev.in
mccabe==0.7.0
# via flake8
@ -280,11 +280,11 @@ multiprocess==0.70.18
# via pathos
mypy-extensions==1.1.0
# via typing-inspect
narwhals==1.38.0
narwhals==1.38.2
# via altair
nest-asyncio==1.6.0
# via llama-index-core
networkx==3.2.1
networkx==3.4.2
# via
# -r requirements/requirements.in
# llama-index-core
@ -490,7 +490,7 @@ safetensors==0.5.3
# via transformers
scikit-learn==1.6.1
# via sentence-transformers
scipy==1.13.1
scipy==1.15.3
# via
# -r requirements/requirements.in
# scikit-learn
@ -503,6 +503,8 @@ setuptools==80.3.1
# via pip-tools
shellingham==1.5.4
# via typer
shtab==1.7.2
# via -r requirements/requirements.in
six==1.17.0
# via
# mixpanel
@ -619,7 +621,7 @@ uv==0.7.3
# via -r requirements/requirements-dev.in
uvicorn==0.34.2
# via mcp
virtualenv==20.31.1
virtualenv==20.31.2
# via pre-commit
watchfiles==1.0.5
# via -r requirements/requirements.in

View file

@ -58,7 +58,7 @@ markupsafe==3.0.2
# via
# -c requirements/common-constraints.txt
# jinja2
narwhals==1.38.0
narwhals==1.38.2
# via
# -c requirements/common-constraints.txt
# altair

View file

@ -127,7 +127,7 @@ markdown-it-py==3.0.0
# via
# -c requirements/common-constraints.txt
# rich
matplotlib==3.10.1
matplotlib==3.10.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
@ -301,7 +301,7 @@ uv==0.7.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
virtualenv==20.31.1
virtualenv==20.31.2
# via
# -c requirements/common-constraints.txt
# pre-commit

View file

@ -81,7 +81,7 @@ fsspec==2025.3.2
# huggingface-hub
# llama-index-core
# torch
greenlet==3.2.1
greenlet==3.2.2
# via
# -c requirements/common-constraints.txt
# sqlalchemy
@ -163,7 +163,7 @@ nest-asyncio==1.6.0
# via
# -c requirements/common-constraints.txt
# llama-index-core
networkx==3.2.1
networkx==3.4.2
# via
# -c requirements/common-constraints.txt
# llama-index-core
@ -236,7 +236,7 @@ scikit-learn==1.6.1
# via
# -c requirements/common-constraints.txt
# sentence-transformers
scipy==1.13.1
scipy==1.15.3
# via
# -c requirements/common-constraints.txt
# scikit-learn

View file

@ -1,6 +1,6 @@
# This file was autogenerated by uv via the following command:
# uv pip compile --no-strip-extras --constraint=requirements/common-constraints.txt --output-file=requirements/requirements-playwright.txt requirements/requirements-playwright.in
greenlet==3.2.1
greenlet==3.2.2
# via
# -c requirements/common-constraints.txt
# playwright

View file

@ -27,6 +27,7 @@ psutil
watchfiles
socksio
pillow
shtab
oslex
google-generativeai
mcp>=1.0.0
@ -35,14 +36,12 @@ mcp>=1.0.0
# in matplotlib and a bunch of other deps
# https://github.com/networkx/networkx/blob/d7132daa8588f653eacac7a5bae1ee85a183fa43/pyproject.toml#L57
# We really only need networkx itself and scipy for the repomap.
# Pin below v3.3 to retain python 3.9 compatibility.
networkx<3.3
networkx
# This is the one networkx dependency that we need.
# Including it here explicitly because we
# didn't specify networkx[default] above.
# Pin below 1.14 to retain python 3.9 compatibility.
scipy<1.14
scipy
# GitHub Release action failing on "KeyError: 'home-page'"
# https://github.com/pypa/twine/blob/6fbf880ee60915cf1666348c4bdd78a10415f2ac/twine/__init__.py#L40

View file

@ -1,4 +1,5 @@
#!/usr/bin/env python
# flake8: noqa: E501
"""
Generate a celebratory SVG image for Aider reaching 30,000 GitHub stars.
This creates a shareable social media graphic with confetti animation.
@ -7,7 +8,6 @@ This creates a shareable social media graphic with confetti animation.
import argparse
import base64
import math
import os
import random
from pathlib import Path

View file

@ -3,7 +3,6 @@
Download Material Design Icons SVGs used in the README and save to local assets.
"""
import os
from pathlib import Path
import requests

View file

@ -1,15 +1,17 @@
history_prompt = """
Update the history doc with changes shown in the diffs.
Describe actual user-facing changes, not every single commit that was made implementing them.
Update the history markdown doc with changes shown in the diffs.
Succinctly describe actual user-facing changes, not every single commit or detail that was made implementing them.
Only add new items not already listed.
Only add new items not already listed in the history markdown.
Do NOT edit or update existing history entries.
Do NOT add duplicate entries for changes that have existing history entries.
Do NOT add additional entries for small tweaks to features which are already listed in the existing history.
Pay attention to see if changes are later modified or superseded.
Pay attention to see if changes are later modified or superseded in the commit logs.
The history doc should only reflect the *final* version of changes which have evolved within a version's commit history.
If the history doc already describes the final behavior, don't document the changes that led us there.
Bullet each item at the start of the line with `-`.
End each bullet with a period.
If the change was made by someone other than Paul Gauthier note it at the end of the bullet point as ", by XXX."
@ -19,6 +21,6 @@ Changes in the .x-dev version should be listed under a "### main branch" heading
Start a new "### main branch" section at the top of the file if needed.
Also, add this as the last bullet under the "### main branch" section:
Also, add this as the last bullet under the "### main branch" section, replacing an existing version if present:
{aider_line}
""" # noqa

View file

@ -1,7 +1,5 @@
#!/usr/bin/env python3
import json
import os
import re
import sys
import pyte

View file

@ -3,6 +3,7 @@
import os
import re
import subprocess
import sys
import tempfile
from history_prompts import history_prompt
@ -52,26 +53,11 @@ def run_git_diff():
return result.stdout
def run_plain_git_log():
latest_ver = get_latest_version_from_history()
cmd = [
"git",
"log",
f"v{latest_ver}..HEAD",
"--",
"aider/",
":!aider/website/",
":!scripts/",
":!HISTORY.md",
]
result = subprocess.run(cmd, capture_output=True, text=True)
return result.stdout
def main():
aider_args = sys.argv[1:]
# Get the git log and diff output
log_content = run_git_log()
plain_log_content = run_plain_git_log()
diff_content = run_git_diff()
# Extract relevant portion of HISTORY.md
@ -108,14 +94,15 @@ def main():
tmp_diff.write(diff_content)
diff_path = tmp_diff.name
with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".plain_log") as tmp_plain_log:
tmp_plain_log.write(plain_log_content)
plain_log_path = tmp_plain_log.name
with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".md") as tmp_hist:
tmp_hist.write(relevant_history)
hist_path = tmp_hist.name
# Display line counts
print(f"Lines in {hist_path}: {len(relevant_history.splitlines())}")
print(f"Lines in {log_path}: {len(log_content.splitlines())}")
print(f"Lines in {diff_path}: {len(diff_content.splitlines())}")
# Run blame to get aider percentage
blame_result = subprocess.run(["python3", "scripts/blame.py"], capture_output=True, text=True)
aider_line = blame_result.stdout.strip().split("\n")[-1] # Get last line with percentage
@ -129,14 +116,12 @@ def main():
"--read",
log_path,
"--read",
plain_log_path,
"--read",
diff_path,
"--msg",
message,
"--no-git",
"--no-auto-lint",
]
] + aider_args
subprocess.run(cmd)
# Read back the updated history
@ -164,7 +149,6 @@ def main():
# Cleanup
os.unlink(log_path)
os.unlink(plain_log_path)
os.unlink(diff_path)
os.unlink(hist_path)

View file

@ -650,7 +650,7 @@ TWO
coder.partial_response_function_call = dict()
return []
def mock_get_commit_message(diffs, context):
def mock_get_commit_message(diffs, context, user_language=None):
self.assertNotIn("one", diffs)
self.assertNotIn("ONE", diffs)
return "commit message"
@ -705,7 +705,7 @@ three
saved_diffs = []
def mock_get_commit_message(diffs, context):
def mock_get_commit_message(diffs, context, user_language=None):
saved_diffs.append(diffs)
return "commit message"
@ -783,7 +783,7 @@ two
saved_diffs = []
def mock_get_commit_message(diffs, context):
def mock_get_commit_message(diffs, context, user_language=None):
saved_diffs.append(diffs)
return "commit message"
@ -1182,6 +1182,122 @@ This command will print 'Hello, World!' to the console."""
sanity_check_messages(coder.cur_messages)
self.assertEqual(coder.cur_messages[-1]["role"], "assistant")
def test_normalize_language(self):
coder = Coder.create(self.GPT35, None, io=InputOutput())
# Test None and empty
self.assertIsNone(coder.normalize_language(None))
self.assertIsNone(coder.normalize_language(""))
# Test "C" and "POSIX"
self.assertIsNone(coder.normalize_language("C"))
self.assertIsNone(coder.normalize_language("POSIX"))
# Test already formatted names
self.assertEqual(coder.normalize_language("English"), "English")
self.assertEqual(coder.normalize_language("French"), "French")
# Test common locale codes (fallback map, assuming babel is not installed or fails)
with patch("aider.coders.base_coder.Locale", None):
self.assertEqual(coder.normalize_language("en_US"), "English")
self.assertEqual(coder.normalize_language("fr_FR"), "French")
self.assertEqual(coder.normalize_language("es"), "Spanish")
self.assertEqual(coder.normalize_language("de_DE.UTF-8"), "German")
self.assertEqual(
coder.normalize_language("zh-CN"), "Chinese"
) # Test hyphen in fallback
self.assertEqual(coder.normalize_language("ja"), "Japanese")
self.assertEqual(
coder.normalize_language("unknown_code"), "unknown_code"
) # Fallback to original
# Test with babel.Locale mocked (available)
mock_babel_locale = MagicMock()
mock_locale_instance = MagicMock()
mock_babel_locale.parse.return_value = mock_locale_instance
with patch("aider.coders.base_coder.Locale", mock_babel_locale):
mock_locale_instance.get_display_name.return_value = "english" # For en_US
self.assertEqual(coder.normalize_language("en_US"), "English")
mock_babel_locale.parse.assert_called_with("en_US")
mock_locale_instance.get_display_name.assert_called_with("en")
mock_locale_instance.get_display_name.return_value = "french" # For fr-FR
self.assertEqual(coder.normalize_language("fr-FR"), "French") # Test with hyphen
mock_babel_locale.parse.assert_called_with("fr_FR") # Hyphen replaced
mock_locale_instance.get_display_name.assert_called_with("en")
# Test with babel.Locale raising an exception (simulating parse failure)
mock_babel_locale_error = MagicMock()
mock_babel_locale_error.parse.side_effect = Exception("Babel parse error")
with patch("aider.coders.base_coder.Locale", mock_babel_locale_error):
self.assertEqual(coder.normalize_language("en_US"), "English") # Falls back to map
def test_get_user_language(self):
io = InputOutput()
coder = Coder.create(self.GPT35, None, io=io)
# 1. Test with self.chat_language set
coder.chat_language = "fr_CA"
with patch.object(coder, "normalize_language", return_value="French Canadian") as mock_norm:
self.assertEqual(coder.get_user_language(), "French Canadian")
mock_norm.assert_called_once_with("fr_CA")
coder.chat_language = None # Reset
# 2. Test with locale.getlocale()
with patch("locale.getlocale", return_value=("en_GB", "UTF-8")) as mock_getlocale:
with patch.object(
coder, "normalize_language", return_value="British English"
) as mock_norm:
self.assertEqual(coder.get_user_language(), "British English")
mock_getlocale.assert_called_once()
mock_norm.assert_called_once_with("en_GB")
# Test with locale.getlocale() returning None or empty
with patch("locale.getlocale", return_value=(None, None)) as mock_getlocale:
with patch("os.environ.get") as mock_env_get: # Ensure env vars are not used yet
mock_env_get.return_value = None
self.assertIsNone(coder.get_user_language()) # Should be None if nothing found
# 3. Test with environment variables: LANG
with patch(
"locale.getlocale", side_effect=Exception("locale error")
): # Mock locale to fail
with patch("os.environ.get") as mock_env_get:
mock_env_get.side_effect = lambda key: "de_DE.UTF-8" if key == "LANG" else None
with patch.object(coder, "normalize_language", return_value="German") as mock_norm:
self.assertEqual(coder.get_user_language(), "German")
mock_env_get.assert_any_call("LANG")
mock_norm.assert_called_once_with("de_DE")
# Test LANGUAGE (takes precedence over LANG if both were hypothetically checked
# by os.environ.get, but our code checks in order, so we mock the first one it finds)
with patch("locale.getlocale", side_effect=Exception("locale error")):
with patch("os.environ.get") as mock_env_get:
mock_env_get.side_effect = lambda key: "es_ES" if key == "LANGUAGE" else None
with patch.object(coder, "normalize_language", return_value="Spanish") as mock_norm:
self.assertEqual(coder.get_user_language(), "Spanish")
mock_env_get.assert_any_call("LANGUAGE") # LANG would be called first
mock_norm.assert_called_once_with("es_ES")
# 4. Test priority: chat_language > locale > env
coder.chat_language = "it_IT"
with patch("locale.getlocale", return_value=("en_US", "UTF-8")) as mock_getlocale:
with patch("os.environ.get", return_value="de_DE") as mock_env_get:
with patch.object(
coder, "normalize_language", side_effect=lambda x: x.upper()
) as mock_norm:
self.assertEqual(coder.get_user_language(), "IT_IT") # From chat_language
mock_norm.assert_called_once_with("it_IT")
mock_getlocale.assert_not_called()
mock_env_get.assert_not_called()
coder.chat_language = None
# 5. Test when no language is found
with patch("locale.getlocale", side_effect=Exception("locale error")):
with patch("os.environ.get", return_value=None) as mock_env_get:
self.assertIsNone(coder.get_user_language())
def test_architect_coder_auto_accept_true(self):
with GitTemporaryDirectory():
io = InputOutput(yes=True)

View file

@ -2105,3 +2105,95 @@ class TestCommands(TestCase):
mock_tool_error.assert_any_call(
"Command '/model gpt-4' is only supported in interactive mode, skipping."
)
def test_reset_after_coder_clone_preserves_original_read_only_files(self):
with GitTemporaryDirectory() as _:
repo_dir = str(".")
io = InputOutput(pretty=False, fancy_input=False, yes=True)
orig_ro_path = Path(repo_dir) / "orig_ro.txt"
orig_ro_path.write_text("original read only")
editable_path = Path(repo_dir) / "editable.txt"
editable_path.write_text("editable content")
other_ro_path = Path(repo_dir) / "other_ro.txt"
other_ro_path.write_text("other read only")
original_read_only_fnames_set = {str(orig_ro_path)}
# Create the initial Coder
orig_coder = Coder.create(main_model=self.GPT35, io=io, fnames=[], repo=None)
orig_coder.root = repo_dir # Set root for path operations
# Replace its commands object with one that has the original_read_only_fnames
orig_coder.commands = Commands(
io, orig_coder, original_read_only_fnames=list(original_read_only_fnames_set)
)
orig_coder.commands.coder = orig_coder
# Populate coder's file sets
orig_coder.abs_read_only_fnames.add(str(orig_ro_path))
orig_coder.abs_fnames.add(str(editable_path))
orig_coder.abs_read_only_fnames.add(str(other_ro_path))
# Simulate SwitchCoder by creating a new coder from the original one
new_coder = Coder.create(from_coder=orig_coder)
new_commands = new_coder.commands
# Perform /reset
new_commands.cmd_reset("")
# Assertions for /reset
self.assertEqual(len(new_coder.abs_fnames), 0)
self.assertEqual(len(new_coder.abs_read_only_fnames), 1)
# self.assertIn(str(orig_ro_path), new_coder.abs_read_only_fnames)
self.assertTrue(
any(os.path.samefile(p, str(orig_ro_path)) for p in new_coder.abs_read_only_fnames),
f"File {str(orig_ro_path)} not found in {new_coder.abs_read_only_fnames}",
)
self.assertEqual(len(new_coder.done_messages), 0)
self.assertEqual(len(new_coder.cur_messages), 0)
def test_drop_bare_after_coder_clone_preserves_original_read_only_files(self):
with GitTemporaryDirectory() as _:
repo_dir = str(".")
io = InputOutput(pretty=False, fancy_input=False, yes=True)
orig_ro_path = Path(repo_dir) / "orig_ro.txt"
orig_ro_path.write_text("original read only")
editable_path = Path(repo_dir) / "editable.txt"
editable_path.write_text("editable content")
other_ro_path = Path(repo_dir) / "other_ro.txt"
other_ro_path.write_text("other read only")
original_read_only_fnames_set = {str(orig_ro_path)}
orig_coder = Coder.create(main_model=self.GPT35, io=io, fnames=[], repo=None)
orig_coder.root = repo_dir
orig_coder.commands = Commands(
io, orig_coder, original_read_only_fnames=list(original_read_only_fnames_set)
)
orig_coder.commands.coder = orig_coder
orig_coder.abs_read_only_fnames.add(str(orig_ro_path))
orig_coder.abs_fnames.add(str(editable_path))
orig_coder.abs_read_only_fnames.add(str(other_ro_path))
orig_coder.done_messages = [{"role": "user", "content": "d1"}]
orig_coder.cur_messages = [{"role": "user", "content": "c1"}]
new_coder = Coder.create(from_coder=orig_coder)
new_commands = new_coder.commands
new_commands.cmd_drop("")
self.assertEqual(len(new_coder.abs_fnames), 0)
self.assertEqual(len(new_coder.abs_read_only_fnames), 1)
# self.assertIn(str(orig_ro_path), new_coder.abs_read_only_fnames)
self.assertTrue(
any(os.path.samefile(p, str(orig_ro_path)) for p in new_coder.abs_read_only_fnames),
f"File {str(orig_ro_path)} not found in {new_coder.abs_read_only_fnames}",
)
self.assertEqual(new_coder.done_messages, [{"role": "user", "content": "d1"}])
self.assertEqual(new_coder.cur_messages, [{"role": "user", "content": "c1"}])

View file

@ -5,6 +5,7 @@ from unittest.mock import MagicMock, patch
from prompt_toolkit.completion import CompleteEvent
from prompt_toolkit.document import Document
from rich.text import Text
from aider.dump import dump # noqa: F401
from aider.io import AutoCompleter, ConfirmGroup, InputOutput
@ -451,8 +452,6 @@ class TestInputOutputMultilineMode(unittest.TestCase):
"""Test that tool_output correctly handles hex colors without # prefix"""
from unittest.mock import patch
from rich.text import Text
# Create IO with hex color without # for tool_output_color
io = InputOutput(tool_output_color="FFA500", pretty=True)
@ -476,5 +475,136 @@ class TestInputOutputMultilineMode(unittest.TestCase):
mock_print.assert_called_once()
@patch("aider.io.is_dumb_terminal", return_value=False)
@patch.dict(os.environ, {"NO_COLOR": ""})
class TestInputOutputFormatFiles(unittest.TestCase):
def test_format_files_for_input_pretty_false(self, mock_is_dumb_terminal):
io = InputOutput(pretty=False, fancy_input=False)
rel_fnames = ["file1.txt", "file[markup].txt", "ro_file.txt"]
rel_read_only_fnames = ["ro_file.txt"]
expected_output = "file1.txt\nfile[markup].txt\nro_file.txt (read only)\n"
# Sort the expected lines because the order of editable vs read-only might vary
# depending on internal sorting, but the content should be the same.
# The method sorts editable_files and read_only_files separately.
# The final output joins sorted(read_only_files) + sorted(editable_files)
# Based on current implementation:
# read_only_files = ["ro_file.txt (read only)"]
# editable_files = ["file1.txt", "file[markup].txt"]
# output = "\n".join(read_only_files + editable_files) + "\n"
# Correct expected output based on implementation:
expected_output_lines = sorted(
[
"ro_file.txt (read only)",
"file1.txt",
"file[markup].txt",
]
)
expected_output = "\n".join(expected_output_lines) + "\n"
actual_output = io.format_files_for_input(rel_fnames, rel_read_only_fnames)
# Normalizing actual output by splitting, sorting, and rejoining
actual_output_lines = sorted(filter(None, actual_output.splitlines()))
normalized_actual_output = "\n".join(actual_output_lines) + "\n"
self.assertEqual(normalized_actual_output, expected_output)
@patch("aider.io.Columns")
@patch("os.path.abspath")
@patch("os.path.join")
def test_format_files_for_input_pretty_true_no_files(
self, mock_join, mock_abspath, mock_columns, mock_is_dumb_terminal
):
io = InputOutput(pretty=True, root="test_root")
io.format_files_for_input([], [])
mock_columns.assert_not_called()
@patch("aider.io.Columns")
@patch("os.path.abspath")
@patch("os.path.join")
def test_format_files_for_input_pretty_true_editable_only(
self, mock_join, mock_abspath, mock_columns, mock_is_dumb_terminal
):
io = InputOutput(pretty=True, root="test_root")
rel_fnames = ["edit1.txt", "edit[markup].txt"]
io.format_files_for_input(rel_fnames, [])
mock_columns.assert_called_once()
args, _ = mock_columns.call_args
renderables = args[0]
self.assertEqual(len(renderables), 2)
self.assertIsInstance(renderables[0], Text)
self.assertEqual(renderables[0].plain, "edit1.txt")
self.assertIsInstance(renderables[1], Text)
self.assertEqual(renderables[1].plain, "edit[markup].txt")
@patch("aider.io.Columns")
@patch("os.path.abspath")
@patch("os.path.join")
def test_format_files_for_input_pretty_true_readonly_only(
self, mock_join, mock_abspath, mock_columns, mock_is_dumb_terminal
):
io = InputOutput(pretty=True, root="test_root")
# Mock path functions to ensure rel_path is chosen by the shortener logic
mock_join.side_effect = lambda *args: "/".join(args)
mock_abspath.side_effect = lambda p: "/ABS_PREFIX_VERY_LONG/" + os.path.normpath(p)
rel_read_only_fnames = ["ro1.txt", "ro[markup].txt"]
# When all files in chat are read-only
rel_fnames = list(rel_read_only_fnames)
io.format_files_for_input(rel_fnames, rel_read_only_fnames)
self.assertEqual(mock_columns.call_count, 2)
args, _ = mock_columns.call_args
renderables = args[0]
self.assertEqual(len(renderables), 3) # Readonly: + 2 files
self.assertIsInstance(renderables[0], Text)
self.assertEqual(renderables[0].plain, "Readonly:")
self.assertIsInstance(renderables[1], Text)
self.assertEqual(renderables[1].plain, "ro1.txt")
self.assertIsInstance(renderables[2], Text)
self.assertEqual(renderables[2].plain, "ro[markup].txt")
@patch("aider.io.Columns")
@patch("os.path.abspath")
@patch("os.path.join")
def test_format_files_for_input_pretty_true_mixed_files(
self, mock_join, mock_abspath, mock_columns, mock_is_dumb_terminal
):
io = InputOutput(pretty=True, root="test_root")
mock_join.side_effect = lambda *args: "/".join(args)
mock_abspath.side_effect = lambda p: "/ABS_PREFIX_VERY_LONG/" + os.path.normpath(p)
rel_fnames = ["edit1.txt", "edit[markup].txt", "ro1.txt", "ro[markup].txt"]
rel_read_only_fnames = ["ro1.txt", "ro[markup].txt"]
io.format_files_for_input(rel_fnames, rel_read_only_fnames)
self.assertEqual(mock_columns.call_count, 4)
# Check arguments for the first rendering of read-only files (call 0)
args_ro, _ = mock_columns.call_args_list[0]
renderables_ro = args_ro[0]
self.assertEqual(
renderables_ro, [Text("Readonly:"), Text("ro1.txt"), Text("ro[markup].txt")]
)
# Check arguments for the first rendering of editable files (call 2)
args_ed, _ = mock_columns.call_args_list[2]
renderables_ed = args_ed[0]
self.assertEqual(
renderables_ed, [Text("Editable:"), Text("edit1.txt"), Text("edit[markup].txt")]
)
if __name__ == "__main__":
unittest.main()
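The order-insensitive comparison used in `test_format_files_for_input_pretty_false` above (split, drop empties, sort, rejoin) can be sketched as a standalone helper; `normalize_lines` is a hypothetical name, not part of aider:

```python
def normalize_lines(text):
    """Sort non-empty lines so string comparison ignores line ordering."""
    return "\n".join(sorted(filter(None, text.splitlines()))) + "\n"

# Two listings with the same files in different order compare equal
assert normalize_lines("b.txt\na.txt\n") == normalize_lines("a.txt\nb.txt\n")
```

This keeps the assertion stable even if the implementation reorders editable vs. read-only files.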

View file

@@ -949,16 +949,19 @@ class TestMain(TestCase):
def test_invalid_edit_format(self):
with GitTemporaryDirectory():
with patch("aider.io.InputOutput.offer_url") as mock_offer_url:
result = main(
["--edit-format", "not-a-real-format", "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
)
self.assertEqual(result, 1) # main() should return 1 on error
mock_offer_url.assert_called_once()
args, _ = mock_offer_url.call_args
self.assertEqual(args[0], "https://aider.chat/docs/more/edit-formats.html")
# Suppress stderr for this test as argparse prints an error message
with patch("sys.stderr", new_callable=StringIO) as mock_stderr:
with self.assertRaises(SystemExit) as cm:
_ = main(
["--edit-format", "not-a-real-format", "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
)
# argparse.ArgumentParser.exit() is called with status 2 for invalid choice
self.assertEqual(cm.exception.code, 2)
stderr_output = mock_stderr.getvalue()
self.assertIn("invalid choice", stderr_output)
self.assertIn("not-a-real-format", stderr_output)
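The behavior the rewritten test relies on — argparse exiting with status 2 and printing "invalid choice" to stderr when a value is outside `choices` — can be reproduced in isolation. The parser below is a minimal stand-in, not aider's actual argument parser:

```python
import argparse
import contextlib
from io import StringIO

# Hypothetical parser mirroring the test scenario
parser = argparse.ArgumentParser()
parser.add_argument("--edit-format", choices=["diff", "whole"])

stderr = StringIO()
with contextlib.redirect_stderr(stderr):
    try:
        parser.parse_args(["--edit-format", "not-a-real-format"])
    except SystemExit as exc:
        # argparse.ArgumentParser.error() exits with status 2 on usage errors
        exit_code = exc.code

assert exit_code == 2
assert "invalid choice" in stderr.getvalue()
```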
def test_default_model_selection(self):
with GitTemporaryDirectory():

View file

@@ -0,0 +1,73 @@
from pathlib import Path
from aider.models import ModelInfoManager
from aider.openrouter import OpenRouterModelManager
class DummyResponse:
"""Minimal stand-in for requests.Response used in tests."""
def __init__(self, json_data):
self.status_code = 200
self._json_data = json_data
def json(self):
return self._json_data
def test_openrouter_get_model_info_from_cache(monkeypatch, tmp_path):
"""
OpenRouterModelManager should return correct metadata taken from the
downloaded (and locally cached) models JSON payload.
"""
payload = {
"data": [
{
"id": "mistralai/mistral-medium-3",
"context_length": 32768,
"pricing": {"prompt": "100", "completion": "200"},
"top_provider": {"context_length": 32768},
}
]
}
# Fake out the network call and the HOME directory used for the cache file
monkeypatch.setattr("requests.get", lambda *a, **k: DummyResponse(payload))
monkeypatch.setattr(Path, "home", staticmethod(lambda: tmp_path))
manager = OpenRouterModelManager()
info = manager.get_model_info("openrouter/mistralai/mistral-medium-3")
assert info["max_input_tokens"] == 32768
assert info["input_cost_per_token"] == 0.0001
assert info["output_cost_per_token"] == 0.0002
assert info["litellm_provider"] == "openrouter"
def test_model_info_manager_uses_openrouter_manager(monkeypatch):
"""
ModelInfoManager should delegate to OpenRouterModelManager when litellm
provides no data for an OpenRouter-prefixed model.
"""
# Ensure litellm path returns no info so that fallback logic triggers
monkeypatch.setattr("aider.models.litellm.get_model_info", lambda *a, **k: {})
stub_info = {
"max_input_tokens": 512,
"max_tokens": 512,
"max_output_tokens": 512,
"input_cost_per_token": 0.0001,
"output_cost_per_token": 0.0002,
"litellm_provider": "openrouter",
}
# Force OpenRouterModelManager to return our stub info
monkeypatch.setattr(
"aider.models.OpenRouterModelManager.get_model_info",
lambda self, model: stub_info,
)
mim = ModelInfoManager()
info = mim.get_model_info("openrouter/fake/model")
assert info == stub_info
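The `DummyResponse` pattern used in this new test file — a tiny object exposing just `status_code` and `json()` so code written against `requests.Response` can be exercised offline — generalizes to any HTTP-backed fetcher. The names below (`FakeResponse`, `fetch_models`) are illustrative, not aider APIs:

```python
class FakeResponse:
    """Minimal stand-in for requests.Response, exposing only what callers use."""

    def __init__(self, payload, status_code=200):
        self.status_code = status_code
        self._payload = payload

    def json(self):
        return self._payload


def fetch_models(getter):
    # `getter` plays the role of requests.get; in tests it returns a FakeResponse
    resp = getter("https://openrouter.ai/api/v1/models")
    return resp.json()["data"] if resp.status_code == 200 else []


models = fetch_models(lambda url, **kw: FakeResponse({"data": [{"id": "m"}]}))
assert models == [{"id": "m"}]
```

Injecting the getter (or monkeypatching `requests.get`, as the test does) keeps the network entirely out of the test.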

View file

@@ -206,13 +206,15 @@ class TestRepo(unittest.TestCase):
self.assertEqual(commit.committer.name, "Test User (aider)")
# Now test with explicit False
git_repo_explicit_false = GitRepo(io, None, None, attribute_author=False, attribute_committer=False)
git_repo_explicit_false = GitRepo(
io, None, None, attribute_author=False, attribute_committer=False
)
fname.write_text("explicit false content")
commit_result = git_repo_explicit_false.commit(fnames=[str(fname)], aider_edits=True)
self.assertIsNotNone(commit_result)
commit = raw_repo.head.commit
self.assertEqual(commit.author.name, "Test User") # Explicit False
self.assertEqual(commit.committer.name, "Test User") # Explicit False
self.assertEqual(commit.author.name, "Test User") # Explicit False
self.assertEqual(commit.committer.name, "Test User") # Explicit False
# check that the original committer name is restored
original_committer_name = os.environ.get("GIT_COMMITTER_NAME")
@@ -223,11 +225,21 @@ class TestRepo(unittest.TestCase):
# Test user commit with explicit no-committer attribution
git_repo_user_no_committer = GitRepo(io, None, None, attribute_committer=False)
fname.write_text("user no committer content")
commit_result = git_repo_user_no_committer.commit(fnames=[str(fname)], aider_edits=False)
commit_result = git_repo_user_no_committer.commit(
fnames=[str(fname)], aider_edits=False
)
self.assertIsNotNone(commit_result)
commit = raw_repo.head.commit
self.assertEqual(commit.author.name, "Test User", msg="Author name should not be modified for user commits")
self.assertEqual(commit.committer.name, "Test User", msg="Committer name should not be modified when attribute_committer=False")
self.assertEqual(
commit.author.name,
"Test User",
msg="Author name should not be modified for user commits",
)
self.assertEqual(
commit.committer.name,
"Test User",
msg="Committer name should not be modified when attribute_committer=False",
)
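The "original committer name is restored" check above depends on the save/override/restore discipline for git environment variables. A minimal sketch of that pattern (standalone, not aider's implementation):

```python
import os

# Save whatever was there (possibly nothing), override, then restore in finally
original = os.environ.get("GIT_COMMITTER_NAME")
os.environ["GIT_COMMITTER_NAME"] = "Test User (aider)"
try:
    assert os.environ["GIT_COMMITTER_NAME"] == "Test User (aider)"
finally:
    if original is None:
        os.environ.pop("GIT_COMMITTER_NAME", None)
    else:
        os.environ["GIT_COMMITTER_NAME"] = original

# The environment is back to its pre-test state
assert os.environ.get("GIT_COMMITTER_NAME") == original
```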
@unittest.skipIf(platform.system() == "Windows", "Git env var behavior differs on Windows")
def test_commit_with_co_authored_by(self):
@@ -246,21 +258,22 @@ class TestRepo(unittest.TestCase):
# Mock coder args: Co-authored-by enabled, author/committer use default (None)
mock_coder = MagicMock()
mock_coder.args.attribute_co_authored_by = True
mock_coder.args.attribute_author = None # Default
mock_coder.args.attribute_committer = None # Default
mock_coder.args.attribute_author = None # Default
mock_coder.args.attribute_committer = None # Default
mock_coder.args.attribute_commit_message_author = False
mock_coder.args.attribute_commit_message_committer = False
# The code uses coder.main_model.name for the co-authored-by line
mock_coder.main_model = MagicMock()
mock_coder.main_model.name = "gpt-test"
io = InputOutput()
git_repo = GitRepo(io, None, None)
# commit a change with aider_edits=True and co-authored-by flag
fname.write_text("new content")
commit_result = git_repo.commit(fnames=[str(fname)], aider_edits=True, coder=mock_coder, message="Aider edit")
commit_result = git_repo.commit(
fnames=[str(fname)], aider_edits=True, coder=mock_coder, message="Aider edit"
)
self.assertIsNotNone(commit_result)
# check the commit message and author/committer
@@ -268,12 +281,21 @@ class TestRepo(unittest.TestCase):
self.assertIn("Co-authored-by: aider (gpt-test) <noreply@aider.chat>", commit.message)
self.assertEqual(commit.message.splitlines()[0], "Aider edit")
# With default (None), co-authored-by takes precedence
self.assertEqual(commit.author.name, "Test User", msg="Author name should not be modified when co-authored-by takes precedence")
self.assertEqual(commit.committer.name, "Test User", msg="Committer name should not be modified when co-authored-by takes precedence")
self.assertEqual(
commit.author.name,
"Test User",
msg="Author name should not be modified when co-authored-by takes precedence",
)
self.assertEqual(
commit.committer.name,
"Test User",
msg="Committer name should not be modified when co-authored-by takes precedence",
)
@unittest.skipIf(platform.system() == "Windows", "Git env var behavior differs on Windows")
def test_commit_co_authored_by_with_explicit_name_modification(self):
# Test scenario where Co-authored-by is true AND author/committer modification are explicitly True
# Test scenario where Co-authored-by is true AND
# author/committer modification are explicitly True
with GitTemporaryDirectory():
# Setup repo...
# new repo
@@ -287,32 +309,45 @@ class TestRepo(unittest.TestCase):
raw_repo.git.add(str(fname))
raw_repo.git.commit("-m", "initial commit")
# Mock coder args: Co-authored-by enabled, author/committer modification explicitly enabled
# Mock coder args: Co-authored-by enabled,
# author/committer modification explicitly enabled
mock_coder = MagicMock()
mock_coder.args.attribute_co_authored_by = True
mock_coder.args.attribute_author = True # Explicitly enable
mock_coder.args.attribute_committer = True # Explicitly enable
mock_coder.args.attribute_author = True # Explicitly enable
mock_coder.args.attribute_committer = True # Explicitly enable
mock_coder.args.attribute_commit_message_author = False
mock_coder.args.attribute_commit_message_committer = False
mock_coder.main_model = MagicMock()
mock_coder.main_model.name = "gpt-test-combo"
io = InputOutput()
git_repo = GitRepo(io, None, None)
# commit a change with aider_edits=True and combo flags
fname.write_text("new content combo")
commit_result = git_repo.commit(fnames=[str(fname)], aider_edits=True, coder=mock_coder, message="Aider combo edit")
commit_result = git_repo.commit(
fnames=[str(fname)], aider_edits=True, coder=mock_coder, message="Aider combo edit"
)
self.assertIsNotNone(commit_result)
# check the commit message and author/committer
commit = raw_repo.head.commit
self.assertIn("Co-authored-by: aider (gpt-test-combo) <noreply@aider.chat>", commit.message)
self.assertIn(
"Co-authored-by: aider (gpt-test-combo) <noreply@aider.chat>", commit.message
)
self.assertEqual(commit.message.splitlines()[0], "Aider combo edit")
# When co-authored-by is true BUT author/committer are explicit True, modification SHOULD happen
self.assertEqual(commit.author.name, "Test User (aider)", msg="Author name should be modified when explicitly True, even with co-author")
self.assertEqual(commit.committer.name, "Test User (aider)", msg="Committer name should be modified when explicitly True, even with co-author")
# When co-authored-by is true BUT author/committer are explicit True,
# modification SHOULD happen
self.assertEqual(
commit.author.name,
"Test User (aider)",
msg="Author name should be modified when explicitly True, even with co-author",
)
self.assertEqual(
commit.committer.name,
"Test User (aider)",
msg="Committer name should be modified when explicitly True, even with co-author",
)
@unittest.skipIf(platform.system() == "Windows", "Git env var behavior differs on Windows")
def test_commit_ai_edits_no_coauthor_explicit_false(self):
@@ -333,8 +368,8 @@ class TestRepo(unittest.TestCase):
# Case 1: attribute_author = False, attribute_committer = None (default True)
mock_coder_no_author = MagicMock()
mock_coder_no_author.args.attribute_co_authored_by = False
mock_coder_no_author.args.attribute_author = False # Explicit False
mock_coder_no_author.args.attribute_committer = None # Default True
mock_coder_no_author.args.attribute_author = False # Explicit False
mock_coder_no_author.args.attribute_committer = None # Default True
mock_coder_no_author.args.attribute_commit_message_author = False
mock_coder_no_author.args.attribute_commit_message_committer = False
mock_coder_no_author.main_model = MagicMock()
@@ -342,18 +377,23 @@ class TestRepo(unittest.TestCase):
git_repo_no_author = GitRepo(io, None, None)
fname.write_text("no author content")
commit_result = git_repo_no_author.commit(fnames=[str(fname)], aider_edits=True, coder=mock_coder_no_author, message="Aider no author")
commit_result = git_repo_no_author.commit(
fnames=[str(fname)],
aider_edits=True,
coder=mock_coder_no_author,
message="Aider no author",
)
self.assertIsNotNone(commit_result)
commit = raw_repo.head.commit
self.assertNotIn("Co-authored-by:", commit.message)
self.assertEqual(commit.author.name, "Test User") # Explicit False
self.assertEqual(commit.committer.name, "Test User (aider)") # Default True
self.assertEqual(commit.author.name, "Test User") # Explicit False
self.assertEqual(commit.committer.name, "Test User (aider)") # Default True
# Case 2: attribute_author = None (default True), attribute_committer = False
mock_coder_no_committer = MagicMock()
mock_coder_no_committer.args.attribute_co_authored_by = False
mock_coder_no_committer.args.attribute_author = None # Default True
mock_coder_no_committer.args.attribute_committer = False # Explicit False
mock_coder_no_committer.args.attribute_author = None # Default True
mock_coder_no_committer.args.attribute_committer = False # Explicit False
mock_coder_no_committer.args.attribute_commit_message_author = False
mock_coder_no_committer.args.attribute_commit_message_committer = False
mock_coder_no_committer.main_model = MagicMock()
@@ -361,12 +401,25 @@ class TestRepo(unittest.TestCase):
git_repo_no_committer = GitRepo(io, None, None)
fname.write_text("no committer content")
commit_result = git_repo_no_committer.commit(fnames=[str(fname)], aider_edits=True, coder=mock_coder_no_committer, message="Aider no committer")
commit_result = git_repo_no_committer.commit(
fnames=[str(fname)],
aider_edits=True,
coder=mock_coder_no_committer,
message="Aider no committer",
)
self.assertIsNotNone(commit_result)
commit = raw_repo.head.commit
self.assertNotIn("Co-authored-by:", commit.message)
self.assertEqual(commit.author.name, "Test User (aider)", msg="Author name should be modified (default True) when co-author=False")
self.assertEqual(commit.committer.name, "Test User", msg="Committer name should not be modified (explicit False) when co-author=False")
self.assertEqual(
commit.author.name,
"Test User (aider)",
msg="Author name should be modified (default True) when co-author=False",
)
self.assertEqual(
commit.committer.name,
"Test User",
msg="Committer name should not be modified (explicit False) when co-author=False",
)
def test_get_tracked_files(self):
# Create a temporary directory

View file

@@ -155,7 +155,7 @@ def test_ai_comment_pattern():
assert (
question_js_has_bang == "?"
), "Expected at least one bang (!) comment in watch_question.js fixture"
# Test Lisp fixture
lisp_path = fixtures_dir / "watch.lisp"
lisp_lines, lisp_comments, lisp_has_bang = watcher.get_ai_comments(str(lisp_path))

View file

@@ -1,7 +1,5 @@
import pytest
from unittest.mock import MagicMock
from aider.scrape import Scraper
from aider.scrape import install_playwright, Scraper
class DummyIO:
def __init__(self):
@@ -21,17 +19,21 @@ class DummyIO:
def test_scraper_disable_playwright_flag(monkeypatch):
io = DummyIO()
# Simulate that playwright is not available (disable_playwright just means playwright_available=False)
# Simulate that playwright is not available
# (disable_playwright just means playwright_available=False)
scraper = Scraper(print_error=io.tool_error, playwright_available=False)
# Patch scrape_with_httpx to check it is called
called = {}
def fake_httpx(url):
called['called'] = True
called["called"] = True
return "plain text", "text/plain"
scraper.scrape_with_httpx = fake_httpx
content = scraper.scrape("http://example.com")
assert content == "plain text"
assert called['called']
assert called["called"]
def test_scraper_enable_playwright(monkeypatch):
io = DummyIO()
@@ -39,13 +41,16 @@ def test_scraper_enable_playwright(monkeypatch):
scraper = Scraper(print_error=io.tool_error, playwright_available=True)
# Patch scrape_with_playwright to check it is called
called = {}
def fake_playwright(url):
called['called'] = True
called["called"] = True
return "<html>hi</html>", "text/html"
scraper.scrape_with_playwright = fake_playwright
content = scraper.scrape("http://example.com")
assert content.startswith("hi") or "<html>" in content
assert called['called']
assert called["called"]
def test_commands_web_disable_playwright(monkeypatch):
"""
@@ -59,16 +64,22 @@ def test_commands_web_disable_playwright(monkeypatch):
self.outputs = []
self.warnings = []
self.errors = []
def tool_output(self, msg, *a, **k):
self.outputs.append(msg)
def tool_warning(self, msg, *a, **k):
self.warnings.append(msg)
def tool_error(self, msg, *a, **k):
self.errors.append(msg)
def read_text(self, filename, silent=False):
return ""
def confirm_ask(self, *a, **k):
return True
def print(self, *a, **k):
pass
@@ -77,18 +88,25 @@ def test_commands_web_disable_playwright(monkeypatch):
def __init__(self):
self.cur_messages = []
self.main_model = type("M", (), {"edit_format": "code", "name": "dummy", "info": {}})
def get_rel_fname(self, fname):
return fname
def get_inchat_relative_files(self):
return []
def abs_root_path(self, fname):
return fname
def get_all_abs_files(self):
return []
def get_announcements(self):
return []
def format_chat_chunks(self):
return type("Chunks", (), {"repo": [], "readonly_files": [], "chat_files": []})()
def event(self, *a, **k):
pass
@@ -99,6 +117,7 @@ def test_commands_web_disable_playwright(monkeypatch):
class DummyScraper:
def __init__(self, **kwargs):
self.called = False
def scrape(self, url):
self.called = True
return "dummy content"