Merge branch 'main' into feat/add_copilot

This commit is contained in:
Paul Gauthier 2025-03-28 19:13:54 -10:00
commit ab00415ca1
277 changed files with 14165 additions and 2038 deletions


@@ -1,20 +1,108 @@
# Release history
### main branch
- Offer to OAuth against OpenRouter if no model and keys are provided.
- Prioritize `gemini/gemini-2.5-pro-exp-03-25` if `GEMINI_API_KEY` is set, and `vertex_ai/gemini-2.5-pro-exp-03-25` if `VERTEXAI_PROJECT` is set, when no model is specified.
- Select OpenRouter default model based on free/paid tier status if `OPENROUTER_API_KEY` is set and no model is specified.
- Warn at startup if `--stream` and `--cache-prompts` are used together, as cost estimates may be inaccurate.
- Boost repomap ranking for files whose path components match identifiers mentioned in the chat.
- Change web scraping timeout from an error to a warning, allowing scraping to continue with potentially incomplete content.
- Left-align markdown headings in the terminal output, by Peter Schilling.
- Aider wrote 90% of the code in this release.
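The startup model-selection rules above amount to a simple precedence check. A minimal sketch of that precedence (a hypothetical helper; aider's real startup logic lives elsewhere, and the relative ordering of the environment checks is an assumption):

```python
import os

def choose_default_model(specified=None):
    # Hypothetical sketch of the startup precedence described above;
    # the ordering between the environment checks is an assumption.
    if specified:
        return specified
    if os.getenv("GEMINI_API_KEY"):
        return "gemini/gemini-2.5-pro-exp-03-25"
    if os.getenv("VERTEXAI_PROJECT"):
        return "vertex_ai/gemini-2.5-pro-exp-03-25"
    if os.getenv("OPENROUTER_API_KEY"):
        # Free vs paid tier status decides which OpenRouter default is used.
        return "openrouter-default"
    return None  # no model and no keys: offer the OpenRouter OAuth flow
```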
### Aider v0.79.2
- Added 'gemini' alias for gemini-2.5-pro model.
- Updated Gemini 2.5 Pro max output tokens to 64k.
- Added support for Lisp-style semicolon comments in file watcher, by Matteo Landi.
- Added OpenRouter API error detection and retries.
- Added openrouter/deepseek-chat-v3-0324 model.
- Aider wrote 93% of the code in this release.
### Aider v0.79.1
- Improved model listing to include all models in fuzzy matching, including those provided by aider (not litellm).
### Aider v0.79.0
- Added support for Gemini 2.5 Pro models.
- Added support for DeepSeek V3 0324 model.
- Added a new `/context` command that automatically identifies which files need to be edited for a given request.
- Added `/edit` as an alias for the `/editor` command.
- Added "overeager" mode for Claude 3.7 Sonnet models to try and keep it working within the requested scope.
- Aider wrote 65% of the code in this release.
### Aider v0.78.0
- Added support for thinking tokens for OpenRouter Sonnet 3.7.
- Added commands to switch between model types: `/editor-model` for Editor Model, and `/weak-model` for Weak Model, by csala.
- Added model setting validation to ignore `--reasoning-effort` and `--thinking-tokens` if the model doesn't support them.
- Added `--check-model-accepts-settings` flag (default: true); disable it with `--no-check-model-accepts-settings` to force unsupported model settings through.
- Annotated which models support reasoning_effort and thinking_tokens settings in the model settings data.
- Improved code block rendering in markdown output with better padding using NoInsetMarkdown.
- Added `--git-commit-verify` flag (default: False) to control whether git commit hooks are bypassed.
- Fixed autocompletion for `/ask`, `/code`, and `/architect` commands, by shladnik.
- Added vi-like behavior when pressing enter in multiline-mode while in vi normal/navigation-mode, by Marco Mayer.
- Added AWS_PROFILE support for Bedrock models, allowing use of AWS profiles instead of explicit credentials, by lentil32.
- Enhanced `--aiderignore` argument to resolve both absolute and relative paths, by mopemope.
- Improved platform information handling to gracefully handle retrieval errors.
- Aider wrote 92% of the code in this release.
### Aider v0.77.1
- Bumped dependencies to pickup litellm fix for Ollama.
- Added support for `openrouter/google/gemma-3-27b-it` model.
- Updated exclude patterns for help documentation.
### Aider v0.77.0
- Big upgrade in [programming languages supported](https://aider.chat/docs/languages.html) by adopting [tree-sitter-language-pack](https://github.com/Goldziher/tree-sitter-language-pack/).
- 130 new languages with linter support.
- 20 new languages with repo-map support.
- Added `/think-tokens` command to set thinking token budget with support for human-readable formats (8k, 10.5k, 0.5M).
- Added `/reasoning-effort` command to control model reasoning level.
- The `/think-tokens` and `/reasoning-effort` commands display current settings when called without arguments.
- Display of thinking token budget and reasoning effort in model information.
- Changed `--thinking-tokens` argument to accept string values with human-readable formats.
- Added `--auto-accept-architect` flag (default: true) to automatically accept changes from architect coder format without confirmation.
- Added support for `cohere_chat/command-a-03-2025` and `gemini/gemma-3-27b-it`
- The bare `/drop` command now preserves original read-only files provided via args.read.
- Fixed a bug where default model would be set by deprecated `--shortcut` switches even when already specified in the command line.
- Improved AutoCompleter to require 3 characters for autocompletion to reduce noise.
- Aider wrote 72% of the code in this release.
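The human-readable formats accepted by `/think-tokens` and `--thinking-tokens` (8k, 10.5k, 0.5M) can be handled with a few lines of string parsing. A minimal sketch (a hypothetical helper, not aider's actual parser; treating "k" as 1024 rather than 1000 is an assumption):

```python
def parse_token_budget(value: str) -> int:
    """Parse a human-readable token budget like '8k', '10.5k' or '0.5M'.

    Hypothetical helper illustrating the formats described above;
    whether 'k' means 1024 or 1000 here is an assumption.
    """
    text = value.strip().lower()
    multipliers = {"k": 1024, "m": 1024 * 1024}
    if text and text[-1] in multipliers:
        return int(float(text[:-1]) * multipliers[text[-1]])
    return int(text)

print(parse_token_budget("8k"))    # 8192
print(parse_token_budget("0.5M"))  # 524288
```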
### Aider v0.76.2
- Fixed handling of JSONDecodeError when loading model cache file.
- Fixed handling of GitCommandError when retrieving git user configuration.
- Aider wrote 75% of the code in this release.
### Aider v0.76.1
- Added ignore_permission_denied option to file watcher to prevent errors when accessing restricted files, by Yutaka Matsubara.
- Aider wrote 0% of the code in this release.
### Aider v0.76.0
- Improved support for thinking/reasoning models:
- Added `--thinking-tokens` CLI option to control token budget for models that support thinking.
- Display thinking/reasoning content from LLMs which return it.
- Enhanced handling of reasoning tags to better clean up model responses.
- Added deprecation warning for `remove_reasoning` setting, now replaced by `reasoning_tag`.
- Aider will notify you when it's completed the last request and needs your input:
- Added [notifications when LLM responses are ready](https://aider.chat/docs/usage/notifications.html) with `--notifications` flag.
- Specify desktop notification command with `--notifications-command`.
- Added support for QWQ 32B.
- Switch to `tree-sitter-language-pack` for tree sitter support.
- Improved error handling for EOF (Ctrl+D) in user input prompts.
- Added helper function to ensure hex color values have a # prefix.
- Fixed handling of Git errors when reading staged files.
- Improved SSL verification control for model information requests.
- Improved empty LLM response handling with clearer warning messages.
- Fixed Git identity retrieval to respect global configuration, by Akira Komamura.
- Offer to install dependencies for Bedrock and Vertex AI models.
- Deprecated model shortcut args (like --4o, --opus) in favor of the --model flag.
- Aider wrote 85% of the code in this release.
### Aider v0.75.3

README.md

@@ -1,150 +1,170 @@
<p align="center">
<a href="https://aider.chat/"><img src="https://aider.chat/assets/logo.svg" alt="Aider Logo" width="300"></a>
</p>
<!-- Edit README.md, not index.md -->
<h1 align="center">
AI Pair Programming in Your Terminal
</h1>
<p align="center">
Aider lets you pair program with LLMs to start a new project or build on your existing codebase.
</p>
<!-- SCREENCAST START -->
<p align="center">
<img
src="https://aider.chat/assets/screencast.svg"
alt="aider screencast"
>
</p>
<!-- SCREENCAST END -->
<!-- VIDEO START
<p align="center">
<video style="max-width: 100%; height: auto;" autoplay loop muted playsinline>
<source src="/assets/shell-cmds-small.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</p>
VIDEO END -->
<p align="center">
<a href="https://discord.gg/Tv2uQnR88V">
<img src="https://img.shields.io/badge/Join-Discord-blue.svg"/>
</a>
<a href="https://aider.chat/docs/install.html">
<img src="https://img.shields.io/badge/Read-Docs-green.svg"/>
</a>
</p>
<!--[[[cog
from scripts.homepage import get_badges_md
text = get_badges_md()
cog.out(text)
]]]-->
<a href="https://github.com/Aider-AI/aider/stargazers"><img alt="GitHub Stars" title="Total number of GitHub stars the Aider project has received"
src="https://img.shields.io/github/stars/Aider-AI/aider?style=flat-square&logo=github&color=f1c40f&labelColor=555555"/></a>
<a href="https://pypi.org/project/aider-chat/"><img alt="PyPI Downloads" title="Total number of installations via pip from PyPI"
src="https://img.shields.io/badge/📦%20Installs-1.7M-2ecc71?style=flat-square&labelColor=555555"/></a>
<img alt="Tokens per week" title="Number of tokens processed weekly by Aider users"
src="https://img.shields.io/badge/📈%20Tokens%2Fweek-15B-3498db?style=flat-square&labelColor=555555"/>
<a href="https://openrouter.ai/"><img alt="OpenRouter Ranking" title="Aider's ranking among applications on the OpenRouter platform"
src="https://img.shields.io/badge/🏆%20OpenRouter-Top%2020-9b59b6?style=flat-square&labelColor=555555"/></a>
<a href="https://aider.chat/HISTORY.html"><img alt="Singularity" title="Percentage of the new code in Aider's last release written by Aider itself"
src="https://img.shields.io/badge/🔄%20Singularity-65%25-e74c3c?style=flat-square&labelColor=555555"/></a>
<!--[[[end]]]-->
## Features
### [Cloud and local LLMs](https://aider.chat/docs/llms.html)
<a href="https://aider.chat/docs/llms.html"><img src="https://aider.chat/assets/icons/brain.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Aider works best with Claude 3.7 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o, but can connect to almost any LLM, including local models.
<br>
### [Maps your codebase](https://aider.chat/docs/repomap.html)
<a href="https://aider.chat/docs/repomap.html"><img src="https://aider.chat/assets/icons/map-outline.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Aider makes a map of your entire codebase, which helps it work well in larger projects.
<br>
### [100+ code languages](https://aider.chat/docs/languages.html)
<a href="https://aider.chat/docs/languages.html"><img src="https://aider.chat/assets/icons/code-tags.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Aider works with most popular programming languages: python, javascript, rust, ruby, go, cpp, php, html, css, and dozens more.
<br>
### [Git integration](https://aider.chat/docs/git.html)
<a href="https://aider.chat/docs/git.html"><img src="https://aider.chat/assets/icons/source-branch.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Aider automatically commits changes with sensible commit messages. Use familiar git tools to easily diff, manage and undo AI changes.
<br>
### [Use in your IDE](https://aider.chat/docs/usage/watch.html)
<a href="https://aider.chat/docs/usage/watch.html"><img src="https://aider.chat/assets/icons/monitor.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Use aider from within your favorite IDE or editor. Ask for changes by adding comments to your code and aider will get to work.
<br>
### [Images & web pages](https://aider.chat/docs/usage/images-urls.html)
<a href="https://aider.chat/docs/usage/images-urls.html"><img src="https://aider.chat/assets/icons/image-multiple.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Add images and web pages to the chat to provide visual context, screenshots, reference docs, etc.
<br>
### [Voice-to-code](https://aider.chat/docs/usage/voice.html)
<a href="https://aider.chat/docs/usage/voice.html"><img src="https://aider.chat/assets/icons/microphone.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Speak with aider about your code! Request new features, test cases or bug fixes using your voice and let aider implement the changes.
<br>
### [Linting & testing](https://aider.chat/docs/usage/lint-test.html)
<a href="https://aider.chat/docs/usage/lint-test.html"><img src="https://aider.chat/assets/icons/check-all.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Automatically lint and test your code every time aider makes changes. Aider can fix problems detected by your linters and test suites.
<br>
### [Copy/paste to web chat](https://aider.chat/docs/usage/copypaste.html)
<a href="https://aider.chat/docs/usage/copypaste.html"><img src="https://aider.chat/assets/icons/content-copy.svg" width="32" height="32" align="left" valign="middle" style="margin-right:10px"></a>
Work with any LLM via its web chat interface. Aider streamlines copy/pasting code context and edits back and forth with a browser.
## Getting Started
```bash
python -m pip install aider-install
aider-install
# Change directory into your codebase
cd /to/your/project

# DeepSeek
aider --model deepseek --api-key deepseek=<key>

# Claude 3.7 Sonnet
aider --model sonnet --api-key anthropic=<key>

# GPT-4o
aider --model gpt-4o --api-key openai=<key>

# o3-mini
aider --model o3-mini --api-key openai=<key>

# Claude 3.7 Sonnet via OpenRouter
aider --model openrouter/anthropic/claude-3.7-sonnet --api-key openrouter=<key>

# DeepSeek via OpenRouter
aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=<key>

# GitHub Copilot
aider --model github_copilot/claude-3.7-sonnet-thought
```
> [!TIP]
> If you have not authenticated with GitHub Copilot before, the first time you run aider with a `github_copilot` model you will be prompted to authenticate using device code authentication. Follow the instructions in the terminal.
See the [installation instructions](https://aider.chat/docs/install.html) and [usage documentation](https://aider.chat/docs/usage.html) for more details.
## Top tier performance
[Aider has one of the top scores on SWE Bench](https://aider.chat/2024/06/02/main-swe-bench.html).
SWE Bench is a challenging software engineering benchmark where aider
solved *real* GitHub issues from popular open source
projects like django, scikit-learn, matplotlib, etc.
## More Information
### Documentation
- [Installation Guide](https://aider.chat/docs/install.html)
- [Usage Guide](https://aider.chat/docs/usage.html)
- [Tutorial Videos](https://aider.chat/docs/usage/tutorials.html)
- [Connecting to LLMs](https://aider.chat/docs/llms.html)
- [Configuration Options](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [FAQ](https://aider.chat/docs/faq.html)
### Community & Resources
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [GitHub](https://github.com/Aider-AI/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [GitHub Repository](https://github.com/Aider-AI/aider)
- [Discord Community](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)
## Kind words from users
- *"The best free open source AI coding assistant."* — [IndyDevDan](https://youtu.be/YALpX8oOn78)
- *"The best AI coding assistant so far."* — [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
- *"Aider ... has easily quadrupled my coding productivity."* — [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
- *"It's a cool workflow... Aider's ergonomics are perfect for me."* — [qup](https://news.ycombinator.com/item?id=38185326)
- *"It's really like having your senior developer live right in your Git repo - truly amazing!"* — [rappster](https://github.com/Aider-AI/aider/issues/124)
- *"What an amazing tool. It's incredible."* — [valyagolev](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *"Aider is such an astounding thing!"* — [cgrothaus](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *"It was WAY faster than I would be getting off the ground and making the first few working versions."* — [Daniel Feldman](https://twitter.com/d_feldman/status/1662295077387923456)
- *"THANK YOU for Aider! It really feels like a glimpse into the future of coding."* — [derwiki](https://news.ycombinator.com/item?id=38205643)
- *"It's just amazing. It is freeing me to do things I felt were out my comfort zone before."* — [Dougie](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *"This project is stellar."* — [funkytaco](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *"Amazing project, definitely the best AI coding assistant I've used."* — [joshuavial](https://github.com/Aider-AI/aider/issues/84)
- *"I absolutely love using Aider ... It makes software development feel so much lighter as an experience."* — [principalideal0](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *"I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity."* — [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *"I am an aider addict. I'm getting so much more work done, but in less time."* — [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *"After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever."* — [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *"Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing."* — [Josh Dingus](https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548)
- *"Hands down, this is the best AI coding assistant tool so far."* — [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *"[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life."* — [maledorak](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *"Best agent for actual dev work in existing codebases."* — [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
- *"One of my favorite pieces of software. Blazing trails on new paradigms!"* — [Chris Wall](https://x.com/chris65536/status/1905053299251798432)
- *"Aider has been revolutionary for me and my work."* — [Starry Hope](https://x.com/starryhopeblog/status/1904985812137132056)
- *"Try aider! One of the best ways to vibe code."* — [Chris Wall](https://x.com/Chris65536/status/1905053418961391929)
- *"Aider is hands down the best. And it's free and opensource."* — [AriyaSavakaLurker](https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/)
- *"Aider is also my best friend."* — [jzn21](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/)
- *"Try Aider, it's worth it."* — [jorgejhms](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/)
- *"I like aider :)"* — [Chenwei Cui](https://x.com/ccui42/status/1904965344999145698)
- *"Aider is the precision tool of LLM code gen. It is minimal, thoughtful and capable of surgical changes to your codebase all while keeping the developer in control."* — [Reilly Sweetland](https://x.com/rsweetland/status/1904963807237259586)


@@ -1,6 +1,6 @@
from packaging import version
__version__ = "0.75.3.dev"
__version__ = "0.79.3.dev"
safe_version = __version__
try:


@@ -3,6 +3,7 @@
import argparse
import os
import sys
from pathlib import Path
import configargparse
@@ -12,10 +13,20 @@ from aider.args_formatter import (
MarkdownHelpFormatter,
YamlHelpFormatter,
)
from aider.deprecated import add_deprecated_model_args
from .dump import dump # noqa: F401
def resolve_aiderignore_path(path_str, git_root=None):
path = Path(path_str)
if path.is_absolute():
return str(path)
elif git_root:
return str(Path(git_root) / path)
return str(path)
def default_env_file(git_root):
return os.path.join(git_root, ".env") if git_root else ".env"
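The new `resolve_aiderignore_path` helper above anchors relative ignore paths at the git root while leaving absolute paths untouched. Its behavior can be exercised in isolation (the function is reproduced from the diff for illustration; paths shown are POSIX-style):

```python
from pathlib import Path

def resolve_aiderignore_path(path_str, git_root=None):
    # Mirrors the helper added in the diff above.
    path = Path(path_str)
    if path.is_absolute():
        return str(path)
    elif git_root:
        return str(Path(git_root) / path)
    return str(path)

# Relative paths are anchored at the git root when one is known.
print(resolve_aiderignore_path(".aiderignore", "/repo"))  # /repo/.aiderignore
# Absolute paths pass through unchanged.
print(resolve_aiderignore_path("/etc/aiderignore"))       # /etc/aiderignore
```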
@@ -38,98 +49,6 @@ def get_parser(default_config_files, git_root):
default=None,
help="Specify the model to use for the main chat",
)
opus_model = "claude-3-opus-20240229"
group.add_argument(
"--opus",
action="store_const",
dest="model",
const=opus_model,
help=f"Use {opus_model} model for the main chat",
)
sonnet_model = "anthropic/claude-3-7-sonnet-20250219"
group.add_argument(
"--sonnet",
action="store_const",
dest="model",
const=sonnet_model,
help=f"Use {sonnet_model} model for the main chat",
)
haiku_model = "claude-3-5-haiku-20241022"
group.add_argument(
"--haiku",
action="store_const",
dest="model",
const=haiku_model,
help=f"Use {haiku_model} model for the main chat",
)
gpt_4_model = "gpt-4-0613"
group.add_argument(
"--4",
"-4",
action="store_const",
dest="model",
const=gpt_4_model,
help=f"Use {gpt_4_model} model for the main chat",
)
gpt_4o_model = "gpt-4o"
group.add_argument(
"--4o",
action="store_const",
dest="model",
const=gpt_4o_model,
help=f"Use {gpt_4o_model} model for the main chat",
)
gpt_4o_mini_model = "gpt-4o-mini"
group.add_argument(
"--mini",
action="store_const",
dest="model",
const=gpt_4o_mini_model,
help=f"Use {gpt_4o_mini_model} model for the main chat",
)
gpt_4_turbo_model = "gpt-4-1106-preview"
group.add_argument(
"--4-turbo",
action="store_const",
dest="model",
const=gpt_4_turbo_model,
help=f"Use {gpt_4_turbo_model} model for the main chat",
)
gpt_3_model_name = "gpt-3.5-turbo"
group.add_argument(
"--35turbo",
"--35-turbo",
"--3",
"-3",
action="store_const",
dest="model",
const=gpt_3_model_name,
help=f"Use {gpt_3_model_name} model for the main chat",
)
deepseek_model = "deepseek/deepseek-chat"
group.add_argument(
"--deepseek",
action="store_const",
dest="model",
const=deepseek_model,
help=f"Use {deepseek_model} model for the main chat",
)
o1_mini_model = "o1-mini"
group.add_argument(
"--o1-mini",
action="store_const",
dest="model",
const=o1_mini_model,
help=f"Use {o1_mini_model} model for the main chat",
)
o1_preview_model = "o1-preview"
group.add_argument(
"--o1-preview",
action="store_const",
dest="model",
const=o1_preview_model,
help=f"Use {o1_preview_model} model for the main chat",
)
##########
group = parser.add_argument_group("API Keys and settings")
@@ -208,6 +127,11 @@ def get_parser(default_config_files, git_root):
type=str,
help="Set the reasoning_effort API parameter (default: not set)",
)
group.add_argument(
"--thinking-tokens",
type=str,
help="Set the thinking token budget for models that support it (default: not set)",
)
group.add_argument(
"--verify-ssl",
action=argparse.BooleanOptionalAction,
@@ -234,6 +158,12 @@ def get_parser(default_config_files, git_root):
const="architect",
help="Use architect edit format for the main chat",
)
group.add_argument(
"--auto-accept-architect",
action=argparse.BooleanOptionalAction,
default=True,
help="Enable/disable automatic acceptance of architect changes (default: True)",
)
group.add_argument(
"--weak-model",
metavar="WEAK_MODEL",
@@ -261,6 +191,14 @@ def get_parser(default_config_files, git_root):
default=True,
help="Only work with models that have meta-data available (default: True)",
)
group.add_argument(
"--check-model-accepts-settings",
action=argparse.BooleanOptionalAction,
default=True,
help=(
"Check if model accepts settings like reasoning_effort/thinking_tokens (default: True)"
),
)
group.add_argument(
"--max-chat-history-tokens",
type=int,
@@ -460,9 +398,11 @@ def get_parser(default_config_files, git_root):
default_aiderignore_file = (
os.path.join(git_root, ".aiderignore") if git_root else ".aiderignore"
)
group.add_argument(
"--aiderignore",
metavar="AIDERIGNORE",
type=lambda path_str: resolve_aiderignore_path(path_str, git_root),
default=default_aiderignore_file,
help="Specify the aider ignore file (default: .aiderignore in git root)",
)
@@ -508,6 +448,12 @@ def get_parser(default_config_files, git_root):
default=False,
help="Prefix all commit messages with 'aider: ' (default: False)",
)
group.add_argument(
"--git-commit-verify",
action=argparse.BooleanOptionalAction,
default=False,
help="Enable/disable git pre-commit hooks with --no-verify (default: False)",
)
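Several of the new toggles in this diff (`--auto-accept-architect`, `--check-model-accepts-settings`, `--git-commit-verify`) use `argparse.BooleanOptionalAction` (Python 3.9+), which automatically generates a matching `--no-…` negation for each flag. A standalone illustration of that mechanism:

```python
import argparse

# BooleanOptionalAction creates a matching --no-<flag> negation
# automatically, so one declaration yields both switches.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--git-commit-verify",
    action=argparse.BooleanOptionalAction,
    default=False,
    help="Enable/disable git pre-commit hooks with --no-verify (default: False)",
)

print(parser.parse_args([]).git_commit_verify)                          # False
print(parser.parse_args(["--git-commit-verify"]).git_commit_verify)     # True
print(parser.parse_args(["--no-git-commit-verify"]).git_commit_verify)  # False
```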
group.add_argument(
"--commit",
action="store_true",
@@ -842,6 +788,11 @@ def get_parser(default_config_files, git_root):
help="Specify which editor to use for the /editor command",
)
##########
group = parser.add_argument_group("Deprecated model settings")
# Add deprecated model shortcut arguments
add_deprecated_model_args(parser, group)
return parser


@@ -1,6 +1,7 @@
from .architect_coder import ArchitectCoder
from .ask_coder import AskCoder
from .base_coder import Coder
from .context_coder import ContextCoder
from .editblock_coder import EditBlockCoder
from .editblock_fenced_coder import EditBlockFencedCoder
from .editor_editblock_coder import EditorEditBlockCoder
@@ -23,4 +24,5 @@ __all__ = [
ArchitectCoder,
EditorEditBlockCoder,
EditorWholeFileCoder,
ContextCoder,
]


@@ -6,6 +6,7 @@ from .base_coder import Coder
class ArchitectCoder(AskCoder):
edit_format = "architect"
gpt_prompts = ArchitectPrompts()
auto_accept_architect = False
def reply_completed(self):
content = self.partial_response_content
@@ -13,7 +14,7 @@ class ArchitectCoder(AskCoder):
if not content or not content.strip():
return
if not self.io.confirm_ask("Edit the files?"):
if not self.auto_accept_architect and not self.io.confirm_ask("Edit the files?"):
return
kwargs = dict()


@@ -207,10 +207,22 @@ class Coder:
prefix = "Model"
output = f"{prefix}: {main_model.name} with {self.edit_format} edit format"
# Check for thinking token budget
thinking_tokens = main_model.get_thinking_tokens(main_model)
if thinking_tokens:
output += f", {thinking_tokens} think tokens"
# Check for reasoning effort
reasoning_effort = main_model.get_reasoning_effort(main_model)
if reasoning_effort:
output += f", reasoning {reasoning_effort}"
if self.add_cache_headers or main_model.caches_by_default:
output += ", prompt cache"
if main_model.info.get("supports_assistant_prefill"):
output += ", infinite output"
lines.append(output)
if self.edit_format == "architect":
@ -310,6 +322,7 @@ class Coder:
ignore_mentions=None,
file_watcher=None,
auto_copy_context=False,
auto_accept_architect=True,
):
# Fill in a dummy Analytics if needed, but it is never .enable()'d
self.analytics = analytics if analytics is not None else Analytics()
@ -322,6 +335,7 @@ class Coder:
self.abs_root_path_cache = {}
self.auto_copy_context = auto_copy_context
self.auto_accept_architect = auto_accept_architect
self.ignore_mentions = ignore_mentions
if not self.ignore_mentions:
@ -383,7 +397,7 @@ class Coder:
self.main_model = main_model
# Set the reasoning tag name based on model settings or default
self.reasoning_tag_name = (
self.main_model.remove_reasoning if self.main_model.remove_reasoning else REASONING_TAG
self.main_model.reasoning_tag if self.main_model.reasoning_tag else REASONING_TAG
)
self.stream = stream and main_model.streaming
@ -1016,7 +1030,13 @@ class Coder:
return None
def get_platform_info(self):
platform_text = f"- Platform: {platform.platform()}\n"
platform_text = ""
try:
platform_text = f"- Platform: {platform.platform()}\n"
except KeyError:
# Skip platform info if it can't be retrieved
platform_text = "- Platform information unavailable\n"
shell_var = "COMSPEC" if os.name == "nt" else "SHELL"
shell_val = os.getenv(shell_var)
platform_text += f"- Shell: {shell_var}={shell_val}\n"
@ -1057,7 +1077,13 @@ class Coder:
return platform_text
def fmt_system_prompt(self, prompt):
lazy_prompt = self.gpt_prompts.lazy_prompt if self.main_model.lazy else ""
if self.main_model.lazy:
lazy_prompt = self.gpt_prompts.lazy_prompt
elif self.main_model.overeager:
lazy_prompt = self.gpt_prompts.overeager_prompt
else:
lazy_prompt = ""
platform_text = self.get_platform_info()
if self.suggest_shell_commands:
@ -1430,7 +1456,8 @@ class Coder:
return
try:
self.reply_completed()
if self.reply_completed():
return
except KeyboardInterrupt:
interrupted = True
@ -1573,22 +1600,26 @@ class Coder:
)
]
def get_file_mentions(self, content):
def get_file_mentions(self, content, ignore_current=False):
words = set(word for word in content.split())
# drop sentence punctuation from the end
words = set(word.rstrip(",.!;:?") for word in words)
# strip away all kinds of quotes
quotes = "".join(['"', "'", "`"])
quotes = "\"'`*_"
words = set(word.strip(quotes) for word in words)
addable_rel_fnames = self.get_addable_relative_files()
if ignore_current:
addable_rel_fnames = self.get_all_relative_files()
existing_basenames = {}
else:
addable_rel_fnames = self.get_addable_relative_files()
# Get basenames of files already in chat or read-only
existing_basenames = {os.path.basename(f) for f in self.get_inchat_relative_files()} | {
os.path.basename(self.get_rel_fname(f)) for f in self.abs_read_only_fnames
}
# Get basenames of files already in chat or read-only
existing_basenames = {os.path.basename(f) for f in self.get_inchat_relative_files()} | {
os.path.basename(self.get_rel_fname(f)) for f in self.abs_read_only_fnames
}
mentioned_rel_fnames = set()
fname_to_rel_fnames = {}
@ -1712,7 +1743,10 @@ class Coder:
try:
reasoning_content = completion.choices[0].message.reasoning_content
except AttributeError:
reasoning_content = None
try:
reasoning_content = completion.choices[0].message.reasoning
except AttributeError:
reasoning_content = None
try:
self.partial_response_content = completion.choices[0].message.content or ""
@ -1775,22 +1809,27 @@ class Coder:
pass
text = ""
try:
reasoning_content = chunk.choices[0].delta.reasoning_content
if reasoning_content:
if not self.got_reasoning_content:
text += f"<{REASONING_TAG}>\n\n"
text += reasoning_content
self.got_reasoning_content = True
received_content = True
except AttributeError:
pass
try:
reasoning_content = chunk.choices[0].delta.reasoning
except AttributeError:
reasoning_content = None
if reasoning_content:
if not self.got_reasoning_content:
text += f"<{REASONING_TAG}>\n\n"
text += reasoning_content
self.got_reasoning_content = True
received_content = True
try:
content = chunk.choices[0].delta.content
if content:
if self.got_reasoning_content and not self.ended_reasoning_content:
text += f"\n\n</{REASONING_TAG}>\n\n"
text += f"\n\n</{self.reasoning_tag_name}>\n\n"
self.ended_reasoning_content = True
text += content
@ -1922,11 +1961,6 @@ class Coder:
f" ${format_cost(self.total_cost)} session."
)
if self.add_cache_headers and self.stream:
warning = " Use --no-stream for accurate caching costs."
self.usage_report = tokens_report + "\n" + cost_report + warning
return
if cache_hit_tokens and cache_write_tokens:
sep = "\n"
else:

View file

@ -14,6 +14,9 @@ You NEVER leave comments describing code without implementing it!
You always COMPLETELY IMPLEMENT the needed code!
"""
overeager_prompt = """Pay careful attention to the scope of the user's request.
Do what they ask, but no more."""
example_messages = []
files_content_prefix = """I have *added these files to the chat* so you can go ahead and edit them.

View file

@ -0,0 +1,53 @@
from .base_coder import Coder
from .context_prompts import ContextPrompts
class ContextCoder(Coder):
"""Identify which files need to be edited for a given request."""
edit_format = "context"
gpt_prompts = ContextPrompts()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if not self.repo_map:
return
self.repo_map.refresh = "always"
self.repo_map.max_map_tokens *= self.repo_map.map_mul_no_files
self.repo_map.map_mul_no_files = 1.0
def reply_completed(self):
content = self.partial_response_content
if not content or not content.strip():
return True
# dump(repr(content))
current_rel_fnames = set(self.get_inchat_relative_files())
mentioned_rel_fnames = set(self.get_file_mentions(content, ignore_current=True))
# dump(current_rel_fnames)
# dump(mentioned_rel_fnames)
# dump(current_rel_fnames == mentioned_rel_fnames)
if mentioned_rel_fnames == current_rel_fnames:
return True
if self.num_reflections >= self.max_reflections - 1:
return True
self.abs_fnames = set()
for fname in mentioned_rel_fnames:
self.add_rel_fname(fname)
# dump(self.get_inchat_relative_files())
self.reflected_message = self.gpt_prompts.try_again
# mentioned_idents = self.get_ident_mentions(cur_msg_text)
# if mentioned_idents:
return True
def check_for_file_mentions(self, content):
pass

View file

@ -0,0 +1,75 @@
# flake8: noqa: E501
from .base_prompts import CoderPrompts
class ContextPrompts(CoderPrompts):
main_system = """Act as an expert code analyst.
Understand the user's question or request, solely to determine ALL the existing sources files which will need to be modified.
Return the *complete* list of files which will need to be modified based on the user's request.
Explain why each file is needed, including names of key classes/functions/methods/variables.
Be sure to include or omit the names of files already added to the chat, based on whether they are actually needed or not.
The user will use every file you mention, regardless of your commentary.
So *ONLY* mention the names of relevant files.
If a file is not relevant DO NOT mention it.
Only return files that will need to be modified, not files that contain useful/relevant functions.
You are only to discuss EXISTING files and symbols.
Only return existing files, don't suggest the names of new files or functions that we will need to create.
Always reply to the user in {language}.
Be concise in your replies.
Return:
1. A bulleted list of files the will need to be edited, and symbols that are highly relevant to the user's request.
2. A list of classes/functions/methods/variables that are located OUTSIDE those files which will need to be understood. Just the symbols names, *NOT* file names.
# Your response *MUST* use this format:
## ALL files we need to modify, with their relevant symbols:
- alarms/buzz.py
- `Buzzer` class which can make the needed sound
- `Buzzer.buzz_buzz()` method triggers the sound
- alarms/time.py
- `Time.set_alarm(hour, minute)` to set the alarm
## Relevant symbols from OTHER files:
- AlarmManager class for setup/teardown of alarms
- SoundFactory will be used to create a Buzzer
"""
example_messages = []
files_content_prefix = """These files have been *added these files to the chat* so we can see all of their contents.
*Trust this message as the true contents of the files!*
Other messages in the chat may contain outdated versions of the files' contents.
""" # noqa: E501
files_content_assistant_reply = (
"Ok, I will use that as the true, current contents of the files."
)
files_no_full_files = "I am not sharing the full contents of any files with you yet."
files_no_full_files_with_repo_map = ""
files_no_full_files_with_repo_map_reply = ""
repo_content_prefix = """I am working with you on code in a git repository.
Here are summaries of some files present in my git repo.
If you need to see the full contents of any files to answer my questions, ask me to *add them to the chat*.
"""
system_reminder = """
NEVER RETURN CODE!
"""
try_again = """I have updated the set of files added to the chat.
Review them to decide if this is the correct set of files or if we need to add more or remove files.
If this is the right set, just return the current list of files.
Or return a smaller or larger set of files which need to be edited, with symbols that are highly relevant to the user's request.
"""

View file

@ -17,6 +17,7 @@ from aider import models, prompts, voice
from aider.editor import pipe_editor
from aider.format_settings import format_settings
from aider.help import Help, install_help_extra
from aider.io import CommandCompletionException
from aider.llm import litellm
from aider.repo import ANY_GIT_ERROR
from aider.run_cmd import run_cmd
@ -27,8 +28,9 @@ from .dump import dump # noqa: F401
class SwitchCoder(Exception):
def __init__(self, **kwargs):
def __init__(self, placeholder=None, **kwargs):
self.kwargs = kwargs
self.placeholder = placeholder
class Commands:
@ -59,6 +61,7 @@ class Commands:
parser=None,
verbose=False,
editor=None,
original_read_only_fnames=None,
):
self.io = io
self.coder = coder
@ -77,11 +80,42 @@ class Commands:
self.help = None
self.editor = editor
# Store the original read-only filenames provided via args.read
self.original_read_only_fnames = set(original_read_only_fnames or [])
def cmd_model(self, args):
"Switch to a new LLM"
"Switch the Main Model to a new LLM"
model_name = args.strip()
model = models.Model(model_name, weak_model=self.coder.main_model.weak_model.name)
model = models.Model(
model_name,
editor_model=self.coder.main_model.editor_model.name,
weak_model=self.coder.main_model.weak_model.name,
)
models.sanity_check_models(self.io, model)
raise SwitchCoder(main_model=model)
def cmd_editor_model(self, args):
"Switch the Editor Model to a new LLM"
model_name = args.strip()
model = models.Model(
self.coder.main_model.name,
editor_model=model_name,
weak_model=self.coder.main_model.weak_model.name,
)
models.sanity_check_models(self.io, model)
raise SwitchCoder(main_model=model)
def cmd_weak_model(self, args):
"Switch the Weak Model to a new LLM"
model_name = args.strip()
model = models.Model(
self.coder.main_model.name,
editor_model=self.coder.main_model.editor_model.name,
weak_model=model_name,
)
models.sanity_check_models(self.io, model)
raise SwitchCoder(main_model=model)
@ -114,6 +148,10 @@ class Commands:
" them."
),
),
(
"context",
"Automatically identify which files will need to be edited.",
),
]
)
@ -355,7 +393,21 @@ class Commands:
def _drop_all_files(self):
self.coder.abs_fnames = set()
self.coder.abs_read_only_fnames = set()
# When dropping all files, keep those that were originally provided via args.read
if self.original_read_only_fnames:
# Keep only the original read-only files
to_keep = set()
for abs_fname in self.coder.abs_read_only_fnames:
rel_fname = self.coder.get_rel_fname(abs_fname)
if (
abs_fname in self.original_read_only_fnames
or rel_fname in self.original_read_only_fnames
):
to_keep.add(abs_fname)
self.coder.abs_read_only_fnames = to_keep
else:
self.coder.abs_read_only_fnames = set()
def _clear_chat_history(self):
self.coder.done_messages = []
@ -822,7 +874,12 @@ class Commands:
"Remove files from the chat session to free up context space"
if not args.strip():
self.io.tool_output("Dropping all files from the chat session.")
if self.original_read_only_fnames:
self.io.tool_output(
"Dropping all files from the chat session except originally read-only files."
)
else:
self.io.tool_output("Dropping all files from the chat session.")
self._drop_all_files()
return
@ -1065,6 +1122,18 @@ class Commands:
show_announcements=False,
)
def completions_ask(self):
raise CommandCompletionException()
def completions_code(self):
raise CommandCompletionException()
def completions_architect(self):
raise CommandCompletionException()
def completions_context(self):
raise CommandCompletionException()
def cmd_ask(self, args):
"""Ask questions about the code base without editing any files. If no prompt provided, switches to ask mode.""" # noqa
return self._generic_chat_command(args, "ask")
@ -1077,7 +1146,11 @@ class Commands:
"""Enter architect/editor mode using 2 different models. If no prompt provided, switches to architect/editor mode.""" # noqa
return self._generic_chat_command(args, "architect")
def _generic_chat_command(self, args, edit_format):
def cmd_context(self, args):
"""Enter context mode to see surrounding code context. If no prompt provided, switches to context mode.""" # noqa
return self._generic_chat_command(args, "context", placeholder=args.strip() or None)
def _generic_chat_command(self, args, edit_format, placeholder=None):
if not args.strip():
# Switch to the corresponding chat mode if no args provided
return self.cmd_chat_mode(edit_format)
@ -1094,11 +1167,13 @@ class Commands:
user_msg = args
coder.run(user_msg)
# Use the provided placeholder if any
raise SwitchCoder(
edit_format=self.coder.edit_format,
summarize_from_coder=False,
from_coder=coder,
show_announcements=False,
placeholder=placeholder,
)
def get_help_md(self):
@ -1411,6 +1486,62 @@ class Commands:
if user_input.strip():
self.io.set_placeholder(user_input.rstrip())
def cmd_edit(self, args=""):
"Alias for /editor: Open an editor to write a prompt"
return self.cmd_editor(args)
def cmd_think_tokens(self, args):
"Set the thinking token budget (supports formats like 8096, 8k, 10.5k, 0.5M)"
model = self.coder.main_model
if not args.strip():
# Display current value if no args are provided
formatted_budget = model.get_thinking_tokens(model)
if formatted_budget is None:
self.io.tool_output("Thinking tokens are not currently set.")
else:
budget = model.extra_params["thinking"].get("budget_tokens")
self.io.tool_output(
f"Current thinking token budget: {budget:,} tokens ({formatted_budget})."
)
return
value = args.strip()
model.set_thinking_tokens(value)
formatted_budget = model.get_thinking_tokens(model)
budget = model.extra_params["thinking"].get("budget_tokens")
self.io.tool_output(f"Set thinking token budget to {budget:,} tokens ({formatted_budget}).")
self.io.tool_output()
# Output announcements
announcements = "\n".join(self.coder.get_announcements())
self.io.tool_output(announcements)
def cmd_reasoning_effort(self, args):
"Set the reasoning effort level (values: number or low/medium/high depending on model)"
model = self.coder.main_model
if not args.strip():
# Display current value if no args are provided
reasoning_value = model.get_reasoning_effort(model)
if reasoning_value is None:
self.io.tool_output("Reasoning effort is not currently set.")
else:
self.io.tool_output(f"Current reasoning effort: {reasoning_value}")
return
value = args.strip()
model.set_reasoning_effort(value)
reasoning_value = model.get_reasoning_effort(model)
self.io.tool_output(f"Set reasoning effort to {reasoning_value}")
self.io.tool_output()
# Output announcements
announcements = "\n".join(self.coder.get_announcements())
self.io.tool_output(announcements)
def cmd_copy_context(self, args=None):
"""Copy the current chat context as markdown, suitable to paste into a web UI"""

126
aider/deprecated.py Normal file
View file

@ -0,0 +1,126 @@
def add_deprecated_model_args(parser, group):
"""Add deprecated model shortcut arguments to the argparse parser."""
opus_model = "claude-3-opus-20240229"
group.add_argument(
"--opus",
action="store_true",
help=f"Use {opus_model} model for the main chat (deprecated, use --model)",
default=False,
)
sonnet_model = "anthropic/claude-3-7-sonnet-20250219"
group.add_argument(
"--sonnet",
action="store_true",
help=f"Use {sonnet_model} model for the main chat (deprecated, use --model)",
default=False,
)
haiku_model = "claude-3-5-haiku-20241022"
group.add_argument(
"--haiku",
action="store_true",
help=f"Use {haiku_model} model for the main chat (deprecated, use --model)",
default=False,
)
gpt_4_model = "gpt-4-0613"
group.add_argument(
"--4",
"-4",
action="store_true",
help=f"Use {gpt_4_model} model for the main chat (deprecated, use --model)",
default=False,
)
gpt_4o_model = "gpt-4o"
group.add_argument(
"--4o",
action="store_true",
help=f"Use {gpt_4o_model} model for the main chat (deprecated, use --model)",
default=False,
)
gpt_4o_mini_model = "gpt-4o-mini"
group.add_argument(
"--mini",
action="store_true",
help=f"Use {gpt_4o_mini_model} model for the main chat (deprecated, use --model)",
default=False,
)
gpt_4_turbo_model = "gpt-4-1106-preview"
group.add_argument(
"--4-turbo",
action="store_true",
help=f"Use {gpt_4_turbo_model} model for the main chat (deprecated, use --model)",
default=False,
)
gpt_3_model_name = "gpt-3.5-turbo"
group.add_argument(
"--35turbo",
"--35-turbo",
"--3",
"-3",
action="store_true",
help=f"Use {gpt_3_model_name} model for the main chat (deprecated, use --model)",
default=False,
)
deepseek_model = "deepseek/deepseek-chat"
group.add_argument(
"--deepseek",
action="store_true",
help=f"Use {deepseek_model} model for the main chat (deprecated, use --model)",
default=False,
)
o1_mini_model = "o1-mini"
group.add_argument(
"--o1-mini",
action="store_true",
help=f"Use {o1_mini_model} model for the main chat (deprecated, use --model)",
default=False,
)
o1_preview_model = "o1-preview"
group.add_argument(
"--o1-preview",
action="store_true",
help=f"Use {o1_preview_model} model for the main chat (deprecated, use --model)",
default=False,
)
def handle_deprecated_model_args(args, io):
"""Handle deprecated model shortcut arguments and provide appropriate warnings."""
# Define model mapping
model_map = {
"opus": "claude-3-opus-20240229",
"sonnet": "anthropic/claude-3-7-sonnet-20250219",
"haiku": "claude-3-5-haiku-20241022",
"4": "gpt-4-0613",
"4o": "gpt-4o",
"mini": "gpt-4o-mini",
"4_turbo": "gpt-4-1106-preview",
"35turbo": "gpt-3.5-turbo",
"deepseek": "deepseek/deepseek-chat",
"o1_mini": "o1-mini",
"o1_preview": "o1-preview",
}
# Check if any deprecated args are used
for arg_name, model_name in model_map.items():
arg_name_clean = arg_name.replace("-", "_")
if hasattr(args, arg_name_clean) and getattr(args, arg_name_clean):
# Find preferred name to display in warning
from aider.models import MODEL_ALIASES
display_name = model_name
# Check if there's a shorter alias for this model
for alias, full_name in MODEL_ALIASES.items():
if full_name == model_name:
display_name = alias
break
# Show the warning
io.tool_warning(
f"The --{arg_name.replace('_', '-')} flag is deprecated and will be removed in a"
f" future version. Please use --model {display_name} instead."
)
# Set the model
if not args.model:
args.model = model_name
break

View file

@ -83,4 +83,8 @@ class LiteLLMExceptions:
)
if "boto3" in str(ex):
return ExInfo("APIConnectionError", False, "You need to: pip install boto3")
if "OpenrouterException" in str(ex) and "'choices'" in str(ex):
return ExInfo(
"APIConnectionError", True, "The OpenRouter API provider is down or overloaded."
)
return self.exceptions.get(ex.__class__, ExInfo(None, None, None))

View file

@ -10,4 +10,10 @@ exclude_website_pats = [
"docs/unified-diffs.md",
"docs/leaderboards/index.md",
"assets/**",
".jekyll-metadata",
"Gemfile.lock",
"Gemfile",
"_config.yml",
"**/OLD/**",
"OLD/**",
]

View file

@ -18,6 +18,7 @@ from prompt_toolkit.enums import EditingMode
from prompt_toolkit.filters import Condition, is_searching
from prompt_toolkit.history import FileHistory
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.key_binding.vi_state import InputMode
from prompt_toolkit.keys import Keys
from prompt_toolkit.lexers import PygmentsLexer
from prompt_toolkit.output.vt100 import is_dumb_terminal
@ -34,6 +35,7 @@ from rich.text import Text
from aider.mdstream import MarkdownStream
from .dump import dump # noqa: F401
from .editor import pipe_editor
from .utils import is_image_file
# Constants
@ -68,6 +70,13 @@ def restore_multiline(func):
return wrapper
class CommandCompletionException(Exception):
"""Raised when a command should use the normal autocompleter instead of
command-specific completion."""
pass
@dataclass
class ConfirmGroup:
preference: str = None
@ -186,14 +195,23 @@ class AutoCompleter(Completer):
return
if text[0] == "/":
yield from self.get_command_completions(document, complete_event, text, words)
return
try:
yield from self.get_command_completions(document, complete_event, text, words)
return
except CommandCompletionException:
# Fall through to normal completion
pass
candidates = self.words
candidates.update(set(self.fname_to_rel_fnames))
candidates = [word if type(word) is tuple else (word, word) for word in candidates]
last_word = words[-1]
# Only provide completions if the user has typed at least 3 characters
if len(last_word) < 3:
return
completions = []
for word_match, word_insert in candidates:
if word_match.lower().startswith(last_word.lower()):
@ -487,11 +505,16 @@ class InputOutput:
get_rel_fname(fname, root) for fname in (abs_read_only_fnames or [])
]
show = self.format_files_for_input(rel_fnames, rel_read_only_fnames)
prompt_prefix = ""
if edit_format:
show += edit_format
prompt_prefix += edit_format
if self.multiline_mode:
show += (" " if edit_format else "") + "multi"
show += "> "
prompt_prefix += (" " if edit_format else "") + "multi"
prompt_prefix += "> "
show += prompt_prefix
self.prompt_prefix = prompt_prefix
inp = ""
multiline_input = False
@ -534,12 +557,31 @@ class InputOutput:
def _(event):
"Navigate forward through history"
event.current_buffer.history_forward()
@kb.add("c-x", "c-e")
def _(event):
"Edit current input in external editor (like Bash)"
buffer = event.current_buffer
current_text = buffer.text
# Open the editor with the current text
edited_text = pipe_editor(input_data=current_text)
# Replace the buffer with the edited text, strip any trailing newlines
buffer.text = edited_text.rstrip('\n')
# Move cursor to the end of the text
buffer.cursor_position = len(buffer.text)
@kb.add("enter", eager=True, filter=~is_searching)
def _(event):
"Handle Enter key press"
if self.multiline_mode:
# In multiline mode, Enter adds a newline
if self.multiline_mode and not (
self.editingmode == EditingMode.VI
and event.app.vi_state.input_mode == InputMode.NAVIGATION
):
# In multiline mode and if not in vi-mode or vi navigation/normal mode,
# Enter adds a newline
event.current_buffer.insert_text("\n")
else:
# In normal mode, Enter submits
@ -557,7 +599,7 @@ class InputOutput:
while True:
if multiline_input:
show = ". "
show = self.prompt_prefix
try:
if self.prompt_session:
@ -573,7 +615,7 @@ class InputOutput:
self.clipboard_watcher.start()
def get_continuation(width, line_number, is_soft_wrap):
return ". "
return self.prompt_prefix
line = self.prompt_session.prompt(
show,

View file

@ -24,11 +24,13 @@ from aider.coders import Coder
from aider.coders.base_coder import UnknownEditFormat
from aider.commands import Commands, SwitchCoder
from aider.copypaste import ClipboardWatcher
from aider.deprecated import handle_deprecated_model_args
from aider.format_settings import format_settings, scrub_sensitive_info
from aider.history import ChatSummary
from aider.io import InputOutput
from aider.llm import litellm # noqa: F401; properly init litellm on launch
from aider.models import ModelSettings
from aider.onboarding import select_default_model
from aider.repo import ANY_GIT_ERROR, GitRepo
from aider.report import report_uncaught_exceptions
from aider.versioncheck import check_version, install_from_main_branch, install_upgrade
@ -125,8 +127,15 @@ def setup_git(git_root, io):
if not repo:
return
user_name = repo.git.config("--default", "", "--get", "user.name") or None
user_email = repo.git.config("--default", "", "--get", "user.email") or None
try:
user_name = repo.git.config("--get", "user.name") or None
except git.exc.GitCommandError:
user_name = None
try:
user_email = repo.git.config("--get", "user.email") or None
except git.exc.GitCommandError:
user_email = None
if user_name and user_email:
return repo.working_tree_dir
@ -349,11 +358,21 @@ def register_models(git_root, model_settings_fname, io, verbose=False):
def load_dotenv_files(git_root, dotenv_fname, encoding="utf-8"):
# Standard .env file search path
dotenv_files = generate_search_path_list(
".env",
git_root,
dotenv_fname,
)
# Explicitly add the OAuth keys file to the beginning of the list
oauth_keys_file = Path.home() / ".aider" / "oauth-keys.env"
if oauth_keys_file.exists():
# Insert at the beginning so it's loaded first (and potentially overridden)
dotenv_files.insert(0, str(oauth_keys_file.resolve()))
# Remove duplicates if it somehow got included by generate_search_path_list
dotenv_files = list(dict.fromkeys(dotenv_files))
loaded = []
for fname in dotenv_files:
try:
@ -560,6 +579,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
io = get_io(False)
io.tool_warning("Terminal does not support pretty output (UnicodeDecodeError)")
if args.stream and args.cache_prompts:
io.tool_warning("Cost estimates may be inaccurate when using streaming and caching.")
# Process any environment variables set via --set-env
if args.set_env:
for env_setting in args.set_env:
@ -588,6 +610,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
if args.openai_api_key:
os.environ["OPENAI_API_KEY"] = args.openai_api_key
# Handle deprecated model shortcut args
handle_deprecated_model_args(args, io)
if args.openai_api_base:
os.environ["OPENAI_API_BASE"] = args.openai_api_base
if args.openai_api_version:
@ -703,11 +728,6 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
if args.check_update:
check_version(io, verbose=args.verbose)
if args.list_models:
models.print_matching_models(io, args.list_models)
analytics.event("exit", reason="Listed models")
return 0
if args.git:
git_root = setup_git(git_root, io)
if args.gitignore:
@ -727,6 +747,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
register_models(git_root, args.model_settings_file, io, verbose=args.verbose)
register_litellm_models(git_root, args.model_metadata_file, io, verbose=args.verbose)
if args.list_models:
models.print_matching_models(io, args.list_models)
analytics.event("exit", reason="Listed models")
return 0
# Process any command line aliases
if args.alias:
for alias_def in args.alias:
@ -740,42 +765,60 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
alias, model = parts
models.MODEL_ALIASES[alias.strip()] = model.strip()
if not args.model:
# Select model based on available API keys
model_key_pairs = [
("ANTHROPIC_API_KEY", "sonnet"),
("DEEPSEEK_API_KEY", "deepseek"),
("OPENROUTER_API_KEY", "openrouter/anthropic/claude-3.7-sonnet"),
("OPENAI_API_KEY", "gpt-4o"),
("GEMINI_API_KEY", "flash"),
]
for env_key, model_name in model_key_pairs:
if os.environ.get(env_key):
args.model = model_name
io.tool_warning(
f"Found {env_key} so using {model_name} since no --model was specified."
)
break
if not args.model:
io.tool_error("You need to specify a --model and an --api-key to use.")
io.offer_url(urls.models_and_keys, "Open documentation url for more info?")
return 1
selected_model_name = select_default_model(args, io, analytics)
if not selected_model_name:
# Error message and analytics event are handled within select_default_model
return 1
args.model = selected_model_name # Update args with the selected model
main_model = models.Model(
args.model,
weak_model=args.weak_model,
editor_model=args.editor_model,
editor_edit_format=args.editor_edit_format,
verbose=args.verbose,
)
# add --reasoning-effort cli param
# Check if deprecated remove_reasoning is set
if main_model.remove_reasoning is not None:
io.tool_warning(
"Model setting 'remove_reasoning' is deprecated, please use 'reasoning_tag' instead."
)
# Set reasoning effort and thinking tokens if specified
if args.reasoning_effort is not None:
if not getattr(main_model, "extra_params", None):
main_model.extra_params = {}
if "extra_body" not in main_model.extra_params:
main_model.extra_params["extra_body"] = {}
main_model.extra_params["extra_body"]["reasoning_effort"] = args.reasoning_effort
# Apply if check is disabled or model explicitly supports it
if not args.check_model_accepts_settings or (
main_model.accepts_settings and "reasoning_effort" in main_model.accepts_settings
):
main_model.set_reasoning_effort(args.reasoning_effort)
if args.thinking_tokens is not None:
# Apply if check is disabled or model explicitly supports it
if not args.check_model_accepts_settings or (
main_model.accepts_settings and "thinking_tokens" in main_model.accepts_settings
):
main_model.set_thinking_tokens(args.thinking_tokens)
# Show warnings about unsupported settings that are being ignored
if args.check_model_accepts_settings:
settings_to_check = [
{"arg": args.reasoning_effort, "name": "reasoning_effort"},
{"arg": args.thinking_tokens, "name": "thinking_tokens"},
]
for setting in settings_to_check:
if setting["arg"] is not None and (
not main_model.accepts_settings
or setting["name"] not in main_model.accepts_settings
):
io.tool_warning(
f"Warning: {main_model.name} does not support '{setting['name']}', ignoring."
)
io.tool_output(
f"Use --no-check-model-accepts-settings to force the '{setting['name']}'"
" setting."
)
if args.copy_paste and args.edit_format is None:
if main_model.edit_format in ("diff", "whole"):
@ -824,6 +867,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
attribute_commit_message_committer=args.attribute_commit_message_committer,
commit_prompt=args.commit_prompt,
subtree_only=args.subtree_only,
git_commit_verify=args.git_commit_verify,
)
except FileNotFoundError:
pass
@ -849,6 +893,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
parser=parser,
verbose=args.verbose,
editor=args.editor,
original_read_only_fnames=read_only_fnames,
)
summarizer = ChatSummary(
@ -871,6 +916,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
else:
map_tokens = args.map_tokens
# Track auto-commits configuration
analytics.event("auto_commits", enabled=bool(args.auto_commits))
try:
coder = Coder.create(
main_model=main_model,
@ -903,6 +951,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
chat_language=args.chat_language,
detect_urls=args.detect_urls,
auto_copy_context=args.copy_paste,
auto_accept_architect=args.auto_accept_architect,
)
except UnknownEditFormat as err:
io.tool_error(str(err))
@ -1061,6 +1110,10 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
except SwitchCoder as switch:
coder.ok_to_warm_cache = False
# Set the placeholder if provided
if hasattr(switch, "placeholder") and switch.placeholder is not None:
io.placeholder = switch.placeholder
kwargs = dict(io=io, from_coder=coder)
kwargs.update(switch.kwargs)
if "show_announcements" in kwargs:

View file

@ -3,9 +3,12 @@
import io
import time
from rich import box
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown
from rich.markdown import CodeBlock, Heading, Markdown
from rich.panel import Panel
from rich.syntax import Syntax
from rich.text import Text
from aider.dump import dump # noqa: F401
@@ -46,6 +49,46 @@ The end.
""" # noqa: E501
class NoInsetCodeBlock(CodeBlock):
"""A code block with syntax highlighting and no padding."""
def __rich_console__(self, console, options):
code = str(self.text).rstrip()
syntax = Syntax(code, self.lexer_name, theme=self.theme, word_wrap=True, padding=(1, 0))
yield syntax
class LeftHeading(Heading):
"""A heading class that renders left-justified."""
def __rich_console__(self, console, options):
text = self.text
text.justify = "left" # Override justification
if self.tag == "h1":
# Draw a border around h1s, but keep text left-aligned
yield Panel(
text,
box=box.HEAVY,
style="markdown.h1.border",
)
else:
# Styled text for h2 and beyond
if self.tag == "h2":
yield Text("") # Keep the blank line before h2
yield text
class NoInsetMarkdown(Markdown):
"""Markdown with code blocks that have no padding and left-justified headings."""
elements = {
**Markdown.elements,
"fence": NoInsetCodeBlock,
"code_block": NoInsetCodeBlock,
"heading_open": LeftHeading,
}
class MarkdownStream:
"""Streaming markdown renderer that progressively displays content with a live updating window.
@@ -88,7 +131,7 @@ class MarkdownStream:
# Render the markdown to a string buffer
string_io = io.StringIO()
console = Console(file=string_io, force_terminal=True)
markdown = Markdown(text, **self.mdargs)
markdown = NoInsetMarkdown(text, **self.mdargs)
console.print(markdown)
output = string_io.getvalue()
@@ -186,6 +229,7 @@ if __name__ == "__main__":
_text = _text * 10
pm = MarkdownStream()
print("Using NoInsetMarkdown for code blocks with padding=0")
for i in range(6, len(_text), 5):
pm.update(_text[:i])
time.sleep(0.01)

View file

@@ -90,6 +90,8 @@ MODEL_ALIASES = {
"deepseek": "deepseek/deepseek-chat",
"r1": "deepseek/deepseek-reasoner",
"flash": "gemini/gemini-2.0-flash-exp",
"gemini-2.5-pro": "gemini/gemini-2.5-pro-exp-03-25",
"gemini": "gemini/gemini-2.5-pro-exp-03-25",
}
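The alias table above maps short names onto canonical model identifiers; names that are not aliases pass through unchanged (as the `MODEL_ALIASES.get(model, model)` call later in this diff shows). A minimal standalone sketch of that lookup, with two alias entries copied from the hunk above:

```python
# Standalone sketch; entries copied from the diff hunk above.
MODEL_ALIASES = {
    "gemini": "gemini/gemini-2.5-pro-exp-03-25",
    "gemini-2.5-pro": "gemini/gemini-2.5-pro-exp-03-25",
}

def resolve_model_name(model):
    # Map any alias to its canonical name; non-aliases pass through unchanged
    return MODEL_ALIASES.get(model, model)

print(resolve_model_name("gemini"))  # gemini/gemini-2.5-pro-exp-03-25
print(resolve_model_name("gpt-4o"))  # gpt-4o
```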
# Model metadata loaded from resources and user's files.
@@ -103,6 +105,7 @@ class ModelSettings:
use_repo_map: bool = False
send_undo_reply: bool = False
lazy: bool = False
overeager: bool = False
reminder: str = "user"
examples_as_sys_msg: bool = False
extra_params: Optional[dict] = None
@@ -113,8 +116,10 @@
streaming: bool = True
editor_model_name: Optional[str] = None
editor_edit_format: Optional[str] = None
remove_reasoning: Optional[str] = None
reasoning_tag: Optional[str] = None
remove_reasoning: Optional[str] = None # Deprecated alias for reasoning_tag
system_prompt_prefix: Optional[str] = None
accepts_settings: Optional[list] = None
# Load model settings from package resource
@@ -152,7 +157,11 @@ class ModelInfoManager:
if self.cache_file.exists():
cache_age = time.time() - self.cache_file.stat().st_mtime
if cache_age < self.CACHE_TTL:
self.content = json.loads(self.cache_file.read_text())
try:
self.content = json.loads(self.cache_file.read_text())
except json.JSONDecodeError:
# If the cache file is corrupted, treat it as missing
self.content = None
except OSError:
pass
@@ -225,11 +234,14 @@ model_info_manager = ModelInfoManager()
class Model(ModelSettings):
def __init__(self, model, weak_model=None, editor_model=None, editor_edit_format=None):
def __init__(
self, model, weak_model=None, editor_model=None, editor_edit_format=None, verbose=False
):
# Map any alias to its canonical name
model = MODEL_ALIASES.get(model, model)
self.name = model
self.verbose = verbose
self.max_chat_history_tokens = 1024
self.weak_model = None
@@ -272,6 +284,11 @@ class Model(ModelSettings):
val = getattr(source, field.name)
setattr(self, field.name, val)
# Handle backward compatibility: if remove_reasoning is set but reasoning_tag isn't,
# use remove_reasoning's value for reasoning_tag
if self.reasoning_tag is None and self.remove_reasoning is not None:
self.reasoning_tag = self.remove_reasoning
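The compatibility shim above lets existing settings files keep working: the new `reasoning_tag` field wins, and the deprecated `remove_reasoning` alias only fills in when the new field is unset. A hypothetical standalone sketch of that precedence rule:

```python
def resolve_reasoning_tag(reasoning_tag, remove_reasoning):
    # Hypothetical helper mirroring the shim above: the new field wins,
    # the deprecated alias is consulted only when the new field is unset.
    if reasoning_tag is None and remove_reasoning is not None:
        return remove_reasoning
    return reasoning_tag

print(resolve_reasoning_tag(None, "think"))      # think
print(resolve_reasoning_tag("reason", "think"))  # reason
```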
def configure_model_settings(self, model):
# Look for exact model match
exact_match = False
@@ -282,6 +299,10 @@
exact_match = True
break # Continue to apply overrides
# Initialize accepts_settings if it's None
if self.accepts_settings is None:
self.accepts_settings = []
model = model.lower()
# If no exact match, try generic settings
@@ -309,6 +330,8 @@
self.use_repo_map = True
self.use_temperature = False
self.system_prompt_prefix = "Formatting re-enabled. "
if "reasoning_effort" not in self.accepts_settings:
self.accepts_settings.append("reasoning_effort")
return # <--
if "/o1-mini" in model:
@@ -330,6 +353,8 @@
self.use_temperature = False
self.streaming = False
self.system_prompt_prefix = "Formatting re-enabled. "
if "reasoning_effort" not in self.accepts_settings:
self.accepts_settings.append("reasoning_effort")
return # <--
if "deepseek" in model and "v3" in model:
@@ -344,7 +369,7 @@
self.use_repo_map = True
self.examples_as_sys_msg = True
self.use_temperature = False
self.remove_reasoning = "think"
self.reasoning_tag = "think"
return # <--
if ("llama3" in model or "llama-3" in model) and "70b" in model:
@@ -370,6 +395,15 @@
self.reminder = "sys"
return # <--
if "3-7-sonnet" in model:
self.edit_format = "diff"
self.use_repo_map = True
self.examples_as_sys_msg = True
self.reminder = "user"
if "thinking_tokens" not in self.accepts_settings:
self.accepts_settings.append("thinking_tokens")
return # <--
if "3.5-sonnet" in model or "3-5-sonnet" in model:
self.edit_format = "diff"
self.use_repo_map = True
@@ -397,7 +431,7 @@
self.edit_format = "diff"
self.editor_edit_format = "editor-diff"
self.use_repo_map = True
self.remove_resoning = "think"
self.reasoning_tag = "think"
self.examples_as_sys_msg = True
self.use_temperature = 0.6
self.extra_params = dict(top_p=0.95)
@@ -558,6 +592,21 @@ class Model(ModelSettings):
model = self.name
res = litellm.validate_environment(model)
# If missing AWS credential keys but AWS_PROFILE is set, consider AWS credentials valid
if res["missing_keys"] and any(
key in ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"] for key in res["missing_keys"]
):
if model.startswith("bedrock/") or model.startswith("us.anthropic."):
if os.environ.get("AWS_PROFILE"):
res["missing_keys"] = [
k
for k in res["missing_keys"]
if k not in ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]
]
if not res["missing_keys"]:
res["keys_in_environment"] = True
if res["keys_in_environment"]:
return res
if res["missing_keys"]:
@@ -582,6 +631,107 @@ class Model(ModelSettings):
map_tokens = max(map_tokens, 1024)
return map_tokens
def set_reasoning_effort(self, effort):
"""Set the reasoning effort parameter for models that support it"""
if effort is not None:
if not self.extra_params:
self.extra_params = {}
if "extra_body" not in self.extra_params:
self.extra_params["extra_body"] = {}
self.extra_params["extra_body"]["reasoning_effort"] = effort
def parse_token_value(self, value):
"""
Parse a token value string into an integer.
Accepts formats: 8096, "8k", "10.5k", "0.5M", "10K", etc.
Args:
value: String or int token value
Returns:
Integer token value
"""
if isinstance(value, int):
return value
if not isinstance(value, str):
return int(value) # Try to convert to int
value = value.strip().upper()
if value.endswith("K"):
multiplier = 1024
value = value[:-1]
elif value.endswith("M"):
multiplier = 1024 * 1024
value = value[:-1]
else:
multiplier = 1
# Convert to float first to handle decimal values like "10.5k"
return int(float(value) * multiplier)
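`parse_token_value` above uses binary multipliers (one "k" is 1024 tokens) and converts through `float` so fractional suffixes like "10.5k" work. A self-contained copy for illustration:

```python
def parse_token_value(value):
    # Parse token budgets like 8096, "8k", "10.5k", "0.5M", "10K" into ints
    if isinstance(value, int):
        return value
    value = str(value).strip().upper()
    if value.endswith("K"):
        multiplier = 1024
        value = value[:-1]
    elif value.endswith("M"):
        multiplier = 1024 * 1024
        value = value[:-1]
    else:
        multiplier = 1
    # Convert to float first to handle decimal values like "10.5k"
    return int(float(value) * multiplier)

print(parse_token_value("8k"))     # 8192
print(parse_token_value("10.5k"))  # 10752
print(parse_token_value("0.5M"))   # 524288
```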
def set_thinking_tokens(self, value):
"""
Set the thinking token budget for models that support it.
Accepts formats: 8096, "8k", "10.5k", "0.5M", "10K", etc.
"""
if value is not None:
num_tokens = self.parse_token_value(value)
self.use_temperature = False
if not self.extra_params:
self.extra_params = {}
# OpenRouter models use 'reasoning' instead of 'thinking'
if self.name.startswith("openrouter/"):
self.extra_params["reasoning"] = {"max_tokens": num_tokens}
else:
self.extra_params["thinking"] = {"type": "enabled", "budget_tokens": num_tokens}
def get_thinking_tokens(self, model):
"""Get formatted thinking token budget if available"""
budget = None
if model.extra_params:
# Check for OpenRouter reasoning format
if (
"reasoning" in model.extra_params
and "max_tokens" in model.extra_params["reasoning"]
):
budget = model.extra_params["reasoning"]["max_tokens"]
# Check for standard thinking format
elif (
"thinking" in model.extra_params
and "budget_tokens" in model.extra_params["thinking"]
):
budget = model.extra_params["thinking"]["budget_tokens"]
if budget is not None:
# Format as xx.yK for thousands, xx.yM for millions
if budget >= 1024 * 1024:
value = budget / (1024 * 1024)
if value == int(value):
return f"{int(value)}M"
else:
return f"{value:.1f}M"
else:
value = budget / 1024
if value == int(value):
return f"{int(value)}k"
else:
return f"{value:.1f}k"
return None
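The formatting logic in `get_thinking_tokens` above is the inverse of the parser: budgets are rendered with a `k` or `M` suffix, dropping the decimal only for whole multiples. A standalone sketch of just that formatting branch:

```python
def format_token_budget(budget):
    # Render a token budget as "Nk"/"NM", with one decimal only when needed
    if budget >= 1024 * 1024:
        value = budget / (1024 * 1024)
        suffix = "M"
    else:
        value = budget / 1024
        suffix = "k"
    if value == int(value):
        return f"{int(value)}{suffix}"
    return f"{value:.1f}{suffix}"

print(format_token_budget(8192))             # 8k
print(format_token_budget(10752))            # 10.5k
print(format_token_budget(2 * 1024 * 1024))  # 2M
```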
def get_reasoning_effort(self, model):
"""Get reasoning effort value if available"""
if (
model.extra_params
and "extra_body" in model.extra_params
and "reasoning_effort" in model.extra_params["extra_body"]
):
return model.extra_params["extra_body"]["reasoning_effort"]
return None
def is_deepseek_r1(self):
name = self.name.lower()
if "deepseek" not in name:
@@ -600,7 +750,6 @@ class Model(ModelSettings):
kwargs = dict(
model=self.name,
messages=messages,
stream=stream,
)
@@ -629,6 +778,10 @@
hash_object = hashlib.sha1(key)
if "timeout" not in kwargs:
kwargs["timeout"] = request_timeout
if self.verbose:
dump(kwargs)
kwargs["messages"] = messages
res = litellm.completion(**kwargs)
return hash_object, res
@@ -654,7 +807,7 @@
res = response.choices[0].message.content
from aider.reasoning_tags import remove_reasoning_content
return remove_reasoning_content(res, self.remove_reasoning)
return remove_reasoning_content(res, self.reasoning_tag)
except litellm_ex.exceptions_tuple() as err:
ex_info = litellm_ex.get_ex_info(err)
@@ -823,7 +976,10 @@ def fuzzy_match_models(name):
name = name.lower()
chat_models = set()
for orig_model, attrs in litellm.model_cost.items():
model_metadata = list(litellm.model_cost.items())
model_metadata += list(model_info_manager.local_model_metadata.items())
for orig_model, attrs in model_metadata:
model = orig_model.lower()
if attrs.get("mode") != "chat":
continue

432
aider/onboarding.py Normal file
View file

@@ -0,0 +1,432 @@
import base64
import hashlib
import http.server
import os
import secrets
import socketserver
import threading
import time
import webbrowser
from urllib.parse import parse_qs, urlparse
import requests
from aider import urls
from aider.io import InputOutput
from aider.utils import check_pip_install_extra
def check_openrouter_tier(api_key):
"""
Checks if the user is on a free tier for OpenRouter.
Args:
api_key: The OpenRouter API key to check.
Returns:
A boolean indicating if the user is on a free tier (True) or paid tier (False).
Returns True if the check fails.
"""
try:
response = requests.get(
"https://openrouter.ai/api/v1/auth/key",
headers={"Authorization": f"Bearer {api_key}"},
timeout=5, # Add a reasonable timeout
)
response.raise_for_status()
data = response.json()
# According to the documentation, 'is_free_tier' will be true if the user has never paid
return data.get("data", {}).get("is_free_tier", True) # Default to True if not found
except Exception:
# If there's any error, we'll default to assuming free tier
return True
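Note that `check_openrouter_tier` fails open: any network or parsing error is treated as free tier. The only real decision logic is the JSON lookup, sketched here against the documented response shape `{"data": {"is_free_tier": bool}}`:

```python
def is_free_tier(payload):
    # Missing or malformed data defaults to True: assume free tier
    # unless OpenRouter explicitly says otherwise.
    return payload.get("data", {}).get("is_free_tier", True)

print(is_free_tier({"data": {"is_free_tier": False}}))  # False
print(is_free_tier({}))                                 # True
```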
def try_to_select_default_model():
"""
Attempts to select a default model based on available API keys.
Checks OpenRouter tier status to select appropriate model.
Returns:
The name of the selected model, or None if no suitable default is found.
"""
# Special handling for OpenRouter
openrouter_key = os.environ.get("OPENROUTER_API_KEY")
if openrouter_key:
# Check if the user is on a free tier
is_free_tier = check_openrouter_tier(openrouter_key)
if is_free_tier:
return "openrouter/google/gemini-2.5-pro-exp-03-25:free"
else:
return "openrouter/anthropic/claude-3.7-sonnet"
# Select model based on other available API keys
model_key_pairs = [
("ANTHROPIC_API_KEY", "sonnet"),
("DEEPSEEK_API_KEY", "deepseek"),
("OPENAI_API_KEY", "gpt-4o"),
("GEMINI_API_KEY", "gemini/gemini-2.5-pro-exp-03-25"),
("VERTEXAI_PROJECT", "vertex_ai/gemini-2.5-pro-exp-03-25"),
]
for env_key, model_name in model_key_pairs:
api_key_value = os.environ.get(env_key)
if api_key_value:
return model_name
return None
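Outside the OpenRouter special case, the ordered `model_key_pairs` list doubles as a preference ranking: the first environment variable found with a non-empty value wins. A hypothetical sketch of that scan, taking an explicit dict instead of reading `os.environ`:

```python
# Pairs copied from the function above; order encodes preference.
MODEL_KEY_PAIRS = [
    ("ANTHROPIC_API_KEY", "sonnet"),
    ("DEEPSEEK_API_KEY", "deepseek"),
    ("OPENAI_API_KEY", "gpt-4o"),
    ("GEMINI_API_KEY", "gemini/gemini-2.5-pro-exp-03-25"),
    ("VERTEXAI_PROJECT", "vertex_ai/gemini-2.5-pro-exp-03-25"),
]

def pick_default_model(env):
    # First key present with a non-empty value wins; empty strings are falsy
    for env_key, model_name in MODEL_KEY_PAIRS:
        if env.get(env_key):
            return model_name
    return None

print(pick_default_model({"OPENAI_API_KEY": "sk-...", "GEMINI_API_KEY": "g-..."}))  # gpt-4o
```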
def offer_openrouter_oauth(io, analytics):
"""
Offers OpenRouter OAuth flow to the user if no API keys are found.
Args:
io: The InputOutput object for user interaction.
analytics: The Analytics object for tracking events.
Returns:
True if authentication was successful, False otherwise.
"""
# No API keys found - Offer OpenRouter OAuth
io.tool_output("OpenRouter provides free and paid access to many LLMs.")
# Use confirm_ask which handles non-interactive cases
if io.confirm_ask(
"Login to OpenRouter or create a free account?",
default="y",
):
analytics.event("oauth_flow_initiated", provider="openrouter")
openrouter_key = start_openrouter_oauth_flow(io, analytics)
if openrouter_key:
# Successfully got key via OAuth, use the default OpenRouter model
# Ensure OPENROUTER_API_KEY is now set in the environment for later use
os.environ["OPENROUTER_API_KEY"] = openrouter_key
# Track OAuth success leading to model selection
analytics.event("oauth_flow_success")
return True
# OAuth failed or was cancelled by user implicitly (e.g., closing browser)
# Error messages are handled within start_openrouter_oauth_flow
analytics.event("oauth_flow_failure")
io.tool_error("OpenRouter authentication did not complete successfully.")
# Fall through to the final error message
return False
def select_default_model(args, io, analytics):
"""
Selects a default model based on available API keys if no model is specified.
Offers OAuth flow for OpenRouter if no keys are found.
Args:
args: The command line arguments object.
io: The InputOutput object for user interaction.
analytics: The Analytics object for tracking events.
Returns:
The name of the selected model, or None if no suitable default is found.
"""
if args.model:
return args.model # Model already specified
model = try_to_select_default_model()
if model:
io.tool_warning(f"Using {model} model with API key from environment.")
analytics.event("auto_model_selection", model=model)
return model
no_model_msg = "No LLM model was specified and no API keys were provided."
io.tool_warning(no_model_msg)
# Try OAuth if no model was detected
offer_openrouter_oauth(io, analytics)
# Check again after potential OAuth success
model = try_to_select_default_model()
if model:
return model
io.offer_url(urls.models_and_keys, "Open documentation URL for more info?")
# Helper function to find an available port
def find_available_port(start_port=8484, end_port=8584):
for port in range(start_port, end_port + 1):
try:
# Check if the port is available by trying to bind to it
with socketserver.TCPServer(("localhost", port), None):
return port
except OSError:
# Port is likely already in use
continue
return None
# PKCE code generation
def generate_pkce_codes():
code_verifier = secrets.token_urlsafe(64)
hasher = hashlib.sha256()
hasher.update(code_verifier.encode("utf-8"))
code_challenge = base64.urlsafe_b64encode(hasher.digest()).rstrip(b"=").decode("utf-8")
return code_verifier, code_challenge
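`generate_pkce_codes` above implements the RFC 7636 `S256` method: the challenge is the base64url-encoded SHA-256 digest of the verifier, with `=` padding stripped. Since a SHA-256 digest is 32 bytes, the stripped encoding is always 43 characters:

```python
import base64
import hashlib
import secrets

def generate_pkce_codes():
    # Verifier: high-entropy URL-safe string; challenge: S256 per RFC 7636
    code_verifier = secrets.token_urlsafe(64)
    digest = hashlib.sha256(code_verifier.encode("utf-8")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("utf-8")
    return code_verifier, code_challenge

verifier, challenge = generate_pkce_codes()
print(len(challenge))  # 43: 32-byte digest -> 44 base64 chars, minus one "=" pad
```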
# Function to exchange the authorization code for an API key
def exchange_code_for_key(code, code_verifier, io):
try:
response = requests.post(
"https://openrouter.ai/api/v1/auth/keys",
headers={"Content-Type": "application/json"},
json={
"code": code,
"code_verifier": code_verifier,
"code_challenge_method": "S256",
},
timeout=30, # Add a timeout
)
response.raise_for_status() # Raise exception for bad status codes (4xx or 5xx)
data = response.json()
api_key = data.get("key")
if not api_key:
io.tool_error("Error: 'key' not found in OpenRouter response.")
io.tool_error(f"Response: {response.text}")
return None
return api_key
except requests.exceptions.Timeout:
io.tool_error("Error: Request to OpenRouter timed out during code exchange.")
return None
except requests.exceptions.HTTPError as e:
io.tool_error(
f"Error exchanging code for OpenRouter key: {e.response.status_code} {e.response.reason}"
)
io.tool_error(f"Response: {e.response.text}")
return None
except requests.exceptions.RequestException as e:
io.tool_error(f"Error exchanging code for OpenRouter key: {e}")
return None
except Exception as e:
io.tool_error(f"Unexpected error during code exchange: {e}")
return None
# Function to start the OAuth flow
def start_openrouter_oauth_flow(io, analytics):
"""Initiates the OpenRouter OAuth PKCE flow using a local server."""
# Check for requests library
if not check_pip_install_extra(io, "requests", "OpenRouter OAuth", "aider[oauth]"):
return None
port = find_available_port()
if not port:
io.tool_error("Could not find an available port between 8484 and 8584.")
io.tool_error("Please ensure a port in this range is free, or configure manually.")
return None
callback_url = f"http://localhost:{port}/callback/aider"
auth_code = None
server_error = None
server_started = threading.Event()
shutdown_server = threading.Event()
class OAuthCallbackHandler(http.server.SimpleHTTPRequestHandler):
def do_GET(self):
nonlocal auth_code, server_error
parsed_path = urlparse(self.path)
if parsed_path.path == "/callback/aider":
query_params = parse_qs(parsed_path.query)
if "code" in query_params:
auth_code = query_params["code"][0]
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(
b"<html><body><h1>Success!</h1>"
b"<p>Aider has received the authentication code. "
b"You can close this browser tab.</p></body></html>"
)
# Signal the main thread to shut down the server
# Signal the main thread to shut down the server
else:
# Redirect to aider website if 'code' is missing (e.g., user visited manually)
self.send_response(302) # Found (temporary redirect)
self.send_header("Location", urls.website)
self.end_headers()
# No need to set server_error, just redirect.
# Do NOT shut down the server here; wait for timeout or success.
else:
# Redirect anything else (e.g., favicon.ico) to the main website as well
self.send_response(302)
self.send_header("Location", urls.website)
self.end_headers()
self.wfile.write(b"Not Found")
def log_message(self, format, *args):
# Suppress server logging to keep terminal clean
pass
def run_server():
nonlocal server_error
try:
with socketserver.TCPServer(("localhost", port), OAuthCallbackHandler) as httpd:
io.tool_output(f"Temporary server listening on {callback_url}", log_only=True)
server_started.set() # Signal that the server is ready
# Wait until shutdown is requested or timeout occurs (handled by main thread)
while not shutdown_server.is_set():
httpd.handle_request() # Handle one request at a time
# Add a small sleep to prevent busy-waiting if needed,
# though handle_request should block appropriately.
time.sleep(0.1)
io.tool_output("Shutting down temporary server.", log_only=True)
except Exception as e:
server_error = f"Failed to start or run temporary server: {e}"
server_started.set() # Signal even if failed, error will be checked
shutdown_server.set() # Ensure shutdown logic proceeds
server_thread = threading.Thread(target=run_server, daemon=True)
server_thread.start()
# Wait briefly for the server to start, or for an error
if not server_started.wait(timeout=5):
io.tool_error("Temporary authentication server failed to start in time.")
shutdown_server.set() # Ensure thread exits if it eventually starts
server_thread.join(timeout=1)
return None
# Check if server failed during startup
if server_error:
io.tool_error(server_error)
shutdown_server.set() # Ensure thread exits
server_thread.join(timeout=1)
return None
# Generate codes and URL
code_verifier, code_challenge = generate_pkce_codes()
auth_url_base = "https://openrouter.ai/auth"
auth_params = {
"callback_url": callback_url,
"code_challenge": code_challenge,
"code_challenge_method": "S256",
}
auth_url = f"{auth_url_base}?{'&'.join(f'{k}={v}' for k, v in auth_params.items())}"
io.tool_output("\nPlease open this URL in your browser to connect Aider with OpenRouter:")
io.tool_output()
print(auth_url)
MINUTES = 5
io.tool_output(f"\nWaiting up to {MINUTES} minutes for you to finish in the browser...")
io.tool_output("Use Control-C to interrupt.")
try:
webbrowser.open(auth_url)
except Exception:
pass
# Wait for the callback to set the auth_code or for timeout/error
interrupted = False
try:
shutdown_server.wait(timeout=MINUTES * 60) # Convert minutes to seconds
except KeyboardInterrupt:
io.tool_warning("\nOAuth flow interrupted.")
analytics.event("oauth_flow_failed", provider="openrouter", reason="user_interrupt")
interrupted = True
# Ensure the server thread is signaled to shut down
shutdown_server.set()
# Join the server thread to ensure it's cleaned up
server_thread.join(timeout=1)
if interrupted:
return None # Return None if interrupted by user
if server_error:
io.tool_error(f"Authentication failed: {server_error}")
analytics.event("oauth_flow_failed", provider="openrouter", reason=server_error)
return None
if not auth_code:
io.tool_error("Authentication with OpenRouter failed.")
analytics.event("oauth_flow_failed", provider="openrouter")
return None
io.tool_output("Completing authentication...")
analytics.event("oauth_flow_code_received", provider="openrouter")
# Exchange code for key
api_key = exchange_code_for_key(auth_code, code_verifier, io)
if api_key:
# Set env var for the current session immediately
os.environ["OPENROUTER_API_KEY"] = api_key
# Save the key to the oauth-keys.env file
try:
config_dir = os.path.expanduser("~/.aider")
os.makedirs(config_dir, exist_ok=True)
key_file = os.path.join(config_dir, "oauth-keys.env")
with open(key_file, "a", encoding="utf-8") as f:
f.write(f'OPENROUTER_API_KEY="{api_key}"\n')
io.tool_warning("Aider will load the OpenRouter key automatically in future sessions.")
io.tool_output()
analytics.event("oauth_flow_success", provider="openrouter")
return api_key
except Exception as e:
io.tool_error(f"Successfully obtained key, but failed to save it to file: {e}")
io.tool_warning("Set OPENROUTER_API_KEY environment variable for this session only.")
# Still return the key for the current session even if saving failed
analytics.event("oauth_flow_save_failed", provider="openrouter", reason=str(e))
return api_key
else:
io.tool_error("Authentication with OpenRouter failed.")
analytics.event("oauth_flow_failed", provider="openrouter", reason="code_exchange_failed")
return None
# Dummy Analytics class for testing
class DummyAnalytics:
def event(self, *args, **kwargs):
# print(f"Analytics Event: {args} {kwargs}") # Optional: print events
pass
def main():
"""Main function to test the OpenRouter OAuth flow."""
print("Starting OpenRouter OAuth flow test...")
# Use a real IO object for interaction
io = InputOutput(
pretty=True,
yes=False,
input_history_file=None,
chat_history_file=None,
tool_output_color="BLUE",
tool_error_color="RED",
)
# Use a dummy analytics object
analytics = DummyAnalytics()
# Ensure OPENROUTER_API_KEY is not set, to trigger the flow naturally
# (though start_openrouter_oauth_flow doesn't check this itself)
if "OPENROUTER_API_KEY" in os.environ:
print("Warning: OPENROUTER_API_KEY is already set in environment.")
# del os.environ["OPENROUTER_API_KEY"] # Optionally unset it for testing
api_key = start_openrouter_oauth_flow(io, analytics)
if api_key:
print("\nOAuth flow completed successfully!")
print(f"Obtained API Key (first 5 chars): {api_key[:5]}...")
# Be careful printing the key, even partially
else:
print("\nOAuth flow failed or was cancelled.")
print("\nOpenRouter OAuth flow test finished.")
if __name__ == "__main__":
main()

View file

@@ -0,0 +1,7 @@
These scm files are all adapted from the github repositories listed here:
https://github.com/Goldziher/tree-sitter-language-pack/blob/main/sources/language_definitions.json
See this URL for information on the licenses of each repo:
https://github.com/Goldziher/tree-sitter-language-pack/

View file

@@ -0,0 +1,5 @@
(function_declarator
declarator: (identifier) @name.definition.function) @definition.function
(call_expression
function: (identifier) @name.reference.call) @reference.call

View file

@@ -0,0 +1,9 @@
(struct_specifier name: (type_identifier) @name.definition.class body:(_)) @definition.class
(declaration type: (union_specifier name: (type_identifier) @name.definition.class)) @definition.class
(function_declarator declarator: (identifier) @name.definition.function) @definition.function
(type_definition declarator: (type_identifier) @name.definition.type) @definition.type
(enum_specifier name: (type_identifier) @name.definition.type) @definition.type

View file

@@ -0,0 +1,16 @@
; Definitions
(intent_def
(intent) @name.definition.intent) @definition.intent
(slot_def
(slot) @name.definition.slot) @definition.slot
(alias_def
(alias) @name.definition.alias) @definition.alias
; References
(slot_ref
(slot) @name.reference.slot) @reference.slot
(alias_ref
(alias) @name.reference.alias) @reference.alias

View file

@@ -0,0 +1,122 @@
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; Function Definitions ;;;;;;;;;;;;;;;;;;;;;;;
(defun_header
function_name: (sym_lit) @name.definition.function) @definition.function
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; Function Calls ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;
;;; Basically, we consider every list literal with symbol as the
;;; first element to be a call to a function named by that element.
;;; But we must exclude some cases. Note, tree-sitter @ignore
;;; cases only work if they are declared before the cases
;;; we want to include.
;; Exclude lambda lists for function definitions
;; For example:
;;
;; (defun my-func (arg1 arg2) ...)
;;
;; do not treat (arg1 arg2) as a call of function arg1
;;
(defun_header
lambda_list: (list_lit . [(sym_lit) (package_lit)] @ignore))
;; Similar to the above, but for
;;
;; (defmethod m ((type1 param1) (type2 param2)) ...)
;;
;; where list literals having symbol as their first element
;; are nested inside the lambda list.
(defun_header
lambda_list: (list_lit (list_lit . [(sym_lit) (package_lit)] @ignore)))
;;
;; (let ((var ...) (var2 ...)) ...)
;;
;; - exclude var, var2
;; - the same for let*, flet, labels, macrolet, symbol-macrolet
(list_lit . [(sym_lit) (package_lit)] @name.reference.call
. (list_lit (list_lit . [(sym_lit) (package_lit)] @ignore))
(#match? @name.reference.call
"(?i)^(cl:)?(let|let\\*|flet|labels|macrolet|symbol-macrolet)$")
)
;; TODO:
;; - exclude also:
;; - (defclass name (parent parent2)
;; ((slot1 ...)
;; (slot2 ...))
;; exclude the parent, slot1, slot2
;; - (flet ((func-1 (param1 param2))) ...)
;; - we already exclude func-1, but param1 is still recognized
;; as a function call - exclude it too
;; - the same for labels
;; - the same for macrolet
;; - what else?
;; (that's a non-goal to completely support all macros
;; and special operators, but every one we support
;; makes the solution a little bit better)
;; - (flet ((func-1 (param1 param2))) ...)
;; - instead of simply excluding it, as we do today,
;; tag func-1 as @local.definition.function (I suppose)
;; - the same for labels, macrolet
;; - @local.scope for let, let*, flet, labels, macrolet
;; - I guess the whole span of the scope text,
;; till the closing paren, should be tagged as @local.scope;
;; Hopefully, combined with @local.definition.function
;; within the scope, the usual @reference.call within
;; that scope will refer to the local definition,
;; and there will be no need to use @local.reference.call
;; (which is more difficult to implement).
;; - When implementing, remember the scope rules differences
;; of let vs let*, flet vs labels.
;; Include all other cases - list literal with symbol as the
;; first element
(list_lit . [(sym_lit) (package_lit)] @name.reference.call) @reference.call
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; classes
(list_lit . [(sym_lit) (package_lit)] @ignore
. [(sym_lit) (package_lit)] @name.definition.class
(#match? @ignore "(?i)^(cl:)?defclass$")
) @definition.class
(list_lit . [(sym_lit) (package_lit)] @ignore
. (quoting_lit [(sym_lit) (package_lit)] @name.reference.class)
(#match? @ignore "(?i)^(cl:)?make-instance$")
) @reference.class
;;; TODO:
;; - @reference.class for base classes
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; TODO:
;; - Symbols referenced in defpackage
;;
;; (defpackage ...
;; (:export (symbol-a :symbol-b #:symbol-c "SYMBOL-D")))
;;
;; The goal is to allow quick navigation from the API
;; overview in the form of defpackage, to the definition
;; where user can read parameters, docstring, etc.
;; - The @name must not include the colon, or sharpsign colon, quotes,
;; just symbol-a, symbol-b, symbol-c, symbol-d
;; - Downcase the names specified as string literals?
;; ("SYMBOL-D" -> symbol-d)
;; - We don't know if the exported symbol is a function, variable,
;; class or something else. The official doc
;; (https://tree-sitter.github.io/tree-sitter/code-navigation-systems)
;; does not even suggest a tag for variable reference.
;; (Although in practice, the `tree-sitter tags` command
;; allows any @reference.* and @definition.* tags)
;; Probably it's better to just use @reference.call for all
;; the symbols in the :export clause.
;;
;; - The same for the export function call:
;;
;; (export '(symbol-a :symbol-b #:symbol-c "SYMBOL-D"))

View file

@@ -0,0 +1,15 @@
(struct_specifier name: (type_identifier) @name.definition.class body:(_)) @definition.class
(declaration type: (union_specifier name: (type_identifier) @name.definition.class)) @definition.class
(function_declarator declarator: (identifier) @name.definition.function) @definition.function
(function_declarator declarator: (field_identifier) @name.definition.function) @definition.function
(function_declarator declarator: (qualified_identifier scope: (namespace_identifier) @local.scope name: (identifier) @name.definition.method)) @definition.method
(type_definition declarator: (type_identifier) @name.definition.type) @definition.type
(enum_specifier name: (type_identifier) @name.definition.type) @definition.type
(class_specifier name: (type_identifier) @name.definition.class) @definition.class

View file

@@ -0,0 +1,26 @@
(module_def (module_declaration (module_fqn) @name.definition.module)) @definition.module
(struct_declaration (struct) . (identifier) @name.definition.class) @definition.class
(interface_declaration (interface) . (identifier) @name.definition.interface) @definition.interface
(enum_declaration (enum) . (identifier) @name.definition.type) @definition.type
(class_declaration (class) . (identifier) @name.definition.class) @definition.class
(constructor (this) @name.definition.method) @definition.method
(destructor (this) @name.definition.method) @definition.method
(postblit (this) @name.definition.method) @definition.method
(manifest_declarator . (identifier) @name.definition.type) @definition.type
(function_declaration (identifier) @name.definition.function) @definition.function
(union_declaration (union) . (identifier) @name.definition.type) @definition.type
(anonymous_enum_declaration (enum_member . (identifier) @name.definition.constant)) @definition.constant
(enum_declaration (enum_member . (identifier) @name.definition.constant)) @definition.constant
(call_expression (identifier) @name.reference.call) @reference.call
(call_expression (type (template_instance (identifier) @name.reference.call))) @reference.call
(parameter (type (identifier) @name.reference.class) @reference.class (identifier))
(variable_declaration (type (identifier) @name.reference.class) @reference.class (declarator))

@ -0,0 +1,92 @@
(class_definition
name: (identifier) @name.definition.class) @definition.class
(method_signature
(function_signature)) @definition.method
(type_alias
(type_identifier) @name.definition.type) @definition.type
(method_signature
(getter_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(setter_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(function_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(factory_constructor_signature
(identifier) @name.definition.method)) @definition.method
(method_signature
(constructor_signature
name: (identifier) @name.definition.method)) @definition.method
(method_signature
(operator_signature)) @definition.method
(method_signature) @definition.method
(mixin_declaration
(mixin)
(identifier) @name.definition.mixin) @definition.mixin
(extension_declaration
name: (identifier) @name.definition.extension) @definition.extension
(new_expression
(type_identifier) @name.reference.class) @reference.class
(enum_declaration
name: (identifier) @name.definition.enum) @definition.enum
(function_signature
name: (identifier) @name.definition.function) @definition.function
(initialized_variable_definition
name: (identifier)
value: (identifier) @name.reference.class
value: (selector
"!"?
(argument_part
(arguments
(argument)*))?)?) @reference.class
(assignment_expression
left: (assignable_expression
(identifier)
(unconditional_assignable_selector
"."
(identifier) @name.reference.send))) @reference.call
(assignment_expression
left: (assignable_expression
(identifier)
(conditional_assignable_selector
"?."
(identifier) @name.reference.send))) @reference.call
((identifier) @name.reference.send
(selector
"!"?
(conditional_assignable_selector
"?." (identifier) @name.reference.send)?
(unconditional_assignable_selector
"."? (identifier) @name.reference.send)?
(argument_part
(arguments
(argument)*))?)*
(cascade_section
(cascade_selector
(identifier)) @name.reference.send
(argument_part
(arguments
(argument)*))?)?) @reference.call

@ -0,0 +1,5 @@
;; defun/defsubst
(function_definition name: (symbol) @name.definition.function) @definition.function
;; Treat macros as function definitions for the sake of TAGS.
(macro_definition name: (symbol) @name.definition.function) @definition.function

@ -0,0 +1,54 @@
; Definitions
; * modules and protocols
(call
target: (identifier) @ignore
(arguments (alias) @name.definition.module)
(#any-of? @ignore "defmodule" "defprotocol")) @definition.module
; * functions/macros
(call
target: (identifier) @ignore
(arguments
[
; zero-arity functions with no parentheses
(identifier) @name.definition.function
; regular function clause
(call target: (identifier) @name.definition.function)
; function clause with a guard clause
(binary_operator
left: (call target: (identifier) @name.definition.function)
operator: "when")
])
(#any-of? @ignore "def" "defp" "defdelegate" "defguard" "defguardp" "defmacro" "defmacrop" "defn" "defnp")) @definition.function
; References
; ignore calls to kernel/special-forms keywords
(call
target: (identifier) @ignore
(#any-of? @ignore "def" "defp" "defdelegate" "defguard" "defguardp" "defmacro" "defmacrop" "defn" "defnp" "defmodule" "defprotocol" "defimpl" "defstruct" "defexception" "defoverridable" "alias" "case" "cond" "else" "for" "if" "import" "quote" "raise" "receive" "require" "reraise" "super" "throw" "try" "unless" "unquote" "unquote_splicing" "use" "with"))
; ignore module attributes
(unary_operator
operator: "@"
operand: (call
target: (identifier) @ignore))
; * function call
(call
target: [
; local
(identifier) @name.reference.call
; remote
(dot
right: (identifier) @name.reference.call)
]) @reference.call
; * pipe into function call
(binary_operator
operator: "|>"
right: (identifier) @name.reference.call) @reference.call
; * modules
(alias) @name.reference.module @reference.module

@ -0,0 +1,19 @@
(value_declaration (function_declaration_left (lower_case_identifier) @name.definition.function)) @definition.function
(function_call_expr (value_expr (value_qid) @name.reference.function)) @reference.function
(exposed_value (lower_case_identifier) @name.reference.function) @reference.function
(type_annotation ((lower_case_identifier) @name.reference.function) (colon)) @reference.function
(type_declaration ((upper_case_identifier) @name.definition.type) ) @definition.type
(type_ref (upper_case_qid (upper_case_identifier) @name.reference.type)) @reference.type
(exposed_type (upper_case_identifier) @name.reference.type) @reference.type
(type_declaration (union_variant (upper_case_identifier) @name.definition.union)) @definition.union
(value_expr (upper_case_qid (upper_case_identifier) @name.reference.union)) @reference.union
(module_declaration
(upper_case_qid (upper_case_identifier)) @name.definition.module
) @definition.module

@ -0,0 +1,41 @@
; Modules
(module) @name.reference.module @reference.module
(import alias: (identifier) @name.reference.module) @reference.module
(remote_type_identifier
module: (identifier) @name.reference.module) @reference.module
((field_access
record: (identifier) @name.reference.module)
(#is-not? local)) @reference.module
; Functions
(function
name: (identifier) @name.definition.function) @definition.function
(external_function
name: (identifier) @name.definition.function) @definition.function
(unqualified_import (identifier) @name.reference.function) @reference.function
((function_call
function: (identifier) @name.reference.function) @reference.function
(#is-not? local))
((field_access
record: (identifier) @ignore
field: (label) @name.reference.function)
(#is-not? local)) @reference.function
((binary_expression
operator: "|>"
right: (identifier) @name.reference.function)
(#is-not? local)) @reference.function
; Types
(type_definition
(type_name
name: (type_identifier) @name.definition.type)) @definition.type
(type_definition
(data_constructors
(data_constructor
name: (constructor_name) @name.definition.constructor))) @definition.constructor
(external_type
(type_name
name: (type_identifier) @name.definition.type)) @definition.type
(type_identifier) @name.reference.type @reference.type
(constructor_name) @name.reference.constructor @reference.constructor

@ -0,0 +1,42 @@
(
(comment)* @doc
.
(function_declaration
name: (identifier) @name.definition.function) @definition.function
(#strip! @doc "^//\\s*")
(#set-adjacent! @doc @definition.function)
)
(
(comment)* @doc
.
(method_declaration
name: (field_identifier) @name.definition.method) @definition.method
(#strip! @doc "^//\\s*")
(#set-adjacent! @doc @definition.method)
)
(call_expression
function: [
(identifier) @name.reference.call
(parenthesized_expression (identifier) @name.reference.call)
(selector_expression field: (field_identifier) @name.reference.call)
(parenthesized_expression (selector_expression field: (field_identifier) @name.reference.call))
]) @reference.call
(type_spec
name: (type_identifier) @name.definition.type) @definition.type
(type_identifier) @name.reference.type @reference.type
(package_clause "package" (package_identifier) @name.definition.module)
(type_declaration (type_spec name: (type_identifier) @name.definition.interface type: (interface_type)))
(type_declaration (type_spec name: (type_identifier) @name.definition.class type: (struct_type)))
(import_declaration (import_spec) @name.reference.module)
(var_declaration (var_spec name: (identifier) @name.definition.variable))
(const_declaration (const_spec name: (identifier) @name.definition.constant))
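The `#strip!` predicate above removes the `//` comment leader from captured doc comments before they are used. The equivalent transform, sketched in plain Python for illustration:

```python
import re

def strip_doc(comment, leader=r"^//\s*"):
    # Mirror tree-sitter's #strip! predicate: drop the comment
    # leader from each captured line.
    return "\n".join(re.sub(leader, "", line) for line in comment.splitlines())

print(strip_doc("// Add returns the sum\n//   of a and b."))
```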

@ -0,0 +1,20 @@
(class_declaration
name: (identifier) @name.definition.class) @definition.class
(method_declaration
name: (identifier) @name.definition.method) @definition.method
(method_invocation
name: (identifier) @name.reference.method
arguments: (argument_list) @reference.call)
(interface_declaration
name: (identifier) @name.definition.interface) @definition.interface
(type_list
(type_identifier) @name.reference.interface) @reference.implementation
(object_creation_expression
type: (type_identifier) @name.reference.class) @reference.class
(superclass (type_identifier) @name.reference.class) @reference.class

@ -0,0 +1,34 @@
(function_declaration
name: [
(identifier) @name.definition.function
(dot_index_expression
field: (identifier) @name.definition.function)
]) @definition.function
(function_declaration
name: (method_index_expression
method: (identifier) @name.definition.method)) @definition.method
(assignment_statement
(variable_list .
name: [
(identifier) @name.definition.function
(dot_index_expression
field: (identifier) @name.definition.function)
])
(expression_list .
value: (function_definition))) @definition.function
(table_constructor
(field
name: (identifier) @name.definition.function
value: (function_definition))) @definition.function
(function_call
name: [
(identifier) @name.reference.call
(dot_index_expression
field: (identifier) @name.reference.call)
(method_index_expression
method: (identifier) @name.reference.method)
]) @reference.call

@ -0,0 +1,39 @@
; Class definitions @definition.class
; Function definitions @definition.function
; Interface definitions @definition.interface
; Method definitions @definition.method
; Module definitions @definition.module
; Function/method calls @reference.call
; Class reference @reference.class
; Interface implementation @reference.implementation
(
(identifier) @reference.class
(#match? @reference.class "^_*[A-Z][a-zA-Z0-9_]*$")
)
(class_definition (identifier) @name.definition.class) @definition.class
(actor_definition (identifier) @name.definition.class) @definition.class
(primitive_definition (identifier) @name.definition.class) @definition.class
(struct_definition (identifier) @name.definition.class) @definition.class
(type_alias (identifier) @name.definition.class) @definition.class
(trait_definition (identifier) @name.definition.interface) @definition.interface
(interface_definition (identifier) @name.definition.interface) @definition.interface
(constructor (identifier) @name.definition.method) @definition.method
(method (identifier) @name.definition.method) @definition.method
(behavior (identifier) @name.definition.method) @definition.method
(class_definition (type) @name.reference.implementation) @reference.implementation
(actor_definition (type) @name.reference.implementation) @reference.implementation
(primitive_definition (type) @name.reference.implementation) @reference.implementation
(struct_definition (type) @name.reference.implementation) @reference.implementation
(type_alias (type) @name.reference.implementation) @reference.implementation
; calls - not catching all possible call cases of callees for capturing the method name
(call_expression callee: [(identifier) (ffi_identifier)] @name.reference.call) @reference.call
(call_expression callee: (generic_expression [(identifier) (ffi_identifier)] @name.reference.call)) @reference.call
(call_expression callee: (member_expression (identifier) @name.reference.call .)) @reference.call
(call_expression callee: (member_expression (generic_expression [(identifier) (ffi_identifier)] @name.reference.call) .)) @reference.call
; TODO: add more possible callee expressions
(call_expression) @reference.call

@ -0,0 +1,5 @@
(property
(key) @name.definition.property) @definition.property
(substitution
(key) @name.reference.property) @reference.property

@ -0,0 +1,14 @@
(module (expression_statement (assignment left: (identifier) @name.definition.constant) @definition.constant))
(class_definition
name: (identifier) @name.definition.class) @definition.class
(function_definition
name: (identifier) @name.definition.function) @definition.function
(call
function: [
(identifier) @name.reference.call
(attribute
attribute: (identifier) @name.reference.call)
]) @reference.call

@ -0,0 +1,21 @@
(binary_operator
lhs: (identifier) @name.definition.function
operator: "<-"
rhs: (function_definition)
) @definition.function
(binary_operator
lhs: (identifier) @name.definition.function
operator: "="
rhs: (function_definition)
) @definition.function
(call
function: (identifier) @name.reference.call
) @reference.call
(call
function: (namespace_operator
rhs: (identifier) @name.reference.call
)
) @reference.call

@ -0,0 +1,12 @@
(list
.
(symbol) @reference._define
(#match? @reference._define "^(define|define/contract)$")
.
(list
.
(symbol) @name.definition.function) @definition.function)
(list
.
(symbol) @name.reference.call)

@ -0,0 +1,64 @@
; Method definitions
(
(comment)* @doc
.
[
(method
name: (_) @name.definition.method) @definition.method
(singleton_method
name: (_) @name.definition.method) @definition.method
]
(#strip! @doc "^#\\s*")
(#select-adjacent! @doc @definition.method)
)
(alias
name: (_) @name.definition.method) @definition.method
(setter
(identifier) @ignore)
; Class definitions
(
(comment)* @doc
.
[
(class
name: [
(constant) @name.definition.class
(scope_resolution
name: (_) @name.definition.class)
]) @definition.class
(singleton_class
value: [
(constant) @name.definition.class
(scope_resolution
name: (_) @name.definition.class)
]) @definition.class
]
(#strip! @doc "^#\\s*")
(#select-adjacent! @doc @definition.class)
)
; Module definitions
(
(module
name: [
(constant) @name.definition.module
(scope_resolution
name: (_) @name.definition.module)
]) @definition.module
)
; Calls
(call method: (identifier) @name.reference.call) @reference.call
(
[(identifier) (constant)] @name.reference.call @reference.call
(#is-not? local)
(#not-match? @name.reference.call "^(lambda|load|require|require_relative|__FILE__|__LINE__)$")
)

@ -0,0 +1,60 @@
; ADT definitions
(struct_item
name: (type_identifier) @name.definition.class) @definition.class
(enum_item
name: (type_identifier) @name.definition.class) @definition.class
(union_item
name: (type_identifier) @name.definition.class) @definition.class
; type aliases
(type_item
name: (type_identifier) @name.definition.class) @definition.class
; method definitions
(declaration_list
(function_item
name: (identifier) @name.definition.method) @definition.method)
; function definitions
(function_item
name: (identifier) @name.definition.function) @definition.function
; trait definitions
(trait_item
name: (type_identifier) @name.definition.interface) @definition.interface
; module definitions
(mod_item
name: (identifier) @name.definition.module) @definition.module
; macro definitions
(macro_definition
name: (identifier) @name.definition.macro) @definition.macro
; references
(call_expression
function: (identifier) @name.reference.call) @reference.call
(call_expression
function: (field_expression
field: (field_identifier) @name.reference.call)) @reference.call
(macro_invocation
macro: (identifier) @name.reference.call) @reference.call
; implementations
(impl_item
trait: (type_identifier) @name.reference.implementation) @reference.implementation
(impl_item
type: (type_identifier) @name.reference.implementation
!trait) @reference.implementation

@ -0,0 +1,43 @@
;; Method and Function declarations
(contract_declaration (_
(function_definition
name: (identifier) @name.definition.function) @definition.method))
(source_file
(function_definition
name: (identifier) @name.definition.function) @definition.function)
;; Contract, struct, enum and interface declarations
(contract_declaration
name: (identifier) @name.definition.class) @definition.class
(interface_declaration
name: (identifier) @name.definition.interface) @definition.interface
(library_declaration
name: (identifier) @name.definition.class) @definition.interface
(struct_declaration name: (identifier) @name.definition.class) @definition.class
(enum_declaration name: (identifier) @name.definition.class) @definition.class
(event_definition name: (identifier) @name.definition.class) @definition.class
;; Function calls
(call_expression (expression (identifier)) @name.reference.call ) @reference.call
(call_expression
(expression (member_expression
property: (_) @name.reference.method ))) @reference.call
;; Log emit
(emit_statement name: (_) @name.reference.class) @reference.class
;; Inheritance
(inheritance_specifier
ancestor: (user_defined_type (_) @name.reference.class . )) @reference.class
;; Imports (note that unknown is not standardised)
(import_directive
import_name: (_) @name.reference.module ) @reference.unknown

@ -0,0 +1,51 @@
(class_declaration
name: (type_identifier) @name.definition.class) @definition.class
(protocol_declaration
name: (type_identifier) @name.definition.interface) @definition.interface
(class_declaration
(class_body
[
(function_declaration
name: (simple_identifier) @name.definition.method
)
(subscript_declaration
(parameter (simple_identifier) @name.definition.method)
)
(init_declaration "init" @name.definition.method)
(deinit_declaration "deinit" @name.definition.method)
]
)
) @definition.method
(protocol_declaration
(protocol_body
[
(protocol_function_declaration
name: (simple_identifier) @name.definition.method
)
(subscript_declaration
(parameter (simple_identifier) @name.definition.method)
)
(init_declaration "init" @name.definition.method)
]
)
) @definition.method
(class_declaration
(class_body
[
(property_declaration
(pattern (simple_identifier) @name.definition.property)
)
]
)
) @definition.property
(property_declaration
(pattern (simple_identifier) @name.definition.property)
) @definition.property
(function_declaration
name: (simple_identifier) @name.definition.function) @definition.function

@ -0,0 +1,20 @@
(assignment
key: "LABEL"
(value
(content) @name.definition.label)) @definition.label
(assignment
key: "GOTO"
(value
(content) @name.reference.label)) @reference.label
(assignment
key: "ENV"
(env_var) @name.definition.variable) @definition.variable
(match
key: "ENV"
(env_var) @name.reference.variable) @reference.variable
(var_sub
(env_var) @name.reference.variable) @reference.variable

@ -0,0 +1,65 @@
; Definitions
(package_clause
name: (package_identifier) @name.definition.module) @definition.module
(trait_definition
name: (identifier) @name.definition.interface) @definition.interface
(enum_definition
name: (identifier) @name.definition.enum) @definition.enum
(simple_enum_case
name: (identifier) @name.definition.class) @definition.class
(full_enum_case
name: (identifier) @name.definition.class) @definition.class
(class_definition
name: (identifier) @name.definition.class) @definition.class
(object_definition
name: (identifier) @name.definition.object) @definition.object
(function_definition
name: (identifier) @name.definition.function) @definition.function
(val_definition
pattern: (identifier) @name.definition.variable) @definition.variable
(given_definition
name: (identifier) @name.definition.variable) @definition.variable
(var_definition
pattern: (identifier) @name.definition.variable) @definition.variable
(val_declaration
name: (identifier) @name.definition.variable) @definition.variable
(var_declaration
name: (identifier) @name.definition.variable) @definition.variable
(type_definition
name: (type_identifier) @name.definition.type) @definition.type
(class_parameter
name: (identifier) @name.definition.property) @definition.property
; References
(call_expression
(identifier) @name.reference.call) @reference.call
(instance_expression
(type_identifier) @name.reference.interface) @reference.interface
(instance_expression
(generic_type
(type_identifier) @name.reference.interface)) @reference.interface
(extends_clause
(type_identifier) @name.reference.class) @reference.class
(extends_clause
(generic_type
(type_identifier) @name.reference.class)) @reference.class

@ -7,8 +7,8 @@ from aider.dump import dump # noqa
# Standard tag identifier
REASONING_TAG = "thinking-content-" + "7bbeb8e1441453ad999a0bbba8a46d4b"
# Output formatting
-REASONING_START = "> Thinking ..."
-REASONING_END = "> ... done thinking.\n\n------"
+REASONING_START = "--------------\n► **THINKING**"
+REASONING_END = "------------\n► **ANSWER**"
def remove_reasoning_content(res, reasoning_tag):
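A minimal sketch of what tag-based reasoning removal can look like. This is an illustration of the idea, not aider's actual `remove_reasoning_content`:

```python
import re

def strip_reasoning(text, tag):
    # Illustrative only: drop a <tag>...</tag> reasoning block,
    # tolerating a missing opening tag (some models emit only the
    # closing tag at the end of their thinking).
    text = re.sub(rf"<{tag}>.*?</{tag}>\s*", "", text, flags=re.DOTALL)
    closing = f"</{tag}>"
    if closing in text:
        text = text.split(closing, 1)[1].lstrip()
    return text

print(strip_reasoning("<think>step by step...</think>The answer is 4.", "think"))
```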

@ -56,6 +56,7 @@ class GitRepo:
attribute_commit_message_committer=False,
commit_prompt=None,
subtree_only=False,
git_commit_verify=True,
):
self.io = io
self.models = models
@ -69,6 +70,7 @@ class GitRepo:
self.attribute_commit_message_committer = attribute_commit_message_committer
self.commit_prompt = commit_prompt
self.subtree_only = subtree_only
self.git_commit_verify = git_commit_verify
self.ignore_file_cache = {}
if git_dname:
@ -133,7 +135,9 @@ class GitRepo:
# if context:
# full_commit_message += "\n\n# Aider chat conversation:\n\n" + context
-cmd = ["-m", full_commit_message, "--no-verify"]
+cmd = ["-m", full_commit_message]
+if not self.git_commit_verify:
+    cmd.append("--no-verify")
if fnames:
fnames = [str(self.abs_root_path(fn)) for fn in fnames]
for fname in fnames:
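The change above makes `--no-verify` opt-in via the new `git_commit_verify` flag instead of always skipping pre-commit hooks. The resulting command construction, sketched as a standalone (hypothetical) helper:

```python
def build_commit_cmd(message, verify=True, fnames=()):
    # Only skip pre-commit hooks when verification is explicitly disabled.
    cmd = ["-m", message]
    if not verify:
        cmd.append("--no-verify")
    cmd.extend(str(f) for f in fnames)
    return cmd

print(build_commit_cmd("fix bug", verify=False, fnames=["a.py"]))
# ['-m', 'fix bug', '--no-verify', 'a.py']
```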

@ -398,13 +398,30 @@ class RepoMap:
# dump(fname)
rel_fname = self.get_rel_fname(fname)
current_pers = 0.0 # Start with 0 personalization score
if fname in chat_fnames:
-    personalization[rel_fname] = personalize
+    current_pers += personalize
    chat_rel_fnames.add(rel_fname)

if rel_fname in mentioned_fnames:
-    personalization[rel_fname] = personalize
+    # Use max to avoid double counting if in chat_fnames and mentioned_fnames
+    current_pers = max(current_pers, personalize)

+# Check path components against mentioned_idents
+path_obj = Path(rel_fname)
+path_components = set(path_obj.parts)
+basename_with_ext = path_obj.name
+basename_without_ext, _ = os.path.splitext(basename_with_ext)
+components_to_check = path_components.union({basename_with_ext, basename_without_ext})
+matched_idents = components_to_check.intersection(mentioned_idents)

+if matched_idents:
+    # Add personalization *once* if any path component matches a mentioned ident
+    current_pers += personalize

+if current_pers > 0:
+    personalization[rel_fname] = current_pers  # Assign the final calculated value
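The path-component boost can be checked in isolation. A standalone rendering of that matching logic, with hypothetical file names and mentions:

```python
import os
from pathlib import Path

def path_matches(rel_fname, mentioned_idents):
    # Collect directory parts, the basename, and the basename without
    # its extension, then intersect with the mentioned identifiers.
    path_obj = Path(rel_fname)
    components = set(path_obj.parts)
    base = path_obj.name
    stem, _ = os.path.splitext(base)
    return components.union({base, stem}) & set(mentioned_idents)

print(path_matches("aider/repomap.py", {"repomap", "coder"}))  # {'repomap'}
```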
tags = list(self.get_tags(fname, rel_fname))
if tags is None:
@ -445,12 +462,19 @@ class RepoMap:
progress()
definers = defines[ident]
+mul = 1.0
+is_snake = ("_" in ident) and any(c.isalpha() for c in ident)
+is_camel = any(c.isupper() for c in ident) and any(c.islower() for c in ident)
if ident in mentioned_idents:
-    mul = 10
-elif ident.startswith("_"):
-    mul = 0.1
-else:
-    mul = 1
+    mul *= 10
+if (is_snake or is_camel) and len(ident) >= 8:
+    mul *= 10
+if ident.startswith("_"):
+    mul *= 0.1
+if len(defines[ident]) > 5:
+    mul *= 0.1
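The new multiplier favors long snake_case or camelCase identifiers and damps private names and names with many definers. A standalone rendering of that scoring, with a hypothetical `n_definers` parameter standing in for `len(defines[ident])`:

```python
def ident_multiplier(ident, mentioned=(), n_definers=1):
    is_snake = ("_" in ident) and any(c.isalpha() for c in ident)
    is_camel = any(c.isupper() for c in ident) and any(c.islower() for c in ident)
    mul = 1.0
    if ident in mentioned:
        mul *= 10
    if (is_snake or is_camel) and len(ident) >= 8:
        mul *= 10
    if ident.startswith("_"):
        mul *= 0.1
    if n_definers > 5:
        mul *= 0.1
    return mul

print(ident_multiplier("get_tags"))        # 10.0 (snake_case, len >= 8)
print(ident_multiplier("_internal_name"))  # boosted 10x, then damped 0.1x
```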
for referencer, num_refs in Counter(references[ident]).items():
for definer in definers:
@ -458,10 +482,14 @@ class RepoMap:
# if referencer == definer:
# continue
+use_mul = mul
+if referencer in chat_rel_fnames:
+    use_mul *= 50

# scale down so high freq (low value) mentions don't dominate
num_refs = math.sqrt(num_refs)

-G.add_edge(referencer, definer, weight=mul * num_refs, ident=ident)
+G.add_edge(referencer, definer, weight=use_mul * num_refs, ident=ident)
if not references:
pass
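Scaling reference counts by their square root keeps high-frequency identifiers from dominating the edge weights. The effect is easy to see with hypothetical reference counts:

```python
import math
from collections import Counter

# Hypothetical per-file reference counts for one identifier.
refs = Counter({"main.py": 100, "util.py": 4, "cli.py": 1})

# sqrt compresses the spread: 100 refs only weigh 10x as much as 1 ref.
weights = {fname: math.sqrt(n) for fname, n in refs.items()}
print(weights)  # {'main.py': 10.0, 'util.py': 2.0, 'cli.py': 1.0}
```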

@ -63,6 +63,22 @@
//"supports_tool_choice": true,
"supports_prompt_caching": true
},
"openrouter/deepseek/deepseek-chat-v3-0324": {
"max_tokens": 8192,
"max_input_tokens": 64000,
"max_output_tokens": 8192,
"input_cost_per_token": 0.00000055,
"input_cost_per_token_cache_hit": 0.00000014,
"cache_read_input_token_cost": 0.00000014,
"cache_creation_input_token_cost": 0.0,
"output_cost_per_token": 0.00000219,
"litellm_provider": "openrouter",
"mode": "chat",
//"supports_function_calling": true,
"supports_assistant_prefill": true,
//"supports_tool_choice": true,
"supports_prompt_caching": true
},
"fireworks_ai/accounts/fireworks/models/deepseek-r1": {
"max_tokens": 160000,
"max_input_tokens": 128000,
@ -241,4 +257,113 @@
"supports_system_messages": true,
"supports_tool_choice": true
},
"gemini/gemini-2.5-pro-exp-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
//"litellm_provider": "vertex_ai-language-models",
"litellm_provider": "gemini",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"vertex_ai/gemini-2.5-pro-exp-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
"litellm_provider": "vertex_ai-language-models",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.5-pro-exp-03-25:free": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
"litellm_provider": "openrouter",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
}

@ -185,6 +185,7 @@
editor_edit_format: editor-diff
- name: anthropic/claude-3-7-sonnet-20250219
overeager: true
edit_format: diff
weak_model_name: anthropic/claude-3-5-haiku-20241022
use_repo_map: true
@ -196,8 +197,10 @@
cache_control: true
editor_model_name: anthropic/claude-3-7-sonnet-20250219
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: anthropic/claude-3-7-sonnet-latest
overeager: true
edit_format: diff
weak_model_name: anthropic/claude-3-5-haiku-20241022
use_repo_map: true
@ -209,6 +212,7 @@
cache_control: true
editor_model_name: anthropic/claude-3-7-sonnet-latest
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: claude-3-7-sonnet-20250219
edit_format: diff
@ -222,8 +226,10 @@
cache_control: true
editor_model_name: claude-3-7-sonnet-20250219
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: claude-3-7-sonnet-latest
overeager: true
edit_format: diff
weak_model_name: claude-3-5-haiku-20241022
use_repo_map: true
@ -235,8 +241,10 @@
cache_control: true
editor_model_name: claude-3-7-sonnet-latest
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0
overeager: true
edit_format: diff
weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
use_repo_map: true
@ -248,8 +256,10 @@
cache_control: true
editor_model_name: bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
overeager: true
edit_format: diff
weak_model_name: bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0
use_repo_map: true
@ -261,8 +271,10 @@
cache_control: true
editor_model_name: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: bedrock_converse/anthropic.claude-3-7-sonnet-20250219-v1:0
overeager: true
edit_format: diff
weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
use_repo_map: true
@ -274,8 +286,10 @@
cache_control: true
editor_model_name: bedrock_converse/anthropic.claude-3-7-sonnet-20250219-v1:0
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: bedrock_converse/us.anthropic.claude-3-7-sonnet-20250219-v1:0
overeager: true
edit_format: diff
weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
use_repo_map: true
@ -287,8 +301,10 @@
cache_control: true
editor_model_name: bedrock_converse/us.anthropic.claude-3-7-sonnet-20250219-v1:0
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: vertex_ai/claude-3-7-sonnet@20250219
overeager: true
edit_format: diff
weak_model_name: vertex_ai/claude-3-5-haiku@20241022
use_repo_map: true
@ -297,8 +313,10 @@
max_tokens: 64000
editor_model_name: vertex_ai/claude-3-7-sonnet@20250219
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: vertex_ai-anthropic_models/vertex_ai/claude-3-7-sonnet@20250219
overeager: true
edit_format: diff
weak_model_name: vertex_ai/claude-3-5-haiku@20241022
use_repo_map: true
@ -307,8 +325,10 @@
max_tokens: 64000
editor_model_name: vertex_ai-anthropic_models/vertex_ai/claude-3-7-sonnet@20250219
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: openrouter/anthropic/claude-3.7-sonnet
overeager: true
edit_format: diff
weak_model_name: openrouter/anthropic/claude-3-5-haiku
use_repo_map: true
@ -320,8 +340,10 @@
cache_control: true
editor_model_name: openrouter/anthropic/claude-3.7-sonnet
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: openrouter/anthropic/claude-3.7-sonnet:beta
overeager: true
edit_format: diff
weak_model_name: openrouter/anthropic/claude-3-5-haiku
use_repo_map: true
@ -333,6 +355,7 @@
cache_control: true
editor_model_name: openrouter/anthropic/claude-3.7-sonnet
editor_edit_format: editor-diff
accepts_settings: ["thinking_tokens"]
- name: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
edit_format: diff
@ -547,8 +570,8 @@
examples_as_sys_msg: true
extra_params:
max_tokens: 8192
include_reasoning: true
caches_by_default: true
use_temperature: false
editor_model_name: openrouter/deepseek/deepseek-chat
editor_edit_format: editor-diff
@ -635,6 +658,15 @@
reminder: sys
examples_as_sys_msg: true
- name: openrouter/deepseek/deepseek-chat-v3-0324
edit_format: diff
use_repo_map: true
reminder: sys
examples_as_sys_msg: true
extra_params:
max_tokens: 8192
caches_by_default: true
- name: openrouter/openai/gpt-4o
edit_format: diff
weak_model_name: openrouter/openai/gpt-4o-mini
@ -694,6 +726,7 @@
streaming: false
editor_model_name: azure/gpt-4o
editor_edit_format: editor-diff
accepts_settings: ["reasoning_effort"]
- name: o1-preview
edit_format: architect
@ -732,6 +765,7 @@
editor_model_name: openrouter/openai/gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: openai/o1
edit_format: diff
@ -742,6 +776,7 @@
editor_model_name: openai/gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: o1
edit_format: diff
@ -752,6 +787,7 @@
editor_model_name: gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: openrouter/qwen/qwen-2.5-coder-32b-instruct
edit_format: diff
@ -780,7 +816,7 @@
streaming: true
editor_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
editor_edit_format: editor-diff
remove_reasoning: think
reasoning_tag: think
extra_params:
max_tokens: 160000
@ -800,6 +836,7 @@
editor_model_name: gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: o3-mini
edit_format: diff
@ -809,6 +846,7 @@
editor_model_name: gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: openrouter/openai/o3-mini
edit_format: diff
@ -818,6 +856,7 @@
editor_model_name: openrouter/openai/gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: openrouter/openai/o3-mini-high
edit_format: diff
@ -827,6 +866,7 @@
editor_model_name: openrouter/openai/gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: azure/o3-mini
edit_format: diff
@ -836,6 +876,7 @@
editor_model_name: azure/gpt-4o
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: gpt-4.5-preview
edit_format: diff
@ -858,7 +899,7 @@
editor_edit_format: editor-diff
- name: fireworks_ai/accounts/fireworks/models/qwq-32b
remove_reasoning: think
reasoning_tag: think
edit_format: diff
weak_model_name: fireworks_ai/accounts/fireworks/models/qwen2p5-coder-32b-instruct
use_repo_map: true
@ -872,7 +913,7 @@
top_p: 0.95
- name: groq/qwen-qwq-32b
remove_reasoning: think
reasoning_tag: think
edit_format: diff
weak_model_name: groq/qwen-2.5-coder-32b
use_repo_map: true
@ -1003,3 +1044,30 @@
extra_headers:
editor-version: Neovim/0.9.0
Copilot-Integration-Id: vscode-chat
- name: cohere_chat/command-a-03-2025
examples_as_sys_msg: true
- name: openrouter/cohere/command-a-03-2025
examples_as_sys_msg: true
- name: gemini/gemma-3-27b-it
use_system_prompt: false
- name: openrouter/google/gemma-3-27b-it:free
use_system_prompt: false
- name: openrouter/google/gemma-3-27b-it
use_system_prompt: false
- name: gemini/gemini-2.5-pro-exp-03-25
edit_format: diff-fenced
use_repo_map: true
- name: openrouter/google/gemini-2.5-pro-exp-03-25:free
edit_format: diff-fenced
use_repo_map: true
- name: vertex_ai/gemini-2.5-pro-exp-03-25
edit_format: diff-fenced
use_repo_map: true
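The keys added throughout these hunks (`overeager`, `accepts_settings`, `reasoning_tag`) are ordinary model-settings fields, so they can also be supplied per-model in a user's `.aider.model.settings.yml`. A minimal sketch, using a hypothetical model name:

```yaml
# Sketch of a user-level override in .aider.model.settings.yml.
# The model name below is hypothetical; the keys mirror those added above.
- name: openrouter/example/my-thinking-model
  edit_format: diff
  use_repo_map: true
  reasoning_tag: think                    # strip <think>...</think> blocks from replies
  accepts_settings: ["thinking_tokens"]   # allow --thinking-tokens for this model
```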

View file

@ -159,7 +159,8 @@ class Scraper:
try:
response = page.goto(url, wait_until="networkidle", timeout=5000)
except PlaywrightTimeoutError:
self.print_error(f"Timeout while loading {url}")
print(f"Page didn't quiesce, scraping content anyway: {url}")
response = None
except PlaywrightError as e:
self.print_error(f"Error navigating to {url}: {str(e)}")
return None, None

View file

@ -64,7 +64,7 @@ class FileWatcher:
"""Watches source files for changes and AI comments"""
# Compiled regex pattern for AI comments
ai_comment_pattern = re.compile(r"(?:#|//|--) *(ai\b.*|ai\b.*|.*\bai[?!]?) *$", re.IGNORECASE)
ai_comment_pattern = re.compile(r"(?:#|//|--|;+) *(ai\b.*|ai\b.*|.*\bai[?!]?) *$", re.IGNORECASE)
def __init__(self, coder, gitignores=None, verbose=False, analytics=None, root=None):
self.coder = coder
@ -140,7 +140,10 @@ class FileWatcher:
roots_to_watch = self.get_roots_to_watch()
for changes in watch(
*roots_to_watch, watch_filter=self.filter_func, stop_event=self.stop_event
*roots_to_watch,
watch_filter=self.filter_func,
stop_event=self.stop_event,
ignore_permission_denied=True,
):
if self.handle_changes(changes):
return
@ -259,7 +262,7 @@ class FileWatcher:
line_nums.append(i)
comments.append(comment)
comment = comment.lower()
comment = comment.lstrip("/#-")
comment = comment.lstrip("/#-;") # Added semicolon for Lisp comments
comment = comment.strip()
if comment.startswith("ai!") or comment.endswith("ai!"):
has_action = "!"
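The updated comment pattern can be sanity-checked directly; copying the regex verbatim from the hunk above, the new `;+` alternative makes Lisp-style semicolon comments trigger the watcher:

```python
import re

# Pattern copied verbatim from the diff above; ";+" newly accepts
# Lisp/Scheme-style semicolon comment leaders.
ai_comment_pattern = re.compile(
    r"(?:#|//|--|;+) *(ai\b.*|ai\b.*|.*\bai[?!]?) *$", re.IGNORECASE
)

assert ai_comment_pattern.search(";; make this recursive ai!")   # Lisp comment
assert ai_comment_pattern.search("# refactor this function ai")  # Python comment
assert ai_comment_pattern.search("// AI: add error handling")    # C-style, case-insensitive
assert not ai_comment_pattern.search("total = price - rebate")   # ordinary code, no match
```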

View file

@ -7,12 +7,13 @@ description: Release notes and stats on aider writing its own code.
# Release history
Aider writes most of its own code, usually about 70-80% of the new code in each release.
These
[statistics are based on the git commit history](/docs/faq.html#how-are-the-aider-wrote-xx-of-code-stats-computed)
of the aider repo.
{% include blame.md %}
## Release notes
<!--[[[cog
@ -24,20 +25,108 @@ cog.out(text)
### main branch
- Offer to OAuth against OpenRouter if no model and keys are provided.
- Prioritize `gemini/gemini-2.5-pro-exp-03-25` if `GEMINI_API_KEY` is set, and `vertex_ai/gemini-2.5-pro-exp-03-25` if `VERTEXAI_PROJECT` is set, when no model is specified.
- Select OpenRouter default model based on free/paid tier status if `OPENROUTER_API_KEY` is set and no model is specified.
- Warn at startup if `--stream` and `--cache-prompts` are used together, as cost estimates may be inaccurate.
- Boost repomap ranking for files whose path components match identifiers mentioned in the chat.
- Change web scraping timeout from an error to a warning, allowing scraping to continue with potentially incomplete content.
- Left-align markdown headings in the terminal output, by Peter Schilling.
- Aider wrote 90% of the code in this release.
### Aider v0.79.2
- Added 'gemini' alias for gemini-2.5-pro model.
- Updated Gemini 2.5 Pro max output tokens to 64k.
- Added support for Lisp-style semicolon comments in file watcher, by Matteo Landi.
- Added OpenRouter API error detection and retries.
- Added openrouter/deepseek-chat-v3-0324 model.
- Aider wrote 93% of the code in this release.
### Aider v0.79.1
- Improved model listing to include all models in fuzzy matching, including those provided by aider (not litellm).
### Aider v0.79.0
- Added support for Gemini 2.5 Pro models.
- Added support for DeepSeek V3 0324 model.
- Added a new `/context` command that automatically identifies which files need to be edited for a given request.
- Added `/edit` as an alias for the `/editor` command.
- Added "overeager" mode for Claude 3.7 Sonnet models to try and keep it working within the requested scope.
- Aider wrote 65% of the code in this release.
### Aider v0.78.0
- Added support for thinking tokens for OpenRouter Sonnet 3.7.
- Added commands to switch between model types: `/editor-model` for Editor Model, and `/weak-model` for Weak Model, by csala.
- Added model setting validation to ignore `--reasoning-effort` and `--thinking-tokens` if the model doesn't support them.
- Added `--check-model-accepts-settings` flag (default: true) to force unsupported model settings.
- Annotated which models support reasoning_effort and thinking_tokens settings in the model settings data.
- Improved code block rendering in markdown output with better padding using NoInsetMarkdown.
- Added `--git-commit-verify` flag (default: False) to control whether git commit hooks are bypassed.
- Fixed autocompletion for `/ask`, `/code`, and `/architect` commands, by shladnik.
- Added vi-like behavior when pressing enter in multiline-mode while in vi normal/navigation-mode, by Marco Mayer.
- Added AWS_PROFILE support for Bedrock models, allowing use of AWS profiles instead of explicit credentials, by lentil32.
- Enhanced `--aiderignore` argument to resolve both absolute and relative paths, by mopemope.
- Improved platform information handling to gracefully handle retrieval errors.
- Aider wrote 92% of the code in this release.
### Aider v0.77.1
- Bumped dependencies to pickup litellm fix for Ollama.
- Added support for `openrouter/google/gemma-3-27b-it` model.
- Updated exclude patterns for help documentation.
### Aider v0.77.0
- Big upgrade in [programming languages supported](https://aider.chat/docs/languages.html) by adopting [tree-sitter-language-pack](https://github.com/Goldziher/tree-sitter-language-pack/).
- 130 new languages with linter support.
- 20 new languages with repo-map support.
- Added `/think-tokens` command to set thinking token budget with support for human-readable formats (8k, 10.5k, 0.5M).
- Added `/reasoning-effort` command to control model reasoning level.
- The `/think-tokens` and `/reasoning-effort` commands display current settings when called without arguments.
- Display of thinking token budget and reasoning effort in model information.
- Changed `--thinking-tokens` argument to accept string values with human-readable formats.
- Added `--auto-accept-architect` flag (default: true) to automatically accept changes from architect coder format without confirmation.
- Added support for `cohere_chat/command-a-03-2025` and `gemini/gemma-3-27b-it`
- The bare `/drop` command now preserves original read-only files provided via args.read.
- Fixed a bug where default model would be set by deprecated `--shortcut` switches even when already specified in the command line.
- Improved AutoCompleter to require 3 characters for autocompletion to reduce noise.
- Aider wrote 72% of the code in this release.
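The human-readable token budgets mentioned above (8k, 10.5k, 0.5M) are simple to parse. An illustrative sketch, assuming binary (1024-based) multipliers; the function name is made up and this is not aider's actual implementation:

```python
# Illustrative only: parse budgets like "8k", "10.5k", "0.5M" into token counts,
# assuming 1024-based multipliers.
def parse_token_budget(value: str) -> int:
    value = value.strip().lower()
    for suffix, mult in (("k", 1024), ("m", 1024 * 1024)):
        if value.endswith(suffix):
            return int(float(value[:-1]) * mult)
    return int(value)

print(parse_token_budget("8k"))    # 8192
print(parse_token_budget("0.5M"))  # 524288
```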
### Aider v0.76.2
- Fixed handling of JSONDecodeError when loading model cache file.
- Fixed handling of GitCommandError when retrieving git user configuration.
- Aider wrote 75% of the code in this release.
### Aider v0.76.1
- Added ignore_permission_denied option to file watcher to prevent errors when accessing restricted files, by Yutaka Matsubara.
- Aider wrote 0% of the code in this release.
### Aider v0.76.0
- Improved support for thinking/reasoning models:
- Added `--thinking-tokens` CLI option to control token budget for models that support thinking.
- Display thinking/reasoning content from LLMs which return it.
- Enhanced handling of reasoning tags to better clean up model responses.
- Added deprecation warning for `remove_reasoning` setting, now replaced by `reasoning_tag`.
- Aider will notify you when it's completed the last request and needs your input:
- Added [notifications when LLM responses are ready](https://aider.chat/docs/usage/notifications.html) with `--notifications` flag.
- Specify desktop notification command with `--notifications-command`.
- Added support for QWQ 32B.
- Switch to `tree-sitter-language-pack` for tree sitter support.
- Improved error handling for EOF (Ctrl+D) in user input prompts.
- Added helper function to ensure hex color values have a # prefix.
- Fixed handling of Git errors when reading staged files.
- Improved SSL verification control for model information requests.
- Improved empty LLM response handling with clearer warning messages.
- Fixed Git identity retrieval to respect global configuration, by Akira Komamura.
- Offer to install dependencies for Bedrock and Vertex AI models.
- Deprecated model shortcut args (like --4o, --opus) in favor of the --model flag.
- Aider wrote 85% of the code in this release.
### Aider v0.75.3

View file

@ -51,4 +51,19 @@ callouts:
note:
title: Note
color: yellow
# Custom CSS for our table of contents
kramdown:
syntax_highlighter_opts:
css_class: highlight
sass:
style: compressed
# Additional CSS
compress_html:
clippings: all
comments: all
endings: all
startings: []

View file

@ -3723,7 +3723,7 @@
Titusz Pan: 9
start_tag: v0.71.0
total_lines: 283
- aider_percentage: 69.44
- aider_percentage: 37.47
aider_total: 284
end_date: '2025-01-31'
end_tag: v0.73.0
@ -3746,6 +3746,10 @@
aider/models.py:
Paul Gauthier: 8
Paul Gauthier (aider): 33
aider/resources/model-settings.yml:
Paul Gauthier: 334
kennyfrc: 11
xqyz: 4
aider/sendchat.py:
Mir Adnan ALI: 28
Paul Gauthier: 11
@ -3770,12 +3774,13 @@
Paul Gauthier (aider): 77
grand_total:
Mir Adnan ALI: 28
Paul Gauthier: 96
Paul Gauthier: 430
Paul Gauthier (aider): 284
xqyz: 1
kennyfrc: 11
xqyz: 5
start_tag: v0.72.0
total_lines: 409
- aider_percentage: 77.14
total_lines: 758
- aider_percentage: 76.07
aider_total: 604
end_date: '2025-02-06'
end_tag: v0.74.0
@ -3813,6 +3818,8 @@
Paul Gauthier: 1
Paul Gauthier (aider): 2
"Viktor Sz\xE9pe": 3
aider/resources/model-settings.yml:
Paul Gauthier: 11
aider/watch.py:
Paul Gauthier (aider): 45
benchmark/docker.sh:
@ -3839,12 +3846,12 @@
Paul Gauthier: 4
Paul Gauthier (aider): 42
grand_total:
Paul Gauthier: 176
Paul Gauthier: 187
Paul Gauthier (aider): 604
"Viktor Sz\xE9pe": 3
start_tag: v0.73.0
total_lines: 783
- aider_percentage: 46.31
total_lines: 794
- aider_percentage: 44.78
aider_total: 163
end_date: '2025-02-24'
end_tag: v0.75.0
@ -3880,6 +3887,8 @@
aider/repomap.py:
Paul Gauthier: 43
Paul Gauthier (aider): 11
aider/resources/model-settings.yml:
Paul Gauthier: 12
aider/special.py:
Lucas Shadler: 1
aider/website/docs/leaderboards/index.md:
@ -3909,8 +3918,404 @@
Antti Kaihola: 1
FeepingCreature (aider): 6
Lucas Shadler: 1
Paul Gauthier: 113
Paul Gauthier: 125
Paul Gauthier (aider): 157
Warren Krewenki: 74
start_tag: v0.74.0
total_lines: 352
total_lines: 364
- aider_percentage: 84.75
aider_total: 1589
end_date: '2025-03-10'
end_tag: v0.76.0
file_counts:
aider/__init__.py:
Paul Gauthier: 1
aider/args.py:
Paul Gauthier: 2
Paul Gauthier (aider): 25
aider/args_formatter.py:
Paul Gauthier: 4
Paul Gauthier (aider): 3
aider/coders/base_coder.py:
Paul Gauthier: 54
Paul Gauthier (aider): 29
aider/deprecated.py:
Paul Gauthier (aider): 107
aider/io.py:
Paul Gauthier: 7
Paul Gauthier (aider): 127
aider/main.py:
Akira Komamura: 2
Mattias: 1
Paul Gauthier: 4
Paul Gauthier (aider): 16
aider/models.py:
Paul Gauthier: 6
Paul Gauthier (aider): 68
aider/queries/tree-sitter-language-pack/csharp-tags.scm:
Paul Gauthier: 14
Paul Gauthier (aider): 12
aider/reasoning_tags.py:
Paul Gauthier: 14
Paul Gauthier (aider): 68
aider/repo.py:
Akira Komamura: 1
Paul Gauthier (aider): 4
aider/repomap.py:
Paul Gauthier: 9
aider/resources/model-settings.yml:
Paul Gauthier: 61
Paul Gauthier (aider): 32
gmoz22: 4
aider/website/_includes/leaderboard.js:
Paul Gauthier (aider): 48
aider/website/docs/leaderboards/index.md:
Paul Gauthier: 2
benchmark/benchmark.py:
Paul Gauthier: 1
benchmark/problem_stats.py:
Paul Gauthier (aider): 2
docker/Dockerfile:
Paul Gauthier: 1
scripts/blame.py:
Paul Gauthier: 1
scripts/pip-compile.sh:
Claudia Pellegrino: 10
Paul Gauthier: 6
Paul Gauthier (aider): 11
scripts/update-history.py:
Paul Gauthier: 1
scripts/versionbump.py:
Paul Gauthier: 4
Paul Gauthier (aider): 64
tests/basic/test_deprecated.py:
Paul Gauthier: 10
Paul Gauthier (aider): 130
tests/basic/test_io.py:
Paul Gauthier (aider): 54
tests/basic/test_main.py:
Paul Gauthier: 1
Paul Gauthier (aider): 93
tests/basic/test_model_info_manager.py:
Paul Gauthier (aider): 72
tests/basic/test_models.py:
Paul Gauthier: 27
Paul Gauthier (aider): 34
tests/basic/test_reasoning.py:
Paul Gauthier: 36
Paul Gauthier (aider): 525
tests/basic/test_repomap.py:
Paul Gauthier: 2
tests/basic/test_ssl_verification.py:
Paul Gauthier (aider): 65
grand_total:
Akira Komamura: 3
Claudia Pellegrino: 10
Mattias: 1
Paul Gauthier: 268
Paul Gauthier (aider): 1589
gmoz22: 4
start_tag: v0.75.0
total_lines: 1875
- aider_percentage: 71.93
aider_total: 1399
end_date: '2025-03-13'
end_tag: v0.77.0
file_counts:
aider/__init__.py:
Paul Gauthier: 1
aider/args.py:
Paul Gauthier (aider): 5
aider/coders/architect_coder.py:
Paul Gauthier (aider): 2
aider/coders/base_coder.py:
Paul Gauthier (aider): 14
aider/commands.py:
Paul Gauthier: 4
Paul Gauthier (aider): 71
aider/deprecated.py:
Paul Gauthier: 2
aider/io.py:
Paul Gauthier (aider): 5
aider/main.py:
Paul Gauthier (aider): 12
aider/models.py:
Paul Gauthier (aider): 83
aider/queries/tree-sitter-language-pack/arduino-tags.scm:
Paul Gauthier: 3
Paul Gauthier (aider): 2
aider/queries/tree-sitter-language-pack/c-tags.scm:
Paul Gauthier: 4
Paul Gauthier (aider): 5
aider/queries/tree-sitter-language-pack/chatito-tags.scm:
Paul Gauthier: 11
Paul Gauthier (aider): 5
aider/queries/tree-sitter-language-pack/commonlisp-tags.scm:
Paul Gauthier: 116
Paul Gauthier (aider): 6
aider/queries/tree-sitter-language-pack/cpp-tags.scm:
Paul Gauthier: 7
Paul Gauthier (aider): 8
aider/queries/tree-sitter-language-pack/d-tags.scm:
Paul Gauthier: 9
Paul Gauthier (aider): 17
aider/queries/tree-sitter-language-pack/dart-tags.scm:
Paul Gauthier: 42
Paul Gauthier (aider): 19
aider/queries/tree-sitter-language-pack/elisp-tags.scm:
Paul Gauthier: 1
Paul Gauthier (aider): 2
aider/queries/tree-sitter-language-pack/elixir-tags.scm:
Paul Gauthier: 10
Paul Gauthier (aider): 8
aider/queries/tree-sitter-language-pack/elm-tags.scm:
Paul Gauthier: 8
Paul Gauthier (aider): 11
aider/queries/tree-sitter-language-pack/gleam-tags.scm:
Paul Gauthier: 26
Paul Gauthier (aider): 15
aider/queries/tree-sitter-language-pack/go-tags.scm:
Paul Gauthier: 14
Paul Gauthier (aider): 14
aider/queries/tree-sitter-language-pack/java-tags.scm:
Paul Gauthier: 10
Paul Gauthier (aider): 7
aider/queries/tree-sitter-language-pack/lua-tags.scm:
Paul Gauthier: 25
Paul Gauthier (aider): 9
aider/queries/tree-sitter-language-pack/pony-tags.scm:
Paul Gauthier: 20
Paul Gauthier (aider): 19
aider/queries/tree-sitter-language-pack/properties-tags.scm:
Paul Gauthier: 3
Paul Gauthier (aider): 2
aider/queries/tree-sitter-language-pack/python-tags.scm:
Paul Gauthier: 9
Paul Gauthier (aider): 5
aider/queries/tree-sitter-language-pack/r-tags.scm:
Paul Gauthier: 17
Paul Gauthier (aider): 4
aider/queries/tree-sitter-language-pack/racket-tags.scm:
Paul Gauthier: 10
Paul Gauthier (aider): 2
aider/queries/tree-sitter-language-pack/ruby-tags.scm:
Paul Gauthier: 23
Paul Gauthier (aider): 12
aider/queries/tree-sitter-language-pack/rust-tags.scm:
Paul Gauthier: 41
Paul Gauthier (aider): 14
aider/queries/tree-sitter-language-pack/solidity-tags.scm:
Paul Gauthier: 30
Paul Gauthier (aider): 13
aider/queries/tree-sitter-language-pack/swift-tags.scm:
Paul Gauthier: 39
Paul Gauthier (aider): 12
aider/queries/tree-sitter-language-pack/udev-tags.scm:
Paul Gauthier: 15
Paul Gauthier (aider): 5
aider/resources/model-settings.yml:
Paul Gauthier: 9
aider/watch.py:
Yutaka Matsubara: 4
aider/website/docs/leaderboards/index.md:
Paul Gauthier: 3
Paul Gauthier (aider): 8
scripts/redact-cast.py:
Paul Gauthier: 27
Paul Gauthier (aider): 98
scripts/tsl_pack_langs.py:
Paul Gauthier (aider): 145
scripts/versionbump.py:
Paul Gauthier (aider): 1
tests/basic/test_coder.py:
Paul Gauthier (aider): 104
tests/basic/test_commands.py:
Paul Gauthier: 2
Paul Gauthier (aider): 190
tests/basic/test_models.py:
Paul Gauthier (aider): 44
tests/basic/test_repomap.py:
Paul Gauthier: 1
Paul Gauthier (aider): 125
tests/fixtures/languages/arduino/test.ino:
Paul Gauthier (aider): 21
tests/fixtures/languages/c/test.c:
Paul Gauthier (aider): 12
tests/fixtures/languages/chatito/test.chatito:
Paul Gauthier (aider): 20
tests/fixtures/languages/commonlisp/test.lisp:
Paul Gauthier (aider): 17
tests/fixtures/languages/d/test.d:
Paul Gauthier (aider): 26
tests/fixtures/languages/dart/test.dart:
Paul Gauthier (aider): 21
tests/fixtures/languages/elm/test.elm:
Paul Gauthier (aider): 16
tests/fixtures/languages/gleam/test.gleam:
Paul Gauthier (aider): 10
tests/fixtures/languages/lua/test.lua:
Paul Gauthier (aider): 25
tests/fixtures/languages/pony/test.pony:
Paul Gauthier (aider): 8
tests/fixtures/languages/properties/test.properties:
Paul Gauthier (aider): 14
tests/fixtures/languages/r/test.r:
Paul Gauthier (aider): 17
tests/fixtures/languages/racket/test.rkt:
Paul Gauthier (aider): 8
tests/fixtures/languages/solidity/test.sol:
Paul Gauthier (aider): 21
tests/fixtures/languages/swift/test.swift:
Paul Gauthier (aider): 18
tests/fixtures/languages/udev/test.rules:
Paul Gauthier (aider): 22
grand_total:
Paul Gauthier: 542
Paul Gauthier (aider): 1399
Yutaka Matsubara: 4
start_tag: v0.76.0
total_lines: 1945
- aider_percentage: 91.82
aider_total: 2682
end_date: '2025-03-21'
end_tag: v0.78.0
file_counts:
aider/__init__.py:
Paul Gauthier: 1
aider/args.py:
Paul Gauthier (aider): 24
Yutaka Matsubara: 2
aider/coders/base_coder.py:
Paul Gauthier: 1
Paul Gauthier (aider): 6
aider/commands.py:
Carles Sala (aider): 30
Paul Gauthier (aider): 10
aider/help_pats.py:
Paul Gauthier: 6
aider/io.py:
Marco Mayer: 2
Paul Gauthier (aider): 17
aider/main.py:
Paul Gauthier: 5
Paul Gauthier (aider): 29
aider/mdstream.py:
Paul Gauthier: 1
Paul Gauthier (aider): 22
aider/models.py:
Paul Gauthier (aider): 41
lentil32 (aider): 15
aider/repo.py:
Paul Gauthier (aider): 5
aider/resources/model-settings.yml:
Paul Gauthier: 3
Paul Gauthier (aider): 22
aider/website/_includes/head_custom.html:
Paul Gauthier: 3
Paul Gauthier (aider): 53
aider/website/_includes/recording.js:
Paul Gauthier: 4
Paul Gauthier (aider): 424
aider/website/assets/asciinema/asciinema-player.min.js:
Paul Gauthier: 1
aider/website/docs/leaderboards/index.md:
Paul Gauthier: 1
aider/website/index.html:
Paul Gauthier: 173
Paul Gauthier (aider): 371
scripts/badges.py:
Paul Gauthier: 1
Paul Gauthier (aider): 496
scripts/blame.py:
Paul Gauthier: 2
scripts/jekyll_run.sh:
Paul Gauthier: 1
Paul Gauthier (aider): 5
scripts/logo_svg.py:
Paul Gauthier: 5
Paul Gauthier (aider): 169
scripts/recording_audio.py:
Paul Gauthier (aider): 338
scripts/redact-cast.py:
Paul Gauthier: 22
Paul Gauthier (aider): 37
scripts/tmux_record.sh:
Paul Gauthier: 1
Paul Gauthier (aider): 17
scripts/update-docs.sh:
Paul Gauthier: 1
scripts/update-history.py:
Paul Gauthier: 1
Paul Gauthier (aider): 52
tests/basic/test_aws_credentials.py:
lentil32 (aider): 169
tests/basic/test_commands.py:
Carles Sala (aider): 40
tests/basic/test_main.py:
Paul Gauthier: 2
Paul Gauthier (aider): 193
tests/basic/test_repo.py:
Paul Gauthier (aider): 48
tests/help/test_help.py:
Paul Gauthier (aider): 49
grand_total:
Carles Sala (aider): 70
Marco Mayer: 2
Paul Gauthier: 235
Paul Gauthier (aider): 2428
Yutaka Matsubara: 2
lentil32 (aider): 184
start_tag: v0.77.0
total_lines: 2921
- aider_percentage: 65.38
aider_total: 221
end_date: '2025-03-25'
end_tag: v0.79.0
file_counts:
aider/__init__.py:
Paul Gauthier: 1
aider/coders/__init__.py:
Paul Gauthier: 2
aider/coders/base_coder.py:
Paul Gauthier: 15
Paul Gauthier (aider): 5
aider/coders/context_coder.py:
Paul Gauthier: 45
Paul Gauthier (aider): 8
aider/commands.py:
Paul Gauthier: 1
Paul Gauthier (aider): 20
aider/io.py:
Paul Gauthier: 11
Paul Gauthier (aider): 2
aider/main.py:
Paul Gauthier (aider): 4
aider/models.py:
Paul Gauthier: 3
Paul Gauthier (aider): 1
aider/repomap.py:
Paul Gauthier: 17
aider/resources/model-settings.yml:
Paul Gauthier: 13
Paul Gauthier (aider): 10
aider/website/docs/leaderboards/index.md:
Paul Gauthier: 1
aider/website/index.html:
Paul Gauthier: 3
Paul Gauthier (aider): 16
scripts/badges.py:
Paul Gauthier (aider): 2
scripts/blame.py:
Paul Gauthier (aider): 16
scripts/dl_icons.py:
Paul Gauthier (aider): 60
scripts/tmux_record.sh:
Paul Gauthier: 1
tests/basic/test_coder.py:
Paul Gauthier: 4
Paul Gauthier (aider): 77
grand_total:
Paul Gauthier: 117
Paul Gauthier (aider): 221
start_tag: v0.78.0
total_lines: 338

View file

@ -256,4 +256,4 @@
date: 2024-12-22
versions: 0.69.2.dev
seconds_per_case: 12.2
total_cost: 0.0000
total_cost: 0.0000

View file

@ -1,6 +1,6 @@
- dirname: 2025-02-25-20-23-07--gemini-pro
test_cases: 225
model: gemini/gemini-2.0-pro-exp-02-05
model: Gemini 2.0 Pro exp-02-05
edit_format: whole
commit_hash: 2fccd47
pass_rate_1: 20.4
@ -338,7 +338,7 @@
- dirname: 2024-12-25-13-31-51--deepseekv3preview-diff2
test_cases: 225
model: DeepSeek Chat V3
model: DeepSeek Chat V3 (prev)
edit_format: diff
commit_hash: 0a23c4a-dirty
pass_rate_1: 22.7
@ -727,4 +727,108 @@
date: 2025-03-07
versions: 0.75.3.dev
seconds_per_case: 137.4
total_cost: 0
total_cost: 0
- dirname: 2025-03-14-23-40-00--cmda-quality-whole2
test_cases: 225
model: command-a-03-2025-quality
edit_format: whole
commit_hash: a1aa63f
pass_rate_1: 2.2
pass_rate_2: 12.0
pass_num_1: 5
pass_num_2: 27
percent_cases_well_formed: 99.6
error_outputs: 2
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 215
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
test_timeouts: 4
total_tests: 225
command: OPENAI_API_BASE=https://api.cohere.ai/compatibility/v1 aider --model openai/command-a-03-2025-quality
date: 2025-03-14
versions: 0.77.1.dev
seconds_per_case: 85.1
total_cost: 0.0000
- dirname: 2025-03-15-01-21-24--gemma3-27b-or
test_cases: 225
model: gemma-3-27b-it
edit_format: whole
commit_hash: fd21f51-dirty
pass_rate_1: 1.8
pass_rate_2: 4.9
pass_num_1: 4
pass_num_2: 11
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 181
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
test_timeouts: 3
total_tests: 225
command: aider --model openrouter/google/gemma-3-27b-it
date: 2025-03-15
versions: 0.77.1.dev
seconds_per_case: 79.7
total_cost: 0.0000
- dirname: 2025-03-24-15-41-33--deepseek-v3-0324-polyglot-diff
test_cases: 225
model: DeepSeek V3 (0324)
edit_format: diff
commit_hash: 502b863
pass_rate_1: 28.0
pass_rate_2: 55.1
pass_num_1: 63
pass_num_2: 124
percent_cases_well_formed: 99.6
error_outputs: 32
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 96
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 2
test_timeouts: 4
total_tests: 225
command: aider --model deepseek/deepseek-chat
date: 2025-03-24
versions: 0.78.1.dev
seconds_per_case: 290.0
total_cost: 1.1164
- dirname: 2025-03-25-19-46-45--gemini-25-pro-exp-diff-fenced
test_cases: 225
model: Gemini 2.5 Pro exp-03-25
edit_format: diff-fenced
commit_hash: 33413ec
pass_rate_1: 39.1
pass_rate_2: 72.9
pass_num_1: 88
pass_num_2: 164
percent_cases_well_formed: 89.8
error_outputs: 30
num_malformed_responses: 30
num_with_malformed_responses: 23
user_asks: 57
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
total_tests: 225
command: aider --model gemini/gemini-2.5-pro-exp-03-25
date: 2025-03-25
versions: 0.78.1.dev
seconds_per_case: 47.1
total_cost: 0.0000

View file

@ -5,21 +5,15 @@ If you already have python 3.8-3.13 installed, you can get started quickly like
python -m pip install aider-install
aider-install
# Change directory into your code base
# Change directory into your codebase
cd /to/your/project
# Work with DeepSeek via DeepSeek's API
aider --model deepseek --api-key deepseek=your-key-goes-here
# DeepSeek
aider --model deepseek --api-key deepseek=<key>
# Work with Claude 3.7 Sonnet via Anthropic's API
aider --model sonnet --api-key anthropic=your-key-goes-here
# Claude 3.7 Sonnet
aider --model sonnet --api-key anthropic=<key>
# Work with GPT-4o via OpenAI's API
aider --model gpt-4o --api-key openai=your-key-goes-here
# Work with Sonnet via OpenRouter's API
aider --model openrouter/anthropic/claude-3.7-sonnet --api-key openrouter=your-key-goes-here
# Work with DeepSeek via OpenRouter's API
aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=your-key-goes-here
# o3-mini
aider --model o3-mini --api-key openai=<key>
```

View file

@ -5,10 +5,66 @@
<meta property="og:image" content="{{ site.url }}/assets/aider.jpg">
<meta property="twitter:image" content="{{ site.url }}/assets/aider-square.jpg">
{% endif %}
<!-- Custom site title styling -->
<style>
@font-face {
font-family: GlassTTYVT220;
src: local("Glass TTY VT220"), local("Glass TTY VT220 Medium"), url(/assets/Glass_TTY_VT220.ttf) format("truetype");
}
.site-title {
font-size: 1.8rem;
font-weight: 700;
font-family: 'GlassTTYVT220', monospace;
color: #14b014; /* terminal green color */
text-decoration: none;
letter-spacing: 0.5px;
}
/* For SVG logo inside site-title */
.site-title img {
height: 1.8rem;
vertical-align: middle;
}
/* Sidebar gradient styling to match hero section */
.side-bar {
background: linear-gradient(135deg, #ffffff 0%, rgba(20, 176, 20, 0.01) 25%, rgba(20, 176, 20, 0.04) 40%, rgba(220, 230, 255, 0.4) 60%, rgba(205, 218, 255, 0.4) 80%, #F5F6FA 100%);
}
</style>
<link rel="alternate" type="application/rss+xml" title="RSS Feed" href="{{ site.url }}/feed.xml">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link rel="preload" href="https://fonts.googleapis.com/css?family=Open+Sans:400,700&display=swap" as="style" type="text/css" crossorigin>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Logo Progressive Enhancement for Jekyll pages -->
<script>
document.addEventListener('DOMContentLoaded', function() {
const siteTitle = document.querySelector('.site-title');
if (siteTitle) {
const textContent = siteTitle.textContent; // Save the text for fallback
// Create new image element
const logoImg = new Image();
logoImg.src = '/assets/logo.svg';
logoImg.alt = 'Aider Logo';
logoImg.style.height = '1.8rem';
logoImg.style.verticalAlign = 'middle';
// Only replace if image loads successfully
logoImg.onload = function() {
siteTitle.textContent = ''; // Clear text
siteTitle.appendChild(logoImg);
};
// If image fails to load, do nothing (keep the text)
logoImg.onerror = function() {
console.log('SVG logo failed to load, keeping text fallback');
};
}
});
</script>
<meta name="theme-color" content="#157878">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<link rel="icon" type="image/png" sizes="32x32" href="{{ '/assets/icons/favicon-32x32.png' | relative_url }}">

View file

@ -0,0 +1,228 @@
/* Terminal header styling */
.terminal-header {
background-color: #e0e0e0;
border-top-left-radius: 6px;
border-top-right-radius: 6px;
padding: 4px 10px;
display: flex;
align-items: center;
border-bottom: 1px solid #c0c0c0;
}
.terminal-buttons {
display: flex;
gap: 4px;
margin-right: 10px;
}
.terminal-button {
width: 10px;
height: 10px;
border-radius: 50%;
}
.terminal-close {
background-color: #ff5f56;
border: 1px solid #e0443e;
}
.terminal-minimize {
background-color: #ffbd2e;
border: 1px solid #dea123;
}
.terminal-expand {
background-color: #27c93f;
border: 1px solid #1aab29;
}
.terminal-title {
flex-grow: 1;
text-align: center;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
font-size: 11px;
color: #666;
}
/* Toast notification styling */
.toast-container {
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
z-index: 9999;
pointer-events: none;
}
.toast-notification {
background-color: rgba(0, 0, 0, 0.7);
color: white;
padding: 12px 25px;
border-radius: 8px;
margin-bottom: 10px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.2);
opacity: 0;
transition: opacity 0.3s ease-in-out;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
font-size: 18px;
text-align: center;
display: inline-block;
min-width: 200px;
max-width: 90%;
}
/* Page container styling */
.page-container {
max-width: 950px;
margin-left: auto;
margin-right: auto;
position: relative;
}
/* macOS backdrop styling */
.macos-backdrop {
background: linear-gradient(135deg, #ff9966, #ff5e62, #6666ff, #0066ff);
border-radius: 12px;
padding: clamp(5px, 5vw, 50px) clamp(5px, 2.5vw, 50px);
margin: 20px 0;
box-shadow: 0 10px 30px rgba(0, 0, 0, 0.2);
position: relative;
overflow: hidden;
}
/* Add subtle wave animation to backdrop */
.macos-backdrop::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(circle at center, rgba(255,255,255,0.1) 0%, rgba(255,255,255,0) 70%);
opacity: 0.7;
pointer-events: none;
}
/* Add decorative curved lines to the backdrop */
.macos-backdrop::after {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-image:
radial-gradient(circle at 20% 30%, transparent 0%, transparent 60%, rgba(255,255,255,0.2) 61%, transparent 62%),
radial-gradient(circle at 80% 70%, transparent 0%, transparent 40%, rgba(255,255,255,0.2) 41%, transparent 42%),
radial-gradient(circle at 40% 90%, transparent 0%, transparent 70%, rgba(255,255,255,0.2) 71%, transparent 72%),
radial-gradient(circle at 60% 10%, transparent 0%, transparent 50%, rgba(255,255,255,0.2) 51%, transparent 52%);
background-size: 100% 100%;
opacity: 1;
pointer-events: none;
z-index: 0;
}
.terminal-container {
border-radius: 8px;
overflow: hidden;
box-shadow: 0 5px 15px rgba(0, 0, 0, 0.2);
margin-top: 0;
margin-bottom: 0;
position: relative;
background-color: white; /* Add background color to terminal container */
z-index: 2; /* Ensure terminal appears above the backdrop effects */
}
/* Timestamp link styling */
.timestamp-link {
color: #0366d6;
text-decoration: none;
font-weight: bold;
cursor: pointer;
}
.timestamp-link:hover {
text-decoration: underline;
}
/* Active timestamp styling */
.timestamp-active {
background-color: #f0f8ff; /* Light blue background */
border-radius: 3px;
padding: 2px 4px;
margin: -2px -4px;
}
/* Highlight the list item containing the active timestamp */
li.active-marker {
background-color: #f6f8fa;
border-radius: 4px;
padding: 4px 8px;
margin-left: -8px;
}
/* Make list items clickable */
.transcript-item {
cursor: pointer;
transition: background-color 0.2s ease;
padding: 4px 8px;
margin-left: -8px;
border-radius: 4px;
}
.transcript-item:hover {
background-color: #f0f0f0;
}
/* Keyboard shortcuts styling */
.keyboard-shortcuts {
text-align: center;
font-size: 14px;
color: #666;
margin-top: 10px;
margin-bottom: 20px;
}
/* Hide keyboard shortcuts on devices likely without physical keyboards */
.no-physical-keyboard .keyboard-shortcuts {
display: none;
}
.keyboard-shortcuts kbd {
background-color: #f7f7f7;
border: 1px solid #ccc;
border-radius: 3px;
box-shadow: 0 1px 0 rgba(0,0,0,0.2);
color: #333;
display: inline-block;
font-family: monospace;
line-height: 1;
margin: 0 2px;
padding: 3px 5px;
white-space: nowrap;
}
.asciinema-player-theme-aider {
/* Foreground (default text) color */
--term-color-foreground: #444444; /* colour238 */
/* Background color */
--term-color-background: #dadada; /* colour253 */
/* Palette of 16 standard ANSI colors */
--term-color-0: #21222c;
--term-color-1: #ff5555;
--term-color-2: #50fa7b;
--term-color-3: #f1fa8c;
--term-color-4: #bd93f9;
--term-color-5: #ff79c6;
--term-color-6: #8be9fd;
--term-color-7: #f8f8f2;
--term-color-8: #6272a4;
--term-color-9: #ff6e6e;
--term-color-10: #69ff94;
--term-color-11: #ffffa5;
--term-color-12: #d6acff;
--term-color-13: #ff92df;
--term-color-14: #a4ffff;
--term-color-15: #ffffff;
}


@@ -0,0 +1,428 @@
document.addEventListener('DOMContentLoaded', function() {
let player; // Store player reference to make it accessible to click handlers
let globalAudio; // Global audio element to be reused
// Detect if device likely has no physical keyboard
function detectNoKeyboard() {
// Check if it's a touch device (most mobile devices)
const isTouchDevice = ('ontouchstart' in window) ||
(navigator.maxTouchPoints > 0) ||
(navigator.msMaxTouchPoints > 0);
// Check common mobile user agents as additional signal
const isMobileUA = /Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(navigator.userAgent);
// If it's a touch device and has a mobile user agent, likely has no physical keyboard
if (isTouchDevice && isMobileUA) {
document.body.classList.add('no-physical-keyboard');
}
}
// Run detection
detectNoKeyboard();
// Parse the transcript section to create markers and convert timestamps to links
function parseTranscript() {
const markers = [];
// Find the Commentary heading
const transcriptHeading = Array.from(document.querySelectorAll('h2')).find(el => el.textContent.trim() === 'Commentary');
if (transcriptHeading) {
// Get all list items after the transcript heading
let currentElement = transcriptHeading.nextElementSibling;
while (currentElement && currentElement.tagName === 'UL') {
const listItems = currentElement.querySelectorAll('li');
listItems.forEach(item => {
const text = item.textContent.trim();
const match = text.match(/(\d+):(\d+)\s+(.*)/);
if (match) {
const minutes = parseInt(match[1], 10);
const seconds = parseInt(match[2], 10);
const timeInSeconds = minutes * 60 + seconds;
const formattedTime = `${minutes}:${seconds.toString().padStart(2, '0')}`;
const message = match[3].trim();
// Create link for the timestamp
const timeLink = document.createElement('a');
timeLink.href = '#';
timeLink.textContent = formattedTime;
timeLink.className = 'timestamp-link';
timeLink.dataset.time = timeInSeconds;
timeLink.dataset.message = message;
// Add click event to seek the player
timeLink.addEventListener('click', function(e) {
e.preventDefault();
if (player && typeof player.seek === 'function') {
player.seek(timeInSeconds);
player.play();
// Also trigger toast and speech
showToast(message);
speakText(message, timeInSeconds);
// Highlight this timestamp
highlightTimestamp(timeInSeconds);
}
});
// Replace text with the link + message
item.textContent = '';
item.appendChild(timeLink);
item.appendChild(document.createTextNode(' ' + message));
// Add class and click handler to the entire list item
item.classList.add('transcript-item');
item.dataset.time = timeInSeconds;
item.dataset.message = message;
item.addEventListener('click', function(e) {
// Prevent click event if the user clicked directly on the timestamp link
// This prevents double-firing of the event
if (e.target !== timeLink) {
e.preventDefault();
if (player && typeof player.seek === 'function') {
player.seek(timeInSeconds);
player.play();
// Also trigger toast and speech
showToast(message);
speakText(message, timeInSeconds);
// Highlight this timestamp
highlightTimestamp(timeInSeconds);
}
}
});
markers.push([timeInSeconds, message]);
}
});
currentElement = currentElement.nextElementSibling;
}
}
return markers;
}
// Parse transcript and create markers
const markers = parseTranscript();
// Create player with a single call
player = AsciinemaPlayer.create(
recording_url,
document.getElementById('demo'),
{
speed: 1.25,
idleTimeLimit: 1,
theme: "aider",
poster: "npt:0:01",
markers: markers,
controls: true
}
);
// Focus on the player element so keyboard shortcuts work immediately
setTimeout(() => {
// Use setTimeout to ensure the player is fully initialized
if (player && typeof player.focus === 'function') {
player.focus();
} else {
// If player doesn't have a focus method, try to find and focus the terminal element
const playerElement = document.querySelector('.asciinema-terminal');
if (playerElement) {
playerElement.focus();
} else {
// Last resort - try to find element with tabindex
const tabbableElement = document.querySelector('[tabindex]');
if (tabbableElement) {
tabbableElement.focus();
}
}
}
}, 100);
// Track active toast elements
let activeToast = null;
// Function to display toast notification
function showToast(text) {
// Get the appropriate container based on fullscreen state
let container = document.getElementById('toast-container');
const isFullscreen = document.fullscreenElement ||
document.webkitFullscreenElement ||
document.mozFullScreenElement ||
document.msFullscreenElement;
// If in fullscreen, check if we need to create a fullscreen toast container
if (isFullscreen) {
// Target the fullscreen element as the container parent
const fullscreenElement = document.fullscreenElement ||
document.webkitFullscreenElement ||
document.mozFullScreenElement ||
document.msFullscreenElement;
// Look for an existing fullscreen toast container
let fsContainer = fullscreenElement.querySelector('.fs-toast-container');
if (!fsContainer) {
// Create a new container for fullscreen mode
fsContainer = document.createElement('div');
fsContainer.className = 'toast-container fs-toast-container';
fsContainer.id = 'fs-toast-container';
fullscreenElement.appendChild(fsContainer);
}
container = fsContainer;
}
// Remove any existing toast
if (activeToast) {
hideToast(activeToast);
}
// Create toast element
const toast = document.createElement('div');
toast.className = 'toast-notification';
toast.textContent = text;
// Add to container
container.appendChild(toast);
// Store reference to active toast
activeToast = {
element: toast,
container: container
};
// Trigger animation
setTimeout(() => {
toast.style.opacity = '1';
}, 10);
return activeToast;
}
// Function to hide a toast
function hideToast(toastInfo) {
if (!toastInfo || !toastInfo.element) return;
toastInfo.element.style.opacity = '0';
setTimeout(() => {
if (toastInfo.container && toastInfo.container.contains(toastInfo.element)) {
toastInfo.container.removeChild(toastInfo.element);
}
// If this was the active toast, clear the reference
if (activeToast === toastInfo) {
activeToast = null;
}
}, 300); // Wait for fade out animation
}
// Track if TTS is currently in progress to prevent duplicates
let ttsInProgress = false;
let currentToast = null;
// Improved browser TTS function
function useBrowserTTS(text) {
// Don't start new speech if already in progress
if (ttsInProgress) {
console.log('Speech synthesis already in progress, skipping');
return false;
}
if ('speechSynthesis' in window) {
console.log('Using browser TTS fallback');
// Set flag to prevent duplicate speech
ttsInProgress = true;
// Cancel any ongoing speech
window.speechSynthesis.cancel();
const utterance = new SpeechSynthesisUtterance(text);
utterance.rate = 1.0;
utterance.pitch = 1.0;
utterance.volume = 1.0;
// For iOS, use a shorter utterance if possible
if (/iPad|iPhone|iPod/.test(navigator.userAgent) && !window.MSStream) {
utterance.text = text.length > 100 ? text.substring(0, 100) + '...' : text;
}
utterance.onstart = () => console.log('Speech started');
utterance.onend = () => {
console.log('Speech ended');
ttsInProgress = false; // Reset flag when speech completes
// Hide toast when speech ends
if (currentToast) {
hideToast(currentToast);
currentToast = null;
}
};
utterance.onerror = (e) => {
console.warn('Speech error:', e);
ttsInProgress = false; // Reset flag on error
// Also hide toast on error
if (currentToast) {
hideToast(currentToast);
currentToast = null;
}
};
window.speechSynthesis.speak(utterance);
return true;
}
console.warn('SpeechSynthesis not supported');
return false;
}
// Function to play pre-generated TTS audio files
function speakText(text, timeInSeconds) {
// Show the toast and keep reference
currentToast = showToast(text);
// Format time for filename (MM-SS)
const minutes = Math.floor(timeInSeconds / 60);
const seconds = timeInSeconds % 60;
const formattedTime = `${minutes.toString().padStart(2, '0')}-${seconds.toString().padStart(2, '0')}`;
// Get recording_id from the page or use default from the URL
const recordingId = typeof recording_id !== 'undefined' ? recording_id :
window.location.pathname.split('/').pop().replace('.html', '');
// Construct audio file path
const audioPath = `/assets/audio/${recordingId}/${formattedTime}.mp3`;
// Log for debugging
console.log(`Attempting to play audio: ${audioPath}`);
// Detect iOS
const isIOS = /iPad|iPhone|iPod/.test(navigator.userAgent) && !window.MSStream;
console.log(`Device is iOS: ${isIOS}`);
// Flag to track if we've already fallen back to TTS
let fallenBackToTTS = false;
try {
// Create or reuse audio element
if (!globalAudio) {
globalAudio = new Audio();
console.log("Created new global Audio element");
}
// Set up event handlers
globalAudio.onended = () => {
console.log('Audio playback ended');
// Hide toast when audio ends
if (currentToast) {
hideToast(currentToast);
currentToast = null;
}
};
globalAudio.onerror = (e) => {
console.warn(`Audio error: ${e.type}`, e);
if (!fallenBackToTTS) {
fallenBackToTTS = true;
useBrowserTTS(text);
} else if (currentToast) {
// If we've already tried TTS and that failed too, hide the toast
hideToast(currentToast);
currentToast = null;
}
};
// For iOS, preload might help with subsequent plays
if (isIOS) {
globalAudio.preload = "auto";
}
// Set the new source
globalAudio.src = audioPath;
// Play with proper error handling
const playPromise = globalAudio.play();
if (playPromise !== undefined) {
playPromise.catch(error => {
console.warn(`Play error: ${error.message}`);
// On iOS, a user gesture might be required
if (isIOS) {
console.log("iOS playback failed, trying SpeechSynthesis");
}
if (!fallenBackToTTS) {
fallenBackToTTS = true;
useBrowserTTS(text);
}
});
}
} catch (e) {
console.error(`Exception in audio playback: ${e.message}`);
useBrowserTTS(text);
}
}
// Function to highlight the active timestamp in the transcript
function highlightTimestamp(timeInSeconds) {
// Remove previous highlights
document.querySelectorAll('.timestamp-active').forEach(el => {
el.classList.remove('timestamp-active');
});
document.querySelectorAll('.active-marker').forEach(el => {
el.classList.remove('active-marker');
});
// Find the timestamp link with matching time
const timestampLinks = document.querySelectorAll('.timestamp-link');
let activeLink = null;
for (const link of timestampLinks) {
if (parseInt(link.dataset.time, 10) === timeInSeconds) {
activeLink = link;
break;
}
}
if (activeLink) {
// Add highlight class to the link
activeLink.classList.add('timestamp-active');
// Also highlight the parent list item
const listItem = activeLink.closest('li');
if (listItem) {
listItem.classList.add('active-marker');
// No longer scrolling into view to avoid shifting focus
}
}
}
// Add event listener with safety checks
if (player && typeof player.addEventListener === 'function') {
player.addEventListener('marker', function(event) {
try {
const { index, time, label } = event;
console.log(`marker! ${index} - ${time} - ${label}`);
// Speak the marker label (toast is now shown within speakText)
speakText(label, time);
// Highlight the corresponding timestamp in the transcript
highlightTimestamp(time);
} catch (error) {
console.error('Error in marker event handler:', error);
}
});
}
});


@@ -0,0 +1,34 @@
<link rel="stylesheet" type="text/css" href="/assets/asciinema/asciinema-player.css" />
<style>
{% include recording.css %}
</style>
<script src="/assets/asciinema/asciinema-player.min.js"></script>
<script>
{% include recording.js %}
</script>
<div class="page-container">
<div class="toast-container" id="toast-container"></div>
<div class="macos-backdrop">
<div class="terminal-container">
<div class="terminal-header">
<div class="terminal-buttons">
<div class="terminal-button terminal-close"></div>
<div class="terminal-button terminal-minimize"></div>
<div class="terminal-button terminal-expand"></div>
</div>
<div class="terminal-title">aider</div>
</div>
<div id="demo"></div>
</div>
</div>
</div>
<div class="keyboard-shortcuts">
<kbd>Space</kbd> Play/pause —
<kbd>f</kbd> Fullscreen —
<kbd>←</kbd><kbd>→</kbd> ±5s
</div>

Binary file not shown.

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,11 @@
{
"00-01": "We're going to add a new feature to automatically accept edits proposed by the architect model.",
"00-11": "First, let's add the new switch.",
"00-40": "Aider figured out that it should be passed to the Coder class.",
"00-48": "Now we need to implement the functionality.",
"01-00": "Let's do some manual testing.",
"01-28": "That worked. Let's make sure we can turn it off too.",
"01-42": "That worked too. Let's have aider update the HISTORY file to document the new feature.",
"02-00": "Let's quickly tidy up the changes to HISTORY.",
"02-05": "All done!"
}


@@ -0,0 +1,11 @@
{
"00-01": "We're going to update the /drop command to keep any read only files that were originally specified at launch.",
"00-10": "We've added files that handle the main CLI and in-chat slash commands like /drop.",
"00-20": "Let's explain the needed change to aider.",
"01-20": "Ok, let's look at the code.",
"01-30": "I'd prefer not to use \"hasattr()\", let's ask for improvements.",
"01-45": "Let's try some manual testing.",
"02-10": "Looks good. Let's check the existing test suite to ensure we didn't break anything.",
"02-19": "Let's ask aider to add tests for this.",
"02-50": "Tests look reasonable, we're done!"
}

Some files were not shown because too many files have changed in this diff