Merge branch 'main' into register_settings

paul-gauthier 2024-06-21 16:57:33 -07:00 committed by GitHub
commit b6fa02044f
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
51 changed files with 1973 additions and 238 deletions


@ -1,5 +1,5 @@
repos:
- repo: https://github.com/pycqa/isort
- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort


@ -1,9 +1,24 @@
---
nav_order: 999
---
# Release history
### v0.39.0
- Use `--sonnet` for Claude 3.5 Sonnet, which is the top model on [aider's LLM code editing leaderboard](https://aider.chat/docs/leaderboards/#claude-35-sonnet-takes-the-top-spot).
- All `AIDER_xxx` environment variables can now be set in `.env` (by @jpshack-at-palomar).
- Use `--llm-history-file` to log raw messages sent to the LLM (by @daniel-vainsencher).
- Commit messages are no longer prefixed with "aider:". Instead the git author and committer names have "(aider)" added.
### v0.38.0
- Use `--vim` for [vim keybindings](https://aider.chat/docs/commands.html#vi) in the chat.
- [Add LLM metadata](https://aider.chat/docs/llms/warnings.html#specifying-context-window-size-and-token-costs) via `.aider.models.json` file (by @caseymcc).
- More detailed [error messages on token limit errors](https://aider.chat/docs/troubleshooting/token-limits.html).
- Single line commit messages, without the recent chat messages.
- Ensure `--commit --dry-run` does nothing.
- Have playwright wait for idle network to better scrape js sites.
- Documentation updates, moved into website/ subdir.
- Moved tests/ into aider/tests/.
### v0.37.0
- Repo map is now optimized based on text of chat history as well as files added to chat.


@ -1,11 +1,13 @@
<!-- Edit README.md, not index.md -->
# Aider is AI pair programming in your terminal
Aider lets you pair program with LLMs,
to edit code in your local git repository.
Start a new project or work with an existing git repo.
Aider works best with GPT-4o and Claude 3 Opus
and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
Aider can [connect to almost any LLM](https://aider.chat/docs/llms.html)
and works best with GPT-4o, Claude 3.5 Sonnet, Claude 3 Opus and DeepSeek Coder V2.
<p align="center">
<img
@ -24,59 +26,82 @@ and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
</p>
## Getting started
<!--[[[cog
# We can't do this here: {% include get-started.md %}
# Because this page is rendered by GitHub as the repo README
cog.out(open("website/_includes/get-started.md").read())
]]]-->
You can get started quickly like this:
{% include get-started.md %}
```
$ pip install aider-chat
**See the
# Change directory into a git repo
$ cd /to/your/git/repo
# Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# Or, work with Anthropic's models
$ export ANTHROPIC_API_KEY=your-key-goes-here
# Claude 3 Opus
$ aider --opus
# Claude 3.5 Sonnet
$ aider --sonnet
```
<!--[[[end]]]-->
See the
[installation instructions](https://aider.chat/docs/install.html)
and other
[documentation](https://aider.chat/docs/usage.html)
for more details.**
for more details.
## Features
- Chat with aider about your code: `aider <file1> <file2> ...`
- Run aider with the files you want to edit: `aider <file1> <file2> ...`
- Ask for changes:
- New features, test cases, improvements.
- Bug fixes, updated docs or code refactors.
- Paste in a GitHub issue that needs to be solved.
- Add new features or test cases.
- Describe a bug.
- Paste in an error message or GitHub issue URL.
- Refactor code.
- Update docs.
- Aider will edit your files to complete your request.
- Aider [automatically git commits](https://aider.chat/docs/git.html) changes with a sensible commit message.
- Aider works with [most popular languages](https://aider.chat/docs/languages.html): python, javascript, typescript, php, html, css, and more...
- Aider works best with GPT-4o and Claude 3 Opus
and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
- Aider can make coordinated changes across multiple files at once.
- Aider can edit multiple files at once for complex requests.
- Aider uses a [map of your entire git repo](https://aider.chat/docs/repomap.html), which helps it work well in larger codebases.
- You can also edit files in your editor while chatting with aider.
Aider will notice and always use the latest version.
So you can bounce back and forth between aider and your editor, to collaboratively code with AI.
- Images can be added to the chat (GPT-4o, GPT-4 Turbo, etc).
- URLs can be added to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html) using speech recognition.
- Edit files in your editor while chatting with aider,
and it will always use the latest version.
Pair program with AI.
- Add images to the chat (GPT-4o, GPT-4 Turbo, etc).
- Add URLs to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html).
## State of the art
## Top tier performance
Aider has the
[top score on SWE Bench](https://aider.chat/2024/06/02/main-swe-bench.html).
[Aider has one of the top scores on SWE Bench](https://aider.chat/2024/06/02/main-swe-bench.html).
SWE Bench is a challenging software engineering benchmark where aider
solved *real* GitHub issues from popular open source
projects like django, scikitlearn, matplotlib, etc.
<p align="center">
<a href="https://aider.chat/2024/06/02/main-swe-bench.html">
<img src="https://aider.chat/assets/swe_bench.svg" alt="aider swe bench">
</a>
</p>
## Documentation
## More info
- [Documentation](https://aider.chat/)
- [Installation](https://aider.chat/docs/install.html)
- [Usage](https://aider.chat/docs/usage.html)
- [Tutorial videos](https://aider.chat/docs/tutorials.html)
- [Connecting to LLMs](https://aider.chat/docs/llms.html)
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [FAQ](https://aider.chat/docs/faq.html)
- [GitHub](https://github.com/paul-gauthier/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)


@ -1 +1 @@
__version__ = "0.37.1-dev"
__version__ = "0.39.1-dev"

aider/__main__.py (new file, 4 lines)

@ -0,0 +1,4 @@
from .main import main
if __name__ == "__main__":
main()


@ -7,30 +7,47 @@ import sys
import configargparse
from aider import __version__, models
from aider.args_formatter import MarkdownHelpFormatter, YamlHelpFormatter
from aider.args_formatter import (
DotEnvFormatter,
MarkdownHelpFormatter,
YamlHelpFormatter,
)
from .dump import dump # noqa: F401
def default_env_file(git_root):
return os.path.join(git_root, ".env") if git_root else ".env"
def get_preparser(git_root):
parser = configargparse.ArgumentParser(add_help=False)
parser.add_argument(
"--env-file",
metavar="ENV_FILE",
default=default_env_file(git_root),
help="Specify the .env file to load (default: .env in git root)",
)
return parser
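The preparser exists so `--env-file` can take effect before the real argument parsing: `main()` (later in this diff) parses just that one flag, calls `load_dotenv()`, and only then builds the full parser, whose `auto_env_var_prefix="AIDER_"` maps every option onto an `AIDER_xxx` environment variable. A minimal sketch of that two-phase flow, assuming only the `configargparse` and `python-dotenv` packages already used in this commit:

```python
# Minimal sketch, not part of this diff: two-phase parsing so a .env file
# can supply AIDER_* settings before the main parser runs.
import configargparse
from dotenv import load_dotenv


def parse_with_dotenv(argv):
    # Phase 1: a throwaway pre-parser that only knows about --env-file.
    pre = configargparse.ArgumentParser(add_help=False)
    pre.add_argument("--env-file", default=".env")
    pre_args, _ = pre.parse_known_args(argv)

    # Load the .env file so any AIDER_* entries land in os.environ.
    load_dotenv(pre_args.env_file)

    # Phase 2: the full parser; auto_env_var_prefix maps --dark-mode to
    # AIDER_DARK_MODE, and so on for every option.
    parser = configargparse.ArgumentParser(auto_env_var_prefix="AIDER_")
    parser.add_argument("--env-file", default=".env")  # repeated so it appears in --help
    parser.add_argument("--dark-mode", action="store_true", default=False)
    return parser.parse_args(argv)


# With a .env containing "AIDER_DARK_MODE=true", dark_mode is True with no CLI flag.
print(parse_with_dotenv([]))
```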
def get_parser(default_config_files, git_root):
parser = configargparse.ArgumentParser(
description="aider is GPT powered coding in your terminal",
add_config_file_help=True,
default_config_files=default_config_files,
config_file_parser_class=configargparse.YAMLConfigFileParser,
auto_env_var_prefix="AIDER_",
)
group = parser.add_argument_group("Main")
group.add_argument(
"--vim",
action="store_true",
help="Use VI editing mode in the terminal (default: False)",
default=False,
"--llm-history-file",
metavar="LLM_HISTORY_FILE",
default=None,
help="Log the conversation with the LLM to this file (for example, .aider.llm.history)",
)
group.add_argument(
"files",
metavar="FILE",
nargs="*",
help="files to edit with an LLM (optional)",
"files", metavar="FILE", nargs="*", help="files to edit with an LLM (optional)"
)
group.add_argument(
"--openai-api-key",
@ -59,7 +76,7 @@ def get_parser(default_config_files, git_root):
const=opus_model,
help=f"Use {opus_model} model for the main chat",
)
sonnet_model = "claude-3-sonnet-20240229"
sonnet_model = "claude-3-5-sonnet-20240620"
group.add_argument(
"--sonnet",
action="store_const",
@ -146,13 +163,18 @@ def get_parser(default_config_files, git_root):
metavar="MODEL_SETTINGS_FILE",
default=None,
help="Specify a file with aider model settings for unknown models",
)
group.add_argument(
"--model-metadata-file",
metavar="MODEL_METADATA_FILE",
default=None,
help="Specify a file with context window and costs for unknown models",
)
group.add_argument(
"--verify-ssl",
action=argparse.BooleanOptionalAction,
default=True,
help="Verify the SSL cert when connecting to models (default: True)",
)
group.add_argument(
"--edit-format",
metavar="EDIT_FORMAT",
@ -189,11 +211,12 @@ def get_parser(default_config_files, git_root):
" max_chat_history_tokens."
),
)
default_env_file = os.path.join(git_root, ".env") if git_root else ".env"
# This is a duplicate of the argument in the preparser and is a no-op by this time of
# argument parsing, but it's here so that the help is displayed as expected.
group.add_argument(
"--env-file",
metavar="ENV_FILE",
default=default_env_file,
default=default_env_file(git_root),
help="Specify the .env file to load (default: .env in git root)",
)
@ -375,6 +398,12 @@ def get_parser(default_config_files, git_root):
##########
group = parser.add_argument_group("Other Settings")
group.add_argument(
"--vim",
action="store_true",
help="Use VI editing mode in the terminal (default: False)",
default=False,
)
group.add_argument(
"--voice-language",
metavar="VOICE_LANGUAGE",
@ -500,11 +529,27 @@ def get_sample_yaml():
return parser.format_help()
def get_sample_dotenv():
os.environ["COLUMNS"] = "120"
sys.argv = ["aider"]
parser = get_parser([], None)
# This instantiates all the action.env_var values
parser.parse_known_args()
parser.formatter_class = DotEnvFormatter
return argparse.ArgumentParser.format_help(parser)
return parser.format_help()
def main():
arg = sys.argv[1] if len(sys.argv[1:]) else None
if arg == "md":
print(get_md_help())
elif arg == "dotenv":
print(get_sample_dotenv())
else:
print(get_sample_yaml())


@ -1,8 +1,83 @@
import argparse
from aider import urls
from .dump import dump # noqa: F401
class DotEnvFormatter(argparse.HelpFormatter):
def start_section(self, heading):
res = "\n\n"
res += "#" * (len(heading) + 3)
res += f"\n# {heading}"
super().start_section(res)
def _format_usage(self, usage, actions, groups, prefix):
return ""
def _format_text(self, text):
return f"""
##########################################################
# Sample aider .env file.
# Place at the root of your git repo.
# Or use `aider --env <fname>` to specify.
##########################################################
#################
# LLM parameters:
#
# Include xxx_API_KEY parameters and other params needed for your LLMs.
# See {urls.llms} for details.
## OpenAI
#OPENAI_API_KEY=
## Anthropic
#ANTHROPIC_API_KEY=
##...
"""
def _format_action(self, action):
if not action.option_strings:
return ""
if not action.env_var:
return
parts = [""]
default = action.default
if default == argparse.SUPPRESS:
default = ""
elif isinstance(default, str):
pass
elif isinstance(default, list) and not default:
default = ""
elif action.default is not None:
default = "true" if default else "false"
else:
default = ""
if action.help:
parts.append(f"## {action.help}")
if action.env_var:
env_var = action.env_var
if default:
parts.append(f"#{env_var}={default}\n")
else:
parts.append(f"#{env_var}=\n")
return "\n".join(parts) + "\n"
def _format_action_invocation(self, action):
return ""
def _format_args(self, action, default_metavar):
return ""
class YamlHelpFormatter(argparse.HelpFormatter):
def start_section(self, heading):
res = "\n\n"
@ -17,6 +92,7 @@ class YamlHelpFormatter(argparse.HelpFormatter):
return """
##########################################################
# Sample .aider.conf.yaml
# This file lists *all* the valid configuration entries.
# Place in your home dir, or at the root of your git repo.
##########################################################


@ -27,7 +27,7 @@ from aider.mdstream import MarkdownStream
from aider.repo import GitRepo
from aider.repomap import RepoMap
from aider.sendchat import send_with_retries
from aider.utils import is_image_file
from aider.utils import format_content, format_messages, is_image_file
from ..dump import dump # noqa: F401
@ -783,6 +783,8 @@ class Coder:
messages = self.format_messages()
self.io.log_llm_history("TO LLM", format_messages(messages))
if self.verbose:
utils.show_messages(messages, functions=self.functions)
@ -795,21 +797,23 @@ class Coder:
except ExhaustedContextWindow:
exhausted = True
except litellm.exceptions.BadRequestError as err:
self.io.tool_error(f"BadRequestError: {err}")
return
if "ContextWindowExceededError" in err.message:
exhausted = True
else:
self.io.tool_error(f"BadRequestError: {err}")
return
except openai.BadRequestError as err:
if "maximum context length" in str(err):
exhausted = True
else:
raise err
except Exception as err:
self.io.tool_error(f"Unexpected error: {err}")
return
if exhausted:
self.show_exhausted_error()
self.num_exhausted_context_windows += 1
self.io.tool_error("The chat session is larger than the context window!\n")
self.commands.cmd_tokens("")
self.io.tool_error("\nTo reduce token usage:")
self.io.tool_error(" - Use /drop to remove unneeded files from the chat session.")
self.io.tool_error(" - Use /clear to clear chat history.")
return
if self.partial_response_function_call:
@ -825,6 +829,8 @@ class Coder:
self.io.tool_output()
self.io.log_llm_history("LLM RESPONSE", format_content("ASSISTANT", content))
if interrupted:
content += "\n^C KeyboardInterrupt"
self.cur_messages += [dict(role="assistant", content=content)]
@ -878,6 +884,63 @@ class Coder:
else:
self.reflected_message = add_rel_files_message
def show_exhausted_error(self):
output_tokens = 0
if self.partial_response_content:
output_tokens = self.main_model.token_count(self.partial_response_content)
max_output_tokens = self.main_model.info.get("max_output_tokens", 0)
input_tokens = self.main_model.token_count(self.format_messages())
max_input_tokens = self.main_model.info.get("max_input_tokens", 0)
total_tokens = input_tokens + output_tokens
if output_tokens >= max_output_tokens:
out_err = " -- exceeded output limit!"
else:
out_err = ""
if input_tokens >= max_input_tokens:
inp_err = " -- context window exhausted!"
else:
inp_err = ""
if total_tokens >= max_input_tokens:
tot_err = " -- context window exhausted!"
else:
tot_err = ""
res = ["", ""]
res.append(f"Model {self.main_model.name} has hit a token limit!")
res.append("")
res.append(f"Input tokens: {input_tokens:,} of {max_input_tokens:,}{inp_err}")
res.append(f"Output tokens: {output_tokens:,} of {max_output_tokens:,}{out_err}")
res.append(f"Total tokens: {total_tokens:,} of {max_input_tokens:,}{tot_err}")
if output_tokens >= max_output_tokens:
res.append("")
res.append("To reduce output tokens:")
res.append("- Ask for smaller changes in each request.")
res.append("- Break your code into smaller source files.")
if "diff" not in self.main_model.edit_format:
res.append(
"- Try using a stronger model like gpt-4o or opus that can return diffs."
)
if input_tokens >= max_input_tokens or total_tokens >= max_input_tokens:
res.append("")
res.append("To reduce input tokens:")
res.append("- Use /tokens to see token usage.")
res.append("- Use /drop to remove unneeded files from the chat session.")
res.append("- Use /clear to clear the chat history.")
res.append("- Break your code into smaller source files.")
res.append("")
res.append(f"For more info: {urls.token_limits}")
res = "".join([line + "\n" for line in res])
self.io.tool_error(res)
def lint_edited(self, fnames):
res = ""
for fname in fnames:
@ -1321,7 +1384,7 @@ class Coder:
def auto_commit(self, edited):
# context = self.get_context_from_history(self.cur_messages)
res = self.repo.commit(fnames=edited, prefix="aider: ")
res = self.repo.commit(fnames=edited, aider_edits=True)
if res:
commit_hash, commit_message = res
self.last_aider_commit_hash = commit_hash


@ -414,16 +414,8 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE):
processed.append(cur) # original_marker
filename = strip_filename(processed[-2].splitlines()[-1], fence)
try:
if not filename:
filename = strip_filename(processed[-2].splitlines()[-2], fence)
if not filename:
if current_filename:
filename = current_filename
else:
raise ValueError(missing_filename_err.format(fence=fence))
except IndexError:
filename = find_filename(processed[-2].splitlines(), fence)
if not filename:
if current_filename:
filename = current_filename
else:
@ -460,6 +452,35 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE):
raise ValueError(f"{processed}\n^^^ Error parsing SEARCH/REPLACE block.")
def find_filename(lines, fence):
"""
Deepseek Coder v2 has been doing this:
```python
word_count.py
```
```python
<<<<<<< SEARCH
...
This is a more flexible search back for filenames.
"""
# Go back through the 3 preceding lines
lines.reverse()
lines = lines[:3]
for line in lines:
# If we find a filename, done
filename = strip_filename(line, fence)
if filename:
return filename
# Only continue as long as we keep seeing fences
if not line.startswith(fence[0]):
return
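For illustration, a hedged sketch of the lookback described in the docstring, assuming `find_filename` and `DEFAULT_FENCE` are importable from aider's editblock coder module (the new test later in this diff exercises the same case end to end):

```python
# Illustrative only: the lines that preceded a "<<<<<<< SEARCH" marker,
# mangled Deepseek-Coder-V2 style, with the filename sitting two fences back.
from aider.coders.editblock_coder import DEFAULT_FENCE, find_filename

preceding = [
    "Here's the change:",
    "```python",
    "foo.txt",
    "```",
    "```python",
]
print(find_filename(preceding, DEFAULT_FENCE))  # expected: foo.txt
```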
if __name__ == "__main__":
edit = """
Here's the change:


@ -332,7 +332,7 @@ class Commands:
last_commit = self.coder.repo.repo.head.commit
if (
not last_commit.message.startswith("aider:")
not last_commit.author.name.endswith(" (aider)")
or last_commit.hexsha[:7] != self.coder.last_aider_commit_hash
):
self.io.tool_error("The last commit was not made by aider in this chat session.")


@ -110,9 +110,6 @@ class GUI:
show_undo = False
res = ""
if commit_hash:
prefix = "aider: "
if commit_message.startswith(prefix):
commit_message = commit_message[len(prefix) :]
res += f"Commit `{commit_hash}`: {commit_message} \n"
if commit_hash == self.coder.last_aider_commit_hash:
show_undo = True


@ -107,6 +107,7 @@ class InputOutput:
tool_error_color="red",
encoding="utf-8",
dry_run=False,
llm_history_file=None,
editingmode=EditingMode.EMACS,
):
self.editingmode = editingmode
@ -128,6 +129,7 @@ class InputOutput:
self.yes = yes
self.input_history_file = input_history_file
self.llm_history_file = llm_history_file
if chat_history_file is not None:
self.chat_history_file = Path(chat_history_file)
else:
@ -209,10 +211,11 @@ class InputOutput:
else:
style = None
completer_instance = AutoCompleter(
root, rel_fnames, addable_rel_fnames, commands, self.encoding
)
while True:
completer_instance = AutoCompleter(
root, rel_fnames, addable_rel_fnames, commands, self.encoding
)
if multiline_input:
show = ". "
@ -271,6 +274,14 @@ class InputOutput:
fh = FileHistory(self.input_history_file)
return fh.load_history_strings()
def log_llm_history(self, role, content):
if not self.llm_history_file:
return
timestamp = datetime.now().isoformat(timespec='seconds')
with open(self.llm_history_file, 'a', encoding=self.encoding) as log_file:
log_file.write(f"{role.upper()} {timestamp}\n")
log_file.write(content + "\n")
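A hedged usage sketch, not part of this diff, of the new `--llm-history-file` logging: each call appends a `ROLE <ISO timestamp>` header line followed by the message content.

```python
from aider.io import InputOutput

# Assumes the default InputOutput settings; only llm_history_file matters here.
io = InputOutput(llm_history_file=".aider.llm.history")
io.log_llm_history("to llm", "SYSTEM You are a helpful assistant.")

# .aider.llm.history now ends with lines like:
#   TO LLM 2024-06-21T16:57:33
#   SYSTEM You are a helpful assistant.
```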
def user_input(self, inp, log_only=True):
if not log_only:
style = dict(style=self.user_input_color) if self.user_input_color else dict()


@ -5,12 +5,13 @@ import sys
from pathlib import Path
import git
import httpx
from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode
from streamlit.web import cli
from aider import __version__, models, utils
from aider.args import get_parser
from aider.args import get_parser, get_preparser
from aider.coders import Coder
from aider.commands import SwitchModel
from aider.io import InputOutput
@ -124,12 +125,18 @@ def check_gitignore(git_root, io, ask=True):
def format_settings(parser, args):
show = scrub_sensitive_info(args, parser.format_values())
# clean up the headings for consistency w/ new lines
heading_env = "Environment Variables:"
heading_defaults = "Defaults:"
if heading_env in show:
show = show.replace(heading_env, "\n" + heading_env)
show = show.replace(heading_defaults, "\n" + heading_defaults)
show += "\n"
show += "Option settings:\n"
for arg, val in sorted(vars(args).items()):
if val:
val = scrub_sensitive_info(args, str(val))
show += f" - {arg}: {val}\n"
show += f" - {arg}: {val}\n" # noqa: E221
return show
@ -266,9 +273,18 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
default_config_files.append(Path.home() / conf_fname) # homedir
default_config_files = list(map(str, default_config_files))
preparser = get_preparser(git_root)
pre_args, _ = preparser.parse_known_args(argv)
# Load the .env file specified in the arguments
load_dotenv(pre_args.env_file)
parser = get_parser(default_config_files, git_root)
args = parser.parse_args(argv)
if not args.verify_ssl:
litellm.client_session = httpx.Client(verify=False)
if args.gui and not return_coder:
launch_gui(argv)
return
@ -302,6 +318,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
tool_error_color=args.tool_error_color,
dry_run=args.dry_run,
encoding=args.encoding,
llm_history_file=args.llm_history_file,
editingmode=editing_mode,
)
@ -360,9 +377,6 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
cmd_line = scrub_sensitive_info(args, cmd_line)
io.tool_output(cmd_line, log_only=True)
if args.env_file:
load_dotenv(args.env_file)
if args.anthropic_api_key:
os.environ["ANTHROPIC_API_KEY"] = args.anthropic_api_key


@ -179,6 +179,43 @@ MODEL_SETTINGS = [
"whole",
weak_model_name="claude-3-haiku-20240307",
),
ModelSettings(
"claude-3-5-sonnet-20240620",
"diff",
weak_model_name="claude-3-haiku-20240307",
use_repo_map=True,
),
ModelSettings(
"anthropic/claude-3-5-sonnet-20240620",
"diff",
weak_model_name="claude-3-haiku-20240307",
use_repo_map=True,
),
ModelSettings(
"openrouter/anthropic/claude-3.5-sonnet",
"diff",
weak_model_name="openrouter/anthropic/claude-3-haiku-20240307",
use_repo_map=True,
),
# Vertex AI Claude models
ModelSettings(
"vertex_ai/claude-3-5-sonnet@20240620",
"diff",
weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True,
),
ModelSettings(
"vertex_ai/claude-3-opus@20240229",
"diff",
weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True,
send_undo_reply=True,
),
ModelSettings(
"vertex_ai/claude-3-sonnet@20240229",
"whole",
weak_model_name="vertex_ai/claude-3-haiku@20240307",
),
# Cohere
ModelSettings(
"command-r-plus",
@ -219,7 +256,7 @@ MODEL_SETTINGS = [
send_undo_reply=True,
),
ModelSettings(
"openai/deepseek-chat",
"deepseek/deepseek-chat",
"diff",
use_repo_map=True,
send_undo_reply=True,
@ -227,7 +264,15 @@ MODEL_SETTINGS = [
reminder_as_sys_msg=True,
),
ModelSettings(
"deepseek/deepseek-chat",
"deepseek/deepseek-coder",
"diff",
use_repo_map=True,
send_undo_reply=True,
examples_as_sys_msg=True,
reminder_as_sys_msg=True,
),
ModelSettings(
"openrouter/deepseek/deepseek-coder",
"diff",
use_repo_map=True,
send_undo_reply=True,


@ -59,7 +59,7 @@ class GitRepo:
if aider_ignore_file:
self.aider_ignore_file = Path(aider_ignore_file)
def commit(self, fnames=None, context=None, prefix=None, message=None):
def commit(self, fnames=None, context=None, message=None, aider_edits=False):
if not fnames and not self.repo.is_dirty():
return
@ -75,9 +75,6 @@ class GitRepo:
if not commit_message:
commit_message = "(no commit message provided)"
if prefix:
commit_message = prefix + commit_message
full_commit_message = commit_message
if context:
full_commit_message += "\n\n# Aider chat conversation:\n\n" + context
@ -91,10 +88,32 @@ class GitRepo:
else:
cmd += ["-a"]
original_user_name = self.repo.config_reader().get_value("user", "name")
original_committer_name_env = os.environ.get("GIT_COMMITTER_NAME")
committer_name = f"{original_user_name} (aider)"
os.environ["GIT_COMMITTER_NAME"] = committer_name
if aider_edits:
original_author_name_env = os.environ.get("GIT_AUTHOR_NAME")
os.environ["GIT_AUTHOR_NAME"] = committer_name
self.repo.git.commit(cmd)
commit_hash = self.repo.head.commit.hexsha[:7]
self.io.tool_output(f"Commit {commit_hash} {commit_message}")
# Restore the original GIT_AUTHOR_NAME and GIT_COMMITTER_NAME
if aider_edits:
if original_author_name_env is not None:
os.environ["GIT_AUTHOR_NAME"] = original_author_name_env
else:
del os.environ["GIT_AUTHOR_NAME"]
if original_committer_name_env is not None:
os.environ["GIT_COMMITTER_NAME"] = original_committer_name_env
else:
del os.environ["GIT_COMMITTER_NAME"]
return commit_hash, commit_message
def get_rel_repo_dir(self):


@ -10,7 +10,7 @@ from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright
from aider import __version__, urls
from aider.dump import dump
from aider.dump import dump # noqa: F401
aider_user_agent = f"Aider/{__version__} +{urls.website}"
@ -21,7 +21,7 @@ For better web scraping, install Playwright chromium with this command in your t
playwright install --with-deps chromium
See {urls.enable_playwrite} for more info.
See {urls.enable_playwright} for more info.
"""
@ -53,7 +53,6 @@ class Scraper:
else:
content = self.scrape_with_httpx(url)
dump(content)
if not content:
return


@ -523,16 +523,20 @@ class TestCommands(TestCase):
other_path.write_text("other content")
repo.git.add(str(other_path))
os.environ["GIT_AUTHOR_NAME"] = "Foo (aider)"
# Create and commit a file
filename = "test_file.txt"
file_path = Path(repo_dir) / filename
file_path.write_text("first content")
repo.git.add(filename)
repo.git.commit("-m", "aider: first commit")
repo.git.commit("-m", "first commit")
file_path.write_text("second content")
repo.git.add(filename)
repo.git.commit("-m", "aider: second commit")
repo.git.commit("-m", "second commit")
del os.environ["GIT_AUTHOR_NAME"]
# Store the commit hash
last_commit_hash = repo.head.commit.hexsha[:7]


@ -398,6 +398,32 @@ Hope you like it!
],
)
def test_deepseek_coder_v2_filename_mangling(self):
edit = """
Here's the change:
```python
foo.txt
```
```python
<<<<<<< SEARCH
one
=======
two
>>>>>>> REPLACE
```
Hope you like it!
"""
edits = list(eb.find_original_update_blocks(edit))
self.assertEqual(
edits,
[
("foo.txt", "one\n", "two\n"),
],
)
if __name__ == "__main__":
unittest.main()


@ -1,7 +1,7 @@
import os
import shutil
import subprocess
import tempfile
from io import StringIO
from pathlib import Path
from unittest import TestCase
from unittest.mock import MagicMock, patch
@ -13,24 +13,28 @@ from prompt_toolkit.output import DummyOutput
from aider.dump import dump # noqa: F401
from aider.io import InputOutput
from aider.main import check_gitignore, main, setup_git
from aider.utils import GitTemporaryDirectory, make_repo
from aider.utils import GitTemporaryDirectory, IgnorantTemporaryDirectory, make_repo
class TestMain(TestCase):
def setUp(self):
self.original_env = os.environ.copy()
os.environ["OPENAI_API_KEY"] = "deadbeef"
self.original_cwd = os.getcwd()
self.tempdir = tempfile.mkdtemp()
self.tempdir_obj = IgnorantTemporaryDirectory()
self.tempdir = self.tempdir_obj.name
os.chdir(self.tempdir)
def tearDown(self):
os.chdir(self.original_cwd)
shutil.rmtree(self.tempdir, ignore_errors=True)
self.tempdir_obj.cleanup()
os.environ.clear()
os.environ.update(self.original_env)
def test_main_with_empty_dir_no_files_on_command(self):
main(["--no-git"], input=DummyInput(), output=DummyOutput())
def test_main_with_empty_dir_new_file(self):
def test_main_with_emptqy_dir_new_file(self):
main(["foo.txt", "--yes", "--no-git"], input=DummyInput(), output=DummyOutput())
self.assertTrue(os.path.exists("foo.txt"))
@ -237,3 +241,82 @@ class TestMain(TestCase):
main(["--message", test_message])
args, kwargs = MockInputOutput.call_args
self.assertEqual(args[1], None)
def test_dark_mode_sets_code_theme(self):
# Mock Coder.create to capture the configuration
with patch("aider.coders.Coder.create") as MockCoder:
main(["--dark-mode", "--no-git"], input=DummyInput(), output=DummyOutput())
# Ensure Coder.create was called
MockCoder.assert_called_once()
# Check if the code_theme setting is for dark mode
_, kwargs = MockCoder.call_args
self.assertEqual(kwargs["code_theme"], "monokai")
def test_light_mode_sets_code_theme(self):
# Mock Coder.create to capture the configuration
with patch("aider.coders.Coder.create") as MockCoder:
main(["--light-mode", "--no-git"], input=DummyInput(), output=DummyOutput())
# Ensure Coder.create was called
MockCoder.assert_called_once()
# Check if the code_theme setting is for light mode
_, kwargs = MockCoder.call_args
self.assertEqual(kwargs["code_theme"], "default")
def create_env_file(self, file_name, content):
env_file_path = Path(self.tempdir) / file_name
env_file_path.write_text(content)
return env_file_path
def test_env_file_flag_sets_automatic_variable(self):
env_file_path = self.create_env_file(".env.test", "AIDER_DARK_MODE=True")
with patch("aider.coders.Coder.create") as MockCoder:
main(
["--env-file", str(env_file_path), "--no-git"],
input=DummyInput(),
output=DummyOutput(),
)
MockCoder.assert_called_once()
# Check if the color settings are for dark mode
_, kwargs = MockCoder.call_args
self.assertEqual(kwargs["code_theme"], "monokai")
def test_default_env_file_sets_automatic_variable(self):
self.create_env_file(".env", "AIDER_DARK_MODE=True")
with patch("aider.coders.Coder.create") as MockCoder:
main(["--no-git"], input=DummyInput(), output=DummyOutput())
# Ensure Coder.create was called
MockCoder.assert_called_once()
# Check if the color settings are for dark mode
_, kwargs = MockCoder.call_args
self.assertEqual(kwargs["code_theme"], "monokai")
def test_false_vals_in_env_file(self):
self.create_env_file(".env", "AIDER_SHOW_DIFFS=off")
with patch("aider.coders.Coder.create") as MockCoder:
main(["--no-git"], input=DummyInput(), output=DummyOutput())
MockCoder.assert_called_once()
_, kwargs = MockCoder.call_args
self.assertEqual(kwargs["show_diffs"], False)
def test_true_vals_in_env_file(self):
self.create_env_file(".env", "AIDER_SHOW_DIFFS=on")
with patch("aider.coders.Coder.create") as MockCoder:
main(["--no-git"], input=DummyInput(), output=DummyOutput())
MockCoder.assert_called_once()
_, kwargs = MockCoder.call_args
self.assertEqual(kwargs["show_diffs"], True)
def test_verbose_mode_lists_env_vars(self):
self.create_env_file(".env", "AIDER_DARK_MODE=on")
with patch("sys.stdout", new_callable=StringIO) as mock_stdout:
main(["--no-git", "--verbose"], input=DummyInput(), output=DummyOutput())
output = mock_stdout.getvalue()
relevant_output = "\n".join(
line
for line in output.splitlines()
if "AIDER_DARK_MODE" in line or "dark_mode" in line
) # this bit just helps failing assertions to be easier to read
self.assertIn("AIDER_DARK_MODE", relevant_output)
self.assertIn("dark_mode", relevant_output)
self.assertRegex(relevant_output, r"AIDER_DARK_MODE:\s+on")
self.assertRegex(relevant_output, r"dark_mode:\s+True")


@ -1,4 +1,5 @@
import os
import platform
import tempfile
import unittest
from pathlib import Path
@ -137,6 +138,52 @@ class TestRepo(unittest.TestCase):
# Assert that the returned message is the expected one
self.assertEqual(result, 'a good "commit message"')
@patch("aider.repo.GitRepo.get_commit_message")
def test_commit_with_custom_committer_name(self, mock_send):
mock_send.return_value = '"a good commit message"'
# Cleanup of the git temp dir explodes on windows
if platform.system() == "Windows":
return
with GitTemporaryDirectory():
# new repo
raw_repo = git.Repo()
raw_repo.config_writer().set_value("user", "name", "Test User").release()
# add a file and commit it
fname = Path("file.txt")
fname.touch()
raw_repo.git.add(str(fname))
raw_repo.git.commit("-m", "initial commit")
io = InputOutput()
git_repo = GitRepo(io, None, None)
# commit a change
fname.write_text("new content")
git_repo.commit(fnames=[str(fname)], aider_edits=True)
# check the committer name
commit = raw_repo.head.commit
self.assertEqual(commit.author.name, "Test User (aider)")
self.assertEqual(commit.committer.name, "Test User (aider)")
# commit a change without aider_edits
fname.write_text("new content again!")
git_repo.commit(fnames=[str(fname)], aider_edits=False)
# check the committer name
commit = raw_repo.head.commit
self.assertEqual(commit.author.name, "Test User")
self.assertEqual(commit.committer.name, "Test User (aider)")
# check that the original committer name is restored
original_committer_name = os.environ.get("GIT_COMMITTER_NAME")
self.assertIsNone(original_committer_name)
original_author_name = os.environ.get("GIT_AUTHOR_NAME")
self.assertIsNone(original_author_name)
def test_get_tracked_files(self):
# Create a temporary directory
tempdir = Path(tempfile.mkdtemp())


@ -2,6 +2,8 @@ website = "https://aider.chat/"
add_all_files = "https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat"
edit_errors = "https://aider.chat/docs/troubleshooting/edit-errors.html"
git = "https://aider.chat/docs/git.html"
enable_playwrite = "https://aider.chat/docs/install/optional.html#enable-playwright"
enable_playwright = "https://aider.chat/docs/install/optional.html#enable-playwright"
favicon = "https://aider.chat/assets/icons/favicon-32x32.png"
model_warnings = "https://aider.chat/docs/llms/warnings.html"
token_limits = "https://aider.chat/docs/troubleshooting/token-limits.html"
llms = "https://aider.chat/docs/llms.html"


@ -17,11 +17,17 @@ class IgnorantTemporaryDirectory:
return self.temp_dir.__enter__()
def __exit__(self, exc_type, exc_val, exc_tb):
self.cleanup()
def cleanup(self):
try:
self.temp_dir.__exit__(exc_type, exc_val, exc_tb)
self.temp_dir.cleanup()
except (OSError, PermissionError):
pass # Ignore errors (Windows)
def __getattr__(self, item):
return getattr(self.temp_dir, item)
class ChdirTemporaryDirectory(IgnorantTemporaryDirectory):
def __init__(self):
@ -84,24 +90,38 @@ def safe_abs_path(res):
return str(res)
def show_messages(messages, title=None, functions=None):
def format_content(role, content):
formatted_lines = []
for line in content.splitlines():
formatted_lines.append(f"{role} {line}")
return "\n".join(formatted_lines)
def format_messages(messages, title=None):
output = []
if title:
print(title.upper(), "*" * 50)
output.append(f"{title.upper()} {'*' * 50}")
for msg in messages:
print()
output.append("")
role = msg["role"].upper()
content = msg.get("content")
if isinstance(content, list): # Handle list content (e.g., image messages)
for item in content:
if isinstance(item, dict) and "image_url" in item:
print(role, "Image URL:", item["image_url"]["url"])
output.append(f"{role} Image URL: {item['image_url']['url']}")
elif isinstance(content, str): # Handle string content
for line in content.splitlines():
print(role, line)
output.append(format_content(role, content))
content = msg.get("function_call")
if content:
print(role, content)
output.append(f"{role} {content}")
return "\n".join(output)
def show_messages(messages, title=None, functions=None):
formatted_output = format_messages(messages, title)
print(formatted_output)
if functions:
dump(functions)
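A quick illustration of the refactor, assuming the helpers are imported from `aider.utils` as the base coder now does: `format_messages()` returns the text that `show_messages()` previously printed directly, which is what lets `log_llm_history()` reuse it.

```python
from aider.utils import format_content, format_messages

print(format_content("USER", "hello\nworld"))
# USER hello
# USER world

msgs = [
    dict(role="system", content="You are helpful."),
    dict(role="user", content="hi"),
]
print(format_messages(msgs, title="to llm"))
# TO LLM ************...  (title padded with asterisks)
#
# SYSTEM You are helpful.
#
# USER hi
```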


@ -10,9 +10,9 @@ aiosignal==1.3.1
# via aiohttp
altair==5.3.0
# via streamlit
annotated-types==0.6.0
annotated-types==0.7.0
# via pydantic
anyio==4.3.0
anyio==4.4.0
# via
# httpx
# openai
@ -31,7 +31,7 @@ cachetools==5.3.3
# via
# google-auth
# streamlit
certifi==2024.2.2
certifi==2024.6.2
# via
# httpcore
# httpx
@ -54,15 +54,15 @@ diskcache==5.6.3
# via -r requirements.in
distro==1.9.0
# via openai
filelock==3.14.0
filelock==3.15.3
# via huggingface-hub
flake8==7.0.0
flake8==7.1.0
# via -r requirements.in
frozenlist==1.4.1
# via
# aiohttp
# aiosignal
fsspec==2024.5.0
fsspec==2024.6.0
# via huggingface-hub
gitdb==4.0.11
# via gitpython
@ -70,16 +70,16 @@ gitpython==3.1.43
# via
# -r requirements.in
# streamlit
google-ai-generativelanguage==0.6.4
google-ai-generativelanguage==0.6.5
# via google-generativeai
google-api-core[grpc]==2.19.0
# via
# google-ai-generativelanguage
# google-api-python-client
# google-generativeai
google-api-python-client==2.129.0
google-api-python-client==2.134.0
# via google-generativeai
google-auth==2.29.0
google-auth==2.30.0
# via
# google-ai-generativelanguage
# google-api-core
@ -88,9 +88,9 @@ google-auth==2.29.0
# google-generativeai
google-auth-httplib2==0.2.0
# via google-api-python-client
google-generativeai==0.5.4
google-generativeai==0.7.0
# via -r requirements.in
googleapis-common-protos==1.63.0
googleapis-common-protos==1.63.1
# via
# google-api-core
# grpcio-status
@ -98,7 +98,7 @@ greenlet==3.0.3
# via playwright
grep-ast==0.3.2
# via -r requirements.in
grpcio==1.63.0
grpcio==1.64.1
# via
# google-api-core
# grpcio-status
@ -114,7 +114,7 @@ httplib2==0.22.0
# google-auth-httplib2
httpx==0.27.0
# via openai
huggingface-hub==0.23.0
huggingface-hub==0.23.4
# via tokenizers
idna==3.7
# via
@ -122,7 +122,9 @@ idna==3.7
# httpx
# requests
# yarl
importlib-metadata==7.1.0
ijson==3.3.0
# via litellm
importlib-metadata==7.2.0
# via litellm
jinja2==3.1.4
# via
@ -135,7 +137,7 @@ jsonschema==4.22.0
# altair
jsonschema-specifications==2023.12.1
# via jsonschema
litellm==1.37.16
litellm==1.40.21
# via -r requirements.in
markdown-it-py==3.0.0
# via rich
@ -151,7 +153,7 @@ multidict==6.0.5
# yarl
networkx==3.2.1
# via -r requirements.in
numpy==1.26.4
numpy==2.0.0
# via
# -r requirements.in
# altair
@ -160,11 +162,11 @@ numpy==1.26.4
# pydeck
# scipy
# streamlit
openai==1.30.1
openai==1.35.3
# via
# -r requirements.in
# litellm
packaging==24.0
packaging==24.1
# via
# -r requirements.in
# altair
@ -184,9 +186,9 @@ pillow==10.3.0
# streamlit
playwright==1.44.0
# via -r requirements.in
prompt-toolkit==3.0.43
prompt-toolkit==3.0.47
# via -r requirements.in
proto-plus==1.23.0
proto-plus==1.24.0
# via
# google-ai-generativelanguage
# google-api-core
@ -207,15 +209,16 @@ pyasn1==0.6.0
# rsa
pyasn1-modules==0.4.0
# via google-auth
pycodestyle==2.11.1
pycodestyle==2.12.0
# via flake8
pycparser==2.22
# via cffi
pydantic==2.7.1
pydantic==2.7.4
# via
# google-generativeai
# litellm
# openai
pydantic-core==2.18.2
pydantic-core==2.18.4
# via pydantic
pydeck==0.9.1
# via streamlit
@ -245,7 +248,7 @@ referencing==0.35.1
# jsonschema-specifications
regex==2024.5.15
# via tiktoken
requests==2.31.0
requests==2.32.3
# via
# google-api-core
# huggingface-hub
@ -262,7 +265,7 @@ rpds-py==0.18.1
# referencing
rsa==4.9
# via google-auth
scipy==1.13.0
scipy==1.13.1
# via -r requirements.in
six==1.16.0
# via python-dateutil
@ -273,15 +276,15 @@ sniffio==1.3.1
# anyio
# httpx
# openai
sounddevice==0.4.6
sounddevice==0.4.7
# via -r requirements.in
soundfile==0.12.1
# via -r requirements.in
soupsieve==2.5
# via beautifulsoup4
streamlit==1.34.0
streamlit==1.36.0
# via -r requirements.in
tenacity==8.3.0
tenacity==8.4.1
# via streamlit
tiktoken==0.7.0
# via
@ -293,7 +296,7 @@ toml==0.10.2
# via streamlit
toolz==0.12.1
# via altair
tornado==6.4
tornado==6.4.1
# via streamlit
tqdm==4.66.4
# via
@ -306,7 +309,7 @@ tree-sitter==0.21.3
# tree-sitter-languages
tree-sitter-languages==1.10.2
# via grep-ast
typing-extensions==4.11.0
typing-extensions==4.12.2
# via
# google-generativeai
# huggingface-hub
@ -319,7 +322,7 @@ tzdata==2024.1
# via pandas
uritemplate==4.1.1
# via google-api-python-client
urllib3==2.2.1
urllib3==2.2.2
# via requests
watchdog==4.0.1
# via -r requirements.in
@ -327,5 +330,5 @@ wcwidth==0.2.13
# via prompt-toolkit
yarl==1.9.4
# via aiohttp
zipp==3.18.2
zipp==3.19.2
# via importlib-metadata


@ -9,8 +9,12 @@ else
ARG=$1
fi
# README.md before index.md, because index.md uses cog to include README.md
cog $ARG \
README.md \
website/index.md \
website/HISTORY.md \
website/docs/dotenv.md \
website/docs/commands.md \
website/docs/languages.md \
website/docs/options.md \

website/HISTORY.md (new file, 425 lines)

@ -0,0 +1,425 @@
---
title: Release history
parent: More info
nav_order: 999
---
<!--[[[cog
# This page is a copy of HISTORY.md, adding the front matter above.
text = open("HISTORY.md").read()
cog.out(text)
]]]-->
# Release history
### v0.39.0
- Use `--sonnet` for Claude 3.5 Sonnet, which is the top model on [aider's LLM code editing leaderboard](https://aider.chat/docs/leaderboards/#claude-35-sonnet-takes-the-top-spot).
- All `AIDER_xxx` environment variables can now be set in `.env` (by @jpshack-at-palomar).
- Use `--llm-history-file` to log raw messages sent to the LLM (by @daniel-vainsencher).
- Commit messages are no longer prefixed with "aider:". Instead the git author and committer names have "(aider)" added.
### v0.38.0
- Use `--vim` for [vim keybindings](https://aider.chat/docs/commands.html#vi) in the chat.
- [Add LLM metadata](https://aider.chat/docs/llms/warnings.html#specifying-context-window-size-and-token-costs) via `.aider.models.json` file (by @caseymcc).
- More detailed [error messages on token limit errors](https://aider.chat/docs/troubleshooting/token-limits.html).
- Single line commit messages, without the recent chat messages.
- Ensure `--commit --dry-run` does nothing.
- Have playwright wait for idle network to better scrape js sites.
- Documentation updates, moved into website/ subdir.
- Moved tests/ into aider/tests/.
### v0.37.0
- Repo map is now optimized based on text of chat history as well as files added to chat.
- Improved prompts when no files have been added to chat to solicit LLM file suggestions.
- Aider will notice if you paste a URL into the chat, and offer to scrape it.
- Performance improvements for the repo map, especially in large repos.
- Aider will not offer to add bare filenames like `make` or `run` which may just be words.
- Properly override `GIT_EDITOR` env for commits if it is already set.
- Detect supported audio sample rates for `/voice`.
- Other small bug fixes.
### v0.36.0
- [Aider can now lint your code and fix any errors](https://aider.chat/2024/05/22/linting.html).
- Aider automatically lints and fixes after every LLM edit.
- You can manually lint-and-fix files with `/lint` in the chat or `--lint` on the command line.
- Aider includes built in basic linters for all supported tree-sitter languages.
- You can also configure aider to use your preferred linter with `--lint-cmd`.
- Aider has additional support for running tests and fixing problems.
- Configure your testing command with `--test-cmd`.
- Run tests with `/test` or from the command line with `--test`.
- Aider will automatically attempt to fix any test failures.
### v0.35.0
- Aider now uses GPT-4o by default.
- GPT-4o tops the [aider LLM code editing leaderboard](https://aider.chat/docs/leaderboards/) at 72.9%, versus 68.4% for Opus.
- GPT-4o takes second on [aider's refactoring leaderboard](https://aider.chat/docs/leaderboards/#code-refactoring-leaderboard) with 62.9%, versus Opus at 72.3%.
- Added `--restore-chat-history` to restore prior chat history on launch, so you can continue the last conversation.
- Improved reflection feedback to LLMs using the diff edit format.
- Improved retries on `httpx` errors.
### v0.34.0
- Updated prompting to use more natural phrasing about files, the git repo, etc. Removed reliance on read-write/read-only terminology.
- Refactored prompting to unify some phrasing across edit formats.
- Enhanced the canned assistant responses used in prompts.
- Added explicit model settings for `openrouter/anthropic/claude-3-opus`, `gpt-3.5-turbo`
- Added `--show-prompts` debug switch.
- Bugfix: catch and retry on all litellm exceptions.
### v0.33.0
- Added native support for [Deepseek models](https://aider.chat/docs/llms.html#deepseek) using `DEEPSEEK_API_KEY` and `deepseek/deepseek-chat`, etc rather than as a generic OpenAI compatible API.
### v0.32.0
- [Aider LLM code editing leaderboards](https://aider.chat/docs/leaderboards/) that rank popular models according to their ability to edit code.
- Leaderboards include GPT-3.5/4 Turbo, Opus, Sonnet, Gemini 1.5 Pro, Llama 3, Deepseek Coder & Command-R+.
- Gemini 1.5 Pro now defaults to a new diff-style edit format (diff-fenced), enabling it to work better with larger code bases.
- Support for Deepseek-V2, via a more flexible config of system messages in the diff edit format.
- Improved retry handling on errors from model APIs.
- Benchmark outputs results in YAML, compatible with leaderboard.
### v0.31.0
- [Aider is now also AI pair programming in your browser!](https://aider.chat/2024/05/02/browser.html) Use the `--browser` switch to launch an experimental browser based version of aider.
- Switch models during the chat with `/model <name>` and search the list of available models with `/models <query>`.
### v0.30.1
- Adding missing `google-generativeai` dependency
### v0.30.0
- Added [Gemini 1.5 Pro](https://aider.chat/docs/llms.html#free-models) as a recommended free model.
- Allow repo map for "whole" edit format.
- Added `--models <MODEL-NAME>` to search the available models.
- Added `--no-show-model-warnings` to silence model warnings.
### v0.29.2
- Improved [model warnings](https://aider.chat/docs/llms.html#model-warnings) for unknown or unfamiliar models
### v0.29.1
- Added better support for groq/llama3-70b-8192
### v0.29.0
- Added support for [directly connecting to Anthropic, Cohere, Gemini and many other LLM providers](https://aider.chat/docs/llms.html).
- Added `--weak-model <model-name>` which allows you to specify which model to use for commit messages and chat history summarization.
- New command line switches for working with popular models:
- `--4-turbo-vision`
- `--opus`
- `--sonnet`
- `--anthropic-api-key`
- Improved "whole" and "diff" backends to better support [Cohere's free to use Command-R+ model](https://aider.chat/docs/llms.html#cohere).
- Allow `/add` of images from anywhere in the filesystem.
- Fixed crash when operating in a repo in a detached HEAD state.
- Fix: Use the same default model in CLI and python scripting.
### v0.28.0
- Added support for new `gpt-4-turbo-2024-04-09` and `gpt-4-turbo` models.
- Benchmarked at 61.7% on Exercism benchmark, comparable to `gpt-4-0613` and worse than the `gpt-4-preview-XXXX` models. See [recent Exercism benchmark results](https://aider.chat/2024/03/08/claude-3.html).
- Benchmarked at 34.1% on the refactoring/laziness benchmark, significantly worse than the `gpt-4-preview-XXXX` models. See [recent refactor benchmark results](https://aider.chat/2024/01/25/benchmarks-0125.html).
- Aider continues to default to `gpt-4-1106-preview` as it performs best on both benchmarks, and significantly better on the refactoring/laziness benchmark.
### v0.27.0
- Improved repomap support for typescript, by @ryanfreckleton.
- Bugfix: Only /undo the files which were part of the last commit, don't stomp other dirty files
- Bugfix: Show clear error message when OpenAI API key is not set.
- Bugfix: Catch error for obscure languages without tags.scm file.
### v0.26.1
- Fixed bug affecting parsing of git config in some environments.
### v0.26.0
- Use GPT-4 Turbo by default.
- Added `-3` and `-4` switches to use GPT 3.5 or GPT-4 (non-Turbo).
- Bug fix to avoid reflecting local git errors back to GPT.
- Improved logic for opening git repo on launch.
### v0.25.0
- Issue a warning if user adds too much code to the chat.
- https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat
- Vocally refuse to add files to the chat that match `.aiderignore`
- Prevents bug where subsequent git commit of those files will fail.
- Added `--openai-organization-id` argument.
- Show the user a FAQ link if edits fail to apply.
- Made past articles part of https://aider.chat/blog/
### v0.24.1
- Fixed bug with cost computations when --no-stream in effect
### v0.24.0
- New `/web <url>` command which scrapes the url, turns it into fairly clean markdown and adds it to the chat.
- Updated all OpenAI model names, pricing info
- Default GPT 3.5 model is now `gpt-3.5-turbo-0125`.
- Bugfix to the `!` alias for `/run`.
### v0.23.0
- Added support for `--model gpt-4-0125-preview` and OpenAI's alias `--model gpt-4-turbo-preview`. The `--4turbo` switch remains an alias for `--model gpt-4-1106-preview` at this time.
- New `/test` command that runs a command and adds the output to the chat on non-zero exit status.
- Improved streaming of markdown to the terminal.
- Added `/quit` as alias for `/exit`.
- Added `--skip-check-update` to skip checking for the update on launch.
- Added `--openrouter` as a shortcut for `--openai-api-base https://openrouter.ai/api/v1`
- Fixed bug preventing use of env vars `OPENAI_API_BASE, OPENAI_API_TYPE, OPENAI_API_VERSION, OPENAI_API_DEPLOYMENT_ID`.
### v0.22.0
- Improvements for unified diff editing format.
- Added ! as an alias for /run.
- Autocomplete for /add and /drop now properly quotes filenames with spaces.
- The /undo command asks GPT not to just retry reverted edit.
### v0.21.1
- Bugfix for unified diff editing format.
- Added --4turbo and --4 aliases for --4-turbo.
### v0.21.0
- Support for python 3.12.
- Improvements to unified diff editing format.
- New `--check-update` arg to check if updates are available and exit with status code.
### v0.20.0
- Add images to the chat to automatically use GPT-4 Vision, by @joshuavial
- Bugfixes:
- Improved unicode encoding for `/run` command output, by @ctoth
- Prevent false auto-commits on Windows, by @ctoth
### v0.19.1
- Removed stray debug output.
### v0.19.0
- [Significantly reduced "lazy" coding from GPT-4 Turbo due to new unified diff edit format](https://aider.chat/docs/unified-diffs.html)
- Score improves from 20% to 61% on new "laziness benchmark".
- Aider now uses unified diffs by default for `gpt-4-1106-preview`.
- New `--4-turbo` command line switch as a shortcut for `--model gpt-4-1106-preview`.
### v0.18.1
- Upgraded to new openai python client v1.3.7.
### v0.18.0
- Improved prompting for both GPT-4 and GPT-4 Turbo.
- Far fewer edit errors from GPT-4 Turbo (`gpt-4-1106-preview`).
- Significantly better benchmark results from the June GPT-4 (`gpt-4-0613`). Performance leaps from 47%/64% up to 51%/71%.
- Fixed bug where in-chat files were marked as both read-only and read-write, sometimes confusing GPT.
- Fixed bug to properly handle repos with submodules.
### v0.17.0
- Support for OpenAI's new 11/06 models:
- gpt-4-1106-preview with 128k context window
- gpt-3.5-turbo-1106 with 16k context window
- [Benchmarks for OpenAI's new 11/06 models](https://aider.chat/docs/benchmarks-1106.html)
- Streamlined [API for scripting aider, added docs](https://aider.chat/docs/faq.html#can-i-script-aider)
- Ask for more concise SEARCH/REPLACE blocks. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 63.9%, no regression.
- Improved repo-map support for elisp.
- Fixed crash bug when `/add` used on file matching `.gitignore`
- Fixed misc bugs to catch and handle unicode decoding errors.
### v0.16.3
- Fixed repo-map support for C#.
### v0.16.2
- Fixed docker image.
### v0.16.1
- Updated tree-sitter dependencies to streamline the pip install process
### v0.16.0
- [Improved repository map using tree-sitter](https://aider.chat/docs/repomap.html)
- Switched from "edit block" to "search/replace block", which reduced malformed edit blocks. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 66.2%, no regression.
- Improved handling of malformed edit blocks targeting multiple edits to the same file. [Benchmarked](https://aider.chat/docs/benchmarks.html) at 65.4%, no regression.
- Bugfix to properly handle malformed `/add` wildcards.
### v0.15.0
- Added support for `.aiderignore` file, which instructs aider to ignore parts of the git repo.
- New `--commit` cmd line arg, which just commits all pending changes with a sensible commit message generated by gpt-3.5.
- Added universal ctags and multiple architectures to the [aider docker image](https://aider.chat/docs/docker.html)
- `/run` and `/git` now accept full shell commands, like: `/run (cd subdir; ls)`
- Restored missing `--encoding` cmd line switch.
### v0.14.2
- Easily [run aider from a docker image](https://aider.chat/docs/docker.html)
- Fixed bug with chat history summarization.
- Fixed bug if `soundfile` package not available.
### v0.14.1
- /add and /drop handle absolute filenames and quoted filenames
- /add checks to be sure files are within the git repo (or root)
- If needed, warn users that in-chat file paths are all relative to the git repo
- Fixed /add bug when aider launched in repo subdir
- Show models supported by api/key if requested model isn't available
### v0.14.0
- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9
### v0.13.0
- [Only git commit dirty files that GPT tries to edit](https://aider.chat/docs/faq.html#how-did-v0130-change-git-usage)
- Send chat history as prompt/context for Whisper voice transcription
- Added `--voice-language` switch to constrain `/voice` to transcribe to a specific language
- Late-bind importing `sounddevice`, as it was slowing down aider startup
- Improved --foo/--no-foo switch handling for command line and yml config settings
### v0.12.0
- [Voice-to-code](https://aider.chat/docs/voice.html) support, which allows you to code with your voice.
- Fixed bug where /diff was causing crash.
- Improved prompting for gpt-4, refactor of editblock coder.
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 63.2% for gpt-4/diff, no regression.
### v0.11.1
- Added a progress bar when initially creating a repo map.
- Fixed bad commit message when adding new file to empty repo.
- Fixed corner case of pending chat history summarization when dirty committing.
- Fixed corner case of undefined `text` when using `--no-pretty`.
- Fixed /commit bug from repo refactor, added test coverage.
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.4% for gpt-3.5/whole (no regression).
### v0.11.0
- Automatically summarize chat history to avoid exhausting context window.
- More detail on dollar costs when running with `--no-stream`
- Stronger GPT-3.5 prompt against skipping/eliding code in replies (51.9% [benchmark](https://aider.chat/docs/benchmarks.html), no regression)
- Defend against GPT-3.5 or non-OpenAI models suggesting filenames surrounded by asterisks.
- Refactored GitRepo code out of the Coder class.
### v0.10.1
- /add and /drop always use paths relative to the git root
- Encourage GPT to use language like "add files to the chat" to ask users for permission to edit them.
### v0.10.0
- Added `/git` command to run git from inside aider chats.
- Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages.
- Create a `.gitignore` with `.aider*` to prevent users from accidentally adding aider files to git.
- Check pypi for newer versions and notify user.
- Updated keyboard interrupt logic so that 2 ^C in 2 seconds always forces aider to exit.
- Provide GPT with detailed error if it makes a bad edit block, ask for a retry.
- Force `--no-pretty` if aider detects it is running inside a VSCode terminal.
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 64.7% for gpt-4/diff (no regression)
### v0.9.0
- Support for the OpenAI models in [Azure](https://aider.chat/docs/faq.html#azure)
- Added `--show-repo-map`
- Improved output when retrying connections to the OpenAI API
- Redacted api key from `--verbose` output
- Bugfix: recognize and add files in subdirectories mentioned by user or GPT
- [Benchmarked](https://aider.chat/docs/benchmarks.html) at 53.8% for gpt-3.5-turbo/whole (no regression)
### v0.8.3
- Added `--dark-mode` and `--light-mode` to select colors optimized for terminal background
- Install docs link to [NeoVim plugin](https://github.com/joshuavial/aider.nvim) by @joshuavial
- Reorganized the `--help` output
- Bugfix/improvement to whole edit format, may improve coding editing for GPT-3.5
- Bugfix and tests around git filenames with unicode characters
- Bugfix so that aider throws an exception when OpenAI returns InvalidRequest
- Bugfix/improvement to /add and /drop to recurse selected directories
- Bugfix for live diff output when using "whole" edit format
### v0.8.2
- Disabled general availability of gpt-4 (it's rolling out, not 100% available yet)
### v0.8.1
- Ask to create a git repo if none found, to better track GPT's code changes
- Glob wildcards are now supported in `/add` and `/drop` commands
- Pass `--encoding` into ctags, require it to return `utf-8`
- More robust handling of filepaths, to avoid 8.3 windows filenames
- Added [FAQ](https://aider.chat/docs/faq.html)
- Marked GPT-4 as generally available
- Bugfix for live diffs of whole coder with missing filenames
- Bugfix for chats with multiple files
- Bugfix in editblock coder prompt
### v0.8.0
- [Benchmark comparing code editing in GPT-3.5 and GPT-4](https://aider.chat/docs/benchmarks.html)
- Improved Windows support:
- Fixed bugs related to path separators in Windows
- Added a CI step to run all tests on Windows
- Improved handling of Unicode encoding/decoding
- Explicitly read/write text files with utf-8 encoding by default (mainly benefits Windows)
- Added `--encoding` switch to specify another encoding
- Gracefully handle decoding errors
- Added `--code-theme` switch to control the pygments styling of code blocks (by @kwmiebach)
- Better status messages explaining the reason when ctags is disabled
### v0.7.2
- Fixed a bug to allow aider to edit files that contain triple backtick fences.
### v0.7.1
- Fixed a bug in the display of streaming diffs in GPT-3.5 chats
### v0.7.0
- Graceful handling of context window exhaustion, including helpful tips.
- Added `--message` to give GPT that one instruction and then exit after it replies and any edits are performed.
- Added `--no-stream` to disable streaming GPT responses.
- Non-streaming responses include token usage info.
- Enables display of cost info based on OpenAI advertised pricing.
- Coding competence benchmarking tool, measured against a suite of programming tasks based on Exercism's python repo.
- https://github.com/exercism/python
- Major refactor in preparation for supporting new function calls api.
- Initial implementation of a function based code editing backend for 3.5.
- Initial experiments show that using functions makes 3.5 less competent at coding.
- Limit automatic retries when GPT returns a malformed edit response.
### v0.6.2
- Support for `gpt-3.5-turbo-16k`, and all OpenAI chat models
- Improved ability to correct when gpt-4 omits leading whitespace in code edits
- Added `--openai-api-base` to support API proxies, etc.
### v0.5.0
- Added support for `gpt-3.5-turbo` and `gpt-4-32k`.
- Added `--map-tokens` to set a token budget for the repo map, along with a PageRank based algorithm for prioritizing which files and identifiers to include in the map.
- Added in-chat command `/tokens` to report on context window token usage.
- Added in-chat command `/clear` to clear the conversation history.
<!--[[[end]]]-->

View file

@@ -4,6 +4,7 @@ url: "https://aider.chat"
plugins:
- jekyll-redirect-from
- jekyll-sitemap
- jekyll-feed
defaults:
- scope:
@@ -19,6 +20,7 @@ exclude:
- "**/OLD/**"
- "OLD/**"
- vendor
- feed.xml
aux_links:
"GitHub":

View file

@@ -44,29 +44,6 @@
seconds_per_case: 23.1
total_cost: 0.0000
- dirname: 2024-04-29-19-17-28--deepseek-coder-whole
test_cases: 132
model: deepseek-coder
released: 2024-01-25
edit_format: whole
commit_hash: c07f793-dirty
pass_rate_1: 47.0
pass_rate_2: 54.5
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
user_asks: 0
lazy_comments: 2
syntax_errors: 13
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model deepseek/deepseek-coder
date: 2024-04-29
versions: 0.30.2-dev
seconds_per_case: 26.7
total_cost: 0.0000
- dirname: 2024-05-03-20-47-24--gemini-1.5-pro-diff-fenced
test_cases: 133
model: gemini-1.5-pro-latest
@@ -611,4 +588,97 @@
date: 2024-06-08
versions: 0.37.1-dev
seconds_per_case: 280.6
total_cost: 0.0000
total_cost: 0.0000
- dirname: 2024-06-20-15-09-26--claude-3.5-sonnet-whole
test_cases: 133
model: claude-3.5-sonnet (whole)
edit_format: whole
commit_hash: 068609e
pass_rate_1: 61.7
pass_rate_2: 78.2
percent_cases_well_formed: 100.0
error_outputs: 4
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 2
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet --edit-format whole
date: 2024-06-20
versions: 0.38.1-dev
seconds_per_case: 15.4
total_cost: 0.0000
- dirname: 2024-06-20-15-16-41--claude-3.5-sonnet-diff
test_cases: 133
model: claude-3.5-sonnet (diff)
edit_format: diff
commit_hash: 068609e-dirty
pass_rate_1: 57.9
pass_rate_2: 74.4
percent_cases_well_formed: 97.0
error_outputs: 48
num_malformed_responses: 11
num_with_malformed_responses: 4
user_asks: 0
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-20
versions: 0.38.1-dev
seconds_per_case: 21.6
total_cost: 0.0000
- dirname: 2024-06-17-14-45-54--deepseek-coder2-whole
test_cases: 133
model: DeepSeek Coder V2 (whole)
edit_format: whole
commit_hash: ca8672b
pass_rate_1: 63.9
pass_rate_2: 75.2
percent_cases_well_formed: 100.0
error_outputs: 1
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 1
lazy_comments: 0
syntax_errors: 1
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 7
command: aider --model deepseek/deepseek-coder
date: 2024-06-17
versions: 0.38.1-dev
seconds_per_case: 21.1
total_cost: 0.0537
- dirname: 2024-06-21-15-29-08--deepseek-coder2-diff-again3
test_cases: 133
model: DeepSeek Coder V2 (diff)
edit_format: diff
commit_hash: 515ab3e
pass_rate_1: 58.6
pass_rate_2: 66.2
percent_cases_well_formed: 98.5
error_outputs: 23
num_malformed_responses: 5
num_with_malformed_responses: 2
user_asks: 2
lazy_comments: 0
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 2
command: aider --model deepseek/deepseek-coder
date: 2024-06-21
versions: 0.39.1-dev
seconds_per_case: 30.2
total_cost: 0.0857

View file

@@ -143,4 +143,25 @@
seconds_per_case: 67.8
total_cost: 20.4889
- dirname: 2024-06-20-16-39-18--refac-claude-3.5-sonnet-diff
test_cases: 89
model: claude-3.5-sonnet (diff)
edit_format: diff
commit_hash: e5e07f9
pass_rate_1: 55.1
percent_cases_well_formed: 70.8
error_outputs: 240
num_malformed_responses: 54
num_with_malformed_responses: 26
user_asks: 10
lazy_comments: 2
syntax_errors: 0
indentation_errors: 3
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model openrouter/anthropic/claude-3.5-sonnet
date: 2024-06-20
versions: 0.38.1-dev
seconds_per_case: 51.9
total_cost: 0.0000

View file

@@ -10,7 +10,12 @@ $ cd /to/your/git/repo
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# Or, work with Claude 3 Opus on your repo
# Or, work with Anthropic's models
$ export ANTHROPIC_API_KEY=your-key-goes-here
# Claude 3 Opus
$ aider --opus
# Claude 3.5 Sonnet
$ aider --sonnet
```

View file

@@ -5,6 +5,7 @@
<meta property="og:image" content="{{ site.url }}/assets/aider.jpg">
<meta property="twitter:image" content="{{ site.url }}/assets/aider-square.jpg">
{% endif %}
<link rel="alternate" type="application/rss+xml" title="RSS Feed" href="{{ site.url }}/feed.xml">
<link rel="preconnect" href="https://fonts.gstatic.com">
<link rel="preload" href="https://fonts.googleapis.com/css?family=Open+Sans:400,700&display=swap" as="style" type="text/css" crossorigin>
<meta name="viewport" content="width=device-width, initial-scale=1">

View file

@@ -0,0 +1,9 @@
Aider has special support for providing
OpenAI and Anthropic API keys
via command line switches and yaml config settings.
*All other LLM providers* must
have their keys and settings
specified in environment variables.
This can be done in your shell,
or by using a
[`.env` file](/docs/dotenv.html).
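For example, a provider key can be exported in your shell before launching aider, or added as the same `NAME=value` line to your `.env` file. A minimal sketch, using two variable names that appear in the sample `.env` elsewhere in these docs:

```
# Set only the keys for the providers you actually use (Mac/Linux shell syntax).
export GROQ_API_KEY=<key>
export OPENROUTER_API_KEY=<key>

# ...or put the equivalent NAME=value lines in your .env file instead.
```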

View file

@@ -1,5 +1,6 @@
##########################################################
# Sample .aider.conf.yaml
# This file lists *all* the valid configuration entries.
# Place in your home dir, or at the root of your git repo.
##########################################################
@@ -12,8 +13,8 @@
#######
# Main:
## Use VI editing mode in the terminal (default: False)
#vim: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
## Specify the OpenAI API key
#openai-api-key:
@@ -27,7 +28,7 @@
## Use claude-3-opus-20240229 model for the main chat
#opus: false
## Use claude-3-sonnet-20240229 model for the main chat
## Use claude-3-5-sonnet-20240620 model for the main chat
#sonnet: false
## Use gpt-4-0613 model for the main chat
@@ -63,6 +64,9 @@
## Specify the OpenAI organization ID
#openai-organization-id:
## Verify the SSL cert when connecting to models (default: True)
#verify-ssl: true
## Specify a file with context window and costs for unknown models
#model-metadata-file:
@@ -177,6 +181,9 @@
#################
# Other Settings:
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en

229
website/assets/sample.env Normal file
View file

@@ -0,0 +1,229 @@
##########################################################
# Sample aider .env file.
# Place at the root of your git repo.
# Or use `aider --env <fname>` to specify.
##########################################################
#################
# LLM parameters:
#
# Include xxx_API_KEY parameters and other params needed for your LLMs.
# See https://aider.chat/docs/llms.html for details.
## OpenAI
#OPENAI_API_KEY=
## Anthropic
#ANTHROPIC_API_KEY=
##...
#######
# Main:
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#AIDER_LLM_HISTORY_FILE=
## Specify the OpenAI API key
#OPENAI_API_KEY=
## Specify the Anthropic API key
#ANTHROPIC_API_KEY=
## Specify the model to use for the main chat (default: gpt-4o)
#AIDER_MODEL=gpt-4o
## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS=
## Use claude-3-5-sonnet-20240620 model for the main chat
#AIDER_SONNET=
## Use gpt-4-0613 model for the main chat
#AIDER_4=
## Use gpt-4o model for the main chat
#AIDER_4O=
## Use gpt-4-1106-preview model for the main chat
#AIDER_4_TURBO=
## Use gpt-3.5-turbo model for the main chat
#AIDER_35TURBO=
#################
# Model Settings:
## List known models which match the (partial) MODEL name
#AIDER_MODELS=
## Specify the api base url
#OPENAI_API_BASE=
## Specify the api_type
#OPENAI_API_TYPE=
## Specify the api_version
#OPENAI_API_VERSION=
## Specify the deployment_id
#OPENAI_API_DEPLOYMENT_ID=
## Specify the OpenAI organization ID
#OPENAI_ORGANIZATION_ID=
## Verify the SSL cert when connecting to models (default: True)
#AIDER_VERIFY_SSL=true
## Specify a file with context window and costs for unknown models
#AIDER_MODEL_METADATA_FILE=
## Specify what edit format the LLM should use (default depends on model)
#AIDER_EDIT_FORMAT=
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#AIDER_WEAK_MODEL=
## Only work with models that have meta-data available (default: True)
#AIDER_SHOW_MODEL_WARNINGS=true
## Max number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=1024
## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=
## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env
################
# History Files:
## Specify the chat input history file (default: .aider.input.history)
#AIDER_INPUT_HISTORY_FILE=.aider.input.history
## Specify the chat history file (default: .aider.chat.history.md)
#AIDER_CHAT_HISTORY_FILE=.aider.chat.history.md
## Restore the previous chat history messages (default: False)
#AIDER_RESTORE_CHAT_HISTORY=false
##################
# Output Settings:
## Use colors suitable for a dark terminal background (default: False)
#AIDER_DARK_MODE=false
## Use colors suitable for a light terminal background (default: False)
#AIDER_LIGHT_MODE=false
## Enable/disable pretty, colorized output (default: True)
#AIDER_PRETTY=true
## Enable/disable streaming responses (default: True)
#AIDER_STREAM=true
## Set the color for user input (default: #00cc00)
#AIDER_USER_INPUT_COLOR=#00cc00
## Set the color for tool output (default: None)
#AIDER_TOOL_OUTPUT_COLOR=
## Set the color for tool error messages (default: red)
#AIDER_TOOL_ERROR_COLOR=#FF2222
## Set the color for assistant output (default: #0088ff)
#AIDER_ASSISTANT_OUTPUT_COLOR=#0088ff
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#AIDER_CODE_THEME=default
## Show diffs when committing changes (default: False)
#AIDER_SHOW_DIFFS=false
###############
# Git Settings:
## Enable/disable looking for a git repo (default: True)
#AIDER_GIT=true
## Enable/disable adding .aider* to .gitignore (default: True)
#AIDER_GITIGNORE=true
## Specify the aider ignore file (default: .aiderignore in git root)
#AIDER_AIDERIGNORE=.aiderignore
## Enable/disable auto commit of LLM changes (default: True)
#AIDER_AUTO_COMMITS=true
## Enable/disable commits when repo is found dirty (default: True)
#AIDER_DIRTY_COMMITS=true
## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false
########################
# Fixing and committing:
## Commit all pending changes with a suitable commit message, then exit
#AIDER_COMMIT=false
## Lint and fix provided files, or dirty files if none provided
#AIDER_LINT=false
## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#AIDER_LINT_CMD=
## Enable/disable automatic linting after changes (default: True)
#AIDER_AUTO_LINT=true
## Specify command to run tests
#AIDER_TEST_CMD=
## Enable/disable automatic testing after changes (default: False)
#AIDER_AUTO_TEST=false
## Run tests and fix problems found
#AIDER_TEST=false
#################
# Other Settings:
## Use VI editing mode in the terminal (default: False)
#AIDER_VIM=false
## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en
## Check for updates and return status in the exit code
#AIDER_CHECK_UPDATE=false
## Skips checking for the update when the program runs
#AIDER_SKIP_CHECK_UPDATE=false
## Apply the changes from the given file instead of running the chat (debug)
#AIDER_APPLY=
## Always say yes to every confirmation
#AIDER_YES=
## Enable verbose output
#AIDER_VERBOSE=false
## Print the repo map and exit (debug)
#AIDER_SHOW_REPO_MAP=false
## Print the system prompts and exit (debug)
#AIDER_SHOW_PROMPTS=false
## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#AIDER_MESSAGE=
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#AIDER_MESSAGE_FILE=
## Specify the encoding for input and output (default: utf-8)
#AIDER_ENCODING=utf-8
## Run aider in your browser
#AIDER_GUI=false

View file

@@ -10,6 +10,8 @@ Most of aider's options can be set in an `.aider.conf.yml` file,
which can be placed in your home directory or at the root of
your git repo.
{% include special-keys.md %}
Below is a sample of the file, which you
can also
[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/website/assets/sample.aider.conf.yml).
@@ -26,6 +28,7 @@ cog.outl("```")
```
##########################################################
# Sample .aider.conf.yaml
# This file lists *all* the valid configuration entries.
# Place in your home dir, or at the root of your git repo.
##########################################################
@@ -38,8 +41,8 @@ cog.outl("```")
#######
# Main:
## Use VI editing mode in the terminal (default: False)
#vim: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file:
## Specify the OpenAI API key
#openai-api-key:
@@ -53,7 +56,7 @@ cog.outl("```")
## Use claude-3-opus-20240229 model for the main chat
#opus: false
## Use claude-3-sonnet-20240229 model for the main chat
## Use claude-3-5-sonnet-20240620 model for the main chat
#sonnet: false
## Use gpt-4-0613 model for the main chat
@@ -89,6 +92,9 @@ cog.outl("```")
## Specify the OpenAI organization ID
#openai-organization-id:
## Verify the SSL cert when connecting to models (default: True)
#verify-ssl: true
## Specify a file with context window and costs for unknown models
#model-metadata-file:
@@ -203,6 +209,9 @@ cog.outl("```")
#################
# Other Settings:
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en

View file

@@ -4,28 +4,263 @@ nav_order: 900
description: Using a .env file to store LLM API keys for aider.
---
# Storing LLM params in .env
# Config with .env
You can use a `.env` file to store API keys and other settings for the
models you use with aider.
You currently can not set general aider options
in the `.env` file, only LLM environment variables.
You can also set many general aider options
in the `.env` file.
{% include special-keys.md %}
Aider will look for a `.env` file in the
root of your git repo or in the current directory.
You can give it an explicit file to load with the `--env-file <filename>` parameter.
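For example (the path below is just an illustration):

```
# Load keys and settings from a specific file instead of the default .env
aider --env-file /path/to/custom.env
```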
Here is an example `.env` file:
Below is a sample `.env` file, which you
can also
[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/website/assets/sample.env).
<!--[[[cog
from aider.args import get_sample_dotenv
from pathlib import Path
text=get_sample_dotenv()
Path("website/assets/sample.env").write_text(text)
cog.outl("```")
cog.out(text)
cog.outl("```")
]]]-->
```
OPENAI_API_KEY=<key>
ANTHROPIC_API_KEY=<key>
GROQ_API_KEY=<key>
OPENROUTER_API_KEY=<key>
##########################################################
# Sample aider .env file.
# Place at the root of your git repo.
# Or use `aider --env <fname>` to specify.
##########################################################
AZURE_API_KEY=<key>
AZURE_API_VERSION=2023-05-15
AZURE_API_BASE=https://example-endpoint.openai.azure.com
#################
# LLM parameters:
#
# Include xxx_API_KEY parameters and other params needed for your LLMs.
# See https://aider.chat/docs/llms.html for details.
OLLAMA_API_BASE=http://127.0.0.1:11434
## OpenAI
#OPENAI_API_KEY=
## Anthropic
#ANTHROPIC_API_KEY=
##...
#######
# Main:
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#AIDER_LLM_HISTORY_FILE=
## Specify the OpenAI API key
#OPENAI_API_KEY=
## Specify the Anthropic API key
#ANTHROPIC_API_KEY=
## Specify the model to use for the main chat (default: gpt-4o)
#AIDER_MODEL=gpt-4o
## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS=
## Use claude-3-5-sonnet-20240620 model for the main chat
#AIDER_SONNET=
## Use gpt-4-0613 model for the main chat
#AIDER_4=
## Use gpt-4o model for the main chat
#AIDER_4O=
## Use gpt-4-1106-preview model for the main chat
#AIDER_4_TURBO=
## Use gpt-3.5-turbo model for the main chat
#AIDER_35TURBO=
#################
# Model Settings:
## List known models which match the (partial) MODEL name
#AIDER_MODELS=
## Specify the api base url
#OPENAI_API_BASE=
## Specify the api_type
#OPENAI_API_TYPE=
## Specify the api_version
#OPENAI_API_VERSION=
## Specify the deployment_id
#OPENAI_API_DEPLOYMENT_ID=
## Specify the OpenAI organization ID
#OPENAI_ORGANIZATION_ID=
## Verify the SSL cert when connecting to models (default: True)
#AIDER_VERIFY_SSL=true
## Specify a file with context window and costs for unknown models
#AIDER_MODEL_METADATA_FILE=
## Specify what edit format the LLM should use (default depends on model)
#AIDER_EDIT_FORMAT=
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#AIDER_WEAK_MODEL=
## Only work with models that have meta-data available (default: True)
#AIDER_SHOW_MODEL_WARNINGS=true
## Max number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=1024
## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=
## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env
################
# History Files:
## Specify the chat input history file (default: .aider.input.history)
#AIDER_INPUT_HISTORY_FILE=.aider.input.history
## Specify the chat history file (default: .aider.chat.history.md)
#AIDER_CHAT_HISTORY_FILE=.aider.chat.history.md
## Restore the previous chat history messages (default: False)
#AIDER_RESTORE_CHAT_HISTORY=false
##################
# Output Settings:
## Use colors suitable for a dark terminal background (default: False)
#AIDER_DARK_MODE=false
## Use colors suitable for a light terminal background (default: False)
#AIDER_LIGHT_MODE=false
## Enable/disable pretty, colorized output (default: True)
#AIDER_PRETTY=true
## Enable/disable streaming responses (default: True)
#AIDER_STREAM=true
## Set the color for user input (default: #00cc00)
#AIDER_USER_INPUT_COLOR=#00cc00
## Set the color for tool output (default: None)
#AIDER_TOOL_OUTPUT_COLOR=
## Set the color for tool error messages (default: red)
#AIDER_TOOL_ERROR_COLOR=#FF2222
## Set the color for assistant output (default: #0088ff)
#AIDER_ASSISTANT_OUTPUT_COLOR=#0088ff
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#AIDER_CODE_THEME=default
## Show diffs when committing changes (default: False)
#AIDER_SHOW_DIFFS=false
###############
# Git Settings:
## Enable/disable looking for a git repo (default: True)
#AIDER_GIT=true
## Enable/disable adding .aider* to .gitignore (default: True)
#AIDER_GITIGNORE=true
## Specify the aider ignore file (default: .aiderignore in git root)
#AIDER_AIDERIGNORE=.aiderignore
## Enable/disable auto commit of LLM changes (default: True)
#AIDER_AUTO_COMMITS=true
## Enable/disable commits when repo is found dirty (default: True)
#AIDER_DIRTY_COMMITS=true
## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false
########################
# Fixing and committing:
## Commit all pending changes with a suitable commit message, then exit
#AIDER_COMMIT=false
## Lint and fix provided files, or dirty files if none provided
#AIDER_LINT=false
## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#AIDER_LINT_CMD=
## Enable/disable automatic linting after changes (default: True)
#AIDER_AUTO_LINT=true
## Specify command to run tests
#AIDER_TEST_CMD=
## Enable/disable automatic testing after changes (default: False)
#AIDER_AUTO_TEST=false
## Run tests and fix problems found
#AIDER_TEST=false
#################
# Other Settings:
## Use VI editing mode in the terminal (default: False)
#AIDER_VIM=false
## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en
## Check for updates and return status in the exit code
#AIDER_CHECK_UPDATE=false
## Skips checking for the update when the program runs
#AIDER_SKIP_CHECK_UPDATE=false
## Apply the changes from the given file instead of running the chat (debug)
#AIDER_APPLY=
## Always say yes to every confirmation
#AIDER_YES=
## Enable verbose output
#AIDER_VERBOSE=false
## Print the repo map and exit (debug)
#AIDER_SHOW_REPO_MAP=false
## Print the system prompts and exit (debug)
#AIDER_SHOW_PROMPTS=false
## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#AIDER_MESSAGE=
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#AIDER_MESSAGE_FILE=
## Specify the encoding for input and output (default: utf-8)
#AIDER_ENCODING=utf-8
## Run aider in your browser
#AIDER_GUI=false
```
<!--[[[end]]]-->

View file

@@ -1,9 +1,9 @@
---
nav_order: 85
nav_order: 90
description: Frequently asked questions about aider.
---
# Frequently asked questions
# FAQ
{: .no_toc }
- TOC

View file

@@ -1,4 +1,5 @@
---
parent: More info
nav_order: 800
description: Aider is tightly integrated with git.
---
@@ -14,14 +15,21 @@ Aider is tightly integrated with git, which makes it easy to:
Aider specifically uses git in these ways:
- It asks to create a git repo if you launch it in a directory without one.
- Whenever aider edits a file, it commits those changes with a descriptive commit message. This makes it easy to undo or review aider's changes.
- Aider takes special care before editing files that already have uncommitted changes (dirty files). Aider will first commit any preexisting changes with a descriptive commit message. This keeps your edits separate from aider's edits, and makes sure you never lose your work if aider makes an inappropriate change.
- It asks to create a git repo if you launch it in a directory without one.
- Whenever aider edits a file, it commits those changes with a descriptive commit message. This makes it easy to undo or review aider's changes.
These commits will have "(aider)" appended to their git author and git committer metadata.
- Aider takes special care before editing files that already have uncommitted changes (dirty files). Aider will first commit any preexisting changes with a descriptive commit message.
This keeps your edits separate from aider's edits, and makes sure you never lose your work if aider makes an inappropriate change.
These commits will have "(aider)" appended to their git committer metadata.
## In-chat commands
Aider also allows you to use in-chat commands to `/diff` or `/undo` the last change.
To do more complex management of your git history, you can use raw `git` commands,
either by using `/git` within the chat, or with standard git tools outside of aider.
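For example, plain `git log` makes it easy to spot aider's commits, since the "(aider)" suffix described above appears in the author and committer names:

```
# Show recent commits with their author and committer names;
# commits made by aider carry "(aider)" in those names.
git log -5 --pretty=format:'%h  %an / %cn  %s'
```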
## Disabling git integration
While it is not recommended, you can disable aider's use of git in a few ways:
- `--no-auto-commits` will stop aider from git committing each of its changes.

View file

@@ -1,4 +1,5 @@
---
parent: More info
nav_order: 900
description: Aider supports pretty much all popular coding languages.
---

View file

@@ -16,6 +16,20 @@ While [aider can connect to almost any LLM](/docs/llms.html),
it works best with models that score well on the benchmarks.
## Claude 3.5 Sonnet takes the top spot
Claude 3.5 Sonnet is now the top ranked model on aider's code editing leaderboard.
DeepSeek Coder V2 only spent 4 days in the top spot.
The new Sonnet came in 3rd on aider's refactoring leaderboard, behind GPT-4o and Opus.
Sonnet ranked #1 when using the "whole" editing format,
but it also scored very well with
aider's "diff" editing format.
This format allows it to return code changes as diffs -- saving time and token costs,
and making it practical to work with larger source files.
As such, aider uses "diff" by default with this new Sonnet model.
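To try the new model yourself, the launch commands are the same ones used for the benchmark runs above, either via OpenRouter or directly via Anthropic with the `--sonnet` switch:

```
# Via Anthropic directly (requires ANTHROPIC_API_KEY)
aider --sonnet

# Or via OpenRouter, as in the benchmark runs above
aider --model openrouter/anthropic/claude-3.5-sonnet
```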
## Code editing leaderboard
[Aider's code editing benchmark](/docs/benchmarks.html#the-benchmark) asks the LLM to edit python source files to complete 133 small coding exercises. This benchmark measures the LLM's coding ability, but also whether it can consistently emit code edits in the format specified in the system prompt.

View file

@@ -10,22 +10,27 @@ description: Aider can connect to most LLMs for AI pair programming.
[![connecting to many LLMs](/assets/llms.jpg)](https://aider.chat/assets/llms.jpg)
## Best models
{: .no_toc }
**Aider works best with [GPT-4o](/docs/llms/openai.html) and
[Claude 3 Opus](/docs/llms/anthropic.html),**
as they are the very best models for editing code.
Aider works best with these models, which are skilled at editing code:
- [GPT-4o](/docs/llms/openai.html)
- [Claude 3.5 Sonnet](/docs/llms/anthropic.html)
- [Claude 3 Opus](/docs/llms/anthropic.html)
- [DeepSeek Coder V2](/docs/llms/deepseek.html)
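As a quick reference, here are the launch commands for these models, assuming the relevant API keys are already configured (see each provider page):

```
# Assumes the matching API key is already set for each provider.
aider                                   # GPT-4o (aider's default model)
aider --sonnet                          # Claude 3.5 Sonnet
aider --opus                            # Claude 3 Opus
aider --model deepseek/deepseek-coder   # DeepSeek Coder V2
```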
## Free models
{: .no_toc }
Aider works with a number of **free** API providers:
- Google's [Gemini 1.5 Pro](/docs/llms/gemini.html) is the most capable free model to use with aider, with
- The [DeepSeek Coder V2](/docs/llms/deepseek.html) model gets the top score on aider's code editing benchmark. DeepSeek has been offering 5M free tokens of API usage.
- Google's [Gemini 1.5 Pro](/docs/llms/gemini.html) works with aider, with
code editing capabilities similar to GPT-3.5.
- You can use [Llama 3 70B on Groq](/docs/llms/groq.html) which is comparable to GPT-3.5 in code editing performance.
- The [Deepseek Chat v2](/docs/llms/deepseek.html) model works well with aider, better than GPT-3.5. Deepseek currently offers 5M free tokens of API usage.
- Cohere also offers free API access to their [Command-R+ model](/docs/llms/cohere.html), which works with aider as a *very basic* coding assistant.
## Local models

View file

@@ -22,7 +22,7 @@ setx ANTHROPIC_API_KEY <key> # Windows
# Claude 3 Opus
aider --opus
# Claude 3 Sonnet
# Claude 3.5 Sonnet
aider --sonnet
# List models available from Anthropic

View file

@@ -25,3 +25,6 @@ aider --model azure/<your_deployment_name>
# List models available from Azure
aider --models azure/
```
Note that aider will also use environment variables
like `AZURE_OPENAI_API_xxx`.
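For example, the Azure settings can be exported in your shell before running aider; the variable names below are taken from the sample `.env` shown elsewhere in these docs, and the values are placeholders:

```
# Placeholder values -- use your own Azure OpenAI resource details.
export AZURE_API_KEY=<key>
export AZURE_API_VERSION=2023-05-15
export AZURE_API_BASE=https://example-endpoint.openai.azure.com

aider --model azure/<your_deployment_name>
```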

View file

@@ -3,10 +3,11 @@ parent: Connecting to LLMs
nav_order: 500
---
# Deepseek
# DeepSeek
Aider can connect to the Deepseek.com API.
Deepseek appears to grant 5M tokens of free API usage to new accounts.
Aider can connect to the DeepSeek.com API.
The DeepSeek Coder V2 model gets the top score on aider's code editing benchmark.
DeepSeek appears to grant 5M tokens of free API usage to new accounts.
```
pip install aider-chat
@@ -14,8 +15,8 @@ pip install aider-chat
export DEEPSEEK_API_KEY=<key> # Mac/Linux
setx DEEPSEEK_API_KEY <key> # Windows
# Use Deepseek Chat v2
aider --model deepseek/deepseek-chat
# Use DeepSeek Coder V2
aider --model deepseek/deepseek-coder
```
See the [model warnings](warnings.html)

View file

@@ -0,0 +1,8 @@
---
has_children: true
nav_order: 85
---
# More info
See below for more info about aider, including some advanced topics.

View file

@@ -21,6 +21,15 @@ usage: aider [-h] [--vim] [--openai-api-key] [--anthropic-api-key]
[--openai-api-deployment-id] [--openai-organization-id]
[--model-settings-file] [--model-metadata-file]
[--edit-format] [--weak-model]
usage: aider [-h] [--llm-history-file] [--openai-api-key]
[--anthropic-api-key] [--model] [--opus] [--sonnet]
[--4] [--4o] [--4-turbo] [--35turbo] [--models]
[--openai-api-base] [--openai-api-type]
[--openai-api-version] [--openai-api-deployment-id]
[--openai-organization-id]
[--verify-ssl | --no-verify-ssl]
[--model-metadata-file] [--edit-format] [--weak-model]
[--show-model-warnings | --no-show-model-warnings]
[--map-tokens] [--max-chat-history-tokens] [--env-file]
[--input-history-file] [--chat-history-file]
@@ -36,7 +45,7 @@ usage: aider [-h] [--vim] [--openai-api-key] [--anthropic-api-key]
[--dry-run | --no-dry-run] [--commit] [--lint]
[--lint-cmd] [--auto-lint | --no-auto-lint]
[--test-cmd] [--auto-test | --no-auto-test] [--test]
[--voice-language] [--version] [--check-update]
[--vim] [--voice-language] [--version] [--check-update]
[--skip-check-update] [--apply] [--yes] [-v]
[--show-repo-map] [--show-prompts] [--message]
[--message-file] [--encoding] [-c] [--gui]
@@ -53,10 +62,9 @@ Aliases:
## Main:
### `--vim`
Use VI editing mode in the terminal (default: False)
Default: False
Environment variable: `AIDER_VIM`
### `--llm-history-file LLM_HISTORY_FILE`
Log the conversation with the LLM to this file (for example, .aider.llm.history)
Environment variable: `AIDER_LLM_HISTORY_FILE`
### `--openai-api-key OPENAI_API_KEY`
Specify the OpenAI API key
@@ -76,7 +84,7 @@ Use claude-3-opus-20240229 model for the main chat
Environment variable: `AIDER_OPUS`
### `--sonnet`
Use claude-3-sonnet-20240229 model for the main chat
Use claude-3-5-sonnet-20240620 model for the main chat
Environment variable: `AIDER_SONNET`
### `--4`
@@ -132,6 +140,14 @@ Environment variable: `OPENAI_ORGANIZATION_ID`
### `--model-settings-file MODEL_FILE`
Specify a file with aider model settings for unknown models
Environment variable: `AIDER_MODEL_SETTINGS_FILE`
### `--verify-ssl`
Verify the SSL cert when connecting to models (default: True)
Default: True
Environment variable: `AIDER_VERIFY_SSL`
Aliases:
- `--verify-ssl`
- `--no-verify-ssl`
### `--model-metadata-file MODEL_FILE`
Specify a file with context window and costs for unknown models
@@ -336,6 +352,11 @@ Environment variable: `AIDER_TEST`
## Other Settings:
### `--vim`
Use VI editing mode in the terminal (default: False)
Default: False
Environment variable: `AIDER_VIM`
### `--voice-language VOICE_LANGUAGE`
Specify the language for voice using ISO 639-1 code (default: auto)
Default: en

View file

@@ -1,4 +1,5 @@
---
parent: More info
highlight_image: /assets/robot-ast.png
nav_order: 900
description: Aider uses a map of your git repository to provide code context to LLMs.
@@ -12,23 +13,22 @@ Aider
uses a **concise map of your whole git repository**
that includes
the most important classes and functions along with their types and call signatures.
This helps the LLM understand the code it needs to change,
This helps aider understand the code it's editing
and how it relates to the other parts of the codebase.
The repo map also helps the LLM write new code
The repo map also helps aider write new code
that respects and utilizes existing libraries, modules and abstractions
found elsewhere in the codebase.
## Using a repo map to provide context
Aider sends a **repo map** to the LLM along with
each request from the user to make a code change.
The map contains a list of the files in the
each change request from the user.
The repo map contains a list of the files in the
repo, along with the key symbols which are defined in each file.
It shows how each of these symbols are defined in the
source code, by including the critical lines of code for each definition.
It shows how each of these symbols are defined, by including the critical lines of code for each definition.
Here's a
sample of the map of the aider repo, just showing the maps of
Here's a part of
the repo map of aider's repo, for
[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
and
[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
@@ -71,7 +71,7 @@ aider/commands.py:
Mapping out the repo like this provides some key benefits:
- The LLM can see classes, methods and function signatures from everywhere in the repo. This alone may give it enough context to solve many tasks. For example, it can probably figure out how to use the API exported from a module just based on the details shown in the map.
- If it needs to see more code, the LLM can use the map to figure out by itself which files it needs to look at in more detail. The LLM will then ask to see these specific files, and aider will automatically add them to the chat context.
- If it needs to see more code, the LLM can use the map to figure out which files it needs to look at. The LLM can ask to see these specific files, and aider will offer to add them to the chat context.
## Optimizing the map

View file

@@ -1,4 +1,5 @@
---
parent: More info
nav_order: 900
description: You can script aider via the command line or python.
---
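A minimal command-line scripting sketch, using the `--message` and `--yes` switches documented in the option reference; the file name and instruction below are just examples:

```
# Apply a single instruction non-interactively, then exit.
aider --yes --message "add a --quiet flag" app.py
```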

View file

@@ -33,8 +33,8 @@ so editing errors are probably unavoidable.
## Reduce distractions
Many LLMs now have very large context windows,
but filling them with irrelevant code often
confuses the model.
but filling them with irrelevant code or conversation
can confuse the model.
- Don't add too many files to the chat, *just* add the files you think need to be edited.
Aider also sends the LLM a [map of your entire git repo](https://aider.chat/docs/repomap.html), so other relevant code will be included automatically.

View file

@@ -0,0 +1,86 @@
---
parent: Troubleshooting
nav_order: 25
---
# Token limits
Every LLM has limits on how many tokens it can process for each request:
- The model's **context window** limits how many total tokens of
*input and output* it can process.
- Each model has a limit on how many **output tokens** it can
produce.
Aider will report an error if a model responds indicating that
it has exceeded a token limit.
The error will include suggested actions to try and
avoid hitting token limits.
Here's an example error:
```
Model gpt-3.5-turbo has hit a token limit!
Input tokens: 768 of 16385
Output tokens: 4096 of 4096 -- exceeded output limit!
Total tokens: 4864 of 16385
To reduce output tokens:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Try using a stronger model like gpt-4o or opus that can return diffs.
For more info: https://aider.chat/docs/token-limits.html
```
## Input tokens & context window size
The most common problem is trying to send too much data to a
model,
overflowing its context window.
Technically you can exhaust the context window if the input is
too large or if the input plus output are too large.
Strong models like GPT-4o and Opus have quite
large context windows, so this sort of error is
typically only an issue when working with weaker models.
The easiest solution is to try and reduce the input tokens
by removing files from the chat.
It's best to only add the files that aider will need to *edit*
to complete your request.
- Use `/tokens` to see token usage.
- Use `/drop` to remove unneeded files from the chat session.
- Use `/clear` to clear the chat history.
- Break your code into smaller source files.
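Beyond the steps above, the repo map itself consumes input tokens. Here is a sketch of shrinking or disabling it with the `--map-tokens` switch; the file name and budget below are just examples:

```
# Start aider with only the file you want edited, and a smaller repo map budget.
# Use --map-tokens 0 to disable the repo map entirely.
aider --map-tokens 512 src/billing.py
```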
## Output token limits
Most models have quite small output limits, often as low
as 4k tokens.
If you ask aider to make a large change that affects a lot
of code, the LLM may hit output token limits
as it tries to send back all the changes.
To avoid hitting output token limits:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Try using a stronger model like gpt-4o or opus that can return diffs.
## Other causes
Sometimes token limit errors are caused by
non-compliant API proxy servers
or bugs in the API server you are using to host a local model.
Aider has been well tested when directly connecting to
major
[LLM provider cloud APIs](https://aider.chat/docs/llms.html).
For serving local models,
[Ollama](https://aider.chat/docs/llms/ollama.html) is known to work well with aider.
Try using aider without an API proxy server
or directly with one of the recommended cloud APIs
and see if your token limit problems resolve.
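For instance, when hosting a local model, a direct connection to Ollama looks roughly like the sketch below; the `OLLAMA_API_BASE` value matches the sample `.env` elsewhere in these docs, and the `ollama/...` model name is only an assumption, so substitute whatever model you actually serve:

```
# Connect straight to a local Ollama server, with no proxy in between.
export OLLAMA_API_BASE=http://127.0.0.1:11434   # value from the sample .env
aider --model ollama/llama3                     # model name is just an example
```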

View file

@@ -10,11 +10,31 @@ Run `aider` with the source code files you want to edit.
These files will be "added to the chat session", so that
aider can see their
contents and edit them for you.
They can be existing files or the names of files you want
aider to create for you.
```
aider <file1> <file2> ...
```
At the aider `>` prompt, ask for code changes and aider
will edit those files to accomplish your request.
```
$ aider factorial.py
Aider v0.37.1-dev
Models: gpt-4o with diff edit format, weak model gpt-3.5-turbo
Git repo: .git with 258 files
Repo-map: using 1024 tokens
Use /help to see in-chat commands, run with --help to see cmd line args
───────────────────────────────────────────────────────────────────────
> Make a program that asks for a number and prints its factorial
...
```
## Adding files
Just add the files that aider will need to *edit*.

View file

@@ -4,16 +4,24 @@ nav_order: 1
---
<!--[[[cog
cog.out(open("README.md").read())
# This page is a copy of README.md, adding the front matter above.
# Remove any cog markup before inserting the README text.
text = open("README.md").read()
text = text.replace('['*3 + 'cog', ' NOOP ')
text = text.replace('['*3 + 'end', ' NOOP ')
text = text.replace(']'*3, '')
cog.out(text)
]]]-->
<!-- Edit README.md, not index.md -->
# Aider is AI pair programming in your terminal
Aider lets you pair program with LLMs,
to edit code in your local git repository.
Start a new project or work with an existing git repo.
Aider works best with GPT-4o and Claude 3 Opus
and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
Aider can [connect to almost any LLM](https://aider.chat/docs/llms.html)
and works best with GPT-4o, Claude 3.5 Sonnet, Claude 3 Opus and DeepSeek Coder V2.
<p align="center">
<img
@@ -32,59 +40,82 @@ and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
</p>
## Getting started
<!-- NOOP
# We can't do this here: {% include get-started.md %}
# Because this page is rendered by GitHub as the repo README
cog.out(open("website/_includes/get-started.md").read())
-->
You can get started quickly like this:
{% include get-started.md %}
```
$ pip install aider-chat
**See the
# Change directory into a git repo
$ cd /to/your/git/repo
# Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# Or, work with Anthropic's models
$ export ANTHROPIC_API_KEY=your-key-goes-here
# Claude 3 Opus
$ aider --opus
# Claude 3.5 Sonnet
$ aider --sonnet
```
<!-- NOOP -->
See the
[installation instructions](https://aider.chat/docs/install.html)
and other
[documentation](https://aider.chat/docs/usage.html)
for more details.**
for more details.
## Features
- Chat with aider about your code: `aider <file1> <file2> ...`
- Run aider with the files you want to edit: `aider <file1> <file2> ...`
- Ask for changes:
- New features, test cases, improvements.
- Bug fixes, updated docs or code refactors.
- Paste in a GitHub issue that needs to be solved.
- Add new features or test cases.
- Describe a bug.
- Paste in an error message or GitHub issue URL.
- Refactor code.
- Update docs.
- Aider will edit your files to complete your request.
- Aider [automatically git commits](https://aider.chat/docs/git.html) changes with a sensible commit message.
- Aider works with [most popular languages](https://aider.chat/docs/languages.html): python, javascript, typescript, php, html, css, and more...
- Aider works best with GPT-4o and Claude 3 Opus
and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
- Aider can make coordinated changes across multiple files at once.
- Aider can edit multiple files at once for complex requests.
- Aider uses a [map of your entire git repo](https://aider.chat/docs/repomap.html), which helps it work well in larger codebases.
- You can also edit files in your editor while chatting with aider.
Aider will notice and always use the latest version.
So you can bounce back and forth between aider and your editor, to collaboratively code with AI.
- Images can be added to the chat (GPT-4o, GPT-4 Turbo, etc).
- URLs can be added to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html) using speech recognition.
- Edit files in your editor while chatting with aider,
and it will always use the latest version.
Pair program with AI.
- Add images to the chat (GPT-4o, GPT-4 Turbo, etc).
- Add URLs to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html).
## State of the art
## Top tier performance
Aider has the
[top score on SWE Bench](https://aider.chat/2024/06/02/main-swe-bench.html).
[Aider has one of the top scores on SWE Bench](https://aider.chat/2024/06/02/main-swe-bench.html).
SWE Bench is a challenging software engineering benchmark where aider
solved *real* GitHub issues from popular open source
projects like django, scikitlearn, matplotlib, etc.
<p align="center">
<a href="https://aider.chat/2024/06/02/main-swe-bench.html">
<img src="https://aider.chat/assets/swe_bench.svg" alt="aider swe bench">
</a>
</p>
## Documentation
## More info
- [Documentation](https://aider.chat/)
- [Installation](https://aider.chat/docs/install.html)
- [Usage](https://aider.chat/docs/usage.html)
- [Tutorial videos](https://aider.chat/docs/tutorials.html)
- [Connecting to LLMs](https://aider.chat/docs/llms.html)
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [FAQ](https://aider.chat/docs/faq.html)
- [GitHub](https://github.com/paul-gauthier/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)