Merge branch 'main' into json-coders

Paul Gauthier, 2024-08-13 17:03:30 -07:00
commit e1b83ba6b5
81 changed files with 2210 additions and 720 deletions


@@ -21,12 +21,12 @@ jobs:
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          pip install setuptools wheel twine importlib-metadata==7.2.1
+          pip install build setuptools wheel twine importlib-metadata==7.2.1
       - name: Build and publish
         env:
           TWINE_USERNAME: __token__
           TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
         run: |
-          python setup.py sdist bdist_wheel
+          python -m build
           twine upload dist/*

.gitignore

@@ -5,6 +5,7 @@ aider.code-workspace
 .aider*
 aider_chat.egg-info/
 build
+dist/
 Gemfile.lock
 _site
 .jekyll-cache/


@@ -1,6 +1,51 @@
 # Release history
 
+### Aider v0.50.0
+
+- Infinite output for DeepSeek Coder and Mistral models, in addition to Anthropic's models.
+- New `--deepseek` switch to use DeepSeek Coder.
+- DeepSeek Coder uses 8k token output.
+- New `--chat-mode <mode>` switch to launch in ask/help/code modes.
+- New `/code <message>` command to request a code edit while in `ask` mode.
+- Web scraper is more robust if page never idles.
+- Improved token and cost reporting for infinite output.
+- Improvements and bug fixes for `/read`-only files.
+- Switched from `setup.py` to `pyproject.toml`, by @branchvincent.
+- Bug fix to persist files added during `/ask`.
+- Bug fix for chat history size in `/tokens`.
+- Aider wrote 66% of the code in this release.
+
+### Aider v0.49.1
+
+- Bugfix to `/help`.
+
+### Aider v0.49.0
+
+- Add read-only files to the chat context with `/read` and `--read`, including from outside the git repo.
+- `/diff` now shows diffs of all changes resulting from your request, including lint and test fixes.
+- New `/clipboard` command to paste images or text from the clipboard, replaces `/add-clipboard-image`.
+- Now shows the markdown scraped when you add a url with `/web`.
+- When [scripting aider](https://aider.chat/docs/scripting.html) messages can now contain in-chat `/` commands.
+- Aider in docker image now suggests the correct command to update to latest version.
+- Improved retries on API errors (was easy to test during Sonnet outage).
+- Added `--mini` for `gpt-4o-mini`.
+- Bugfix to keep session cost accurate when using `/ask` and `/help`.
+- Performance improvements for repo map calculation.
+- `/tokens` now shows the active model.
+- Enhanced commit message attribution options:
+  - New `--attribute-commit-message-author` to prefix commit messages with 'aider: ' if aider authored the changes, replaces `--attribute-commit-message`.
+  - New `--attribute-commit-message-committer` to prefix all commit messages with 'aider: '.
+- Aider wrote 61% of the code in this release.
+
+### Aider v0.48.1
+
+- Added `openai/gpt-4o-2024-08-06`.
+- Worked around litellm bug that removes OpenRouter app headers when using `extra_headers`.
+- Improved progress indication during repo map processing.
+- Corrected instructions for upgrading the docker container to latest aider version.
+- Removed obsolete 16k token limit on commit diffs, use per-model limits.
+
 ### Aider v0.48.0
 - Performance improvements for large/mono repos.


@@ -1 +0,0 @@
-include requirements.txt


@@ -35,18 +35,18 @@ cog.out(open("aider/website/_includes/get-started.md").read())
 You can get started quickly like this:
 
 ```
-$ pip install aider-chat
+python -m pip install aider-chat
 
 # Change directory into a git repo
-$ cd /to/your/git/repo
+cd /to/your/git/repo
 
 # Work with Claude 3.5 Sonnet on your repo
-$ export ANTHROPIC_API_KEY=your-key-goes-here
-$ aider
+export ANTHROPIC_API_KEY=your-key-goes-here
+aider
 
 # Work with GPT-4o on your repo
-$ export OPENAI_API_KEY=your-key-goes-here
-$ aider
+export OPENAI_API_KEY=your-key-goes-here
+aider
 ```
 <!--[[[end]]]-->


@@ -1 +1 @@
-__version__ = "0.48.1-dev"
+__version__ = "0.50.1-dev"


@@ -82,6 +82,14 @@ def get_parser(default_config_files, git_root):
         const=gpt_4o_model,
         help=f"Use {gpt_4o_model} model for the main chat",
     )
+    gpt_4o_mini_model = "gpt-4o-mini"
+    group.add_argument(
+        "--mini",
+        action="store_const",
+        dest="model",
+        const=gpt_4o_mini_model,
+        help=f"Use {gpt_4o_mini_model} model for the main chat",
+    )
     gpt_4_turbo_model = "gpt-4-1106-preview"
     group.add_argument(
         "--4-turbo",
@@ -101,6 +109,14 @@ def get_parser(default_config_files, git_root):
         const=gpt_3_model_name,
         help=f"Use {gpt_3_model_name} model for the main chat",
     )
+    deepseek_model = "deepseek/deepseek-coder"
+    group.add_argument(
+        "--deepseek",
+        action="store_const",
+        dest="model",
+        const=deepseek_model,
+        help=f"Use {deepseek_model} model for the main chat",
+    )
 
     ##########
     group = parser.add_argument_group("Model Settings")
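The new `--mini` and `--deepseek` switches rely on argparse's `store_const` action writing into a shared `dest`, so whichever switch appears last wins. A standalone sketch of that pattern (the flag/model pairs here are just the two added in this hunk):

```python
import argparse

# Each model shortcut is a store_const into the shared dest "model";
# giving no switch leaves args.model as None so other defaults can apply.
parser = argparse.ArgumentParser()
group = parser.add_argument_group("Main model")
for flag, model in [
    ("--mini", "gpt-4o-mini"),
    ("--deepseek", "deepseek/deepseek-coder"),
]:
    group.add_argument(
        flag,
        action="store_const",
        dest="model",
        const=model,
        help=f"Use {model} model for the main chat",
    )

args = parser.parse_args(["--deepseek"])
assert args.model == "deepseek/deepseek-coder"
```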
@@ -159,6 +175,7 @@ def get_parser(default_config_files, git_root):
     )
     group.add_argument(
         "--edit-format",
+        "--chat-mode",
         metavar="EDIT_FORMAT",
         default=None,
         help="Specify what edit format the LLM should use (default depends on model)",
@@ -350,10 +367,16 @@ def get_parser(default_config_files, git_root):
         help="Attribute aider commits in the git committer name (default: True)",
     )
     group.add_argument(
-        "--attribute-commit-message",
+        "--attribute-commit-message-author",
         action=argparse.BooleanOptionalAction,
         default=False,
-        help="Prefix commit messages with 'aider: ' (default: False)",
+        help="Prefix commit messages with 'aider: ' if aider authored the changes (default: False)",
+    )
+    group.add_argument(
+        "--attribute-commit-message-committer",
+        action=argparse.BooleanOptionalAction,
+        default=False,
+        help="Prefix all commit messages with 'aider: ' (default: False)",
     )
     group.add_argument(
         "--commit",
@@ -420,6 +443,12 @@ def get_parser(default_config_files, git_root):
         metavar="FILE",
         help="specify a file to edit (can be used multiple times)",
     )
+    group.add_argument(
+        "--read",
+        action="append",
+        metavar="FILE",
+        help="specify a read-only file (can be used multiple times)",
+    )
     group.add_argument(
         "--vim",
         action="store_true",
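The new `--read` flag uses `action="append"`, which collects one list entry per occurrence. A minimal sketch of that behavior (the file names are illustrative):

```python
import argparse

# action="append" builds a list, one entry per occurrence of the flag,
# and leaves the attribute as None when the flag never appears.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--read",
    action="append",
    metavar="FILE",
    help="specify a read-only file (can be used multiple times)",
)

args = parser.parse_args(["--read", "notes.md", "--read", "docs/spec.md"])
assert args.read == ["notes.md", "docs/spec.md"]
```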


@@ -30,7 +30,7 @@ from aider.llm import litellm
 from aider.mdstream import MarkdownStream
 from aider.repo import GitRepo
 from aider.repomap import RepoMap
-from aider.sendchat import send_with_retries
+from aider.sendchat import retry_exceptions, send_completion
 from aider.utils import format_content, format_messages, is_image_file
 
 from ..dump import dump  # noqa: F401
@@ -50,6 +50,7 @@ def wrap_fence(name):
 class Coder:
     abs_fnames = None
+    abs_read_only_fnames = None
     repo = None
     last_aider_commit_hash = None
     aider_edited_files = None
@@ -70,6 +71,11 @@ class Coder:
     lint_outcome = None
     test_outcome = None
     multi_response_content = ""
+    partial_response_content = ""
+    commit_before_message = []
+    message_cost = 0.0
+    message_tokens_sent = 0
+    message_tokens_received = 0
 
     @classmethod
     def create(
@@ -89,6 +95,8 @@ class Coder:
         else:
             main_model = models.Model(models.DEFAULT_MODEL_NAME)
 
+        if edit_format == "code":
+            edit_format = None
         if edit_format is None:
             if from_coder:
                 edit_format = from_coder.edit_format
@@ -112,6 +120,7 @@ class Coder:
             # Bring along context from the old Coder
             update = dict(
                 fnames=list(from_coder.abs_fnames),
+                read_only_fnames=list(from_coder.abs_read_only_fnames),  # Copy read-only files
                 done_messages=done_messages,
                 cur_messages=from_coder.cur_messages,
                 aider_commit_hashes=from_coder.aider_commit_hashes,
@@ -143,7 +152,10 @@ class Coder:
         main_model = self.main_model
         weak_model = main_model.weak_model
         prefix = "Model:"
-        output = f" {main_model.name} with {self.edit_format} edit format"
+        output = f" {main_model.name} with"
+        if main_model.info.get("supports_assistant_prefill"):
+            output += " ♾️"
+        output += f" {self.edit_format} edit format"
         if weak_model is not main_model:
             prefix = "Models:"
             output += f", weak model {weak_model.name}"
@@ -193,7 +205,7 @@ class Coder:
         io,
         repo=None,
         fnames=None,
-        pretty=True,
+        read_only_fnames=None,
         show_diffs=False,
         auto_commits=True,
         dirty_commits=True,
@@ -217,6 +229,7 @@ class Coder:
         summarizer=None,
         total_cost=0.0,
     ):
+        self.commit_before_message = []
         self.aider_commit_hashes = set()
         self.rejected_urls = set()
         self.abs_root_path_cache = {}
@@ -240,6 +253,7 @@ class Coder:
         self.verbose = verbose
         self.abs_fnames = set()
+        self.abs_read_only_fnames = set()
 
         if cur_messages:
             self.cur_messages = cur_messages
@@ -263,9 +277,9 @@ class Coder:
         self.code_theme = code_theme
 
         self.dry_run = dry_run
-        self.pretty = pretty
+        self.pretty = self.io.pretty
 
-        if pretty:
+        if self.pretty:
             self.console = Console()
         else:
             self.console = Console(force_terminal=False, no_color=True)
@@ -314,6 +328,15 @@ class Coder:
         if not self.repo:
             self.find_common_root()
 
+        if read_only_fnames:
+            self.abs_read_only_fnames = set()
+            for fname in read_only_fnames:
+                abs_fname = self.abs_root_path(fname)
+                if os.path.exists(abs_fname):
+                    self.abs_read_only_fnames.add(abs_fname)
+                else:
+                    self.io.tool_error(f"Error: Read-only file {fname} does not exist. Skipping.")
+
         if map_tokens is None:
             use_repo_map = main_model.use_repo_map
             map_tokens = 1024
@@ -376,8 +399,10 @@ class Coder:
             self.linter.set_linter(lang, cmd)
 
     def show_announcements(self):
+        bold = True
         for line in self.get_announcements():
-            self.io.tool_output(line)
+            self.io.tool_output(line, bold=bold)
+            bold = False
 
     def find_common_root(self):
         if len(self.abs_fnames) == 1:
@@ -444,6 +469,10 @@ class Coder:
         all_content = ""
         for _fname, content in self.get_abs_fnames_content():
             all_content += content + "\n"
+        for _fname in self.abs_read_only_fnames:
+            content = self.io.read_text(_fname)
+            if content is not None:
+                all_content += content + "\n"
 
         good = False
         for fence_open, fence_close in self.fences:
@@ -485,6 +514,19 @@ class Coder:
         return prompt
 
+    def get_read_only_files_content(self):
+        prompt = ""
+        for fname in self.abs_read_only_fnames:
+            content = self.io.read_text(fname)
+            if content is not None and not is_image_file(fname):
+                relative_fname = self.get_rel_fname(fname)
+                prompt += "\n"
+                prompt += relative_fname
+                prompt += f"\n{self.fence[0]}\n"
+                prompt += content
+                prompt += f"{self.fence[1]}\n"
+        return prompt
+
     def get_cur_message_text(self):
         text = ""
         for msg in self.cur_messages:
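The new `get_read_only_files_content()` builds one fenced block per read-only file, each preceded by its relative name. A self-contained sketch of that formatting (the `format_read_only_files` helper and its dict input are hypothetical stand-ins for the method and `self.abs_read_only_fnames`):

```python
def format_read_only_files(files, fence=("```", "```")):
    # Mirrors the shape of get_read_only_files_content(): each file's text
    # is preceded by its name and wrapped in a fence so the model can tell
    # the files apart. `files` maps file name -> content.
    prompt = ""
    for name, content in files.items():
        prompt += "\n"
        prompt += name
        prompt += f"\n{fence[0]}\n"
        prompt += content
        prompt += f"{fence[1]}\n"
    return prompt

assert format_read_only_files({"a.txt": "hello\n"}) == "\na.txt\n```\nhello\n```\n"
```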
@@ -522,9 +564,13 @@ class Coder:
         mentioned_fnames.update(self.get_ident_filename_matches(mentioned_idents))
 
-        other_files = set(self.get_all_abs_files()) - set(self.abs_fnames)
+        all_abs_files = set(self.get_all_abs_files())
+        repo_abs_read_only_fnames = set(self.abs_read_only_fnames) & all_abs_files
+        chat_files = set(self.abs_fnames) | repo_abs_read_only_fnames
+        other_files = all_abs_files - chat_files
+
         repo_content = self.repo_map.get_repo_map(
-            self.abs_fnames,
+            chat_files,
             other_files,
             mentioned_fnames=mentioned_fnames,
             mentioned_idents=mentioned_idents,
@@ -534,7 +580,7 @@ class Coder:
         if not repo_content:
             repo_content = self.repo_map.get_repo_map(
                 set(),
-                set(self.get_all_abs_files()),
+                all_abs_files,
                 mentioned_fnames=mentioned_fnames,
                 mentioned_idents=mentioned_idents,
             )
@@ -543,7 +589,7 @@ class Coder:
         if not repo_content:
             repo_content = self.repo_map.get_repo_map(
                 set(),
-                set(self.get_all_abs_files()),
+                all_abs_files,
             )
 
         return repo_content
@@ -572,12 +618,6 @@ class Coder:
             files_content = self.gpt_prompts.files_no_full_files
             files_reply = "Ok."
 
-        if files_content:
-            files_messages += [
-                dict(role="user", content=files_content),
-                dict(role="assistant", content=files_reply),
-            ]
-
         images_message = self.get_images_message()
         if images_message is not None:
             files_messages += [
@@ -585,6 +625,24 @@ class Coder:
                 dict(role="assistant", content="Ok."),
             ]
 
+        read_only_content = self.get_read_only_files_content()
+        if read_only_content:
+            files_messages += [
+                dict(
+                    role="user", content=self.gpt_prompts.read_only_files_prefix + read_only_content
+                ),
+                dict(
+                    role="assistant",
+                    content="Ok, I will use these files as references.",
+                ),
+            ]
+
+        if files_content:
+            files_messages += [
+                dict(role="user", content=files_content),
+                dict(role="assistant", content=files_reply),
+            ]
+
         return files_messages
 
     def get_images_message(self):
@@ -597,9 +655,11 @@ class Coder:
             mime_type, _ = mimetypes.guess_type(fname)
             if mime_type and mime_type.startswith("image/"):
                 image_url = f"data:{mime_type};base64,{content}"
-                image_messages.append(
-                    {"type": "image_url", "image_url": {"url": image_url, "detail": "high"}}
-                )
+                rel_fname = self.get_rel_fname(fname)
+                image_messages += [
+                    {"type": "text", "text": f"Image file: {rel_fname}"},
+                    {"type": "image_url", "image_url": {"url": image_url, "detail": "high"}},
+                ]
 
         if not image_messages:
             return None
@@ -609,7 +669,7 @@ class Coder:
     def run_stream(self, user_message):
         self.io.user_input(user_message)
         self.init_before_message()
-        yield from self.send_new_user_message(user_message)
+        yield from self.send_message(user_message)
 
     def init_before_message(self):
         self.reflected_message = None
@@ -617,48 +677,39 @@ class Coder:
         self.lint_outcome = None
         self.test_outcome = None
         self.edit_outcome = None
+        if self.repo:
+            self.commit_before_message.append(self.repo.get_head())
 
-    def run(self, with_message=None):
-        while True:
-            self.init_before_message()
-
-            try:
-                if with_message:
-                    new_user_message = with_message
-                    self.io.user_input(with_message)
-                else:
-                    new_user_message = self.run_loop()
-
-                while new_user_message:
-                    self.reflected_message = None
-                    list(self.send_new_user_message(new_user_message))
-
-                    new_user_message = None
-                    if self.reflected_message:
-                        if self.num_reflections < self.max_reflections:
-                            self.num_reflections += 1
-                            new_user_message = self.reflected_message
-                        else:
-                            self.io.tool_error(
-                                f"Only {self.max_reflections} reflections allowed, stopping."
-                            )
-
-                if with_message:
-                    return self.partial_response_content
-
-            except KeyboardInterrupt:
-                self.keyboard_interrupt()
-            except EOFError:
-                return
-
-    def run_loop(self):
-        inp = self.io.get_input(
+    def run(self, with_message=None, preproc=True):
+        try:
+            if with_message:
+                self.io.user_input(with_message)
+                self.run_one(with_message, preproc)
+                return self.partial_response_content
+
+            while True:
+                try:
+                    user_message = self.get_input()
+                    self.run_one(user_message, preproc)
+                    self.show_undo_hint()
+                except KeyboardInterrupt:
+                    self.keyboard_interrupt()
+        except EOFError:
+            return
+
+    def get_input(self):
+        inchat_files = self.get_inchat_relative_files()
+        read_only_files = [self.get_rel_fname(fname) for fname in self.abs_read_only_fnames]
+        all_files = sorted(set(inchat_files + read_only_files))
+        return self.io.get_input(
             self.root,
-            self.get_inchat_relative_files(),
+            all_files,
             self.get_addable_relative_files(),
             self.commands,
+            self.abs_read_only_fnames,
         )
 
+    def preproc_user_input(self, inp):
         if not inp:
             return
@@ -670,6 +721,28 @@ class Coder:
         return inp
 
+    def run_one(self, user_message, preproc):
+        self.init_before_message()
+
+        if preproc:
+            message = self.preproc_user_input(user_message)
+        else:
+            message = user_message
+
+        while message:
+            self.reflected_message = None
+            list(self.send_message(message))
+
+            if not self.reflected_message:
+                break
+
+            if self.num_reflections >= self.max_reflections:
+                self.io.tool_error(f"Only {self.max_reflections} reflections allowed, stopping.")
+                return
+
+            self.num_reflections += 1
+            message = self.reflected_message
+
     def check_for_urls(self, inp):
         url_pattern = re.compile(r"(https?://[^\s/$.?#].[^\s]*[^\s,.])")
         urls = list(set(url_pattern.findall(inp)))  # Use set to remove duplicates
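The reflection loop factored into `run_one()` can be sketched in isolation. Here `run_reflections` and its `send` callback are hypothetical stand-ins: `send` processes a message and returns a reflected follow-up (or a falsy value when there is nothing left to fix):

```python
def run_reflections(message, send, max_reflections=3):
    # Keep resending while the previous round produced a reflected message,
    # up to max_reflections rounds; returns how many reflections ran.
    num_reflections = 0
    while message:
        reflected = send(message)
        if not reflected:
            break
        if num_reflections >= max_reflections:
            print(f"Only {max_reflections} reflections allowed, stopping.")
            return num_reflections
        num_reflections += 1
        message = reflected
    return num_reflections
```

For example, a `send` that reflects twice and then stops yields two reflection rounds.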
@@ -678,7 +751,7 @@ class Coder:
             if url not in self.rejected_urls:
                 if self.io.confirm_ask(f"Add {url} to the chat?"):
                     inp += "\n\n"
-                    inp += self.commands.cmd_web(url)
+                    inp += self.commands.cmd_web(url, paginate=False)
                     added_urls.append(url)
                 else:
                     self.rejected_urls.add(url)
@@ -826,6 +899,7 @@ class Coder:
             self.summarize_end()
             messages += self.done_messages
+
         messages += self.get_files_messages()
 
         if self.gpt_prompts.system_reminder:
@@ -852,7 +926,7 @@ class Coder:
         final = messages[-1]
 
-        max_input_tokens = self.main_model.info.get("max_input_tokens")
+        max_input_tokens = self.main_model.info.get("max_input_tokens") or 0
         # Add the reminder prompt if we still have room to include it.
         if (
             max_input_tokens is None
@@ -872,7 +946,7 @@ class Coder:
         return messages
 
-    def send_new_user_message(self, inp):
+    def send_message(self, inp):
         self.aider_edited_files = None
 
         self.cur_messages += [
@@ -891,6 +965,8 @@ class Coder:
         else:
             self.mdstream = None
 
+        retry_delay = 0.125
+
         self.usage_report = None
         exhausted = False
         interrupted = False
@@ -899,6 +975,14 @@ class Coder:
             try:
                 yield from self.send(messages, functions=self.functions)
                 break
+            except retry_exceptions() as err:
+                self.io.tool_error(str(err))
+                retry_delay *= 2
+                if retry_delay > 60:
+                    break
+                self.io.tool_output(f"Retrying in {retry_delay:.1f} seconds...")
+                time.sleep(retry_delay)
+                continue
             except KeyboardInterrupt:
                 interrupted = True
                 break
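The new `except retry_exceptions()` branch implements capped exponential backoff: the delay doubles after each retryable error and the loop gives up once it would exceed 60 seconds. A standalone sketch under stated assumptions (`send_with_backoff` is hypothetical, and `ConnectionError` stands in for litellm's retryable exception tuple):

```python
import time

def send_with_backoff(send, max_delay=60, initial_delay=0.125):
    # Double the delay after each retryable error; give up (no completion)
    # once the next delay would exceed max_delay.
    retry_delay = initial_delay
    while True:
        try:
            return send()
        except ConnectionError:
            retry_delay *= 2
            if retry_delay > max_delay:
                return None  # retries exhausted
            time.sleep(retry_delay)
```

With the defaults above, the retry sleeps run 0.25s, 0.5s, 1s, ... until either `send()` succeeds or the cap is hit.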
@@ -911,7 +995,7 @@ class Coder:
                 return
             except FinishReasonLength:
                 # We hit the output limit!
-                if not self.main_model.can_prefill:
+                if not self.main_model.info.get("supports_assistant_prefill"):
                     exhausted = True
                     break
@@ -920,7 +1004,9 @@ class Coder:
                 if messages[-1]["role"] == "assistant":
                     messages[-1]["content"] = self.multi_response_content
                 else:
-                    messages.append(dict(role="assistant", content=self.multi_response_content))
+                    messages.append(
+                        dict(role="assistant", content=self.multi_response_content, prefix=True)
+                    )
             except Exception as err:
                 self.io.tool_error(f"Unexpected error: {err}")
                 traceback.print_exc()
@@ -935,8 +1021,7 @@ class Coder:
         self.io.tool_output()
 
-        if self.usage_report:
-            self.io.tool_output(self.usage_report)
+        self.show_usage_report()
 
         if exhausted:
             self.show_exhausted_error()
@@ -1011,10 +1096,10 @@ class Coder:
         output_tokens = 0
         if self.partial_response_content:
             output_tokens = self.main_model.token_count(self.partial_response_content)
-        max_output_tokens = self.main_model.info.get("max_output_tokens", 0)
+        max_output_tokens = self.main_model.info.get("max_output_tokens") or 0
 
         input_tokens = self.main_model.token_count(self.format_messages())
-        max_input_tokens = self.main_model.info.get("max_input_tokens", 0)
+        max_input_tokens = self.main_model.info.get("max_input_tokens") or 0
 
         total_tokens = input_tokens + output_tokens
@@ -1159,9 +1244,8 @@ class Coder:
             self.io.log_llm_history("TO LLM", format_messages(messages))
 
-        interrupted = False
         try:
-            hash_object, completion = send_with_retries(
+            hash_object, completion = send_completion(
                 model.name,
                 messages,
                 functions,
@@ -1176,9 +1260,9 @@ class Coder:
                 yield from self.show_send_output_stream(completion)
             else:
                 self.show_send_output(completion)
-        except KeyboardInterrupt:
+        except KeyboardInterrupt as kbi:
             self.keyboard_interrupt()
-            interrupted = True
+            raise kbi
         finally:
             self.io.log_llm_history(
                 "LLM RESPONSE",
@@ -1193,10 +1277,7 @@ class Coder:
             if args:
                 self.io.ai_output(json.dumps(args, indent=4))
 
-        if interrupted:
-            raise KeyboardInterrupt
-
-        self.calculate_and_show_tokens_and_cost(messages, completion)
+        self.calculate_and_show_tokens_and_cost(messages, completion)
 
     def show_send_output(self, completion):
         if self.verbose:
@@ -1218,7 +1299,7 @@ class Coder:
             show_func_err = func_err
 
         try:
-            self.partial_response_content = completion.choices[0].message.content
+            self.partial_response_content = completion.choices[0].message.content or ""
         except AttributeError as content_err:
             show_content_err = content_err
@@ -1312,13 +1393,19 @@ class Coder:
             prompt_tokens = self.main_model.token_count(messages)
             completion_tokens = self.main_model.token_count(self.partial_response_content)
 
-        self.usage_report = f"Tokens: {prompt_tokens:,} sent, {completion_tokens:,} received."
+        self.message_tokens_sent += prompt_tokens
+        self.message_tokens_received += completion_tokens
+
+        tokens_report = (
+            f"Tokens: {self.message_tokens_sent:,} sent, {self.message_tokens_received:,} received."
+        )
 
         if self.main_model.info.get("input_cost_per_token"):
             cost += prompt_tokens * self.main_model.info.get("input_cost_per_token")
             if self.main_model.info.get("output_cost_per_token"):
                 cost += completion_tokens * self.main_model.info.get("output_cost_per_token")
             self.total_cost += cost
+            self.message_cost += cost
 
             def format_cost(value):
                 if value == 0:
@@ -1329,13 +1416,24 @@ class Coder:
                 else:
                     return f"{value:.{max(2, 2 - int(math.log10(magnitude)))}f}"
 
-            self.usage_report += (
-                f" Cost: ${format_cost(cost)} request, ${format_cost(self.total_cost)} session."
+            cost_report = (
+                f" Cost: ${format_cost(self.message_cost)} message,"
+                f" ${format_cost(self.total_cost)} session."
             )
+            self.usage_report = tokens_report + cost_report
+        else:
+            self.usage_report = tokens_report
+
+    def show_usage_report(self):
+        if self.usage_report:
+            self.io.tool_output(self.usage_report)
+            self.message_cost = 0.0
+            self.message_tokens_sent = 0
+            self.message_tokens_received = 0
 
     def get_multi_response_content(self, final=False):
-        cur = self.multi_response_content
-        new = self.partial_response_content
+        cur = self.multi_response_content or ""
+        new = self.partial_response_content or ""
 
         if new.rstrip() != new and not final:
             new = new.rstrip()
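The accounting introduced here lets token and cost counters accumulate across the retries within one message, then reset when the usage report is shown. A minimal sketch of that design (the `UsageTracker` class is illustrative, not the actual Coder API):

```python
class UsageTracker:
    # Per-message counters accumulate across retries; session total persists.
    def __init__(self):
        self.total_cost = 0.0
        self.message_cost = 0.0
        self.message_tokens_sent = 0
        self.message_tokens_received = 0

    def record(self, sent, received, cost):
        self.message_tokens_sent += sent
        self.message_tokens_received += received
        self.message_cost += cost
        self.total_cost += cost

    def report(self):
        out = (
            f"Tokens: {self.message_tokens_sent:,} sent,"
            f" {self.message_tokens_received:,} received."
            f" Cost: ${self.message_cost:.4f} message, ${self.total_cost:.4f} session."
        )
        # Showing the report closes out the current message.
        self.message_cost = 0.0
        self.message_tokens_sent = 0
        self.message_tokens_received = 0
        return out
```

Recording twice before reporting (e.g. an initial send plus one retry) sums into a single message line, while the session total keeps growing across messages.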
@@ -1377,7 +1475,10 @@ class Coder:
         return max(path.stat().st_mtime for path in files)
 
     def get_addable_relative_files(self):
-        return set(self.get_all_relative_files()) - set(self.get_inchat_relative_files())
+        all_files = set(self.get_all_relative_files())
+        inchat_files = set(self.get_inchat_relative_files())
+        read_only_files = set(self.get_rel_fname(fname) for fname in self.abs_read_only_fnames)
+        return all_files - inchat_files - read_only_files
 
     def check_for_dirty_commit(self, path):
         if not self.repo:
@@ -1590,7 +1691,11 @@ class Coder:
         if self.show_diffs:
             self.commands.cmd_diff()
 
-        self.io.tool_output(f"You can use /undo to revert and discard commit {commit_hash}.")
+    def show_undo_hint(self):
+        if not self.commit_before_message:
+            return
+        if self.commit_before_message[-1] != self.repo.get_head():
+            self.io.tool_output("You can use /undo to undo and discard each aider commit.")
 
     def dirty_commit(self):
         if not self.need_commit_before_edits:


@@ -18,7 +18,7 @@ You always COMPLETELY IMPLEMENT the needed code!
 
     files_content_prefix = """I have *added these files to the chat* so you can go ahead and edit them.
 
-*Trust this message as the true contents of the files!*
+*Trust this message as the true contents of these files!*
 Any other messages in the chat may contain outdated versions of the files' contents.
 """  # noqa: E501
@@ -38,4 +38,8 @@ Don't include files that might contain relevant context, just files that will ne
     repo_content_prefix = """Here are summaries of some files present in my git repository.
 Do not propose changes to these files, treat them as *read-only*.
 If you need to edit any of these files, ask me to *add them to the chat* first.
+"""
+
+    read_only_files_prefix = """Here are some READ ONLY files, provided for your reference.
+Do not edit these files!
 """


@@ -7,7 +7,9 @@ from collections import OrderedDict
 from pathlib import Path
 
 import git
-from PIL import ImageGrab
+import pyperclip
+from PIL import Image, ImageGrab
+from rich.text import Text
 
 from aider import models, prompts, voice
 from aider.help import Help, install_help_extra
@@ -117,13 +119,15 @@ class Commands:
         else:
             self.io.tool_output("Please provide a partial model name to search for.")
 
-    def cmd_web(self, args):
-        "Use headless selenium to scrape a webpage and add the content to the chat"
+    def cmd_web(self, args, paginate=True):
+        "Scrape a webpage, convert to markdown and add to the chat"
+
         url = args.strip()
         if not url:
             self.io.tool_error("Please provide a URL to scrape.")
             return
 
+        self.io.tool_output(f"Scraping {url}...")
         if not self.scraper:
             res = install_playwright(self.io)
             if not res:
@@ -134,11 +138,14 @@ class Commands:
             )
 
         content = self.scraper.scrape(url) or ""
-        # if content:
-        #    self.io.tool_output(content)
-
         content = f"{url}:\n\n" + content
+        self.io.tool_output("... done.")
+
+        if paginate:
+            with self.io.console.pager():
+                self.io.console.print(Text(content))
 
         return content
 
     def is_command(self, inp):
@ -304,7 +311,6 @@ class Commands:
# chat history # chat history
msgs = self.coder.done_messages + self.coder.cur_messages msgs = self.coder.done_messages + self.coder.cur_messages
if msgs: if msgs:
msgs = [dict(role="dummy", content=msg) for msg in msgs]
tokens = self.coder.main_model.token_count(msgs) tokens = self.coder.main_model.token_count(msgs)
res.append((tokens, "chat history", "use /clear to clear")) res.append((tokens, "chat history", "use /clear to clear"))
@ -316,6 +322,8 @@ class Commands:
tokens = self.coder.main_model.token_count(repo_content) tokens = self.coder.main_model.token_count(repo_content)
res.append((tokens, "repository map", "use --map-tokens to resize")) res.append((tokens, "repository map", "use --map-tokens to resize"))
fence = "`" * 3
# files # files
for fname in self.coder.abs_fnames: for fname in self.coder.abs_fnames:
relative_fname = self.coder.get_rel_fname(fname) relative_fname = self.coder.get_rel_fname(fname)
@@ -324,11 +332,23 @@ class Commands:
                 tokens = self.coder.main_model.token_count_for_image(fname)
             else:
                 # approximate
-                content = f"{relative_fname}\n```\n" + content + "```\n"
+                content = f"{relative_fname}\n{fence}\n" + content + f"{fence}\n"
                 tokens = self.coder.main_model.token_count(content)
-            res.append((tokens, f"{relative_fname}", "use /drop to drop from chat"))
+            res.append((tokens, f"{relative_fname}", "/drop to remove"))
+
+        # read-only files
+        for fname in self.coder.abs_read_only_fnames:
+            relative_fname = self.coder.get_rel_fname(fname)
+            content = self.io.read_text(fname)
+            if content is not None and not is_image_file(relative_fname):
+                # approximate
+                content = f"{relative_fname}\n{fence}\n" + content + f"{fence}\n"
+                tokens = self.coder.main_model.token_count(content)
+                res.append((tokens, f"{relative_fname} (read-only)", "/drop to remove"))

-        self.io.tool_output("Approximate context window usage, in tokens:")
+        self.io.tool_output(
+            f"Approximate context window usage for {self.coder.main_model.name}, in tokens:"
+        )
         self.io.tool_output()
width = 8 width = 8
@ -344,7 +364,7 @@ class Commands:
total_cost = 0.0 total_cost = 0.0
for tk, msg, tip in res: for tk, msg, tip in res:
total += tk total += tk
cost = tk * self.coder.main_model.info.get("input_cost_per_token", 0) cost = tk * (self.coder.main_model.info.get("input_cost_per_token") or 0)
total_cost += cost total_cost += cost
msg = msg.ljust(col_width) msg = msg.ljust(col_width)
self.io.tool_output(f"${cost:7.4f} {fmt(tk)} {msg} {tip}") # noqa: E231 self.io.tool_output(f"${cost:7.4f} {fmt(tk)} {msg} {tip}") # noqa: E231
@ -352,7 +372,7 @@ class Commands:
self.io.tool_output("=" * (width + cost_width + 1)) self.io.tool_output("=" * (width + cost_width + 1))
self.io.tool_output(f"${total_cost:7.4f} {fmt(total)} tokens total") # noqa: E231 self.io.tool_output(f"${total_cost:7.4f} {fmt(total)} tokens total") # noqa: E231
limit = self.coder.main_model.info.get("max_input_tokens", 0) limit = self.coder.main_model.info.get("max_input_tokens") or 0
if not limit: if not limit:
return return
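The `or 0` change above guards against litellm-style model info dicts that map a key to an explicit `None`, which a plain `dict.get(key, 0)` default does not replace. A minimal illustration:

```python
# Model info dicts (as returned by litellm) can carry an explicit None value.
info = {"input_cost_per_token": None, "max_input_tokens": 128000}

# A default only applies when the key is MISSING, not when its value is None.
cost_with_default = info.get("input_cost_per_token", 0)  # stays None
cost_with_or = info.get("input_cost_per_token") or 0     # coerced to 0

tokens = 1500
# Multiplying tokens by None would raise TypeError; the `or 0` form is safe.
safe_cost = tokens * cost_with_or
```

The same pattern is applied to `max_input_tokens` a few lines below, for the same reason.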
@@ -440,27 +460,36 @@ class Commands:
         # Get the current HEAD after undo
         current_head_hash = self.coder.repo.repo.head.commit.hexsha[:7]
         current_head_message = self.coder.repo.repo.head.commit.message.strip()
-        self.io.tool_output(f"HEAD is: {current_head_hash} {current_head_message}")
+        self.io.tool_output(f"Now at: {current_head_hash} {current_head_message}")

         if self.coder.main_model.send_undo_reply:
             return prompts.undo_command_reply

     def cmd_diff(self, args=""):
-        "Display the diff of the last aider commit"
+        "Display the diff of changes since the last message"
         if not self.coder.repo:
             self.io.tool_error("No git repository found.")
             return

-        last_commit_hash = self.coder.repo.repo.head.commit.hexsha[:7]
-
-        if last_commit_hash not in self.coder.aider_commit_hashes:
-            self.io.tool_error(f"Last commit {last_commit_hash} was not an aider commit.")
-            self.io.tool_error("You could try `/git diff` or `/git diff HEAD^`.")
-            return
+        current_head = self.coder.repo.get_head()
+        if current_head is None:
+            self.io.tool_error("Unable to get current commit. The repository might be empty.")
+            return
+
+        if len(self.coder.commit_before_message) < 2:
+            commit_before_message = current_head + "^"
+        else:
+            commit_before_message = self.coder.commit_before_message[-2]
+
+        if not commit_before_message or commit_before_message == current_head:
+            self.io.tool_error("No changes to display since the last message.")
+            return
+
+        self.io.tool_output(f"Diff since {commit_before_message[:7]}...")

         diff = self.coder.repo.diff_commits(
             self.coder.pretty,
-            "HEAD^",
+            commit_before_message,
             "HEAD",
         )
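The `/diff` change above stops diffing against a fixed `HEAD^` and instead picks the commit recorded before the last user message. A standalone sketch of that selection logic (the function name is illustrative, not from the source):

```python
def pick_diff_base(commit_before_message, current_head):
    """Choose the commit to diff HEAD against, mirroring the new cmd_diff.

    commit_before_message holds the HEAD hash recorded before each user
    message; with fewer than two entries we fall back to HEAD^ syntax.
    Returns None when there is nothing to show.
    """
    if len(commit_before_message) < 2:
        base = current_head + "^"
    else:
        base = commit_before_message[-2]

    if not base or base == current_head:
        return None  # no changes since the last message
    return base

print(pick_diff_base([], "abc1234"))                      # abc1234^
print(pick_diff_base(["f00baaa", "abc1234"], "abc1234"))  # f00baaa
print(pick_diff_base(["abc1234", "abc1234"], "abc1234"))  # None
```

This is what lets `/diff` show all changes from a single request, including lint and test fixes, rather than just the last aider commit.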
@ -472,6 +501,9 @@ class Commands:
fname = f'"{fname}"' fname = f'"{fname}"'
return fname return fname
def completions_read(self):
return self.completions_add()
def completions_add(self): def completions_add(self):
files = set(self.coder.get_all_relative_files()) files = set(self.coder.get_all_relative_files())
files = files - set(self.coder.get_inchat_relative_files()) files = files - set(self.coder.get_inchat_relative_files())
@ -558,6 +590,18 @@ class Commands:
if abs_file_path in self.coder.abs_fnames: if abs_file_path in self.coder.abs_fnames:
self.io.tool_error(f"{matched_file} is already in the chat") self.io.tool_error(f"{matched_file} is already in the chat")
elif abs_file_path in self.coder.abs_read_only_fnames:
if self.coder.repo and self.coder.repo.path_in_repo(matched_file):
self.coder.abs_read_only_fnames.remove(abs_file_path)
self.coder.abs_fnames.add(abs_file_path)
self.io.tool_output(
f"Moved {matched_file} from read-only to editable files in the chat"
)
added_fnames.append(matched_file)
else:
self.io.tool_error(
f"Cannot add {matched_file} as it's not part of the repository"
)
else: else:
if is_image_file(matched_file) and not self.coder.main_model.accepts_images: if is_image_file(matched_file) and not self.coder.main_model.accepts_images:
self.io.tool_error( self.io.tool_error(
@ -575,20 +619,12 @@ class Commands:
self.coder.check_added_files() self.coder.check_added_files()
added_fnames.append(matched_file) added_fnames.append(matched_file)
if not added_fnames:
return
# only reply if there's been some chatting since the last edit
if not self.coder.cur_messages:
return
reply = prompts.added_files.format(fnames=", ".join(added_fnames))
return reply
     def completions_drop(self):
         files = self.coder.get_inchat_relative_files()
-        files = [self.quote_fname(fn) for fn in files]
-        return files
+        read_only_files = [self.coder.get_rel_fname(fn) for fn in self.coder.abs_read_only_fnames]
+        all_files = files + read_only_files
+        all_files = [self.quote_fname(fn) for fn in all_files]
+        return all_files
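The completion change above merges read-only files into the `/drop` candidates before quoting. A self-contained sketch, assuming `quote_fname` wraps names containing spaces (the condition is outside this hunk):

```python
def quote_fname(fname):
    # Assumed behavior of Commands.quote_fname: quote names with spaces.
    if " " in fname:
        return f'"{fname}"'
    return fname

def completions_drop(inchat_rel_fnames, read_only_rel_fnames):
    """Offer both editable and read-only files as /drop completions."""
    all_files = list(inchat_rel_fnames) + list(read_only_rel_fnames)
    return [quote_fname(fn) for fn in all_files]

print(completions_drop(["main.py"], ["docs/api notes.md"]))
# ['main.py', '"docs/api notes.md"']
```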
def cmd_drop(self, args=""): def cmd_drop(self, args=""):
"Remove files from the chat session to free up context space" "Remove files from the chat session to free up context space"
@ -596,9 +632,19 @@ class Commands:
if not args.strip(): if not args.strip():
self.io.tool_output("Dropping all files from the chat session.") self.io.tool_output("Dropping all files from the chat session.")
self.coder.abs_fnames = set() self.coder.abs_fnames = set()
self.coder.abs_read_only_fnames = set()
return
filenames = parse_quoted_filenames(args) filenames = parse_quoted_filenames(args)
for word in filenames: for word in filenames:
# Handle read-only files separately, without glob_filtered_to_repo
read_only_matched = [f for f in self.coder.abs_read_only_fnames if word in f]
if read_only_matched:
for matched_file in read_only_matched:
self.coder.abs_read_only_fnames.remove(matched_file)
self.io.tool_output(f"Removed read-only file {matched_file} from the chat")
matched_files = self.glob_filtered_to_repo(word) matched_files = self.glob_filtered_to_repo(word)
if not matched_files: if not matched_files:
@ -678,7 +724,7 @@ class Commands:
add = result.returncode != 0 add = result.returncode != 0
else: else:
response = self.io.prompt_ask( response = self.io.prompt_ask(
"Add the output to the chat? (y/n/instructions): ", default="y" "Add the output to the chat?\n(y/n/instructions)", default=""
).strip() ).strip()
if response.lower() in ["yes", "y"]: if response.lower() in ["yes", "y"]:
@ -718,6 +764,7 @@ class Commands:
other_files = [] other_files = []
chat_files = [] chat_files = []
read_only_files = []
for file in files: for file in files:
abs_file_path = self.coder.abs_root_path(file) abs_file_path = self.coder.abs_root_path(file)
if abs_file_path in self.coder.abs_fnames: if abs_file_path in self.coder.abs_fnames:
@ -725,8 +772,13 @@ class Commands:
else: else:
other_files.append(file) other_files.append(file)
if not chat_files and not other_files: # Add read-only files
self.io.tool_output("\nNo files in chat or git repo.") for abs_file_path in self.coder.abs_read_only_fnames:
rel_file_path = self.coder.get_rel_fname(abs_file_path)
read_only_files.append(rel_file_path)
if not chat_files and not other_files and not read_only_files:
self.io.tool_output("\nNo files in chat, git repo, or read-only list.")
return return
if other_files: if other_files:
@ -734,6 +786,11 @@ class Commands:
for file in other_files: for file in other_files:
self.io.tool_output(f" {file}") self.io.tool_output(f" {file}")
if read_only_files:
self.io.tool_output("\nRead-only files:\n")
for file in read_only_files:
self.io.tool_output(f" {file}")
if chat_files: if chat_files:
self.io.tool_output("\nFiles in chat:\n") self.io.tool_output("\nFiles in chat:\n")
for file in chat_files: for file in chat_files:
@ -787,13 +844,23 @@ class Commands:
""" """
user_msg += "\n".join(self.coder.get_announcements()) + "\n" user_msg += "\n".join(self.coder.get_announcements()) + "\n"
-        assistant_msg = coder.run(user_msg)
-
-        self.coder.cur_messages += [
-            dict(role="user", content=user_msg),
-            dict(role="assistant", content=assistant_msg),
-        ]
-        self.coder.total_cost += coder.total_cost
+        coder.run(user_msg, preproc=False)
+
+        if self.coder.repo_map:
+            map_tokens = self.coder.repo_map.max_map_tokens
+            map_mul_no_files = self.coder.repo_map.map_mul_no_files
+        else:
+            map_tokens = 0
+            map_mul_no_files = 1
+
+        raise SwitchCoder(
+            edit_format=self.coder.edit_format,
+            summarize_from_coder=False,
+            from_coder=coder,
+            map_tokens=map_tokens,
+            map_mul_no_files=map_mul_no_files,
+            show_announcements=False,
+        )
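The hunks above replace manual message copying with a raised `SwitchCoder`, which the main loop catches to rebuild the active coder. A minimal, self-contained sketch of that control-flow pattern (`DemoCoder` is a hypothetical stand-in for aider's `Coder`):

```python
class SwitchCoder(Exception):
    """Raised by a command to hand control to a different coder/edit mode."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class DemoCoder:
    # Hypothetical stand-in for aider's Coder, for illustration only.
    def __init__(self, edit_format="code", from_coder=None, **_):
        self.edit_format = edit_format

    def run(self):
        if self.edit_format == "code":
            # e.g. the user typed /ask: switch modes instead of returning
            raise SwitchCoder(edit_format="ask", show_announcements=False)
        return f"ran in {self.edit_format} mode"

def main_loop():
    coder = DemoCoder()
    while True:
        try:
            return coder.run()
        except SwitchCoder as switch:
            kwargs = dict(from_coder=coder)
            kwargs.update(switch.kwargs)
            kwargs.pop("show_announcements", None)  # loop-level flag, not a ctor arg
            coder = DemoCoder(**kwargs)

print(main_loop())  # ran in ask mode
```

Using an exception keeps the switch request out of every command's return path, which is why `cmd_help` and `_generic_chat_command` can both trigger it uniformly.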
def clone(self): def clone(self):
return Commands( return Commands(
@@ -805,28 +872,35 @@ class Commands:
     def cmd_ask(self, args):
         "Ask questions about the code base without editing any files"
+        return self._generic_chat_command(args, "ask")
+
+    def cmd_code(self, args):
+        "Ask for changes to your code"
+        return self._generic_chat_command(args, self.coder.main_model.edit_format)
+
+    def _generic_chat_command(self, args, edit_format):
         if not args.strip():
-            self.io.tool_error("Please provide a question or topic for the chat.")
+            self.io.tool_error(f"Please provide a question or topic for the {edit_format} chat.")
             return

         from aider.coders import Coder

-        chat_coder = Coder.create(
+        coder = Coder.create(
             io=self.io,
             from_coder=self.coder,
-            edit_format="ask",
+            edit_format=edit_format,
             summarize_from_coder=False,
         )

         user_msg = args
-        assistant_msg = chat_coder.run(user_msg)
-
-        self.coder.cur_messages += [
-            dict(role="user", content=user_msg),
-            dict(role="assistant", content=assistant_msg),
-        ]
-        self.coder.total_cost += chat_coder.total_cost
+        coder.run(user_msg)
+
+        raise SwitchCoder(
+            edit_format=self.coder.edit_format,
+            summarize_from_coder=False,
+            from_coder=coder,
+            show_announcements=False,
+        )
def get_help_md(self): def get_help_md(self):
"Show help about all commands in markdown" "Show help about all commands in markdown"
@@ -894,27 +968,82 @@ class Commands:
         return text

-    def cmd_add_clipboard_image(self, args):
-        "Add an image from the clipboard to the chat"
+    def cmd_clipboard(self, args):
+        "Add image/text from the clipboard to the chat (optionally provide a name for the image)"
         try:
+            # Check for image first
             image = ImageGrab.grabclipboard()
-            if image is None:
-                self.io.tool_error("No image found in clipboard.")
-                return
-
-            with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as temp_file:
-                image.save(temp_file.name, "PNG")
-                temp_file_path = temp_file.name
-
-            abs_file_path = Path(temp_file_path).resolve()
-            self.coder.abs_fnames.add(str(abs_file_path))
-            self.io.tool_output(f"Added clipboard image to the chat: {abs_file_path}")
-            self.coder.check_added_files()
-            return prompts.added_files.format(fnames=str(abs_file_path))
+            if isinstance(image, Image.Image):
+                if args.strip():
+                    filename = args.strip()
+                    ext = os.path.splitext(filename)[1].lower()
+                    if ext in (".jpg", ".jpeg", ".png"):
+                        basename = filename
+                    else:
+                        basename = f"(unknown).png"
+                else:
+                    basename = "clipboard_image.png"
+
+                temp_dir = tempfile.mkdtemp()
+                temp_file_path = os.path.join(temp_dir, basename)
+                image_format = "PNG" if basename.lower().endswith(".png") else "JPEG"
+                image.save(temp_file_path, image_format)
+
+                abs_file_path = Path(temp_file_path).resolve()
+
+                # Check if a file with the same name already exists in the chat
+                existing_file = next(
+                    (f for f in self.coder.abs_fnames if Path(f).name == abs_file_path.name), None
+                )
+                if existing_file:
+                    self.coder.abs_fnames.remove(existing_file)
+                    self.io.tool_output(f"Replaced existing image in the chat: {existing_file}")
+
+                self.coder.abs_fnames.add(str(abs_file_path))
+                self.io.tool_output(f"Added clipboard image to the chat: {abs_file_path}")
+                self.coder.check_added_files()
+
+                return
+
+            # If not an image, try to get text
+            text = pyperclip.paste()
+            if text:
+                self.io.tool_output(text)
+                return text
+
+            self.io.tool_error("No image or text content found in clipboard.")
+            return

         except Exception as e:
-            self.io.tool_error(f"Error adding clipboard image: {e}")
+            self.io.tool_error(f"Error processing clipboard content: {e}")
def cmd_read(self, args):
"Add a file to the chat that is for reference, not to be edited"
if not args.strip():
self.io.tool_error("Please provide a filename to read.")
return
filename = args.strip()
abs_path = os.path.abspath(filename)
if not os.path.exists(abs_path):
self.io.tool_error(f"File not found: {abs_path}")
return
if not os.path.isfile(abs_path):
self.io.tool_error(f"Not a file: {abs_path}")
return
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(f"Added {abs_path} to read-only files.")
def cmd_map(self, args):
"Print out the current repository map"
repo_map = self.coder.get_repo_map()
if repo_map:
self.io.tool_output(repo_map)
else:
self.io.tool_output("No repository map available.")
def expand_subdir(file_path): def expand_subdir(file_path):


@ -15,6 +15,7 @@ from pygments.lexers import MarkdownLexer, guess_lexer_for_filename
from pygments.token import Token from pygments.token import Token
from pygments.util import ClassNotFound from pygments.util import ClassNotFound
from rich.console import Console from rich.console import Console
from rich.style import Style as RichStyle
from rich.text import Text from rich.text import Text
from .dump import dump # noqa: F401 from .dump import dump # noqa: F401
@ -22,10 +23,13 @@ from .utils import is_image_file
class AutoCompleter(Completer): class AutoCompleter(Completer):
def __init__(self, root, rel_fnames, addable_rel_fnames, commands, encoding): def __init__(
self, root, rel_fnames, addable_rel_fnames, commands, encoding, abs_read_only_fnames=None
):
self.addable_rel_fnames = addable_rel_fnames self.addable_rel_fnames = addable_rel_fnames
self.rel_fnames = rel_fnames self.rel_fnames = rel_fnames
self.encoding = encoding self.encoding = encoding
self.abs_read_only_fnames = abs_read_only_fnames or []
fname_to_rel_fnames = defaultdict(list) fname_to_rel_fnames = defaultdict(list)
for rel_fname in addable_rel_fnames: for rel_fname in addable_rel_fnames:
@ -47,7 +51,11 @@ class AutoCompleter(Completer):
for rel_fname in rel_fnames: for rel_fname in rel_fnames:
self.words.add(rel_fname) self.words.add(rel_fname)
fname = Path(root) / rel_fname all_fnames = [Path(root) / rel_fname for rel_fname in rel_fnames]
if abs_read_only_fnames:
all_fnames.extend(abs_read_only_fnames)
for fname in all_fnames:
try: try:
with open(fname, "r", encoding=self.encoding) as f: with open(fname, "r", encoding=self.encoding) as f:
content = f.read() content = f.read()
@ -217,7 +225,7 @@ class InputOutput:
with open(str(filename), "w", encoding=self.encoding) as f: with open(str(filename), "w", encoding=self.encoding) as f:
f.write(content) f.write(content)
def get_input(self, root, rel_fnames, addable_rel_fnames, commands): def get_input(self, root, rel_fnames, addable_rel_fnames, commands, abs_read_only_fnames=None):
if self.pretty: if self.pretty:
style = dict(style=self.user_input_color) if self.user_input_color else dict() style = dict(style=self.user_input_color) if self.user_input_color else dict()
self.console.rule(**style) self.console.rule(**style)
@ -244,7 +252,12 @@ class InputOutput:
style = None style = None
completer_instance = AutoCompleter( completer_instance = AutoCompleter(
root, rel_fnames, addable_rel_fnames, commands, self.encoding root,
rel_fnames,
addable_rel_fnames,
commands,
self.encoding,
abs_read_only_fnames=abs_read_only_fnames,
) )
while True: while True:
@ -317,7 +330,7 @@ class InputOutput:
def user_input(self, inp, log_only=True): def user_input(self, inp, log_only=True):
if not log_only: if not log_only:
style = dict(style=self.user_input_color) if self.user_input_color else dict() style = dict(style=self.user_input_color) if self.user_input_color else dict()
self.console.print(inp, **style) self.console.print(Text(inp), **style)
prefix = "####" prefix = "####"
if inp: if inp:
@@ -341,18 +354,19 @@
         self.num_user_asks += 1

         if self.yes is True:
-            res = "yes"
+            res = "y"
         elif self.yes is False:
-            res = "no"
+            res = "n"
         else:
             res = prompt(question + " ", default=default)

-        hist = f"{question.strip()} {res.strip()}"
+        res = res.lower().strip()
+        is_yes = res in ("y", "yes")
+
+        hist = f"{question.strip()} {'y' if is_yes else 'n'}"
         self.append_chat_history(hist, linebreak=True, blockquote=True)

-        if not res or not res.strip():
-            return
-
-        return res.strip().lower().startswith("y")
+        return is_yes
def prompt_ask(self, question, default=None): def prompt_ask(self, question, default=None):
self.num_user_asks += 1 self.num_user_asks += 1
@ -389,7 +403,7 @@ class InputOutput:
style = dict(style=self.tool_error_color) if self.tool_error_color else dict() style = dict(style=self.tool_error_color) if self.tool_error_color else dict()
self.console.print(message, **style) self.console.print(message, **style)
-    def tool_output(self, *messages, log_only=False):
+    def tool_output(self, *messages, log_only=False, bold=False):
if messages: if messages:
hist = " ".join(messages) hist = " ".join(messages)
hist = f"{hist.strip()}" hist = f"{hist.strip()}"
@ -397,8 +411,10 @@ class InputOutput:
if not log_only: if not log_only:
messages = list(map(Text, messages)) messages = list(map(Text, messages))
-            style = dict(style=self.tool_output_color) if self.tool_output_color else dict()
-            self.console.print(*messages, **style)
+            style = dict(color=self.tool_output_color) if self.tool_output_color else dict()
+            style["reverse"] = bold
+            style = RichStyle(**style)
+            self.console.print(*messages, style=style)
def append_chat_history(self, text, linebreak=False, blockquote=False, strip=True): def append_chat_history(self, text, linebreak=False, blockquote=False, strip=True):
if blockquote: if blockquote:


@ -4,8 +4,11 @@ import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="pydantic") warnings.filterwarnings("ignore", category=UserWarning, module="pydantic")
os.environ["OR_SITE_URL"] = "http://aider.chat" AIDER_SITE_URL = "https://aider.chat"
os.environ["OR_APP_NAME"] = "Aider" AIDER_APP_NAME = "Aider"
os.environ["OR_SITE_URL"] = AIDER_SITE_URL
os.environ["OR_APP_NAME"] = AIDER_APP_NAME
# `import litellm` takes 1.5 seconds, defer it! # `import litellm` takes 1.5 seconds, defer it!


@ -384,6 +384,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
all_files = args.files + (args.file or []) all_files = args.files + (args.file or [])
fnames = [str(Path(fn).resolve()) for fn in all_files] fnames = [str(Path(fn).resolve()) for fn in all_files]
read_only_fnames = [str(Path(fn).resolve()) for fn in (args.read or [])]
if len(all_files) > 1: if len(all_files) > 1:
good = True good = True
for fname in all_files: for fname in all_files:
@ -415,11 +416,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
return main(argv, input, output, right_repo_root, return_coder=return_coder) return main(argv, input, output, right_repo_root, return_coder=return_coder)
if args.just_check_update: if args.just_check_update:
update_available = check_version(io, just_check=True) update_available = check_version(io, just_check=True, verbose=args.verbose)
return 0 if not update_available else 1 return 0 if not update_available else 1
if args.check_update: if args.check_update:
check_version(io) check_version(io, verbose=args.verbose)
if args.models: if args.models:
models.print_matching_models(io, args.models) models.print_matching_models(io, args.models)
@ -475,12 +476,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
repo = GitRepo( repo = GitRepo(
io, io,
fnames, fnames,
git_dname or ".", git_dname,
args.aiderignore, args.aiderignore,
models=main_model.commit_message_models(), models=main_model.commit_message_models(),
attribute_author=args.attribute_author, attribute_author=args.attribute_author,
attribute_committer=args.attribute_committer, attribute_committer=args.attribute_committer,
attribute_commit_message=args.attribute_commit_message, attribute_commit_message_author=args.attribute_commit_message_author,
attribute_commit_message_committer=args.attribute_commit_message_committer,
commit_prompt=args.commit_prompt, commit_prompt=args.commit_prompt,
subtree_only=args.subtree_only, subtree_only=args.subtree_only,
) )
@ -501,7 +503,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
io=io, io=io,
repo=repo, repo=repo,
fnames=fnames, fnames=fnames,
pretty=args.pretty, read_only_fnames=read_only_fnames,
show_diffs=args.show_diffs, show_diffs=args.show_diffs,
auto_commits=args.auto_commits, auto_commits=args.auto_commits,
dirty_commits=args.dirty_commits, dirty_commits=args.dirty_commits,
@ -618,8 +620,15 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
coder.run() coder.run()
return return
except SwitchCoder as switch: except SwitchCoder as switch:
coder = Coder.create(io=io, from_coder=coder, **switch.kwargs) kwargs = dict(io=io, from_coder=coder)
coder.show_announcements() kwargs.update(switch.kwargs)
if "show_announcements" in kwargs:
del kwargs["show_announcements"]
coder = Coder.create(**kwargs)
if switch.kwargs.get("show_announcements") is not False:
coder.show_announcements()
def load_slow_imports(): def load_slow_imports():


@ -3,6 +3,7 @@ import importlib
import json import json
import math import math
import os import os
import platform
import sys import sys
from dataclasses import dataclass, fields from dataclasses import dataclass, fields
from pathlib import Path from pathlib import Path
@ -13,7 +14,7 @@ from PIL import Image
from aider import urls from aider import urls
from aider.dump import dump # noqa: F401 from aider.dump import dump # noqa: F401
from aider.llm import litellm from aider.llm import AIDER_APP_NAME, AIDER_SITE_URL, litellm
DEFAULT_MODEL_NAME = "gpt-4o" DEFAULT_MODEL_NAME = "gpt-4o"
@ -70,7 +71,6 @@ class ModelSettings:
lazy: bool = False lazy: bool = False
reminder_as_sys_msg: bool = False reminder_as_sys_msg: bool = False
examples_as_sys_msg: bool = False examples_as_sys_msg: bool = False
can_prefill: bool = False
extra_headers: Optional[dict] = None extra_headers: Optional[dict] = None
max_tokens: Optional[int] = None max_tokens: Optional[int] = None
@ -152,6 +152,16 @@ MODEL_SETTINGS = [
lazy=True, lazy=True,
reminder_as_sys_msg=True, reminder_as_sys_msg=True,
), ),
ModelSettings(
"gpt-4o-2024-08-06",
"diff",
weak_model_name="gpt-4o-mini",
use_repo_map=True,
send_undo_reply=True,
accepts_images=True,
lazy=True,
reminder_as_sys_msg=True,
),
ModelSettings( ModelSettings(
"gpt-4o", "gpt-4o",
"diff", "diff",
@ -238,7 +248,6 @@ MODEL_SETTINGS = [
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
send_undo_reply=True, send_undo_reply=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"openrouter/anthropic/claude-3-opus", "openrouter/anthropic/claude-3-opus",
@ -246,13 +255,11 @@ MODEL_SETTINGS = [
weak_model_name="openrouter/anthropic/claude-3-haiku", weak_model_name="openrouter/anthropic/claude-3-haiku",
use_repo_map=True, use_repo_map=True,
send_undo_reply=True, send_undo_reply=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"claude-3-sonnet-20240229", "claude-3-sonnet-20240229",
"whole", "whole",
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"claude-3-5-sonnet-20240620", "claude-3-5-sonnet-20240620",
@ -260,7 +267,6 @@ MODEL_SETTINGS = [
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
accepts_images=True, accepts_images=True,
max_tokens=8192, max_tokens=8192,
extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"}, extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"},
@ -271,9 +277,12 @@ MODEL_SETTINGS = [
weak_model_name="claude-3-haiku-20240307", weak_model_name="claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
max_tokens=8192, max_tokens=8192,
extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"}, extra_headers={
"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15",
"HTTP-Referer": AIDER_SITE_URL,
"X-Title": AIDER_APP_NAME,
},
), ),
ModelSettings( ModelSettings(
"openrouter/anthropic/claude-3.5-sonnet", "openrouter/anthropic/claude-3.5-sonnet",
@ -281,10 +290,13 @@ MODEL_SETTINGS = [
weak_model_name="openrouter/anthropic/claude-3-haiku-20240307", weak_model_name="openrouter/anthropic/claude-3-haiku-20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
accepts_images=True, accepts_images=True,
max_tokens=8192, max_tokens=8192,
extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"}, extra_headers={
"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15",
"HTTP-Referer": "https://aider.chat",
"X-Title": "Aider",
},
), ),
# Vertex AI Claude models # Vertex AI Claude models
# Does not yet support 8k token # Does not yet support 8k token
@ -294,7 +306,6 @@ MODEL_SETTINGS = [
weak_model_name="vertex_ai/claude-3-haiku@20240307", weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True, use_repo_map=True,
examples_as_sys_msg=True, examples_as_sys_msg=True,
can_prefill=True,
accepts_images=True, accepts_images=True,
), ),
ModelSettings( ModelSettings(
@ -303,13 +314,11 @@ MODEL_SETTINGS = [
weak_model_name="vertex_ai/claude-3-haiku@20240307", weak_model_name="vertex_ai/claude-3-haiku@20240307",
use_repo_map=True, use_repo_map=True,
send_undo_reply=True, send_undo_reply=True,
can_prefill=True,
), ),
ModelSettings( ModelSettings(
"vertex_ai/claude-3-sonnet@20240229", "vertex_ai/claude-3-sonnet@20240229",
"whole", "whole",
weak_model_name="vertex_ai/claude-3-haiku@20240307", weak_model_name="vertex_ai/claude-3-haiku@20240307",
can_prefill=True,
), ),
# Cohere # Cohere
ModelSettings( ModelSettings(
@ -405,9 +414,7 @@ class Model:
self.missing_keys = res.get("missing_keys") self.missing_keys = res.get("missing_keys")
self.keys_in_environment = res.get("keys_in_environment") self.keys_in_environment = res.get("keys_in_environment")
max_input_tokens = self.info.get("max_input_tokens") max_input_tokens = self.info.get("max_input_tokens") or 0
if not max_input_tokens:
max_input_tokens = 0
if max_input_tokens < 32 * 1024: if max_input_tokens < 32 * 1024:
self.max_chat_history_tokens = 1024 self.max_chat_history_tokens = 1024
else: else:
@ -470,14 +477,10 @@ class Model:
if "gpt-3.5" in model or "gpt-4" in model: if "gpt-3.5" in model or "gpt-4" in model:
self.reminder_as_sys_msg = True self.reminder_as_sys_msg = True
if "anthropic" in model:
self.can_prefill = True
if "3.5-sonnet" in model or "3-5-sonnet" in model: if "3.5-sonnet" in model or "3-5-sonnet" in model:
self.edit_format = "diff" self.edit_format = "diff"
self.use_repo_map = True self.use_repo_map = True
self.examples_as_sys_msg = True self.examples_as_sys_msg = True
self.can_prefill = True
# use the defaults # use the defaults
if self.edit_format == "diff": if self.edit_format == "diff":
@@ -512,6 +515,9 @@ class Model:
         return litellm.encode(model=self.name, text=text)

     def token_count(self, messages):
+        if type(messages) is list:
+            return litellm.token_counter(model=self.name, messages=messages)
+
         if not self.tokenizer:
             return
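The `token_count` change above dispatches on input type: a list of chat messages goes to a messages-aware counter (`litellm.token_counter` in the source), while plain text falls through to the model's tokenizer. A self-contained sketch with hypothetical stand-in counters:

```python
def token_count(messages, count_text, count_messages):
    """Dispatch like the new Model.token_count: lists are chat messages,
    everything else is treated as plain text."""
    if isinstance(messages, list):
        return count_messages(messages)
    return count_text(messages)

# Hypothetical counters standing in for litellm / the model tokenizer:
approx_text = lambda text: max(1, len(text) // 4)       # rough 4-chars-per-token
approx_msgs = lambda msgs: sum(approx_text(m["content"]) + 3 for m in msgs)

print(token_count("hello world", approx_text, approx_msgs))  # 2
print(token_count([{"role": "user", "content": "hello world"}],
                  approx_text, approx_msgs))                 # 5
```

This is what allows `/tokens` to pass message dicts directly instead of wrapping them in dummy-role dicts, as the `/tokens` hunk earlier in this commit does.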
@ -669,6 +675,13 @@ def sanity_check_model(io, model):
io.tool_error(f"Model {model}: Missing these environment variables:") io.tool_error(f"Model {model}: Missing these environment variables:")
for key in model.missing_keys: for key in model.missing_keys:
io.tool_error(f"- {key}") io.tool_error(f"- {key}")
if platform.system() == "Windows" or True:
io.tool_output(
"If you just set these environment variables using `setx` you may need to restart"
" your terminal or command prompt for the changes to take effect."
)
elif not model.keys_in_environment: elif not model.keys_in_environment:
show = True show = True
io.tool_output(f"Model {model}: Unknown which environment variables are required.") io.tool_output(f"Model {model}: Unknown which environment variables are required.")


@@ -29,7 +29,8 @@ class GitRepo:
         models=None,
         attribute_author=True,
         attribute_committer=True,
-        attribute_commit_message=False,
+        attribute_commit_message_author=False,
+        attribute_commit_message_committer=False,
         commit_prompt=None,
         subtree_only=False,
     ):
@@ -41,7 +42,8 @@ class GitRepo:
         self.attribute_author = attribute_author
         self.attribute_committer = attribute_committer
-        self.attribute_commit_message = attribute_commit_message
+        self.attribute_commit_message_author = attribute_commit_message_author
+        self.attribute_commit_message_committer = attribute_commit_message_committer
         self.commit_prompt = commit_prompt
         self.subtree_only = subtree_only
         self.ignore_file_cache = {}
@@ -98,7 +100,9 @@ class GitRepo:
         else:
             commit_message = self.get_commit_message(diffs, context)
 
-        if aider_edits and self.attribute_commit_message:
+        if aider_edits and self.attribute_commit_message_author:
+            commit_message = "aider: " + commit_message
+        elif self.attribute_commit_message_committer:
             commit_message = "aider: " + commit_message
 
         if not commit_message:
@@ -130,7 +134,7 @@ class GitRepo:
         self.repo.git.commit(cmd)
         commit_hash = self.repo.head.commit.hexsha[:7]
-        self.io.tool_output(f"Commit {commit_hash} {commit_message}")
+        self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True)
 
         # Restore the env
@@ -155,10 +159,6 @@ class GitRepo:
         return self.repo.git_dir
 
     def get_commit_message(self, diffs, context):
-        if len(diffs) >= 4 * 1024 * 4:
-            self.io.tool_error("Diff is too large to generate a commit message.")
-            return
-
         diffs = "# Diffs:\n" + diffs
 
         content = ""
@@ -172,7 +172,12 @@ class GitRepo:
             dict(role="user", content=content),
         ]
 
+        commit_message = None
         for model in self.models:
+            num_tokens = model.token_count(messages)
+            max_tokens = model.info.get("max_input_tokens") or 0
+            if max_tokens and num_tokens > max_tokens:
+                continue
             commit_message = simple_send_with_retries(model.name, messages)
             if commit_message:
                 break
@@ -226,6 +231,8 @@ class GitRepo:
         args = []
         if pretty:
             args += ["--color"]
+        else:
+            args += ["--color=never"]
         args += [from_commit, to_commit]
         diffs = self.repo.git.diff(*args)
@@ -355,3 +362,9 @@ class GitRepo:
             return True
 
         return self.repo.is_dirty(path=path)
+
+    def get_head(self):
+        try:
+            return self.repo.head.commit.hexsha
+        except ValueError:
+            return None
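The new commit-message loop falls back across models, skipping any whose input window is too small for the diff. The selection logic can be sketched standalone (the model names and token limits below are hypothetical, not aider's real model table):

```python
def pick_commit_model(models, num_tokens):
    # Return the first model whose input window can hold the prompt.
    # A max_input_tokens of 0 or None means "unknown", so don't skip it.
    for name, max_input_tokens in models:
        if max_input_tokens and num_tokens > max_input_tokens:
            continue
        return name
    return None

# Hypothetical model table: (name, max_input_tokens)
models = [("small-fast-model", 4096), ("big-slow-model", 128000)]
```

Falling through to `None` mirrors the diff's `commit_message = None` initialization, which lets the caller detect that every model was skipped.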


@@ -60,6 +60,9 @@ class RepoMap:
         self.main_model = main_model
 
+        self.tree_cache = {}
+        self.tree_context_cache = {}
+
     def token_count(self, text):
         len_text = len(text)
         if len_text < 200:
@@ -471,24 +474,28 @@ class RepoMap:
         if key in self.tree_cache:
             return self.tree_cache[key]
 
-        code = self.io.read_text(abs_fname) or ""
-        if not code.endswith("\n"):
-            code += "\n"
-
-        context = TreeContext(
-            rel_fname,
-            code,
-            color=False,
-            line_number=False,
-            child_context=False,
-            last_line=False,
-            margin=0,
-            mark_lois=False,
-            loi_pad=0,
-            # header_max=30,
-            show_top_of_file_parent_scope=False,
-        )
+        if rel_fname not in self.tree_context_cache:
+            code = self.io.read_text(abs_fname) or ""
+            if not code.endswith("\n"):
+                code += "\n"
+
+            context = TreeContext(
+                rel_fname,
+                code,
+                color=False,
+                line_number=False,
+                child_context=False,
+                last_line=False,
+                margin=0,
+                mark_lois=False,
+                loi_pad=0,
+                # header_max=30,
+                show_top_of_file_parent_scope=False,
+            )
+            self.tree_context_cache[rel_fname] = context
+
+        context = self.tree_context_cache[rel_fname]
+        context.lines_of_interest = set()
         context.add_lines_of_interest(lois)
         context.add_context()
         res = context.format()
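The `tree_context_cache` change builds each file's `TreeContext` once and reuses it, resetting `lines_of_interest` before every render. The reuse pattern, reduced to a plain-dict sketch (the class and field names here are illustrative, not aider's API):

```python
class RenderCache:
    """Build an expensive per-file object once, then reset its
    mutable state on each use instead of rebuilding it."""

    def __init__(self, factory):
        self.factory = factory
        self.cache = {}
        self.builds = 0  # count how many times the factory actually ran

    def render(self, fname, lois):
        if fname not in self.cache:
            self.cache[fname] = self.factory(fname)
            self.builds += 1
        ctx = self.cache[fname]
        ctx["lois"] = set()  # reset state left over from the previous call
        ctx["lois"].update(lois)
        return sorted(ctx["lois"])

cache = RenderCache(lambda fname: {"fname": fname, "lois": set()})
```

The reset step matters: without it, lines of interest from one render would leak into the next, which is why the diff assigns `context.lines_of_interest = set()` before `add_lines_of_interest`.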


@@ -87,26 +87,48 @@ class Scraper:
     def scrape(self, url):
         """
-        Scrape a url and turn it into readable markdown.
+        Scrape a url and turn it into readable markdown if it's HTML.
+        If it's plain text or non-HTML, return it as-is.
 
-        `url` - the URLto scrape.
+        `url` - the URL to scrape.
         """
 
         if self.playwright_available:
-            content = self.scrape_with_playwright(url)
+            content, mime_type = self.scrape_with_playwright(url)
         else:
-            content = self.scrape_with_httpx(url)
+            content, mime_type = self.scrape_with_httpx(url)
 
         if not content:
             self.print_error(f"Failed to retrieve content from {url}")
             return None
 
-        self.try_pandoc()
-
-        content = self.html_to_markdown(content)
+        # Check if the content is HTML based on MIME type or content
+        if (mime_type and mime_type.startswith("text/html")) or (
+            mime_type is None and self.looks_like_html(content)
+        ):
+            self.try_pandoc()
+            content = self.html_to_markdown(content)
 
         return content
 
+    def looks_like_html(self, content):
+        """
+        Check if the content looks like HTML.
+        """
+        if isinstance(content, str):
+            # Check for common HTML tags
+            html_patterns = [
+                r"<!DOCTYPE\s+html",
+                r"<html",
+                r"<head",
+                r"<body",
+                r"<div",
+                r"<p>",
+                r"<a\s+href=",
+            ]
+            return any(re.search(pattern, content, re.IGNORECASE) for pattern in html_patterns)
+        return False
+
     # Internals...
     def scrape_with_playwright(self, url):
         import playwright
@@ -118,7 +140,7 @@ class Scraper:
         except Exception as e:
             self.playwright_available = False
             self.print_error(str(e))
-            return
+            return None, None
 
         try:
             context = browser.new_context(ignore_https_errors=not self.verify_ssl)
@@ -131,23 +153,28 @@ class Scraper:
             page.set_extra_http_headers({"User-Agent": user_agent})
 
+            response = None
             try:
-                page.goto(url, wait_until="networkidle", timeout=5000)
+                response = page.goto(url, wait_until="networkidle", timeout=5000)
             except playwright._impl._errors.TimeoutError:
                 self.print_error(f"Timeout while loading {url}")
             except playwright._impl._errors.Error as e:
                 self.print_error(f"Error navigating to {url}: {str(e)}")
-                return None
+                return None, None
 
             try:
                 content = page.content()
+                mime_type = (
+                    response.header_value("content-type").split(";")[0] if response else None
+                )
             except playwright._impl._errors.Error as e:
                 self.print_error(f"Error retrieving page content: {str(e)}")
                 content = None
+                mime_type = None
             finally:
                 browser.close()
 
-        return content
+        return content, mime_type
 
     def scrape_with_httpx(self, url):
         import httpx
@@ -157,12 +184,12 @@ class Scraper:
         with httpx.Client(headers=headers, verify=self.verify_ssl) as client:
             response = client.get(url)
             response.raise_for_status()
-            return response.text
+            return response.text, response.headers.get("content-type", "").split(";")[0]
         except httpx.HTTPError as http_err:
             self.print_error(f"HTTP error occurred: {http_err}")
         except Exception as err:
             self.print_error(f"An error occurred: {err}")
-        return None
+        return None, None
 
     def try_pandoc(self):
         if self.pandoc_available:
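When the server sends no Content-Type header, the new `looks_like_html` falls back to sniffing the body with a few regexes. That check works as a standalone function:

```python
import re

# Common markers that a text payload is really HTML.
HTML_PATTERNS = [
    r"<!DOCTYPE\s+html",
    r"<html",
    r"<head",
    r"<body",
    r"<div",
    r"<p>",
    r"<a\s+href=",
]

def looks_like_html(content):
    """Heuristic: does this payload look like an HTML document?"""
    if isinstance(content, str):
        return any(re.search(p, content, re.IGNORECASE) for p in HTML_PATTERNS)
    return False
```

Note the `isinstance(content, str)` guard: binary payloads (bytes) are never treated as HTML, so they skip the pandoc/markdown conversion path.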


@@ -14,24 +14,28 @@ CACHE = None
 # CACHE = Cache(CACHE_PATH)
 
+def retry_exceptions():
+    import httpx
+
+    return (
+        httpx.ConnectError,
+        httpx.RemoteProtocolError,
+        httpx.ReadTimeout,
+        litellm.exceptions.APIConnectionError,
+        litellm.exceptions.APIError,
+        litellm.exceptions.RateLimitError,
+        litellm.exceptions.ServiceUnavailableError,
+        litellm.exceptions.Timeout,
+        litellm.exceptions.InternalServerError,
+        litellm.llms.anthropic.AnthropicError,
+    )
+
+
 def lazy_litellm_retry_decorator(func):
     def wrapper(*args, **kwargs):
-        import httpx
-
         decorated_func = backoff.on_exception(
             backoff.expo,
-            (
-                httpx.ConnectError,
-                httpx.RemoteProtocolError,
-                httpx.ReadTimeout,
-                litellm.exceptions.APIConnectionError,
-                litellm.exceptions.APIError,
-                litellm.exceptions.RateLimitError,
-                litellm.exceptions.ServiceUnavailableError,
-                litellm.exceptions.Timeout,
-                litellm.exceptions.InternalServerError,
-                litellm.llms.anthropic.AnthropicError,
-            ),
+            retry_exceptions(),
             max_time=60,
             on_backoff=lambda details: print(
                 f"{details.get('exception', 'Exception')}\nRetry in {details['wait']:.1f} seconds."
@@ -42,8 +46,7 @@ def lazy_litellm_retry_decorator(func):
     return wrapper
 
-@lazy_litellm_retry_decorator
-def send_with_retries(
+def send_completion(
     model_name, messages, functions, stream, temperature=0, extra_headers=None, max_tokens=None
 ):
     from aider.llm import litellm
@@ -54,6 +57,7 @@ def send_completion(
         temperature=temperature,
         stream=stream,
     )
+
     if functions is not None:
         kwargs["tools"] = [dict(type="functions", function=functions[0])]
     if extra_headers is not None:
@@ -79,9 +83,10 @@ def send_completion(
     return hash_object, res
 
+@lazy_litellm_retry_decorator
 def simple_send_with_retries(model_name, messages):
     try:
-        _hash, response = send_with_retries(
+        _hash, response = send_completion(
             model_name=model_name,
             messages=messages,
             functions=None,
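The refactor factors the retryable-exception tuple into `retry_exceptions()` and moves the `backoff` decorator onto `simple_send_with_retries` only, so `send_completion` can be called without retries. The decorator pattern itself, reduced to the stdlib (no exponential backoff or jitter, which the real `backoff.on_exception` adds):

```python
import time

def retry_on(exceptions, max_tries=3, delay=0):
    """Minimal stand-in for backoff.on_exception: retry only on the
    given exception tuple, re-raising anything else immediately."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == max_tries:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

calls = []

@retry_on((ConnectionError,), max_tries=3)
def flaky():
    # Fails twice, then succeeds, to exercise the retry loop.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return "ok"
```

Keeping the exception tuple in one function, as the diff does, means both the decorator and any ad-hoc `try/except` can share a single definition of "retryable".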


@@ -44,7 +44,7 @@ class ChdirTemporaryDirectory(IgnorantTemporaryDirectory):
     def __enter__(self):
         res = super().__enter__()
-        os.chdir(self.temp_dir.name)
+        os.chdir(Path(self.temp_dir.name).resolve())
         return res
 
     def __exit__(self, exc_type, exc_val, exc_tb):
@@ -112,13 +112,19 @@ def format_messages(messages, title=None):
         content = msg.get("content")
         if isinstance(content, list):  # Handle list content (e.g., image messages)
             for item in content:
-                if isinstance(item, dict) and "image_url" in item:
-                    output.append(f"{role} Image URL: {item['image_url']['url']}")
+                if isinstance(item, dict):
+                    for key, value in item.items():
+                        if isinstance(value, dict) and "url" in value:
+                            output.append(f"{role} {key.capitalize()} URL: {value['url']}")
+                        else:
+                            output.append(f"{role} {key}: {value}")
+                else:
+                    output.append(f"{role} {item}")
         elif isinstance(content, str):  # Handle string content
             output.append(format_content(role, content))
-        content = msg.get("function_call")
-        if content:
-            output.append(f"{role} {content}")
+        function_call = msg.get("function_call")
+        if function_call:
+            output.append(f"{role} Function Call: {function_call}")
 
     return "\n".join(output)


@@ -10,12 +10,15 @@ from aider import utils
 from aider.dump import dump  # noqa: F401
 
-def check_version(io, just_check=False):
+def check_version(io, just_check=False, verbose=False):
     fname = Path.home() / ".aider" / "caches" / "versioncheck"
     if not just_check and fname.exists():
         day = 60 * 60 * 24
         since = time.time() - fname.stat().st_mtime
         if since < day:
+            if verbose:
+                hours = since / 60 / 60
+                io.tool_output(f"Too soon to check version: {hours:.1f} hours")
             return
 
     # To keep startup fast, avoid importing this unless needed
@@ -27,7 +30,7 @@ def check_version(io, just_check=False, verbose=False):
     latest_version = data["info"]["version"]
     current_version = aider.__version__
 
-    if just_check:
+    if just_check or verbose:
         io.tool_output(f"Current version: {current_version}")
         io.tool_output(f"Latest version: {latest_version}")
 
@@ -41,11 +44,13 @@ def check_version(io, just_check=False, verbose=False):
     fname.parent.mkdir(parents=True, exist_ok=True)
     fname.touch()
 
-    if just_check:
+    if just_check or verbose:
         if is_update_available:
             io.tool_output("Update available")
         else:
             io.tool_output("No update available")
+
+    if just_check:
         return is_update_available
 
     if not is_update_available:
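The daily throttle on version checks compares the marker file's mtime against the current time. A minimal sketch of that gate, using a temp directory instead of `~/.aider/caches`:

```python
import tempfile
import time
from pathlib import Path

DAY = 60 * 60 * 24

def too_soon(fname: Path, now=None):
    # Skip the network check if the marker file was touched < 1 day ago.
    if not fname.exists():
        return False
    now = time.time() if now is None else now
    return (now - fname.stat().st_mtime) < DAY

marker = Path(tempfile.mkdtemp()) / "versioncheck"
fresh = too_soon(marker)  # no marker file yet, so the check may proceed
marker.touch()
```

Passing `now` explicitly makes the gate testable without waiting a day, which is also why the real code computes `since` from `time.time()` in one place.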


@@ -16,6 +16,51 @@ cog.out(text)
 
 # Release history
 
+### Aider v0.50.0
+
+- Infinite output for DeepSeek Coder, Mistral models in addition to Anthropic's models.
+- New `--deepseek` switch to use DeepSeek Coder.
+- DeepSeek Coder uses 8k token output.
+- New `--chat-mode <mode>` switch to launch in ask/help/code modes.
+- New `/code <message>` command request a code edit while in `ask` mode.
+- Web scraper is more robust if page never idles.
+- Improved token and cost reporting for infinite output.
+- Improvements and bug fixes for `/read` only files.
+- Switched from `setup.py` to `pyproject.toml`, by @branchvincent.
+- Bug fix to persist files added during `/ask`.
+- Bug fix for chat history size in `/tokens`.
+- Aider wrote 66% of the code in this release.
+
+### Aider v0.49.1
+
+- Bugfix to `/help`.
+
+### Aider v0.49.0
+
+- Add read-only files to the chat context with `/read` and `--read`, including from outside the git repo.
+- `/diff` now shows diffs of all changes resulting from your request, including lint and test fixes.
+- New `/clipboard` command to paste images or text from the clipboard, replaces `/add-clipboard-image`.
+- Now shows the markdown scraped when you add a url with `/web`.
+- When [scripting aider](https://aider.chat/docs/scripting.html) messages can now contain in-chat `/` commands.
+- Aider in docker image now suggests the correct command to update to latest version.
+- Improved retries on API errors (was easy to test during Sonnet outage).
+- Added `--mini` for `gpt-4o-mini`.
+- Bugfix to keep session cost accurate when using `/ask` and `/help`.
+- Performance improvements for repo map calculation.
+- `/tokens` now shows the active model.
+- Enhanced commit message attribution options:
+  - New `--attribute-commit-message-author` to prefix commit messages with 'aider: ' if aider authored the changes, replaces `--attribute-commit-message`.
+  - New `--attribute-commit-message-committer` to prefix all commit messages with 'aider: '.
+- Aider wrote 61% of the code in this release.
+
+### Aider v0.48.1
+
+- Added `openai/gpt-4o-2024-08-06`.
+- Worked around litellm bug that removes OpenRouter app headers when using `extra_headers`.
+- Improved progress indication during repo map processing.
+- Corrected instructions for upgrading the docker container to latest aider version.
+- Removed obsolete 16k token limit on commit diffs, use per-model limits.
+
 ### Aider v0.48.0
 
 - Performance improvements for large/mono repos.

File diff suppressed because it is too large.


@@ -1,90 +1,126 @@
-<canvas id="blameChart" width="800" height="450" style="margin-top: 20px"></canvas>
+<canvas id="blameChart" width="800" height="360" style="margin-top: 20px"></canvas>
+<canvas id="linesChart" width="800" height="360" style="margin-top: 20px"></canvas>
 <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
 <script src="https://cdn.jsdelivr.net/npm/moment"></script>
 <script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-moment"></script>
 <script>
 document.addEventListener('DOMContentLoaded', function () {
-    var ctx = document.getElementById('blameChart').getContext('2d');
+    var blameCtx = document.getElementById('blameChart').getContext('2d');
+    var linesCtx = document.getElementById('linesChart').getContext('2d');
+
+    var labels = [{% for row in site.data.blame %}'{{ row.end_tag }}',{% endfor %}];
+
     var blameData = {
+        labels: labels,
         datasets: [{
-            label: 'Aider\'s Contribution to Each Release',
-            data: [
-                {% for row in site.data.blame %}
-                {
-                    x: '{{ row.end_date }}',
-                    y: {{ row.aider_percentage }},
-                    r: Math.sqrt({{ row.aider_total }}) * 1.5,
-                    label: '{{ row.end_tag }}',
-                    percentage: {{ row.aider_percentage }},
-                    lines: {{ row.aider_total }}
-                },
-                {% endfor %}
-            ],
-            backgroundColor: 'rgba(54, 162, 235, 0.2)',
+            label: 'Aider\'s percent of new code by release',
+            data: [{% for row in site.data.blame %}{ x: '{{ row.end_tag }}', y: {{ row.aider_percentage }}, lines: {{ row.aider_total }} },{% endfor %}],
+            backgroundColor: 'rgba(54, 162, 235, 0.8)',
             borderColor: 'rgba(54, 162, 235, 1)',
             borderWidth: 1
         }]
     };
 
-    var blameChart = new Chart(ctx, {
-        type: 'bubble',
+    var linesData = {
+        labels: labels,
+        datasets: [{
+            label: 'Aider\'s lines of new code',
+            data: [{% for row in site.data.blame %}{ x: '{{ row.end_tag }}', y: {{ row.aider_total }} },{% endfor %}],
+            backgroundColor: 'rgba(255, 99, 132, 0.8)',
+            borderColor: 'rgba(255, 99, 132, 1)',
+            borderWidth: 1
+        }]
+    };
+
+    var blameChart = new Chart(blameCtx, {
+        type: 'bar',
         data: blameData,
         options: {
             scales: {
                 x: {
-                    type: 'time',
-                    time: {
-                        unit: 'month',
-                        displayFormats: {
-                            month: 'MMM YYYY'
-                        }
-                    },
+                    type: 'category',
                     title: {
                         display: true,
-                        text: 'Release date'
+                        text: 'Version'
                     },
                     ticks: {
                         maxRotation: 45,
                         minRotation: 45
-                    },
-                    min: moment('{{ site.data.blame | first | map: "end_date" | first }}').subtract(1, 'month'),
-                    max: moment('{{ site.data.blame | last | map: "end_date" | first }}').add(1, 'month')
+                    }
                 },
                 y: {
                     title: {
                         display: true,
-                        text: 'Aider Contribution (% of code)'
+                        text: 'Percent of new code'
                     },
                     beginAtZero: true
                 }
             },
             plugins: {
+                legend: {
+                    display: false
+                },
                 tooltip: {
                     callbacks: {
                         label: function(context) {
-                            return `${context.raw.label}: ${Math.round(context.raw.percentage)}% (${context.raw.lines} lines)`;
+                            var label = 'Aider\'s contribution';
+                            var value = context.parsed.y || 0;
+                            var lines = context.raw.lines || 0;
+                            return `${label}: ${Math.round(value)}% (${lines} lines)`;
                         }
                     }
                 },
-                legend: {
-                    display: true,
-                    position: 'top',
-                    labels: {
-                        generateLabels: function(chart) {
-                            return [{
-                                text: 'Bubble size: Lines of code contributed by aider',
-                                fillStyle: 'rgba(54, 162, 235, 0.2)',
-                                strokeStyle: 'rgba(54, 162, 235, 1)',
-                                lineWidth: 1,
-                                hidden: false,
-                                index: 0
-                            }];
-                        }
-                    }
-                },
                 title: {
                     display: true,
-                    text: 'Aider\'s Contribution to Each Release',
+                    text: 'Percent of new code written by aider, by release',
+                    font: {
+                        size: 16
+                    }
+                }
+            }
+        }
+    });
+
+    var linesChart = new Chart(linesCtx, {
+        type: 'bar',
+        data: linesData,
+        options: {
+            scales: {
+                x: {
+                    type: 'category',
+                    title: {
+                        display: true,
+                        text: 'Version'
+                    },
+                    ticks: {
+                        maxRotation: 45,
+                        minRotation: 45
+                    }
+                },
+                y: {
+                    title: {
+                        display: true,
+                        text: 'Lines of new code'
+                    },
+                    beginAtZero: true
+                }
+            },
+            plugins: {
+                legend: {
+                    display: false
+                },
+                tooltip: {
+                    callbacks: {
+                        label: function(context) {
+                            var label = 'New lines of code by aider';
+                            var value = context.parsed.y || 0;
+                            return `${label}: ${value}`;
+                        }
+                    }
+                },
+                title: {
+                    display: true,
+                    text: 'Lines of new code written by aider, by release',
                     font: {
                         size: 16
                     }


@@ -2,16 +2,16 @@
 You can get started quickly like this:
 
 ```
-$ pip install aider-chat
+python -m pip install aider-chat
 
 # Change directory into a git repo
-$ cd /to/your/git/repo
+cd /to/your/git/repo
 
 # Work with Claude 3.5 Sonnet on your repo
-$ export ANTHROPIC_API_KEY=your-key-goes-here
-$ aider
+export ANTHROPIC_API_KEY=your-key-goes-here
+aider
 
 # Work with GPT-4o on your repo
-$ export OPENAI_API_KEY=your-key-goes-here
-$ aider
+export OPENAI_API_KEY=your-key-goes-here
+aider
 ```


@@ -44,6 +44,10 @@ Model azure/gpt-4-turbo: Missing these environment variables:
 - AZURE_API_KEY
 ```
 
+{: .tip }
+On Windows,
+if you just set these environment variables using `setx` you may need to restart your terminal or
+command prompt for the changes to take effect.
+
 ## Unknown which environment variables are required


@@ -0,0 +1,5 @@
+{: .tip }
+In some environments you may get "aider command not found" errors.
+You can try `python -m aider` or
+[see here for more info](/docs/troubleshooting/aider-not-found.html).


@@ -0,0 +1,7 @@
+{: .tip }
+Using a Python
+[virtual environment](https://docs.python.org/3/library/venv.html){:target="_blank"}
+is recommended.
+Or, you could
+[use pipx to install aider](/docs/install/pipx.html)
+once for your whole system.


@@ -209,7 +209,7 @@ that aider originally used.
 Switching from ctags to tree-sitter provides a bunch of benefits:
 
 - The map is richer, showing full function call signatures and other details straight from the source files.
-- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `pip install aider-chat`.
+- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install aider-chat`.
 - We remove the requirement for users to manually install `universal-ctags` via some external tool or package manager (brew, apt, choco, etc).
 - Tree-sitter integration is a key enabler for future work and capabilities for aider.


@@ -23,7 +23,7 @@ making it the best available model for pair programming with AI.
 To use Claude 3 Opus with aider:
 
 ```
-pip install aider-chat
+python -m pip install aider-chat
 export ANTHROPIC_API_KEY=sk-...
 aider --opus
 ```


@@ -46,7 +46,7 @@ It also supports [connecting to almost any LLM](https://aider.chat/docs/llms.htm
 Use the `--browser` switch to launch the browser version of aider:
 
 ```
-pip install aider-chat
+python -m pip install aider-chat
 export OPENAI_API_KEY=<key> # Mac/Linux
 setx OPENAI_API_KEY <key> # Windows, restart shell after setx


@@ -116,7 +116,7 @@ for more details, but
 you can get started quickly with aider and Sonnet like this:
 
 ```
-$ pip install aider-chat
+$ python -m pip install aider-chat
 $ export ANTHROPIC_API_KEY=<key> # Mac/Linux
 $ setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx


@@ -30,7 +30,7 @@ included for scale.
 You can code with all of these models using aider like this:
 
 ```
-$ pip install aider-chat
+$ python -m pip install aider-chat
 
 # Change directory into a git repo to work on
 $ cd /to/your/git/repo

Binary image file changed: 64 KiB before, 158 KiB after.


@@ -38,12 +38,18 @@
 ## Use gpt-4o model for the main chat
 #4o: false
 
+## Use gpt-4o-mini model for the main chat
+#mini: false
+
 ## Use gpt-4-1106-preview model for the main chat
 #4-turbo: false
 
 ## Use gpt-3.5-turbo model for the main chat
 #35turbo: false
 
+## Use deepseek/deepseek-coder model for the main chat
+#deepseek: false
+
 #################
 # Model Settings:
@@ -167,8 +173,11 @@
 ## Attribute aider commits in the git committer name (default: True)
 #attribute-committer: true
 
-## Prefix commit messages with 'aider: ' (default: False)
-#attribute-commit-message: false
+## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
+#attribute-commit-message-author: false
+
+## Prefix all commit messages with 'aider: ' (default: False)
+#attribute-commit-message-committer: false
 
 ## Commit all pending changes with a suitable commit message, then exit
 #commit: false
@@ -206,6 +215,9 @@
 ## specify a file to edit (can be used multiple times)
 #file:
 
+## specify a read-only file (can be used multiple times)
+#read:
+
 ## Use VI editing mode in the terminal (default: False)
 #vim: false


@@ -42,12 +42,18 @@
 ## Use gpt-4o model for the main chat
 #AIDER_4O=
 
+## Use gpt-4o-mini model for the main chat
+#AIDER_MINI=
+
 ## Use gpt-4-1106-preview model for the main chat
 #AIDER_4_TURBO=
 
 ## Use gpt-3.5-turbo model for the main chat
 #AIDER_35TURBO=
 
+## Use deepseek/deepseek-coder model for the main chat
+#AIDER_DEEPSEEK=
+
 #################
 # Model Settings:
@@ -171,8 +177,11 @@
 ## Attribute aider commits in the git committer name (default: True)
 #AIDER_ATTRIBUTE_COMMITTER=true
 
-## Prefix commit messages with 'aider: ' (default: False)
-#AIDER_ATTRIBUTE_COMMIT_MESSAGE=false
+## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
+#AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR=false
+
+## Prefix all commit messages with 'aider: ' (default: False)
+#AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false
 
 ## Commit all pending changes with a suitable commit message, then exit
 #AIDER_COMMIT=false
@@ -210,6 +219,9 @@
 ## specify a file to edit (can be used multiple times)
 #AIDER_FILE=
 
+## specify a read-only file (can be used multiple times)
+#AIDER_READ=
+
 ## Use VI editing mode in the terminal (default: False)
 #AIDER_VIM=false


@@ -77,12 +77,18 @@ cog.outl("```")
 ## Use gpt-4o model for the main chat
 #4o: false
 
+## Use gpt-4o-mini model for the main chat
+#mini: false
+
 ## Use gpt-4-1106-preview model for the main chat
 #4-turbo: false
 
 ## Use gpt-3.5-turbo model for the main chat
 #35turbo: false
 
+## Use deepseek/deepseek-coder model for the main chat
+#deepseek: false
+
 #################
 # Model Settings:
@@ -206,8 +212,11 @@ cog.outl("```")
 ## Attribute aider commits in the git committer name (default: True)
 #attribute-committer: true
 
-## Prefix commit messages with 'aider: ' (default: False)
-#attribute-commit-message: false
+## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
+#attribute-commit-message-author: false
+
+## Prefix all commit messages with 'aider: ' (default: False)
+#attribute-commit-message-committer: false
 
 ## Commit all pending changes with a suitable commit message, then exit
 #commit: false
@@ -245,6 +254,9 @@ cog.outl("```")
 ## specify a file to edit (can be used multiple times)
 #file:
 
+## specify a read-only file (can be used multiple times)
+#read:
+
 ## Use VI editing mode in the terminal (default: False)
 #vim: false


@@ -84,12 +84,18 @@ cog.outl("```")
 ## Use gpt-4o model for the main chat
 #AIDER_4O=
 
+## Use gpt-4o-mini model for the main chat
+#AIDER_MINI=
+
 ## Use gpt-4-1106-preview model for the main chat
 #AIDER_4_TURBO=
 
 ## Use gpt-3.5-turbo model for the main chat
 #AIDER_35TURBO=
 
+## Use deepseek/deepseek-coder model for the main chat
+#AIDER_DEEPSEEK=
+
 #################
 # Model Settings:
@@ -213,8 +219,11 @@ cog.outl("```")
 ## Attribute aider commits in the git committer name (default: True)
 #AIDER_ATTRIBUTE_COMMITTER=true
 
-## Prefix commit messages with 'aider: ' (default: False)
-#AIDER_ATTRIBUTE_COMMIT_MESSAGE=false
+## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
+#AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR=false
+
+## Prefix all commit messages with 'aider: ' (default: False)
+#AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false
 
 ## Commit all pending changes with a suitable commit message, then exit
 #AIDER_COMMIT=false
@@ -252,6 +261,9 @@ cog.outl("```")
 ## specify a file to edit (can be used multiple times)
 #AIDER_FILE=
 
+## specify a read-only file (can be used multiple times)
+#AIDER_READ=
+
 ## Use VI editing mode in the terminal (default: False)
 #AIDER_VIM=false


@@ -26,8 +26,8 @@ cog.out(get_md_help())
 ]]]-->
 ```
 usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
-             [--opus] [--sonnet] [--4] [--4o] [--4-turbo]
-             [--35turbo] [--models] [--openai-api-base]
+             [--opus] [--sonnet] [--4] [--4o] [--mini] [--4-turbo]
+             [--35turbo] [--deepseek] [--models] [--openai-api-base]
              [--openai-api-type] [--openai-api-version]
              [--openai-api-deployment-id] [--openai-organization-id]
              [--model-settings-file] [--model-metadata-file]
@@ -47,12 +47,13 @@ usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
              [--dirty-commits | --no-dirty-commits]
              [--attribute-author | --no-attribute-author]
              [--attribute-committer | --no-attribute-committer]
-             [--attribute-commit-message | --no-attribute-commit-message]
+             [--attribute-commit-message-author | --no-attribute-commit-message-author]
+             [--attribute-commit-message-committer | --no-attribute-commit-message-committer]
              [--commit] [--commit-prompt] [--dry-run | --no-dry-run]
              [--lint] [--lint-cmd] [--auto-lint | --no-auto-lint]
              [--test-cmd] [--auto-test | --no-auto-test] [--test]
-             [--file] [--vim] [--voice-language] [--version]
-             [--just-check-update]
+             [--file] [--read] [--vim] [--voice-language]
+             [--version] [--just-check-update]
              [--check-update | --no-check-update] [--apply] [--yes]
              [-v] [--show-repo-map] [--show-prompts] [--exit]
              [--message] [--message-file] [--encoding] [-c] [--gui]
@@ -100,6 +101,10 @@ Aliases:
 Use gpt-4o model for the main chat
 Environment variable: `AIDER_4O`
 
+### `--mini`
+Use gpt-4o-mini model for the main chat
+Environment variable: `AIDER_MINI`
+
 ### `--4-turbo`
 Use gpt-4-1106-preview model for the main chat
 Environment variable: `AIDER_4_TURBO`
@@ -113,6 +118,10 @@ Aliases:
 - `--3`
 - `-3`
 
+### `--deepseek`
+Use deepseek/deepseek-coder model for the main chat
+Environment variable: `AIDER_DEEPSEEK`
+
 ## Model Settings:
 
 ### `--models MODEL`
@@ -160,6 +169,9 @@ Aliases:
### `--edit-format EDIT_FORMAT` ### `--edit-format EDIT_FORMAT`
Specify what edit format the LLM should use (default depends on model) Specify what edit format the LLM should use (default depends on model)
Environment variable: `AIDER_EDIT_FORMAT` Environment variable: `AIDER_EDIT_FORMAT`
Aliases:
- `--edit-format EDIT_FORMAT`
- `--chat-mode EDIT_FORMAT`
### `--weak-model WEAK_MODEL` ### `--weak-model WEAK_MODEL`
Specify the model to use for commit messages and chat history summarization (default depends on --model) Specify the model to use for commit messages and chat history summarization (default depends on --model)
@ -327,13 +339,21 @@ Aliases:
- `--attribute-committer` - `--attribute-committer`
- `--no-attribute-committer` - `--no-attribute-committer`
### `--attribute-commit-message` ### `--attribute-commit-message-author`
Prefix commit messages with 'aider: ' (default: False) Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
Default: False Default: False
Environment variable: `AIDER_ATTRIBUTE_COMMIT_MESSAGE` Environment variable: `AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR`
Aliases: Aliases:
- `--attribute-commit-message` - `--attribute-commit-message-author`
- `--no-attribute-commit-message` - `--no-attribute-commit-message-author`
### `--attribute-commit-message-committer`
Prefix all commit messages with 'aider: ' (default: False)
Default: False
Environment variable: `AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER`
Aliases:
- `--attribute-commit-message-committer`
- `--no-attribute-commit-message-committer`
### `--commit` ### `--commit`
Commit all pending changes with a suitable commit message, then exit Commit all pending changes with a suitable commit message, then exit
@ -396,6 +416,10 @@ Environment variable: `AIDER_TEST`
specify a file to edit (can be used multiple times) specify a file to edit (can be used multiple times)
Environment variable: `AIDER_FILE` Environment variable: `AIDER_FILE`
### `--read FILE`
specify a read-only file (can be used multiple times)
Environment variable: `AIDER_READ`
### `--vim` ### `--vim`
Use VI editing mode in the terminal (default: False) Use VI editing mode in the terminal (default: False)
Default: False Default: False

View file

@ -73,7 +73,7 @@ cd aider
# It's recommended to make a virtual environment # It's recommended to make a virtual environment
# Install the dependencies listed in the `requirements.txt` file: # Install the dependencies listed in the `requirements.txt` file:
pip install -e . python -m pip install -e .
# Run the local version of Aider: # Run the local version of Aider:
python -m aider python -m aider

View file

@ -55,5 +55,9 @@ Aider marks commits that it either authored or committed.
You can use `--no-attribute-author` and `--no-attribute-committer` to disable You can use `--no-attribute-author` and `--no-attribute-committer` to disable
modification of the git author and committer name fields. modification of the git author and committer name fields.
Additionally, you can use `--attribute-commit-message` to prefix commit messages with 'aider: '. Additionally, you can use the following options to prefix commit messages:
This option is disabled by default, but can be useful for easily identifying commits made by aider.
- `--attribute-commit-message-author`: Prefix commit messages with 'aider: ' if aider authored the changes.
- `--attribute-commit-message-committer`: Prefix all commit messages with 'aider: ', regardless of whether aider authored the changes or not.
Both of these options are disabled by default, but can be useful for easily identifying changes made by aider.

View file

@ -15,6 +15,8 @@ for more details,
or the or the
[usage instructions](https://aider.chat/docs/usage.html) to start coding with aider. [usage instructions](https://aider.chat/docs/usage.html) to start coding with aider.
{% include python-m-aider.md %}
<div class="video-container"> <div class="video-container">
<video controls poster="/assets/install.jpg"> <video controls poster="/assets/install.jpg">
<source src="/assets/install.mp4" type="video/mp4"> <source src="/assets/install.mp4" type="video/mp4">

View file

@ -25,13 +25,7 @@ To work with Anthropic's models like Claude 3.5 Sonnet you need a paid
[Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api). [Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api).
{: .tip } {% include venv-pipx.md %}
Using a Python
[virtual environment](https://docs.python.org/3/library/venv.html){:target="_blank"}
is recommended.
Or, you could
[use pipx to install aider](/docs/install/pipx.html)
once for your whole system.
## Mac/Linux install ## Mac/Linux install
@ -59,10 +53,7 @@ $ aider --4o --openai-api-key sk-xxx...
$ aider --sonnet --anthropic-api-key sk-xxx... $ aider --sonnet --anthropic-api-key sk-xxx...
``` ```
{: .tip } {% include python-m-aider.md %}
In some environments the `aider` command may not be available
on your shell path.
You can also run aider like this: `python -m aider`
## Working with other LLMs ## Working with other LLMs

View file

@ -8,7 +8,7 @@ nav_order: 100
If you are using aider to work on a python project, sometimes your project will require If you are using aider to work on a python project, sometimes your project will require
specific versions of python packages which conflict with the versions that aider specific versions of python packages which conflict with the versions that aider
requires. requires.
If this happens, the `pip install` command may return errors like these: If this happens, the `python -m pip install` command may return errors like these:
``` ```
aider-chat 0.23.0 requires somepackage==X.Y.Z, but you have somepackage U.W.V which is incompatible. aider-chat 0.23.0 requires somepackage==X.Y.Z, but you have somepackage U.W.V which is incompatible.

View file

@ -5,7 +5,7 @@ description: Aider supports pretty much all popular coding languages.
--- ---
# Supported languages # Supported languages
Aider supports almost all popular coding languages. Aider should work well with most popular coding languages.
This is because top LLMs are fluent in most mainstream languages, This is because top LLMs are fluent in most mainstream languages,
and familiar with popular libraries, packages and frameworks. and familiar with popular libraries, packages and frameworks.
@ -20,8 +20,6 @@ a [repository map](https://aider.chat/docs/repomap.html).
Aider can currently produce repository maps for many popular Aider can currently produce repository maps for many popular
mainstream languages, listed below. mainstream languages, listed below.
Aider should work quite well for other languages, even those
without repo map or linter support.
<!--[[[cog <!--[[[cog
from aider.repomap import get_supported_languages_md from aider.repomap import get_supported_languages_md
@ -82,3 +80,30 @@ cog.out(get_supported_languages_md())
<!--[[[end]]]--> <!--[[[end]]]-->
## How to add support for another language
Aider should work quite well for other languages, even those
without repo map or linter support.
You should really try coding with aider before
assuming it needs better support for your language.
That said, if aider already has support for linting your language,
then it should be possible to add repo map support.
To build a repo map, aider needs the `tags.scm` file
from the given language's tree-sitter grammar.
If you can find and share that file in a
[GitHub issue](https://github.com/paul-gauthier/aider/issues),
then it may be possible to add repo map support.
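For context, a `tags.scm` file pairs tree-sitter syntax patterns with capture names that mark definitions and references. A minimal sketch of what such an entry looks like, modeled on the conventions of the Python grammar's tags query (the exact node and field names vary by grammar, so treat this as illustrative only):

```scheme
; Capture function definitions...
(function_definition
  name: (identifier) @name.definition.function) @definition.function

; ...and call sites that reference them.
(call
  function: (identifier) @name.reference.call) @reference.call
```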
If aider doesn't support linting, it will be complicated to
add linting and repo map support.
That is because aider relies on
[py-tree-sitter-languages](https://github.com/grantjenks/py-tree-sitter-languages)
to provide pre-packaged versions of tree-sitter
parsers for many languages.
Aider needs to be easy for users to install in many environments,
and it is probably too complex to add dependencies on
additional individual tree-sitter parsers.

View file

@ -136,6 +136,16 @@ The model also has to successfully apply all its changes to the source file with
tr.selected { tr.selected {
color: #0056b3; color: #0056b3;
} }
table {
table-layout: fixed;
}
td, th {
word-wrap: break-word;
overflow-wrap: break-word;
}
td:nth-child(3), td:nth-child(4) {
font-size: 12px;
}
</style> </style>
## Code refactoring leaderboard ## Code refactoring leaderboard
@ -291,7 +301,7 @@ Submit results by opening a PR with edits to the
By Paul Gauthier, By Paul Gauthier,
last updated last updated
<!--[[[cog <!--[[[cog
import os import subprocess
import datetime import datetime
files = [ files = [
@ -300,11 +310,17 @@ files = [
'aider/website/_data/refactor_leaderboard.yml' 'aider/website/_data/refactor_leaderboard.yml'
] ]
mod_times = [os.path.getmtime(file) for file in files] def get_last_modified_date(file):
latest_mod_time = max(mod_times) result = subprocess.run(['git', 'log', '-1', '--format=%ct', file], capture_output=True, text=True)
mod_date = datetime.datetime.fromtimestamp(latest_mod_time) if result.returncode == 0:
cog.out(f"{mod_date.strftime('%B %d, %Y.')}") timestamp = int(result.stdout.strip())
return datetime.datetime.fromtimestamp(timestamp)
return datetime.datetime.min
mod_dates = [get_last_modified_date(file) for file in files]
latest_mod_date = max(mod_dates)
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
]]]--> ]]]-->
August 06, 2024. August 10, 2024.
<!--[[[end]]]--> <!--[[[end]]]-->
</p> </p>
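Pulled out of the new side of the hunk above, the updated cog helper stamps the page with the latest git commit date of the data files, instead of filesystem mtimes. A runnable sketch (the `files` list here is a placeholder, since the full list is truncated in the hunk, and an `OSError` guard is added so the snippet degrades gracefully where git is unavailable):

```python
import datetime
import subprocess


def get_last_modified_date(file):
    # Ask git for the unix timestamp (%ct) of the last commit touching `file`.
    try:
        result = subprocess.run(
            ["git", "log", "-1", "--format=%ct", file],
            capture_output=True,
            text=True,
        )
    except OSError:
        return datetime.datetime.min  # git itself is not installed
    if result.returncode == 0 and result.stdout.strip():
        return datetime.datetime.fromtimestamp(int(result.stdout.strip()))
    return datetime.datetime.min  # not a repo, or the file has no history


files = ["aider/website/_data/edit_leaderboard.yml"]  # placeholder list
latest = max(get_last_modified_date(f) for f in files)
print(latest.strftime("%B %d, %Y."))
```

Compared with `os.path.getmtime`, the commit date stays stable across fresh checkouts and CI builds, which is presumably why the "last updated" stamp was switched to it.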

View file

@ -14,7 +14,7 @@ Aider has some built in shortcuts for the most popular Anthropic models and
has been tested and benchmarked to work well with them: has been tested and benchmarked to work well with them:
``` ```
pip install aider-chat python -m pip install aider-chat
export ANTHROPIC_API_KEY=<key> # Mac/Linux export ANTHROPIC_API_KEY=<key> # Mac/Linux
setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx
@ -33,8 +33,8 @@ aider --models anthropic/
Anthropic has very low rate limits. Anthropic has very low rate limits.
You can access all the Anthropic models via You can access all the Anthropic models via
[OpenRouter](openrouter.md) [OpenRouter](openrouter.md)
without rate limits. or [Google Vertex AI](vertex.md)
For example: `aider --model openrouter/anthropic/claude-3.5-sonnet` with more generous rate limits.
You can use `aider --model <model-name>` to use any other Anthropic model. You can use `aider --model <model-name>` to use any other Anthropic model.
For example, if you want to use a specific version of Opus For example, if you want to use a specific version of Opus

View file

@ -8,7 +8,7 @@ nav_order: 500
Aider can connect to the OpenAI models on Azure. Aider can connect to the OpenAI models on Azure.
``` ```
pip install aider-chat python -m pip install aider-chat
# Mac/Linux: # Mac/Linux:
export AZURE_API_KEY=<key> export AZURE_API_KEY=<key>

View file

@ -13,7 +13,7 @@ You'll need a [Cohere API key](https://dashboard.cohere.com/welcome/login).
To use **Command-R+**: To use **Command-R+**:
``` ```
pip install aider-chat python -m pip install aider-chat
export COHERE_API_KEY=<key> # Mac/Linux export COHERE_API_KEY=<key> # Mac/Linux
setx COHERE_API_KEY <key> # Windows, restart shell after setx setx COHERE_API_KEY <key> # Windows, restart shell after setx

View file

@ -6,19 +6,15 @@ nav_order: 500
# DeepSeek # DeepSeek
Aider can connect to the DeepSeek.com API. Aider can connect to the DeepSeek.com API.
The DeepSeek Coder V2 model gets the top score on aider's code editing benchmark. The DeepSeek Coder V2 model has a top score on aider's code editing benchmark.
``` ```
pip install aider-chat python -m pip install aider-chat
export DEEPSEEK_API_KEY=<key> # Mac/Linux export DEEPSEEK_API_KEY=<key> # Mac/Linux
setx DEEPSEEK_API_KEY <key> # Windows, restart shell after setx setx DEEPSEEK_API_KEY <key> # Windows, restart shell after setx
# Use DeepSeek Coder V2 # Use DeepSeek Coder V2
aider --model deepseek/deepseek-coder aider --deepseek
``` ```
See the [model warnings](warnings.html)
section for information on warnings which will occur
when working with models that aider is not familiar with.

View file

@ -12,7 +12,7 @@ with code editing capability that's comparable to GPT-3.5.
You'll need a [Gemini API key](https://aistudio.google.com/app/u/2/apikey). You'll need a [Gemini API key](https://aistudio.google.com/app/u/2/apikey).
``` ```
pip install aider-chat python -m pip install aider-chat
export GEMINI_API_KEY=<key> # Mac/Linux export GEMINI_API_KEY=<key> # Mac/Linux
setx GEMINI_API_KEY <key> # Windows, restart shell after setx setx GEMINI_API_KEY <key> # Windows, restart shell after setx

View file

@ -13,7 +13,7 @@ You'll need a [Groq API key](https://console.groq.com/keys).
To use **Llama3 70B**: To use **Llama3 70B**:
``` ```
pip install aider-chat python -m pip install aider-chat
export GROQ_API_KEY=<key> # Mac/Linux export GROQ_API_KEY=<key> # Mac/Linux
setx GROQ_API_KEY <key> # Windows, restart shell after setx setx GROQ_API_KEY <key> # Windows, restart shell after setx

View file

@ -15,7 +15,7 @@ ollama pull <model>
ollama serve ollama serve
# In another terminal window... # In another terminal window...
pip install aider-chat python -m pip install aider-chat
export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx setx OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx

View file

@ -8,7 +8,7 @@ nav_order: 500
Aider can connect to any LLM which is accessible via an OpenAI compatible API endpoint. Aider can connect to any LLM which is accessible via an OpenAI compatible API endpoint.
``` ```
pip install aider-chat python -m pip install aider-chat
# Mac/Linux: # Mac/Linux:
export OPENAI_API_BASE=<endpoint> export OPENAI_API_BASE=<endpoint>

View file

@ -14,7 +14,7 @@ Aider has some built in shortcuts for the most popular OpenAI models and
has been tested and benchmarked to work well with them: has been tested and benchmarked to work well with them:
``` ```
pip install aider-chat python -m pip install aider-chat
export OPENAI_API_KEY=<key> # Mac/Linux export OPENAI_API_KEY=<key> # Mac/Linux
setx OPENAI_API_KEY <key> # Windows, restart shell after setx setx OPENAI_API_KEY <key> # Windows, restart shell after setx

View file

@ -9,7 +9,7 @@ Aider can connect to [models provided by OpenRouter](https://openrouter.ai/model
You'll need an [OpenRouter API key](https://openrouter.ai/keys). You'll need an [OpenRouter API key](https://openrouter.ai/keys).
``` ```
pip install aider-chat python -m pip install aider-chat
export OPENROUTER_API_KEY=<key> # Mac/Linux export OPENROUTER_API_KEY=<key> # Mac/Linux
setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx
@ -21,15 +21,15 @@ aider --model openrouter/<provider>/<model>
aider --models openrouter/ aider --models openrouter/
``` ```
In particular, Llama3 70B works well with aider, at low cost: In particular, many aider users access Sonnet via OpenRouter:
``` ```
pip install aider-chat python -m pip install aider-chat
export OPENROUTER_API_KEY=<key> # Mac/Linux export OPENROUTER_API_KEY=<key> # Mac/Linux
setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx
aider --model openrouter/meta-llama/llama-3-70b-instruct aider --model openrouter/anthropic/claude-3.5-sonnet
``` ```

View file

@ -77,6 +77,7 @@ cog.out(''.join(lines))
- FIREWORKS_API_KEY - FIREWORKS_API_KEY
- FRIENDLIAI_API_KEY - FRIENDLIAI_API_KEY
- GEMINI_API_KEY - GEMINI_API_KEY
- GITHUB_API_KEY
- GROQ_API_KEY - GROQ_API_KEY
- HUGGINGFACE_API_KEY - HUGGINGFACE_API_KEY
- MARITALK_API_KEY - MARITALK_API_KEY

View file

@ -0,0 +1,43 @@
---
parent: Connecting to LLMs
nav_order: 550
---
# Vertex AI
Aider can connect to models provided by Google Vertex AI.
You will need to install the
[gcloud CLI](https://cloud.google.com/sdk/docs/install) and [login](https://cloud.google.com/sdk/docs/initializing) with a GCP account
or service account with permission to use the Vertex AI API.
With your chosen login method, the gcloud CLI should automatically set the
`GOOGLE_APPLICATION_CREDENTIALS` environment variable which points to the credentials file.
To configure Aider to use the Vertex AI API, you need to set `VERTEXAI_PROJECT` (the GCP project ID)
and `VERTEXAI_LOCATION` (the GCP region) [environment variables for Aider](/docs/config/dotenv.html).
Note that Claude on Vertex AI is only available in certain GCP regions,
check [the model card](https://console.cloud.google.com/vertex-ai/publishers/anthropic/model-garden/claude-3-5-sonnet)
for your model to see which regions are supported.
Example `.env` file:
```
VERTEXAI_PROJECT=my-project
VERTEXAI_LOCATION=us-east5
```
Then you can run aider with the `--model` command line switch, like this:
```
aider --model vertex_ai/claude-3-5-sonnet@20240620
```
Or you can use the [yaml config](/docs/config/aider_conf.html) to set the model to any of the
models supported by Vertex AI.
Example `.aider.conf.yml` file:
```yaml
model: vertex_ai/claude-3-5-sonnet@20240620
```

View file

@ -74,6 +74,10 @@ coder.run("make a script that prints hello world")
# Send another instruction # Send another instruction
coder.run("make it say goodbye") coder.run("make it say goodbye")
# You can run in-chat "/" commands too
coder.run("/tokens")
``` ```
See the See the

View file

@ -0,0 +1,25 @@
---
parent: Troubleshooting
nav_order: 28
---
# Aider not found
In some environments the `aider` command may not be available
on your shell path.
This can occur because of permissions/security settings in your OS,
and often happens to Windows users.
You may see an error message like this:
> aider: The term 'aider' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Below is the most fail-safe way to install and run aider in these situations:
```
python -m pip install aider-chat
python -m aider
```
{% include venv-pipx.md %}

View file

@ -48,7 +48,7 @@ It also supports [connecting to almost any LLM](https://aider.chat/docs/llms.htm
Use the `--browser` switch to launch the browser version of aider: Use the `--browser` switch to launch the browser version of aider:
``` ```
pip install aider-chat python -m pip install aider-chat
export OPENAI_API_KEY=<key> # Mac/Linux export OPENAI_API_KEY=<key> # Mac/Linux
setx OPENAI_API_KEY <key> # Windows, restart shell after setx setx OPENAI_API_KEY <key> # Windows, restart shell after setx

View file

@ -15,27 +15,30 @@ cog.out(get_help_md())
|Command|Description| |Command|Description|
|:------|:----------| |:------|:----------|
| **/add** | Add files to the chat so GPT can edit them or review them in detail | | **/add** | Add files to the chat so GPT can edit them or review them in detail |
| **/add-clipboard-image** | Add an image from the clipboard to the chat |
| **/ask** | Ask questions about the code base without editing any files | | **/ask** | Ask questions about the code base without editing any files |
| **/chat-mode** | Switch to a new chat mode | | **/chat-mode** | Switch to a new chat mode |
| **/clear** | Clear the chat history | | **/clear** | Clear the chat history |
| **/clipboard** | Add image/text from the clipboard to the chat (optionally provide a name for the image) |
| **/code** | Ask for changes to your code |
| **/commit** | Commit edits to the repo made outside the chat (commit message optional) | | **/commit** | Commit edits to the repo made outside the chat (commit message optional) |
| **/diff** | Display the diff of the last aider commit | | **/diff** | Display the diff of changes since the last message |
| **/drop** | Remove files from the chat session to free up context space | | **/drop** | Remove files from the chat session to free up context space |
| **/exit** | Exit the application | | **/exit** | Exit the application |
| **/git** | Run a git command | | **/git** | Run a git command |
| **/help** | Ask questions about aider | | **/help** | Ask questions about aider |
| **/lint** | Lint and fix provided files or in-chat files if none provided | | **/lint** | Lint and fix provided files or in-chat files if none provided |
| **/ls** | List all known files and indicate which are included in the chat session | | **/ls** | List all known files and indicate which are included in the chat session |
| **/map** | Print out the current repository map |
| **/model** | Switch to a new LLM | | **/model** | Switch to a new LLM |
| **/models** | Search the list of available models | | **/models** | Search the list of available models |
| **/quit** | Exit the application | | **/quit** | Exit the application |
| **/read** | Add a file to the chat that is for reference, not to be edited |
| **/run** | Run a shell command and optionally add the output to the chat (alias: !) | | **/run** | Run a shell command and optionally add the output to the chat (alias: !) |
| **/test** | Run a shell command and add the output to the chat on non-zero exit code | | **/test** | Run a shell command and add the output to the chat on non-zero exit code |
| **/tokens** | Report on the number of tokens used by the current chat context | | **/tokens** | Report on the number of tokens used by the current chat context |
| **/undo** | Undo the last git commit if it was done by aider | | **/undo** | Undo the last git commit if it was done by aider |
| **/voice** | Record and transcribe voice input | | **/voice** | Record and transcribe voice input |
| **/web** | Use headless selenium to scrape a webpage and add the content to the chat | | **/web** | Scrape a webpage, convert to markdown and add to the chat |
<!--[[[end]]]--> <!--[[[end]]]-->

View file

@ -13,7 +13,20 @@ Aider has 3 different chat modes:
- `help` - Aider will answer questions about using aider, configuring, troubleshooting, etc. - `help` - Aider will answer questions about using aider, configuring, troubleshooting, etc.
By default, aider starts in "code" mode. As you are talking, you can By default, aider starts in "code" mode. As you are talking, you can
send messages in other modes using `/ask` and `/help` commands: send individual messages in a specific mode using `/code`, `/ask`, and `/help` commands:
Using `/code`, `/ask`, and `/help` applies just to that particular message.
Your next message will go back to the active mode (usually "code" mode by default).
You can switch the active mode in a sticky way
with the `/chat-mode <mode>` command:
```
/chat-mode code
/chat-mode ask
/chat-mode help
```
Or you can launch aider in one of the modes with the `--chat-mode <mode>` switch.
<div class="chat-transcript" markdown="1"> <div class="chat-transcript" markdown="1">
@ -33,17 +46,22 @@ This is the source code to the popular django package.
Run `aider --model ollama/<ollama-model>`. Run `aider --model ollama/<ollama-model>`.
See these docs for more info: https://aider.chat/docs/llms/ollama.html See these docs for more info: https://aider.chat/docs/llms/ollama.html
#### /code Add a function to calculate the factorial of a number
Certainly! I'll add a function to calculate the factorial of a number. Here's the change:
```python
def factorial(n):
if n == 0 or n == 1:
return 1
else:
return n * factorial(n - 1)
# Example usage
print(factorial(5)) # Output: 120
```
This function uses recursion to calculate the factorial. It's been added to your code.
</div> </div>
Using `/ask` and `/help` applies just to that particular message.
Your next message will go back to code mode.
You can switch between the modes in a sticky way
with the `/chat-mode <mode>` command:
```
/chat-mode code
/chat-mode ask
/chat-mode help
```

View file

@ -49,18 +49,18 @@ cog.out(open("aider/website/_includes/get-started.md").read())
You can get started quickly like this: You can get started quickly like this:
``` ```
$ pip install aider-chat python -m pip install aider-chat
# Change directory into a git repo # Change directory into a git repo
$ cd /to/your/git/repo cd /to/your/git/repo
# Work with Claude 3.5 Sonnet on your repo # Work with Claude 3.5 Sonnet on your repo
$ export ANTHROPIC_API_KEY=your-key-goes-here export ANTHROPIC_API_KEY=your-key-goes-here
$ aider aider
# Work with GPT-4o on your repo # Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here export OPENAI_API_KEY=your-key-goes-here
$ aider aider
``` ```
<!-- NOOP --> <!-- NOOP -->

67
pyproject.toml Normal file
View file

@ -0,0 +1,67 @@
# [[[cog
# from aider.help_pats import exclude_website_pats
# ]]]
# [[[end]]]
[project]
name = "aider-chat"
description = "Aider is AI pair programming in your terminal"
readme = "README.md"
classifiers = [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python",
"Topic :: Software Development",
]
requires-python = ">=3.9,<3.13"
dynamic = ["dependencies", "optional-dependencies", "version"]
[project.urls]
Homepage = "https://github.com/paul-gauthier/aider"
[project.scripts]
aider = "aider.main:main"
[tool.setuptools.dynamic]
version = { attr = "aider.__init__.__version__" }
dependencies = { file = "requirements.txt" }
[tool.setuptools.dynamic.optional-dependencies]
dev = { file = "requirements/requirements-dev.txt" }
help = { file = "requirements/requirements-help.txt" }
browser = { file = "requirements/requirements-browser.txt" }
playwright = { file = "requirements/requirements-playwright.txt" }
[tool.setuptools.packages.find]
include = ["aider*", "aider.website"]
[tool.setuptools.package-data]
"aider" = ["queries/*.scm"]
"aider.website" = ["**/*.md"]
[tool.setuptools.exclude-package-data]
"aider.website" = [
# [[[cog
# cog.out("\n".join(f' "{pat}",' for pat in exclude_website_pats))
# ]]]
"examples/**",
"_posts/**",
"HISTORY.md",
"docs/benchmarks*md",
"docs/ctags.md",
"docs/unified-diffs.md",
"docs/leaderboards/index.md",
"assets/**",
# [[[end]]]
]
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

View file

@ -4,9 +4,9 @@
# #
# pip-compile --output-file=requirements.txt requirements/requirements.in # pip-compile --output-file=requirements.txt requirements/requirements.in
# #
aiohappyeyeballs==2.3.4 aiohappyeyeballs==2.3.5
# via aiohttp # via aiohttp
aiohttp==3.10.0 aiohttp==3.10.3
# via litellm # via litellm
aiosignal==1.3.1 aiosignal==1.3.1
# via aiohttp # via aiohttp
@ -16,7 +16,7 @@ anyio==4.4.0
# via # via
# httpx # httpx
# openai # openai
attrs==23.2.0 attrs==24.2.0
# via # via
# aiohttp # aiohttp
# jsonschema # jsonschema
@ -30,7 +30,7 @@ certifi==2024.7.4
# httpcore # httpcore
# httpx # httpx
# requests # requests
cffi==1.16.0 cffi==1.17.0
# via # via
# sounddevice # sounddevice
# soundfile # soundfile
@ -48,7 +48,7 @@ distro==1.9.0
# via openai # via openai
filelock==3.15.4 filelock==3.15.4
# via huggingface-hub # via huggingface-hub
flake8==7.1.0 flake8==7.1.1
# via -r requirements/requirements.in # via -r requirements/requirements.in
frozenlist==1.4.1 frozenlist==1.4.1
# via # via
@ -84,13 +84,15 @@ importlib-resources==6.4.0
# via -r requirements/requirements.in # via -r requirements/requirements.in
jinja2==3.1.4 jinja2==3.1.4
# via litellm # via litellm
jiter==0.5.0
# via openai
jsonschema==4.23.0 jsonschema==4.23.0
# via # via
# -r requirements/requirements.in # -r requirements/requirements.in
# litellm # litellm
jsonschema-specifications==2023.12.1 jsonschema-specifications==2023.12.1
# via jsonschema # via jsonschema
litellm==1.42.9 litellm==1.43.9
# via -r requirements/requirements.in # via -r requirements/requirements.in
markdown-it-py==3.0.0 markdown-it-py==3.0.0
# via rich # via rich
@ -110,7 +112,7 @@ numpy==1.26.4
# via # via
# -r requirements/requirements.in # -r requirements/requirements.in
# scipy # scipy
openai==1.37.2 openai==1.40.6
# via litellm # via litellm
packaging==24.1 packaging==24.1
# via # via
@ -123,8 +125,10 @@ pathspec==0.12.1
pillow==10.4.0 pillow==10.4.0
# via -r requirements/requirements.in # via -r requirements/requirements.in
prompt-toolkit==3.0.47 prompt-toolkit==3.0.47
# via -r requirements/requirements.in # via
pycodestyle==2.12.0 # -r requirements/requirements.in
# pypager
pycodestyle==2.12.1
# via flake8 # via flake8
pycparser==2.22 pycparser==2.22
# via cffi # via cffi
@ -137,12 +141,18 @@ pydantic-core==2.20.1
pyflakes==3.2.0 pyflakes==3.2.0
# via flake8 # via flake8
pygments==2.18.0 pygments==2.18.0
# via rich # via
# pypager
# rich
pypager==3.0.1
# via -r requirements/requirements.in
pypandoc==1.13 pypandoc==1.13
# via -r requirements/requirements.in # via -r requirements/requirements.in
pyperclip==1.9.0
# via -r requirements/requirements.in
python-dotenv==1.0.1 python-dotenv==1.0.1
# via litellm # via litellm
pyyaml==6.0.1 pyyaml==6.0.2
# via # via
# -r requirements/requirements.in # -r requirements/requirements.in
# huggingface-hub # huggingface-hub
@ -159,7 +169,7 @@ requests==2.32.3
# tiktoken # tiktoken
rich==13.7.1 rich==13.7.1
# via -r requirements/requirements.in # via -r requirements/requirements.in
rpds-py==0.19.1 rpds-py==0.20.0
# via # via
# jsonschema # jsonschema
# referencing # referencing
@ -172,7 +182,7 @@ sniffio==1.3.1
# anyio # anyio
# httpx # httpx
# openai # openai
sounddevice==0.4.7 sounddevice==0.5.0
# via -r requirements/requirements.in # via -r requirements/requirements.in
soundfile==0.12.1 soundfile==0.12.1
# via -r requirements/requirements.in # via -r requirements/requirements.in
@@ -181,8 +191,10 @@ soupsieve==2.5
tiktoken==0.7.0 tiktoken==0.7.0
# via litellm # via litellm
tokenizers==0.19.1 tokenizers==0.19.1
# via litellm # via
tqdm==4.66.4 # -r requirements/requirements.in
# litellm
tqdm==4.66.5
# via # via
# huggingface-hub # huggingface-hub
# openai # openai
@@ -204,5 +216,5 @@ wcwidth==0.2.13
# via prompt-toolkit # via prompt-toolkit
yarl==1.9.4 yarl==1.9.4
# via aiohttp # via aiohttp
zipp==3.19.2 zipp==3.20.0
# via importlib-metadata # via importlib-metadata


@@ -4,9 +4,9 @@
# #
# pip-compile --output-file=requirements/requirements-browser.txt requirements/requirements-browser.in # pip-compile --output-file=requirements/requirements-browser.txt requirements/requirements-browser.in
# #
altair==5.3.0 altair==5.4.0
# via streamlit # via streamlit
attrs==23.2.0 attrs==24.2.0
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# jsonschema # jsonschema
@@ -64,10 +64,11 @@ mdurl==0.1.2
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# markdown-it-py # markdown-it-py
narwhals==1.3.0
# via altair
numpy==1.26.4 numpy==1.26.4
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# altair
# pandas # pandas
# pyarrow # pyarrow
# pydeck # pydeck
@@ -78,9 +79,7 @@ packaging==24.1
# altair # altair
# streamlit # streamlit
pandas==2.2.2 pandas==2.2.2
# via # via streamlit
# altair
# streamlit
pillow==10.4.0 pillow==10.4.0
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
@@ -112,7 +111,7 @@ rich==13.7.1
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# streamlit # streamlit
rpds-py==0.19.1 rpds-py==0.20.0
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# jsonschema # jsonschema
@@ -123,19 +122,18 @@ smmap==5.0.1
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# gitdb # gitdb
streamlit==1.37.0 streamlit==1.37.1
# via -r requirements/requirements-browser.in # via -r requirements/requirements-browser.in
tenacity==8.5.0 tenacity==8.5.0
# via streamlit # via streamlit
toml==0.10.2 toml==0.10.2
# via streamlit # via streamlit
toolz==0.12.1
# via altair
tornado==6.4.1 tornado==6.4.1
# via streamlit # via streamlit
typing-extensions==4.12.2 typing-extensions==4.12.2
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# altair
# streamlit # streamlit
tzdata==2024.1 tzdata==2024.1
# via pandas # via pandas
@@ -143,5 +141,5 @@ urllib3==2.2.2
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# requests # requests
watchdog==4.0.1 watchdog==4.0.2
# via -r requirements/requirements-browser.in # via -r requirements/requirements-browser.in


@@ -6,7 +6,7 @@
# #
alabaster==0.7.16 alabaster==0.7.16
# via sphinx # via sphinx
babel==2.15.0 babel==2.16.0
# via sphinx # via sphinx
build==1.2.1 build==1.2.1
# via pip-tools # via pip-tools
@@ -75,7 +75,7 @@ markupsafe==2.1.5
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# jinja2 # jinja2
matplotlib==3.9.1 matplotlib==3.9.2
# via -r requirements/requirements-dev.in # via -r requirements/requirements-dev.in
mdurl==0.1.2 mdurl==0.1.2
# via # via
@@ -137,7 +137,7 @@ python-dateutil==2.9.0.post0
# pandas # pandas
pytz==2024.1 pytz==2024.1
# via pandas # via pandas
pyyaml==6.0.1 pyyaml==6.0.2
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# pre-commit # pre-commit
@@ -191,7 +191,7 @@ urllib3==2.2.2
# requests # requests
virtualenv==20.26.3 virtualenv==20.26.3
# via pre-commit # via pre-commit
wheel==0.43.0 wheel==0.44.0
# via pip-tools # via pip-tools
# The following packages are considered to be unsafe in a requirements file: # The following packages are considered to be unsafe in a requirements file:


@@ -4,11 +4,11 @@
# #
# pip-compile --output-file=requirements/requirements-help.txt requirements/requirements-help.in # pip-compile --output-file=requirements/requirements-help.txt requirements/requirements-help.in
# #
aiohappyeyeballs==2.3.4 aiohappyeyeballs==2.3.5
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# aiohttp # aiohttp
aiohttp==3.10.0 aiohttp==3.10.3
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# huggingface-hub # huggingface-hub
@@ -26,7 +26,7 @@ anyio==4.4.0
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# httpx # httpx
# openai # openai
attrs==23.2.0 attrs==24.2.0
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# aiohttp # aiohttp
@@ -104,15 +104,19 @@ jinja2==3.1.4
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# torch # torch
jiter==0.5.0
# via
# -c requirements/../requirements.txt
# openai
joblib==1.4.2 joblib==1.4.2
# via # via
# nltk # nltk
# scikit-learn # scikit-learn
llama-index-core==0.10.59 llama-index-core==0.10.65
# via # via
# -r requirements/requirements-help.in # -r requirements/requirements-help.in
# llama-index-embeddings-huggingface # llama-index-embeddings-huggingface
llama-index-embeddings-huggingface==0.2.2 llama-index-embeddings-huggingface==0.2.3
# via -r requirements/requirements-help.in # via -r requirements/requirements-help.in
markupsafe==2.1.5 markupsafe==2.1.5
# via # via
@@ -138,7 +142,7 @@ networkx==3.2.1
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# llama-index-core # llama-index-core
# torch # torch
nltk==3.8.1 nltk==3.8.2
# via llama-index-core # via llama-index-core
numpy==1.26.4 numpy==1.26.4
# via # via
@@ -149,7 +153,7 @@ numpy==1.26.4
# scipy # scipy
# sentence-transformers # sentence-transformers
# transformers # transformers
openai==1.37.2 openai==1.40.6
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# llama-index-core # llama-index-core
@@ -178,7 +182,7 @@ python-dateutil==2.9.0.post0
# via pandas # via pandas
pytz==2024.1 pytz==2024.1
# via pandas # via pandas
pyyaml==6.0.1 pyyaml==6.0.2
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# huggingface-hub # huggingface-hub
@@ -197,7 +201,7 @@ requests==2.32.3
# llama-index-core # llama-index-core
# tiktoken # tiktoken
# transformers # transformers
safetensors==0.4.3 safetensors==0.4.4
# via transformers # via transformers
scikit-learn==1.5.1 scikit-learn==1.5.1
# via sentence-transformers # via sentence-transformers
@@ -216,11 +220,11 @@ sniffio==1.3.1
# anyio # anyio
# httpx # httpx
# openai # openai
sqlalchemy[asyncio]==2.0.31 sqlalchemy[asyncio]==2.0.32
# via # via
# llama-index-core # llama-index-core
# sqlalchemy # sqlalchemy
sympy==1.13.1 sympy==1.13.2
# via torch # via torch
tenacity==8.5.0 tenacity==8.5.0
# via llama-index-core # via llama-index-core
@@ -236,7 +240,7 @@ tokenizers==0.19.1
# transformers # transformers
torch==2.2.2 torch==2.2.2
# via sentence-transformers # via sentence-transformers
tqdm==4.66.4 tqdm==4.66.5
# via # via
# -c requirements/../requirements.txt # -c requirements/../requirements.txt
# huggingface-hub # huggingface-hub
@@ -245,7 +249,7 @@ tqdm==4.66.4
# openai # openai
# sentence-transformers # sentence-transformers
# transformers # transformers
transformers==4.43.3 transformers==4.44.0
# via sentence-transformers # via sentence-transformers
typing-extensions==4.12.2 typing-extensions==4.12.2
# via # via


@@ -6,7 +6,7 @@
# #
greenlet==3.0.3 greenlet==3.0.3
# via playwright # via playwright
playwright==1.45.1 playwright==1.46.0
# via -r requirements/requirements-playwright.in # via -r requirements/requirements-playwright.in
pyee==11.1.0 pyee==11.1.0
# via playwright # via playwright

View file

@@ -22,6 +22,8 @@ pypandoc
litellm litellm
flake8 flake8
importlib_resources importlib_resources
pyperclip
pypager
# The proper dependency is networkx[default], but this brings # The proper dependency is networkx[default], but this brings
# in matplotlib and a bunch of other deps # in matplotlib and a bunch of other deps
@@ -46,3 +48,7 @@ importlib-metadata<8.0.0
# Because sentence-transformers doesn't like >=2 # Because sentence-transformers doesn't like >=2
numpy<2 numpy<2
# Going past this makes dependencies unresolvable
# Seems to be caused by sentence-transformers
tokenizers==0.19.1


@@ -2,6 +2,7 @@
import argparse import argparse
import subprocess import subprocess
import sys
from collections import defaultdict from collections import defaultdict
from datetime import datetime from datetime import datetime
from operator import itemgetter from operator import itemgetter
@@ -17,10 +18,14 @@ def blame(start_tag, end_tag=None):
authors = get_commit_authors(commits) authors = get_commit_authors(commits)
pats = "*.py *.scm *.sh **Dockerfile **Gemfile .github/workflows/*.yml".split() revision = end_tag if end_tag else "HEAD"
files = [] files = run(["git", "ls-tree", "-r", "--name-only", revision]).strip().split("\n")
for pat in pats: files = [
files += run(["git", "ls-files", pat]).strip().split("\n") f
for f in files
if f.endswith((".py", ".scm", ".sh", "Dockerfile", "Gemfile"))
or (f.startswith(".github/workflows/") and f.endswith(".yml"))
]
all_file_counts = {} all_file_counts = {}
grand_total = defaultdict(int) grand_total = defaultdict(int)
@@ -186,10 +191,14 @@ def get_counts_for_file(start_tag, end_tag, authors, fname):
line_counts[author] += 1 line_counts[author] += 1
return dict(line_counts) return dict(line_counts)
except subprocess.CalledProcessError: except subprocess.CalledProcessError as e:
# print(f"Warning: Unable to blame file {fname}. It may have been added after {start_tag} " if "no such path" in str(e).lower():
# f"or removed before {end_tag or 'HEAD'}.", file=sys.stderr) # File doesn't exist in this revision range, which is okay
return None return None
else:
# Some other error occurred
print(f"Warning: Unable to blame file {fname}. Error: {e}", file=sys.stderr)
return None
def get_all_tags_since(start_tag): def get_all_tags_since(start_tag):


@@ -9,22 +9,36 @@ import sys
from packaging import version from packaging import version
def check_cog_pyproject():
result = subprocess.run(["cog", "--check", "pyproject.toml"], capture_output=True, text=True)
if result.returncode != 0:
print("Error: cog --check pyproject.toml failed, updating.")
subprocess.run(["cog", "-r", "pyproject.toml"])
sys.exit(1)
def main(): def main():
parser = argparse.ArgumentParser(description="Bump version") parser = argparse.ArgumentParser(description="Bump version")
parser.add_argument("new_version", help="New version in x.y.z format") parser.add_argument("new_version", help="New version in x.y.z format")
parser.add_argument( parser.add_argument(
"--dry-run", action="store_true", help="Print each step without actually executing them" "--dry-run", action="store_true", help="Print each step without actually executing them"
) )
# Function to check if we are on the main branch # Function to check if we are on the main branch
def check_branch(): def check_branch():
branch = subprocess.run(["git", "rev-parse", "--abbrev-ref", "HEAD"], capture_output=True, text=True).stdout.strip() branch = subprocess.run(
["git", "rev-parse", "--abbrev-ref", "HEAD"], capture_output=True, text=True
).stdout.strip()
if branch != "main": if branch != "main":
print("Error: Not on the main branch.") print("Error: Not on the main branch.")
sys.exit(1) sys.exit(1)
# Function to check if the working directory is clean # Function to check if the working directory is clean
def check_working_directory_clean(): def check_working_directory_clean():
status = subprocess.run(["git", "status", "--porcelain"], capture_output=True, text=True).stdout status = subprocess.run(
["git", "status", "--porcelain"], capture_output=True, text=True
).stdout
if status: if status:
print("Error: Working directory is not clean.") print("Error: Working directory is not clean.")
sys.exit(1) sys.exit(1)
@@ -32,19 +46,33 @@ def main():
# Function to fetch the latest changes and check if the main branch is up to date # Function to fetch the latest changes and check if the main branch is up to date
def check_main_branch_up_to_date(): def check_main_branch_up_to_date():
subprocess.run(["git", "fetch", "origin"], check=True) subprocess.run(["git", "fetch", "origin"], check=True)
local_main = subprocess.run(["git", "rev-parse", "main"], capture_output=True, text=True).stdout.strip() local_main = subprocess.run(
["git", "rev-parse", "main"], capture_output=True, text=True
).stdout.strip()
print(f"Local main commit hash: {local_main}") print(f"Local main commit hash: {local_main}")
origin_main = subprocess.run(["git", "rev-parse", "origin/main"], capture_output=True, text=True).stdout.strip() origin_main = subprocess.run(
["git", "rev-parse", "origin/main"], capture_output=True, text=True
).stdout.strip()
print(f"Origin main commit hash: {origin_main}") print(f"Origin main commit hash: {origin_main}")
if local_main != origin_main: if local_main != origin_main:
local_date = subprocess.run(["git", "show", "-s", "--format=%ci", "main"], capture_output=True, text=True).stdout.strip() local_date = subprocess.run(
origin_date = subprocess.run(["git", "show", "-s", "--format=%ci", "origin/main"], capture_output=True, text=True).stdout.strip() ["git", "show", "-s", "--format=%ci", "main"], capture_output=True, text=True
).stdout.strip()
origin_date = subprocess.run(
["git", "show", "-s", "--format=%ci", "origin/main"], capture_output=True, text=True
).stdout.strip()
local_date = datetime.datetime.strptime(local_date, "%Y-%m-%d %H:%M:%S %z") local_date = datetime.datetime.strptime(local_date, "%Y-%m-%d %H:%M:%S %z")
origin_date = datetime.datetime.strptime(origin_date, "%Y-%m-%d %H:%M:%S %z") origin_date = datetime.datetime.strptime(origin_date, "%Y-%m-%d %H:%M:%S %z")
if local_date < origin_date: if local_date < origin_date:
print("Error: The local main branch is behind origin/main. Please pull the latest changes.") print(
"Error: The local main branch is behind origin/main. Please pull the latest"
" changes."
)
elif local_date > origin_date: elif local_date > origin_date:
print("Error: The origin/main branch is behind the local main branch. Please push your changes.") print(
"Error: The origin/main branch is behind the local main branch. Please push"
" your changes."
)
else: else:
print("Error: The main branch and origin/main have diverged.") print("Error: The main branch and origin/main have diverged.")
sys.exit(1) sys.exit(1)
@@ -53,6 +81,7 @@ def main():
dry_run = args.dry_run dry_run = args.dry_run
# Perform checks before proceeding # Perform checks before proceeding
check_cog_pyproject()
check_branch() check_branch()
check_working_directory_clean() check_working_directory_clean()
check_main_branch_up_to_date() check_main_branch_up_to_date()


@@ -1,73 +0,0 @@
import re
from pathlib import Path
from setuptools import find_packages, setup
from aider import __version__
from aider.help_pats import exclude_website_pats
def get_requirements(suffix=""):
if suffix:
fname = "requirements-" + suffix + ".txt"
fname = Path("requirements") / fname
else:
fname = Path("requirements.txt")
requirements = fname.read_text().splitlines()
return requirements
requirements = get_requirements()
# README
with open("README.md", "r", encoding="utf-8") as f:
long_description = f.read()
long_description = re.sub(r"\n!\[.*\]\(.*\)", "", long_description)
# long_description = re.sub(r"\n- \[.*\]\(.*\)", "", long_description)
# Discover packages, plus the website
packages = find_packages(exclude=["benchmark", "tests"])
packages += ["aider.website"]
print("Packages:", packages)
extras = "dev help browser playwright".split()
setup(
name="aider-chat",
version=__version__,
packages=packages,
include_package_data=True,
package_data={
"aider": ["queries/*.scm"],
"aider.website": ["**/*.md"],
},
exclude_package_data={"aider.website": exclude_website_pats},
install_requires=requirements,
extras_require={extra: get_requirements(extra) for extra in extras},
python_requires=">=3.9,<3.13",
entry_points={
"console_scripts": [
"aider = aider.main:main",
],
},
description="Aider is AI pair programming in your terminal",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/paul-gauthier/aider",
classifiers=[
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python",
"Topic :: Software Development",
],
)


@@ -189,6 +189,33 @@ class TestCoder(unittest.TestCase):
self.assertEqual(coder.abs_fnames, set([str(fname.resolve())])) self.assertEqual(coder.abs_fnames, set([str(fname.resolve())]))
def test_check_for_file_mentions_read_only(self):
with GitTemporaryDirectory():
io = InputOutput(
pretty=False,
yes=True,
)
coder = Coder.create(self.GPT35, None, io)
fname = Path("readonly_file.txt")
fname.touch()
coder.abs_read_only_fnames.add(str(fname.resolve()))
# Mock the get_tracked_files method
mock = MagicMock()
mock.return_value = set([str(fname)])
coder.repo.get_tracked_files = mock
# Call the check_for_file_mentions method
result = coder.check_for_file_mentions(f"Please check {fname}!")
# Assert that the method returns None (user not asked to add the file)
self.assertIsNone(result)
# Assert that abs_fnames is still empty (file not added)
self.assertEqual(coder.abs_fnames, set())
def test_check_for_subdir_mention(self): def test_check_for_subdir_mention(self):
with GitTemporaryDirectory(): with GitTemporaryDirectory():
io = InputOutput(pretty=False, yes=True) io = InputOutput(pretty=False, yes=True)
@@ -259,7 +286,7 @@ class TestCoder(unittest.TestCase):
files = [file1, file2] files = [file1, file2]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files, pretty=False) coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = "ok" coder.partial_response_content = "ok"
@@ -286,7 +313,7 @@ class TestCoder(unittest.TestCase):
files = [file1, file2] files = [file1, file2]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files, pretty=False) coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = "ok" coder.partial_response_content = "ok"
@@ -377,7 +404,7 @@ class TestCoder(unittest.TestCase):
fname = Path("file.txt") fname = Path("file.txt")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)], pretty=False) coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)])
self.assertTrue(fname.exists()) self.assertTrue(fname.exists())
@@ -434,9 +461,7 @@ new
fname1.write_text("ONE\n") fname1.write_text("ONE\n")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create( coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname1), str(fname2)])
self.GPT35, "diff", io=io, fnames=[str(fname1), str(fname2)], pretty=False
)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -489,7 +514,7 @@ TWO
fname2.write_text("OTHER\n") fname2.write_text("OTHER\n")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)], pretty=False) coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)])
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -567,7 +592,7 @@ three
repo.git.commit("-m", "initial") repo.git.commit("-m", "initial")
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)], pretty=False) coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)])
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -640,7 +665,7 @@ two
def test_check_for_urls(self): def test_check_for_urls(self):
io = InputOutput(yes=True) io = InputOutput(yes=True)
coder = Coder.create(self.GPT35, None, io=io, pretty=False) coder = Coder.create(self.GPT35, None, io=io)
coder.commands.scraper = MagicMock() coder.commands.scraper = MagicMock()
coder.commands.scraper.scrape = MagicMock(return_value="some content") coder.commands.scraper.scrape = MagicMock(return_value="some content")


@@ -10,7 +10,7 @@ from unittest import TestCase, mock
import git import git
from aider.coders import Coder from aider.coders import Coder
from aider.commands import Commands from aider.commands import Commands, SwitchCoder
from aider.dump import dump # noqa: F401 from aider.dump import dump # noqa: F401
from aider.io import InputOutput from aider.io import InputOutput
from aider.models import Model from aider.models import Model
@@ -537,6 +537,62 @@ class TestCommands(TestCase):
commands.cmd_add("file.txt") commands.cmd_add("file.txt")
self.assertEqual(coder.abs_fnames, set()) self.assertEqual(coder.abs_fnames, set())
def test_cmd_add_read_only_file(self):
with GitTemporaryDirectory():
# Initialize the Commands and InputOutput objects
io = InputOutput(pretty=False, yes=True)
from aider.coders import Coder
coder = Coder.create(self.GPT35, None, io)
commands = Commands(io, coder)
# Create a test file
test_file = Path("test_read_only.txt")
test_file.write_text("Test content")
# Add the file as read-only
commands.cmd_read(str(test_file))
# Verify it's in abs_read_only_fnames
self.assertTrue(
any(
os.path.samefile(str(test_file.resolve()), fname)
for fname in coder.abs_read_only_fnames
)
)
# Try to add the read-only file
commands.cmd_add(str(test_file))
# It's not in the repo, should not do anything
self.assertFalse(
any(os.path.samefile(str(test_file.resolve()), fname) for fname in coder.abs_fnames)
)
self.assertTrue(
any(
os.path.samefile(str(test_file.resolve()), fname)
for fname in coder.abs_read_only_fnames
)
)
repo = git.Repo()
repo.git.add(str(test_file))
repo.git.commit("-m", "initial")
# Try to add the read-only file
commands.cmd_add(str(test_file))
# Verify it's now in abs_fnames and not in abs_read_only_fnames
self.assertTrue(
any(os.path.samefile(str(test_file.resolve()), fname) for fname in coder.abs_fnames)
)
self.assertFalse(
any(
os.path.samefile(str(test_file.resolve()), fname)
for fname in coder.abs_read_only_fnames
)
)
def test_cmd_test_unbound_local_error(self): def test_cmd_test_unbound_local_error(self):
with ChdirTemporaryDirectory(): with ChdirTemporaryDirectory():
io = InputOutput(pretty=False, yes=False) io = InputOutput(pretty=False, yes=False)
@@ -731,6 +787,140 @@ class TestCommands(TestCase):
self.assertNotIn(fname2, str(coder.abs_fnames)) self.assertNotIn(fname2, str(coder.abs_fnames))
self.assertNotIn(fname3, str(coder.abs_fnames)) self.assertNotIn(fname3, str(coder.abs_fnames))
def test_cmd_read(self):
with GitTemporaryDirectory():
io = InputOutput(pretty=False, yes=False)
coder = Coder.create(self.GPT35, None, io)
commands = Commands(io, coder)
# Create a test file
test_file = Path("test_read.txt")
test_file.write_text("Test content")
# Test the /read command
commands.cmd_read(str(test_file))
# Check if the file was added to abs_read_only_fnames
self.assertTrue(
any(
os.path.samefile(str(test_file.resolve()), fname)
for fname in coder.abs_read_only_fnames
)
)
# Test dropping the read-only file
commands.cmd_drop(str(test_file))
# Check if the file was removed from abs_read_only_fnames
self.assertFalse(
any(
os.path.samefile(str(test_file.resolve()), fname)
for fname in coder.abs_read_only_fnames
)
)
def test_cmd_read_with_external_file(self):
with tempfile.NamedTemporaryFile(mode="w", delete=False) as external_file:
external_file.write("External file content")
external_file_path = external_file.name
try:
with GitTemporaryDirectory():
io = InputOutput(pretty=False, yes=False)
coder = Coder.create(self.GPT35, None, io)
commands = Commands(io, coder)
# Test the /read command with an external file
commands.cmd_read(external_file_path)
# Check if the external file was added to abs_read_only_fnames
real_external_file_path = os.path.realpath(external_file_path)
self.assertTrue(
any(
os.path.samefile(real_external_file_path, fname)
for fname in coder.abs_read_only_fnames
)
)
# Test dropping the external read-only file
commands.cmd_drop(Path(external_file_path).name)
# Check if the file was removed from abs_read_only_fnames
self.assertFalse(
any(
os.path.samefile(real_external_file_path, fname)
for fname in coder.abs_read_only_fnames
)
)
finally:
os.unlink(external_file_path)
def test_cmd_diff(self):
with GitTemporaryDirectory() as repo_dir:
repo = git.Repo(repo_dir)
io = InputOutput(pretty=False, yes=True)
coder = Coder.create(self.GPT35, None, io)
commands = Commands(io, coder)
# Create and commit a file
filename = "test_file.txt"
file_path = Path(repo_dir) / filename
file_path.write_text("Initial content\n")
repo.git.add(filename)
repo.git.commit("-m", "Initial commit\n")
# Modify the file to make it dirty
file_path.write_text("Modified content")
# Mock repo.get_commit_message to return a canned commit message
with mock.patch.object(
coder.repo, "get_commit_message", return_value="Canned commit message"
):
# Run cmd_commit
commands.cmd_commit()
# Capture the output of cmd_diff
with mock.patch("builtins.print") as mock_print:
commands.cmd_diff("")
# Check if the diff output is correct
mock_print.assert_called_with(mock.ANY)
diff_output = mock_print.call_args[0][0]
self.assertIn("-Initial content", diff_output)
self.assertIn("+Modified content", diff_output)
# Modify the file again
file_path.write_text("Further modified content")
# Run cmd_commit again
commands.cmd_commit()
# Capture the output of cmd_diff
with mock.patch("builtins.print") as mock_print:
commands.cmd_diff("")
# Check if the diff output is correct
mock_print.assert_called_with(mock.ANY)
diff_output = mock_print.call_args[0][0]
self.assertIn("-Modified content", diff_output)
self.assertIn("+Further modified content", diff_output)
# Modify the file a third time
file_path.write_text("Final modified content")
# Run cmd_commit again
commands.cmd_commit()
# Capture the output of cmd_diff
with mock.patch("builtins.print") as mock_print:
commands.cmd_diff("")
# Check if the diff output is correct
mock_print.assert_called_with(mock.ANY)
diff_output = mock_print.call_args[0][0]
self.assertIn("-Further modified content", diff_output)
self.assertIn("+Final modified content", diff_output)
def test_cmd_ask(self): def test_cmd_ask(self):
io = InputOutput(pretty=False, yes=True) io = InputOutput(pretty=False, yes=True)
coder = Coder.create(self.GPT35, None, io) coder = Coder.create(self.GPT35, None, io)
@@ -742,17 +932,12 @@ class TestCommands(TestCase):
with mock.patch("aider.coders.Coder.run") as mock_run: with mock.patch("aider.coders.Coder.run") as mock_run:
mock_run.return_value = canned_reply mock_run.return_value = canned_reply
commands.cmd_ask(question) with self.assertRaises(SwitchCoder):
commands.cmd_ask(question)
mock_run.assert_called_once() mock_run.assert_called_once()
mock_run.assert_called_once_with(question) mock_run.assert_called_once_with(question)
self.assertEqual(len(coder.cur_messages), 2)
self.assertEqual(coder.cur_messages[0]["role"], "user")
self.assertEqual(coder.cur_messages[0]["content"], question)
self.assertEqual(coder.cur_messages[1]["role"], "assistant")
self.assertEqual(coder.cur_messages[1]["content"], canned_reply)
def test_cmd_lint_with_dirty_file(self): def test_cmd_lint_with_dirty_file(self):
with GitTemporaryDirectory() as repo_dir: with GitTemporaryDirectory() as repo_dir:
repo = git.Repo(repo_dir) repo = git.Repo(repo_dir)

View file

@@ -297,7 +297,7 @@ These changes replace the `subprocess.run` patches with `subprocess.check_output
files = [file1] files = [file1]
# Initialize the Coder object with the mocked IO and mocked repo # Initialize the Coder object with the mocked IO and mocked repo
coder = Coder.create(self.GPT35, "diff", io=InputOutput(), fnames=files, pretty=False) coder = Coder.create(self.GPT35, "diff", io=InputOutput(), fnames=files)
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):
coder.partial_response_content = f""" coder.partial_response_content = f"""
@@ -340,7 +340,6 @@ new
io=InputOutput(dry_run=True), io=InputOutput(dry_run=True),
fnames=files, fnames=files,
dry_run=True, dry_run=True,
pretty=False,
) )
def mock_send(*args, **kwargs): def mock_send(*args, **kwargs):


@@ -149,17 +149,6 @@ class TestMain(TestCase):
_, kwargs = MockCoder.call_args _, kwargs = MockCoder.call_args
assert kwargs["dirty_commits"] is True assert kwargs["dirty_commits"] is True
assert kwargs["auto_commits"] is True assert kwargs["auto_commits"] is True
assert kwargs["pretty"] is True
with patch("aider.coders.Coder.create") as MockCoder:
main(["--no-pretty"], input=DummyInput())
_, kwargs = MockCoder.call_args
assert kwargs["pretty"] is False
with patch("aider.coders.Coder.create") as MockCoder:
main(["--pretty"], input=DummyInput())
_, kwargs = MockCoder.call_args
assert kwargs["pretty"] is True
with patch("aider.coders.Coder.create") as MockCoder: with patch("aider.coders.Coder.create") as MockCoder:
main(["--no-dirty-commits"], input=DummyInput()) main(["--no-dirty-commits"], input=DummyInput())
@@ -235,6 +224,15 @@ class TestMain(TestCase):
main(["--yes", fname, "--encoding", "iso-8859-15"]) main(["--yes", fname, "--encoding", "iso-8859-15"])
def test_main_exit_calls_version_check(self):
with GitTemporaryDirectory():
with patch("aider.main.check_version") as mock_check_version, patch(
"aider.main.InputOutput"
) as mock_input_output:
main(["--exit"], input=DummyInput(), output=DummyOutput())
mock_check_version.assert_called_once()
mock_input_output.assert_called_once()
@patch("aider.main.InputOutput") @patch("aider.main.InputOutput")
@patch("aider.coders.base_coder.Coder.run") @patch("aider.coders.base_coder.Coder.run")
def test_main_message_adds_to_input_history(self, mock_run, MockInputOutput): def test_main_message_adds_to_input_history(self, mock_run, MockInputOutput):
@@ -396,3 +394,36 @@ class TestMain(TestCase):
output=DummyOutput(), output=DummyOutput(),
) )
MockRepoMap.assert_called_once() MockRepoMap.assert_called_once()
def test_read_option(self):
with GitTemporaryDirectory():
test_file = "test_file.txt"
Path(test_file).touch()
coder = main(
["--read", test_file, "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
)
self.assertIn(str(Path(test_file).resolve()), coder.abs_read_only_fnames)
def test_read_option_with_external_file(self):
with tempfile.NamedTemporaryFile(mode="w", delete=False) as external_file:
external_file.write("External file content")
external_file_path = external_file.name
try:
with GitTemporaryDirectory():
coder = main(
["--read", external_file_path, "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
)
real_external_file_path = os.path.realpath(external_file_path)
self.assertIn(real_external_file_path, coder.abs_read_only_fnames)
finally:
os.unlink(external_file_path)


@@ -4,7 +4,7 @@ from unittest.mock import MagicMock, patch
import httpx import httpx
from aider.llm import litellm from aider.llm import litellm
from aider.sendchat import send_with_retries from aider.sendchat import simple_send_with_retries
class PrintCalled(Exception): class PrintCalled(Exception):
@ -14,7 +14,7 @@ class PrintCalled(Exception):
class TestSendChat(unittest.TestCase): class TestSendChat(unittest.TestCase):
@patch("litellm.completion") @patch("litellm.completion")
@patch("builtins.print") @patch("builtins.print")
def test_send_with_retries_rate_limit_error(self, mock_print, mock_completion): def test_simple_send_with_retries_rate_limit_error(self, mock_print, mock_completion):
mock = MagicMock() mock = MagicMock()
mock.status_code = 500 mock.status_code = 500
@ -29,19 +29,19 @@ class TestSendChat(unittest.TestCase):
None, None,
] ]
# Call the send_with_retries method # Call the simple_send_with_retries method
send_with_retries("model", ["message"], None, False) simple_send_with_retries("model", ["message"])
mock_print.assert_called_once() mock_print.assert_called_once()
@patch("litellm.completion") @patch("litellm.completion")
@patch("builtins.print") @patch("builtins.print")
def test_send_with_retries_connection_error(self, mock_print, mock_completion): def test_simple_send_with_retries_connection_error(self, mock_print, mock_completion):
# Set up the mock to raise # Set up the mock to raise
mock_completion.side_effect = [ mock_completion.side_effect = [
httpx.ConnectError("Connection error"), httpx.ConnectError("Connection error"),
None, None,
] ]
# Call the send_with_retries method # Call the simple_send_with_retries method
send_with_retries("model", ["message"], None, False) simple_send_with_retries("model", ["message"])
mock_print.assert_called_once() mock_print.assert_called_once()
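These tests drive the renamed helper with a `side_effect` list: the first completion call raises, the second succeeds, and the error is reported via `print`. A self-contained sketch of that retry shape (hypothetical names, not aider's actual implementation) looks like this:

```python
import time

def send_with_retry(send, messages, max_retries=2, delay=0.0):
    # hypothetical retry loop: report each transient failure, retry,
    # and re-raise once the retry budget is exhausted
    for attempt in range(max_retries + 1):
        try:
            return send(messages)
        except ConnectionError as err:
            print(f"retryable error: {err}")
            if attempt == max_retries:
                raise
            time.sleep(delay)

calls = []

def flaky_send(messages):
    # fails on the first call, succeeds on the second --
    # mirrors the side_effect list used in the tests above
    calls.append(messages)
    if len(calls) == 1:
        raise ConnectionError("transient")
    return "ok"

result = send_with_retry(flaky_send, ["message"])
```

Here `result` is `"ok"` after exactly two underlying calls, which is why the tests can assert `mock_print.assert_called_once()`: one failure, one report, then success.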

View file

@@ -288,9 +288,7 @@ after b
         files = [file1]

         # Initialize the Coder object with the mocked IO and mocked repo
-        coder = Coder.create(
-            self.GPT35, "whole", io=InputOutput(), fnames=files, stream=False, pretty=False
-        )
+        coder = Coder.create(self.GPT35, "whole", io=InputOutput(), fnames=files, stream=False)

         # no trailing newline so the response content below doesn't add ANOTHER newline
         new_content = "new\ntwo\nthree"

View file

@@ -22,7 +22,13 @@ class TestHelp(unittest.TestCase):
         help_coder_run = MagicMock(return_value="")
         aider.coders.HelpCoder.run = help_coder_run

-        commands.cmd_help("hi")
+        try:
+            commands.cmd_help("hi")
+        except aider.commands.SwitchCoder:
+            pass
+        else:
+            # If no exception was raised, fail the test
+            assert False, "SwitchCoder exception was not raised"

         help_coder_run.assert_called_once()

View file

@@ -35,7 +35,7 @@ class TestScrape(unittest.TestCase):
         self.commands.io.tool_error = mock_print_error

         # Run the cmd_web command
-        result = self.commands.cmd_web("https://example.com")
+        result = self.commands.cmd_web("https://example.com", paginate=False)

         # Assert that the result contains some content
         self.assertIsNotNone(result)
@@ -100,7 +100,7 @@ class TestScrape(unittest.TestCase):
         # Mock the necessary objects and methods
         scraper.scrape_with_playwright = MagicMock()
-        scraper.scrape_with_playwright.return_value = None
+        scraper.scrape_with_playwright.return_value = (None, None)

         # Call the scrape method
         result = scraper.scrape("https://example.com")
@@ -113,6 +113,54 @@ class TestScrape(unittest.TestCase):
             "Failed to retrieve content from https://example.com"
         )

+        # Reset the mock
+        mock_print_error.reset_mock()
+
+        # Test with a different return value
+        scraper.scrape_with_playwright.return_value = ("Some content", "text/html")
+        result = scraper.scrape("https://example.com")
+
+        # Assert that the result is not None
+        self.assertIsNotNone(result)
+
+        # Assert that print_error was not called
+        mock_print_error.assert_not_called()
+
+    def test_scrape_text_plain(self):
+        # Create a Scraper instance
+        scraper = Scraper(print_error=MagicMock(), playwright_available=True)
+
+        # Mock the scrape_with_playwright method
+        plain_text = "This is plain text content."
+        scraper.scrape_with_playwright = MagicMock(return_value=(plain_text, "text/plain"))
+
+        # Call the scrape method
+        result = scraper.scrape("https://example.com")
+
+        # Assert that the result is the same as the input plain text
+        self.assertEqual(result, plain_text)
+
+    def test_scrape_text_html(self):
+        # Create a Scraper instance
+        scraper = Scraper(print_error=MagicMock(), playwright_available=True)
+
+        # Mock the scrape_with_playwright method
+        html_content = "<html><body><h1>Test</h1><p>This is HTML content.</p></body></html>"
+        scraper.scrape_with_playwright = MagicMock(return_value=(html_content, "text/html"))
+
+        # Mock the html_to_markdown method
+        expected_markdown = "# Test\n\nThis is HTML content."
+        scraper.html_to_markdown = MagicMock(return_value=expected_markdown)
+
+        # Call the scrape method
+        result = scraper.scrape("https://example.com")
+
+        # Assert that the result is the expected markdown
+        self.assertEqual(result, expected_markdown)
+
+        # Assert that html_to_markdown was called with the HTML content
+        scraper.html_to_markdown.assert_called_once_with(html_content)
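Taken together, these tests pin down a dispatch on the MIME type that `scrape_with_playwright` now returns alongside the content: `None` content is an error, `text/plain` passes through unchanged, and `text/html` goes through markdown conversion. A self-contained sketch of that logic (the `html_to_markdown` here is a toy stand-in, not aider's converter):

```python
import re

def html_to_markdown(html):
    # toy stand-in: strip tags, keep the text
    return re.sub(r"<[^>]+>", "", html)

def scrape(fetch):
    # fetch() plays the role of scrape_with_playwright: (content, mime_type)
    content, mime_type = fetch()
    if content is None:
        return None  # caller reports "Failed to retrieve content ..."
    if mime_type and mime_type.startswith("text/html"):
        return html_to_markdown(content)
    return content  # text/plain and friends pass through untouched

assert scrape(lambda: (None, None)) is None
```

This is exactly the branch structure `test_scrape_text_plain` and `test_scrape_text_html` exercise from the outside by mocking `scrape_with_playwright` with different `(content, mime_type)` pairs.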
 if __name__ == "__main__":
     unittest.main()