Mirror of https://github.com/Aider-AI/aider.git (synced 2025-06-01 02:05:00 +00:00)

Merge pull request #564 from paul-gauthier/litellm

Commit a13abaccef — 36 changed files with 963 additions and 950 deletions
.github/workflows/docker-build-test.yml (vendored) — 2 changes

@@ -16,7 +16,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4

       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
.github/workflows/release.yml (vendored) — 4 changes

@@ -11,10 +11,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4

       - name: Set up Python
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
         with:
           python-version: 3.x
.github/workflows/ubuntu-tests.yml (vendored) — 4 changes

@@ -17,10 +17,10 @@ jobs:
     steps:
      - name: Check out repository
-       uses: actions/checkout@v3
+       uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
       with:
         python-version: ${{ matrix.python-version }}
.github/workflows/windows-tests.yml (vendored) — 4 changes

@@ -17,10 +17,10 @@ jobs:
     steps:
      - name: Check out repository
-       uses: actions/checkout@v3
+       uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
       with:
         python-version: ${{ matrix.python-version }}
README.md — 63 changes

@@ -1,13 +1,15 @@
 # aider is AI pair programming in your terminal

-Aider is a command line tool that lets you pair program with GPT-3.5/GPT-4,
+Aider is a command line tool that lets you pair program with LLMs,
 to edit code stored in your local git repository.
 Aider will directly edit the code in your local source files,
 and [git commit the changes](https://aider.chat/docs/faq.html#how-does-aider-use-git)
 with sensible commit messages.
 You can start a new project or work with an existing git repo.
 Aider is unique in that it lets you ask for changes to [pre-existing, larger codebases](https://aider.chat/docs/repomap.html).
+Aider works well with GPT 3.5, GPT-4, GPT-4 Turbo with Vision,
+and Claude 3 Opus; it also has support for [connecting to almost any LLM](https://aider.chat/docs/llms.html).

 <p align="center">
 <img src="assets/screencast.svg" alt="aider screencast">
@@ -42,40 +44,43 @@ get started quickly like this:
 ```
 $ pip install aider-chat

+# To work with GPT-4 Turbo:
 $ export OPENAI_API_KEY=your-key-goes-here
-$ aider hello.js
-
-Using git repo: .git
-Added hello.js to the chat.
-
-hello.js> write a js script that prints hello world
+$ aider
+
+# To work with Claude 3 Opus:
+$ export ANTHROPIC_API_KEY=your-key-goes-here
+$ aider --opus
 ```

 ## Example chat transcripts

 Here are some example transcripts that show how you can chat with `aider` to write and edit code with GPT-4.

-* [**Hello World Flask App**](https://aider.chat/examples/hello-world-flask.html): Start from scratch and have GPT create a simple Flask app with various endpoints, such as adding two numbers and calculating the Fibonacci sequence.
+* [**Hello World Flask App**](https://aider.chat/examples/hello-world-flask.html): Start from scratch and have aider create a simple Flask app with various endpoints, such as adding two numbers and calculating the Fibonacci sequence.

-* [**Javascript Game Modification**](https://aider.chat/examples/2048-game.html): Dive into an existing open-source repo, and get GPT's help to understand it and make modifications.
+* [**Javascript Game Modification**](https://aider.chat/examples/2048-game.html): Dive into an existing open-source repo, and get aider's help to understand it and make modifications.

-* [**Complex Multi-file Change with Debugging**](https://aider.chat/examples/complex-change.html): GPT makes a complex code change that is coordinated across multiple source files, and resolves bugs by reviewing error output and doc snippets.
+* [**Complex Multi-file Change with Debugging**](https://aider.chat/examples/complex-change.html): Aider makes a complex code change that is coordinated across multiple source files, and resolves bugs by reviewing error output and doc snippets.

-* [**Create a Black Box Test Case**](https://aider.chat/examples/add-test.html): GPT creates a "black box" test case without access to the source of the method being tested, using only a
+* [**Create a Black Box Test Case**](https://aider.chat/examples/add-test.html): Aider creates a "black box" test case without access to the source of the method being tested, using only a
 [high level map of the repository based on tree-sitter](https://aider.chat/docs/repomap.html).

 You can find more chat transcripts on the [examples page](https://aider.chat/examples/).

 ## Features

-* Chat with GPT about your code by launching `aider` from the command line with set of source files to discuss and edit together. Aider lets GPT see and edit the content of those files.
-* GPT can write and edit code in most popular languages: python, javascript, typescript, php, html, css, etc.
+* Chat with aider about your code by launching `aider` from the command line with set of source files to discuss and edit together. Aider lets the LLM see and edit the content of those files.
+* Aider can write and edit code in most popular languages: python, javascript, typescript, php, html, css, etc.
+* Aider works well with GPT 3.5, GPT-4, GPT-4 Turbo with Vision,
+and Claude 3 Opus; it also has support for [connecting to almost any LLM](https://aider.chat/docs/llms.html).
 * Request new features, changes, improvements, or bug fixes to your code. Ask for new test cases, updated documentation or code refactors.
-* Aider will apply the edits suggested by GPT directly to your source files.
+* Aider will apply the edits suggested by the LLM directly to your source files.
 * Aider will [automatically commit each changeset to your local git repo](https://aider.chat/docs/faq.html#how-does-aider-use-git) with a descriptive commit message. These frequent, automatic commits provide a safety net. It's easy to undo changes or use standard git workflows to manage longer sequences of changes.
-* You can use aider with multiple source files at once, so GPT can make coordinated code changes across all of them in a single changeset/commit.
-* Aider can [give *GPT-4* a map of your entire git repo](https://aider.chat/docs/repomap.html), which helps it understand and modify large codebases.
-* You can also edit files by hand using your editor while chatting with aider. Aider will notice these out-of-band edits and keep GPT up to date with the latest versions of your files. This lets you bounce back and forth between the aider chat and your editor, to collaboratively code with GPT.
+* You can use aider with multiple source files at once, so aider can make coordinated code changes across all of them in a single changeset/commit.
+* Aider can [give the LLM a map of your entire git repo](https://aider.chat/docs/repomap.html), which helps it understand and modify large codebases.
+* You can also edit files by hand using your editor while chatting with aider. Aider will notice these out-of-band edits and keep up to date with the latest versions of your files. This lets you bounce back and forth between the aider chat and your editor, to collaboratively code with an LLM.
 * If you are using gpt-4 through openai directly, you can add image files to your context which will automatically switch you to the gpt-4-vision-preview model
@@ -94,23 +99,23 @@ python -m aider.main <file1> <file2>
 ```

 Replace `<file1>`, `<file2>`, etc., with the paths to the source code files you want to work on.
-These files will be "added to the chat session", so that GPT can see their contents and edit them according to your instructions.
+These files will be "added to the chat session", so that the LLM can see their contents and edit them according to your instructions.

 You can also just launch `aider` anywhere in a git repo without naming
 files on the command line. It will discover all the files in the
 repo. You can then add and remove individual files in the chat
 session with the `/add` and `/drop` chat commands described below.
-If you or GPT mention one of the repo's filenames in the conversation,
+If you or the LLM mention one of the repo's filenames in the conversation,
 aider will ask if you'd like to add it to the chat.

 Think about the change you want to make and which files will need
 to be edited -- add those files to the chat.
 Don't add *all* the files in your repo to the chat.
-Be selective, and just add the files that GPT will need to edit.
-If you add a bunch of unrelated files, GPT can get overwhelmed
+Be selective, and just add the files that the LLM will need to edit.
+If you add a bunch of unrelated files, the LLM can get overwhelmed
 and confused (and it costs more tokens).
 Aider will automatically
-share snippets from other, related files with GPT so it can
+share snippets from other, related files with the LLM so it can
 [understand the rest of your code base](https://aider.chat/docs/repomap.html).

 Aider also has many
@@ -136,15 +141,15 @@ See the [full command docs](https://aider.chat/docs/commands.html) for more info

 ## Tips

 * Think about which files need to be edited to make your change and add them to the chat.
-Aider has some ability to help GPT figure out which files to edit all by itself, but the most effective approach is to explicitly add the needed files to the chat yourself.
-* Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
-* Use Control-C to safely interrupt GPT if it isn't providing a useful response. The partial response remains in the conversation, so you can refer to it when you reply to GPT with more information or direction.
-* Use the `/run` command to run tests, linters, etc and show the output to GPT so it can fix any issues.
+Aider has some ability to help the LLM figure out which files to edit all by itself, but the most effective approach is to explicitly add the needed files to the chat yourself.
+* Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk the LLM through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
+* Use Control-C to safely interrupt the LLM if it isn't providing a useful response. The partial response remains in the conversation, so you can refer to it when you reply to the LLM with more information or direction.
+* Use the `/run` command to run tests, linters, etc and show the output to the LLM so it can fix any issues.
 * Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages. Or enter `{` alone on the first line to start a multiline message and `}` alone on the last line to end it.
-* If your code is throwing an error, share the error output with GPT using `/run` or by pasting it into the chat. Let GPT figure out and fix the bug.
-* GPT knows about a lot of standard tools and libraries, but may get some of the fine details wrong about APIs and function arguments. You can paste doc snippets into the chat to resolve these issues.
-* GPT can only see the content of the files you specifically "add to the chat". Aider also sends GPT-4 a [map of your entire git repo](https://aider.chat/docs/repomap.html). So GPT may ask to see additional files if it feels that's needed for your requests.
-* I also shared some general [GPT coding tips on Hacker News](https://news.ycombinator.com/item?id=36211879).
+* If your code is throwing an error, share the error output with the LLM using `/run` or by pasting it into the chat. Let the LLM figure out and fix the bug.
+* LLMs know about a lot of standard tools and libraries, but may get some of the fine details wrong about APIs and function arguments. You can paste doc snippets into the chat to resolve these issues.
+* The LLM can only see the content of the files you specifically "add to the chat". Aider also sends a [map of your entire git repo](https://aider.chat/docs/repomap.html). So the LLM may ask to see additional files if it feels that's needed for your requests.
+* I also shared some general [LLM coding tips on Hacker News](https://news.ycombinator.com/item?id=36211879).

 ## Installation
@@ -15,19 +15,12 @@ using Aider's code editing benchmark suite.
 Claude 3 Opus outperforms all of OpenAI's models,
 making it the best available model for pair programming with AI.

-Aider currently supports Claude 3 Opus via
-[OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter):
+To use Claude 3 Opus with aider:

 ```
 # Install aider
 pip install aider-chat

-# Setup OpenRouter access
-export OPENAI_API_KEY=<your-openrouter-key>
-export OPENAI_API_BASE=https://openrouter.ai/api/v1
-
-# Run aider with Claude 3 Opus using the diff editing format
-aider --model anthropic/claude-3-opus --edit-format diff
+export ANTHROPIC_API_KEY=sk-...
+aider --opus
 ```

 ## Aider's code editing benchmark
@@ -42,7 +42,6 @@ def wrap_fence(name):


 class Coder:
-    client = None
     abs_fnames = None
     repo = None
     last_aider_commit_hash = None

@@ -62,39 +61,27 @@ class Coder:
         main_model=None,
         edit_format=None,
         io=None,
-        client=None,
-        skip_model_availabily_check=False,
         **kwargs,
     ):
         from . import EditBlockCoder, UnifiedDiffCoder, WholeFileCoder

         if not main_model:
-            main_model = models.Model.create(models.DEFAULT_MODEL_NAME)
-
-        if not skip_model_availabily_check and not main_model.always_available:
-            if not check_model_availability(io, client, main_model):
-                fallback_model = models.GPT35_0125
-                io.tool_error(
-                    f"API key does not support {main_model.name}, falling back to"
-                    f" {fallback_model.name}"
-                )
-                main_model = fallback_model
+            main_model = models.Model(models.DEFAULT_MODEL_NAME)

         if edit_format is None:
             edit_format = main_model.edit_format

         if edit_format == "diff":
-            return EditBlockCoder(client, main_model, io, **kwargs)
+            return EditBlockCoder(main_model, io, **kwargs)
         elif edit_format == "whole":
-            return WholeFileCoder(client, main_model, io, **kwargs)
+            return WholeFileCoder(main_model, io, **kwargs)
         elif edit_format == "udiff":
-            return UnifiedDiffCoder(client, main_model, io, **kwargs)
+            return UnifiedDiffCoder(main_model, io, **kwargs)
         else:
             raise ValueError(f"Unknown edit format {edit_format}")

     def __init__(
         self,
-        client,
         main_model,
         io,
         fnames=None,
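With the `client` plumbing gone, the factory above only needs a model and an io helper. A minimal usage sketch under the post-PR signatures shown in this hunk — the `InputOutput` import path is an assumption, it is not part of this diff:

```python
# sketch only: construct a coder the way main() does after this PR
from aider import models
from aider.coders import Coder
from aider.io import InputOutput  # assumed module path for aider's io helper

io = InputOutput()
main_model = models.Model("gpt-4-1106-preview")  # resolves to the "udiff" edit format
coder = Coder.create(main_model=main_model, io=io, fnames=["hello.py"])
```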
@@ -113,8 +100,6 @@ class Coder:
         voice_language=None,
         aider_ignore_file=None,
     ):
-        self.client = client
-
         if not fnames:
             fnames = []

@@ -151,7 +136,11 @@ class Coder:

         self.main_model = main_model

-        self.io.tool_output(f"Model: {main_model.name} using {self.edit_format} edit format")
+        weak_model = main_model.weak_model
+        self.io.tool_output(
+            f"Models: {main_model.name} with {self.edit_format} edit format, weak model"
+            f" {weak_model.name}"
+        )

         self.show_diffs = show_diffs

@@ -160,7 +149,11 @@ class Coder:
         if use_git:
             try:
                 self.repo = GitRepo(
-                    self.io, fnames, git_dname, aider_ignore_file, client=self.client
+                    self.io,
+                    fnames,
+                    git_dname,
+                    aider_ignore_file,
+                    models=main_model.commit_message_models(),
                 )
                 self.root = self.repo.root
             except FileNotFoundError:

@@ -223,8 +216,7 @@ class Coder:
             self.io.tool_output(f"Added {fname} to the chat.")

         self.summarizer = ChatSummary(
-            self.client,
-            models.Model.weak_model(),
+            self.main_model.weak_model,
             self.main_model.max_chat_history_tokens,
         )

@@ -374,7 +366,7 @@ class Coder:
         return files_messages

     def get_images_message(self):
-        if not utils.is_gpt4_with_openai_base_url(self.main_model.name, self.client):
+        if not self.main_model.accepts_images:
            return None

         image_messages = []

@@ -518,7 +510,7 @@ class Coder:
         messages += self.cur_messages

         # Add the reminder prompt if we still have room to include it.
-        if total_tokens < self.main_model.max_context_tokens:
+        if total_tokens < self.main_model.info.get("max_input_tokens", 0):
             messages += reminder_message

         return messages
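The `accepts_images` flag that replaces the old base-URL sniffing comes from `MODEL_SETTINGS` in the new `aider/models.py` (shown further down). A quick sketch of what it reports, assuming litellm has metadata for these model names:

```python
from aider.models import Model

print(Model("gpt-4-vision-preview").accepts_images)  # True, per MODEL_SETTINGS
print(Model("gpt-4-0613").accepts_images)            # False
```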
@@ -656,9 +648,7 @@ class Coder:

         interrupted = False
         try:
-            hash_object, completion = send_with_retries(
-                self.client, model, messages, functions, self.stream
-            )
+            hash_object, completion = send_with_retries(model, messages, functions, self.stream)
             self.chat_completion_call_hashes.append(hash_object.hexdigest())

             if self.stream:
@@ -717,10 +707,10 @@ class Coder:
         completion_tokens = completion.usage.completion_tokens

         tokens = f"{prompt_tokens} prompt tokens, {completion_tokens} completion tokens"
-        if self.main_model.prompt_price:
-            cost = prompt_tokens * self.main_model.prompt_price / 1000
-            if self.main_model.completion_price:
-                cost += completion_tokens * self.main_model.completion_price / 1000
+        if self.main_model.info.get("input_cost_per_token"):
+            cost = prompt_tokens * self.main_model.info.get("input_cost_per_token")
+            if self.main_model.info.get("output_cost_per_token"):
+                cost += completion_tokens * self.main_model.info.get("output_cost_per_token")
             tokens += f", ${cost:.6f} cost"
             self.total_cost += cost
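The `/ 1000` divisions disappear above because litellm's metadata stores prices per token rather than per 1k tokens. A worked example with illustrative prices (not taken from any real pricing table):

```python
# illustrative per-token prices; real values come from main_model.info
info = {"input_cost_per_token": 1e-05, "output_cost_per_token": 3e-05}
prompt_tokens, completion_tokens = 2000, 500

cost = prompt_tokens * info["input_cost_per_token"]        # 0.02
cost += completion_tokens * info["output_cost_per_token"]  # + 0.015
print(f"${cost:.6f} cost")                                 # $0.035000 cost
```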
@@ -1052,21 +1042,3 @@ class Coder:
             # files changed, move cur messages back behind the files messages
             # self.move_back_cur_messages(self.gpt_prompts.files_content_local_edits)
             return True
-
-
-def check_model_availability(io, client, main_model):
-    try:
-        available_models = client.models.list()
-    except openai.NotFoundError:
-        # Azure sometimes returns 404?
-        # https://discord.com/channels/1131200896827654144/1182327371232186459
-        io.tool_error(f"Unable to list available models, proceeding with {main_model.name}")
-        return True
-
-    model_ids = sorted(model.id for model in available_models)
-    if main_model.name in model_ids:
-        return True
-
-    available_models = ", ".join(model_ids)
-    io.tool_error(f"API key supports: {available_models}")
-    return False
@@ -1,14 +1,16 @@
 import os
 import re
 import subprocess
 import sys
 from pathlib import Path

 import git
+import openai
 from prompt_toolkit.completion import Completion

 from aider import prompts, voice
 from aider.scrape import Scraper
-from aider.utils import is_gpt4_with_openai_base_url, is_image_file
+from aider.utils import is_image_file

 from .dump import dump  # noqa: F401

@@ -25,7 +27,6 @@ class Commands:
             voice_language = None

         self.voice_language = voice_language
-        self.tokenizer = coder.main_model.tokenizer

     def cmd_web(self, args):
         "Use headless selenium to scrape a webpage and add the content to the chat"
@@ -176,7 +177,7 @@ class Commands:
         self.io.tool_output()

         width = 8
-        cost_width = 7
+        cost_width = 9

         def fmt(v):
             return format(int(v), ",").rjust(width)
@@ -188,22 +189,17 @@ class Commands:
         total_cost = 0.0
         for tk, msg, tip in res:
             total += tk
-            cost = tk * (self.coder.main_model.prompt_price / 1000)
+            cost = tk * self.coder.main_model.info.get("input_cost_per_token", 0)
             total_cost += cost
             msg = msg.ljust(col_width)
-            self.io.tool_output(f"${cost:5.2f} {fmt(tk)} {msg} {tip}")
+            self.io.tool_output(f"${cost:7.4f} {fmt(tk)} {msg} {tip}")

         self.io.tool_output("=" * (width + cost_width + 1))
-        self.io.tool_output(f"${total_cost:5.2f} {fmt(total)} tokens total")
+        self.io.tool_output(f"${total_cost:7.4f} {fmt(total)} tokens total")

-        # only switch to image model token count if gpt4 and openai and image in files
-        image_in_chat = False
-        if is_gpt4_with_openai_base_url(self.coder.main_model.name, self.coder.client):
-            image_in_chat = any(
-                is_image_file(relative_fname)
-                for relative_fname in self.coder.get_inchat_relative_files()
-            )
-        limit = 128000 if image_in_chat else self.coder.main_model.max_context_tokens
+        limit = self.coder.main_model.info.get("max_input_tokens", 0)
+        if not limit:
+            return

         remaining = limit - total
         if remaining > 1024:

@@ -214,7 +210,10 @@ class Commands:
                 " /clear to make space)"
             )
         else:
-            self.io.tool_error(f"{cost_pad}{fmt(remaining)} tokens remaining, window exhausted!")
+            self.io.tool_error(
+                f"{cost_pad}{fmt(remaining)} tokens remaining, window exhausted (use /drop or"
+                " /clear to make space)"
+            )
         self.io.tool_output(f"{cost_pad}{fmt(limit)} tokens max context window size")

     def cmd_undo(self, args):
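The wider, four-decimal cost column exists because per-message costs are now often fractions of a cent. A quick illustration of the new format strings with made-up numbers:

```python
# illustrative values; fmt() right-justifies the comma-grouped token count to width 8
cost, tk, width = 0.0123, 3456, 8
print(f"${cost:7.4f} {format(int(tk), ',').rjust(width)}")  # "$ 0.0123    3,456"
```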
@@ -376,12 +375,11 @@ class Commands:
             if abs_file_path in self.coder.abs_fnames:
                 self.io.tool_error(f"{matched_file} is already in the chat")
             else:
-                if is_image_file(matched_file) and not is_gpt4_with_openai_base_url(
-                    self.coder.main_model.name, self.coder.client
-                ):
+                if is_image_file(matched_file) and not self.coder.main_model.accepts_images:
                     self.io.tool_error(
-                        f"Cannot add image file {matched_file} as the model does not support image"
-                        " files"
+                        f"Cannot add image file {matched_file} as the"
+                        f" {self.coder.main_model.name} does not support image.\nYou can run `aider"
+                        " --4turbo` to use GPT-4 Turbo with Vision."
                     )
                     continue
                 content = self.io.read_text(abs_file_path)
@@ -547,8 +545,11 @@ class Commands:
         "Record and transcribe voice input"

         if not self.voice:
+            if "OPENAI_API_KEY" not in os.environ:
+                self.io.tool_error("To use /voice you must provide an OpenAI API key.")
+                return
             try:
-                self.voice = voice.Voice(self.coder.client)
+                self.voice = voice.Voice()
             except voice.SoundDeviceError:
                 self.io.tool_error(
                     "Unable to import `sounddevice` and/or `soundfile`, is portaudio installed?"

@@ -572,7 +573,12 @@ class Commands:
         history.reverse()
         history = "\n".join(history)

-        text = self.voice.record_and_transcribe(history, language=self.voice_language)
+        try:
+            text = self.voice.record_and_transcribe(history, language=self.voice_language)
+        except openai.OpenAIError as err:
+            self.io.tool_error(f"Unable to use OpenAI whisper model: {err}")
+            return

         if text:
             self.io.add_to_input_history(text)
             print()
@@ -1,5 +1,4 @@
-import argparse
 import json

 from aider import models, prompts
 from aider.dump import dump  # noqa: F401

@@ -7,9 +6,8 @@ from aider.sendchat import simple_send_with_retries


 class ChatSummary:
-    def __init__(self, client, model=models.Model.weak_model(), max_tokens=1024):
-        self.client = client
-        self.tokenizer = model.tokenizer
+    def __init__(self, model=None, max_tokens=1024):
+        self.token_count = model.token_count
         self.max_tokens = max_tokens
         self.model = model

@@ -21,7 +19,7 @@ class ChatSummary:
     def tokenize(self, messages):
         sized = []
         for msg in messages:
-            tokens = len(self.tokenizer.encode(json.dumps(msg)))
+            tokens = self.token_count(msg)
             sized.append((tokens, msg))
         return sized

@@ -61,7 +59,7 @@ class ChatSummary:
            summary = self.summarize_all(head)

         tail_tokens = sum(tokens for tokens, msg in sized[split_index:])
-        summary_tokens = len(self.tokenizer.encode(json.dumps(summary)))
+        summary_tokens = self.token_count(summary)

         result = summary + tail
         if summary_tokens + tail_tokens < self.max_tokens:

@@ -85,7 +83,7 @@ class ChatSummary:
             dict(role="user", content=content),
         ]

-        summary = simple_send_with_retries(self.client, self.model.name, messages)
+        summary = simple_send_with_retries(self.model.name, messages)
         if summary is None:
             raise ValueError(f"summarizer unexpectedly failed for {self.model.name}")
         summary = prompts.summary_prefix + summary

@@ -125,7 +123,7 @@ def main():

         assistant.append(line)

-    summarizer = ChatSummary(models.Model.weak_model())
+    summarizer = ChatSummary(models.Model(models.DEFAULT_WEAK_MODEL_NAME, weak_model=False))
     summary = summarizer.summarize(messages[-40:])
     dump(summary)
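`ChatSummary` now borrows its tokenizer from the model instead of holding an OpenAI client. A minimal sketch of the new wiring, using only names visible in the hunks above (the `aider.history` import path is an assumption, since this mirror does not show the filename):

```python
# sketch of the post-PR ChatSummary construction
from aider import models
from aider.history import ChatSummary  # assumed module path for this file

weak = models.Model(models.DEFAULT_WEAK_MODEL_NAME, weak_model=False)
summarizer = ChatSummary(weak, max_tokens=1024)
# token counts are delegated to Model.token_count(), which uses litellm.encode()
print(summarizer.token_count([dict(role="user", content="hello")]))
```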
aider/main.py — 140 changes

@@ -6,7 +6,7 @@ from pathlib import Path

 import configargparse
 import git
-import openai
+import litellm

 from aider import __version__, models
 from aider.coders import Coder

@@ -16,6 +16,10 @@ from aider.versioncheck import check_version

 from .dump import dump  # noqa: F401

+litellm.suppress_debug_info = True
+os.environ["OR_SITE_URL"] = "http://aider.chat"
+os.environ["OR_APP_NAME"] = "Aider"
+

 def get_git_root():
     """Try and guess the git repo, since the conf.yml can be at the repo root"""
@@ -159,6 +163,12 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         env_var="OPENAI_API_KEY",
         help="Specify the OpenAI API key",
     )
+    core_group.add_argument(
+        "--anthropic-api-key",
+        metavar="ANTHROPIC_API_KEY",
+        env_var="ANTHROPIC_API_KEY",
+        help="Specify the Anthropic API key",
+    )
     default_model = models.DEFAULT_MODEL_NAME
     core_group.add_argument(
         "--model",
@@ -166,31 +176,40 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         default=default_model,
         help=f"Specify the model to use for the main chat (default: {default_model})",
     )
+    opus_model = "claude-3-opus-20240229"
     core_group.add_argument(
-        "--skip-model-availability-check",
-        metavar="SKIP_MODEL_AVAILABILITY_CHECK",
-        default=False,
-        help="Override to skip model availability check (default: False)",
+        "--opus",
+        action="store_const",
+        dest="model",
+        const=opus_model,
+        help=f"Use {opus_model} model for the main chat",
     )
-    default_4_model = "gpt-4-0613"
+    sonnet_model = "claude-3-sonnet-20240229"
+    core_group.add_argument(
+        "--sonnet",
+        action="store_const",
+        dest="model",
+        const=sonnet_model,
+        help=f"Use {sonnet_model} model for the main chat",
+    )
+    gpt_4_model = "gpt-4-0613"
     core_group.add_argument(
         "--4",
         "-4",
         action="store_const",
         dest="model",
-        const=default_4_model,
-        help=f"Use {default_4_model} model for the main chat",
+        const=gpt_4_model,
+        help=f"Use {gpt_4_model} model for the main chat",
     )
-    default_4_turbo_model = "gpt-4-1106-preview"
+    gpt_4_turbo_model = "gpt-4-turbo"
     core_group.add_argument(
         "--4turbo",
         "--4-turbo",
+        "--4-turbo-vision",
         action="store_const",
         dest="model",
-        const=default_4_turbo_model,
-        help=f"Use {default_4_turbo_model} model for the main chat",
+        const=gpt_4_turbo_model,
+        help=f"Use {gpt_4_turbo_model} model for the main chat",
     )
-    default_3_model = models.GPT35_0125
+    gpt_3_model_name = "gpt-3.5-turbo"
     core_group.add_argument(
         "--35turbo",
         "--35-turbo",
@@ -198,8 +217,8 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         "-3",
         action="store_const",
         dest="model",
-        const=default_3_model.name,
-        help=f"Use {default_3_model.name} model for the main chat",
+        const=gpt_3_model_name,
+        help=f"Use {gpt_3_model_name} model for the main chat",
     )
     core_group.add_argument(
         "--voice-language",
@@ -240,19 +259,27 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         env_var="OPENAI_ORGANIZATION_ID",
         help="Specify the OpenAI organization ID",
     )
-    model_group.add_argument(
-        "--openrouter",
-        dest="openai_api_base",
-        action="store_const",
-        const="https://openrouter.ai/api/v1",
-        help="Specify the api base url as https://openrouter.ai/api/v1",
-    )
     model_group.add_argument(
         "--edit-format",
         metavar="EDIT_FORMAT",
         default=None,
         help="Specify what edit format GPT should use (default depends on model)",
     )
+    core_group.add_argument(
+        "--weak-model",
+        metavar="WEAK_MODEL",
+        default=None,
+        help=(
+            "Specify the model to use for commit messages and chat history summarization (default"
+            " depends on --model)"
+        ),
+    )
+    model_group.add_argument(
+        "--require-model-info",
+        action=argparse.BooleanOptionalAction,
+        default=True,
+        help="Only work with models that have meta-data available (default: True)",
+    )
     model_group.add_argument(
         "--map-tokens",
         type=int,
@@ -545,7 +572,9 @@ def main(argv=None, input=None, output=None, force_git_root=None):
     def scrub_sensitive_info(text):
         # Replace sensitive information with placeholder
         if text and args.openai_api_key:
-            return text.replace(args.openai_api_key, "***")
+            text = text.replace(args.openai_api_key, "***")
+        if text and args.anthropic_api_key:
+            text = text.replace(args.anthropic_api_key, "***")
         return text

     if args.verbose:
@@ -559,47 +588,46 @@ def main(argv=None, input=None, output=None, force_git_root=None):

     io.tool_output(*map(scrub_sensitive_info, sys.argv), log_only=True)

-    if not args.openai_api_key:
-        if os.name == "nt":
-            io.tool_error(
-                "No OpenAI API key provided. Use --openai-api-key or setx OPENAI_API_KEY."
-            )
-        else:
-            io.tool_error(
-                "No OpenAI API key provided. Use --openai-api-key or export OPENAI_API_KEY."
-            )
+    if args.anthropic_api_key:
+        os.environ["ANTHROPIC_API_KEY"] = args.anthropic_api_key
+
+    if args.openai_api_key:
+        os.environ["OPENAI_API_KEY"] = args.openai_api_key
+    if args.openai_api_base:
+        os.environ["OPENAI_API_BASE"] = args.openai_api_base
+    if args.openai_api_version:
+        os.environ["AZURE_API_VERSION"] = args.openai_api_version
+    if args.openai_api_type:
+        os.environ["AZURE_API_TYPE"] = args.openai_api_type
+    if args.openai_organization_id:
+        os.environ["OPENAI_ORGANIZATION"] = args.openai_organization_id
+
+    # Is the model known and are all needed keys/params available?
+    res = litellm.validate_environment(args.model)
+    missing_keys = res.get("missing_keys")
+    if missing_keys:
+        io.tool_error(f"To use model {args.model}, please set these environment variables:")
+        for key in missing_keys:
+            io.tool_error(f"- {key}")
+        return 1
+    elif not res["keys_in_environment"] and args.require_model_info:
+        io.tool_error(models.check_model_name(args.model))
+        return 1

-    if args.openai_api_type == "azure":
-        client = openai.AzureOpenAI(
-            api_key=args.openai_api_key,
-            azure_endpoint=args.openai_api_base,
-            api_version=args.openai_api_version,
-            azure_deployment=args.openai_api_deployment_id,
-        )
-    else:
-        kwargs = dict()
-        if args.openai_api_base:
-            kwargs["base_url"] = args.openai_api_base
-            if "openrouter.ai" in args.openai_api_base:
-                kwargs["default_headers"] = {
-                    "HTTP-Referer": "http://aider.chat",
-                    "X-Title": "Aider",
-                }
-        if args.openai_organization_id:
-            kwargs["organization"] = args.openai_organization_id
-
-        client = openai.OpenAI(api_key=args.openai_api_key, **kwargs)
-
-    main_model = models.Model.create(args.model, client)
+    # Check in advance that we have model metadata
+    try:
+        main_model = models.Model(
+            args.model, weak_model=args.weak_model, require_model_info=args.require_model_info
+        )
+    except models.NoModelInfo as err:
+        io.tool_error(str(err))
+        return 1

     try:
         coder = Coder.create(
             main_model=main_model,
             edit_format=args.edit_format,
             io=io,
-            skip_model_availabily_check=args.skip_model_availability_check,
-            client=client,
-            ##
             fnames=fnames,
             git_dname=git_dname,
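The pre-flight check added above can be exercised on its own: `litellm.validate_environment` returns a dict with `keys_in_environment` and `missing_keys`, which is all `main()` relies on. A small sketch (the model name is illustrative):

```python
import litellm

res = litellm.validate_environment("claude-3-opus-20240229")
if res.get("missing_keys"):
    # e.g. ["ANTHROPIC_API_KEY"] when the key is not exported
    print("To use this model, please set:", ", ".join(res["missing_keys"]))
```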
aider/models.py — new file, 290 lines

@@ -0,0 +1,290 @@
import difflib
import json
import math
import sys
from dataclasses import dataclass, fields

import litellm
from PIL import Image

from aider.dump import dump  # noqa: F401

DEFAULT_MODEL_NAME = "gpt-4-1106-preview"
DEFAULT_WEAK_MODEL_NAME = "gpt-3.5-turbo"


class NoModelInfo(Exception):
    """
    Exception raised when model information cannot be retrieved.
    """

    def __init__(self, model):
        super().__init__(check_model_name(model))


@dataclass
class ModelSettings:
    name: str
    edit_format: str
    weak_model_name: str = DEFAULT_WEAK_MODEL_NAME
    use_repo_map: bool = False
    send_undo_reply: bool = False
    accepts_images: bool = False


# https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
# https://platform.openai.com/docs/models/gpt-3-5-turbo
# https://openai.com/pricing

MODEL_SETTINGS = [
    # gpt-3.5
    ModelSettings(
        "gpt-3.5-turbo-0125",
        "whole",
    ),
    ModelSettings(
        "gpt-3.5-turbo-1106",
        "whole",
    ),
    ModelSettings(
        "gpt-3.5-turbo-0613",
        "whole",
    ),
    ModelSettings(
        "gpt-3.5-turbo-16k-0613",
        "whole",
    ),
    # gpt-4
    ModelSettings(
        "gpt-4-turbo-2024-04-09",
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
        accepts_images=True,
    ),
    ModelSettings(
        "gpt-4-turbo",
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
        accepts_images=True,
    ),
    ModelSettings(
        "gpt-4-0125-preview",
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelSettings(
        "gpt-4-1106-preview",
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelSettings(
        "gpt-4-vision-preview",
        "diff",
        use_repo_map=True,
        send_undo_reply=True,
        accepts_images=True,
    ),
    ModelSettings(
        "gpt-4-0613",
        "diff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelSettings(
        "gpt-4-32k-0613",
        "diff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    # Claude
    ModelSettings(
        "claude-3-opus-20240229",
        "diff",
        weak_model_name="claude-3-haiku-20240307",
        use_repo_map=True,
        send_undo_reply=True,
    ),
]


class Model:
    name = None

    edit_format = "whole"
    use_repo_map = False
    send_undo_reply = False
    accepts_images = False
    weak_model_name = DEFAULT_WEAK_MODEL_NAME

    max_chat_history_tokens = 1024
    weak_model = None

    def __init__(self, model, weak_model=None, require_model_info=True):
        self.name = model

        try:
            self.info = litellm.get_model_info(model)
        except Exception:
            if require_model_info:
                raise NoModelInfo(model)
            self.info = dict()

        if self.info.get("max_input_tokens", 0) < 32 * 1024:
            self.max_chat_history_tokens = 1024
        else:
            self.max_chat_history_tokens = 2 * 1024

        self.configure_model_settings(model)
        if weak_model is False:
            self.weak_model_name = None
        else:
            self.get_weak_model(weak_model, require_model_info)

    def configure_model_settings(self, model):
        for ms in MODEL_SETTINGS:
            # direct match, or match "provider/<model>"
            if model == ms.name or model.endswith("/" + ms.name):
                for field in fields(ModelSettings):
                    val = getattr(ms, field.name)
                    setattr(self, field.name, val)

                return  # <--

        if "gpt-4" in model or "claude-2" in model:
            self.edit_format = "diff"
            self.use_repo_map = True
            self.send_undo_reply = True

            return  # <--

        # use the defaults

    def __str__(self):
        return self.name

    def get_weak_model(self, provided_weak_model_name, require_model_info):
        # If weak_model_name is provided, override the model settings
        if provided_weak_model_name:
            self.weak_model_name = provided_weak_model_name

        if self.weak_model_name == self.name:
            self.weak_model = self
            return

        self.weak_model = Model(
            self.weak_model_name,
            weak_model=False,
            require_model_info=require_model_info,
        )
        return self.weak_model

    def commit_message_models(self):
        return [self.weak_model]

    def tokenizer(self, text):
        return litellm.encode(model=self.name, text=text)

    def token_count(self, messages):
        if not self.tokenizer:
            return

        if type(messages) is str:
            msgs = messages
        else:
            msgs = json.dumps(messages)

        return len(self.tokenizer(msgs))

    def token_count_for_image(self, fname):
        """
        Calculate the token cost for an image assuming high detail.
        The token cost is determined by the size of the image.
        :param fname: The filename of the image.
        :return: The token cost for the image.
        """
        width, height = self.get_image_size(fname)

        # If the image is larger than 2048 in any dimension, scale it down to fit within 2048x2048
        max_dimension = max(width, height)
        if max_dimension > 2048:
            scale_factor = 2048 / max_dimension
            width = int(width * scale_factor)
            height = int(height * scale_factor)

        # Scale the image such that the shortest side is 768 pixels long
        min_dimension = min(width, height)
        scale_factor = 768 / min_dimension
        width = int(width * scale_factor)
        height = int(height * scale_factor)

        # Calculate the number of 512x512 tiles needed to cover the image
        tiles_width = math.ceil(width / 512)
        tiles_height = math.ceil(height / 512)
        num_tiles = tiles_width * tiles_height

        # Each tile costs 170 tokens, and there's an additional fixed cost of 85 tokens
        token_cost = num_tiles * 170 + 85
        return token_cost

    def get_image_size(self, fname):
        """
        Retrieve the size of an image.
        :param fname: The filename of the image.
        :return: A tuple (width, height) representing the image size in pixels.
        """
        with Image.open(fname) as img:
            return img.size


def check_model_name(model):
    res = f"Unknown model {model}"

    possible_matches = fuzzy_match_models(model)

    if possible_matches:
        res += ", did you mean one of these?"
        for match in possible_matches:
            res += "\n- " + match

    return res


def fuzzy_match_models(name):
    models = litellm.model_cost.keys()

    # Check for exact match first
    if name in models:
        return [name]

    # Check for models containing the name
    matching_models = [model for model in models if name in model]

    # If no matches found, check for slight misspellings
    if not matching_models:
        matching_models = difflib.get_close_matches(name, models, n=3, cutoff=0.8)

    return matching_models


def main():
    if len(sys.argv) != 2:
        print("Usage: python models.py <model_name>")
        sys.exit(1)

    model_name = sys.argv[1]
    matching_models = fuzzy_match_models(model_name)

    if matching_models:
        print(f"Matching models for '{model_name}':")
        for model in matching_models:
            print(model)
    else:
        print(f"No matching models found for '{model_name}'.")


if __name__ == "__main__":
    main()
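A quick sketch of how the new class behaves for a model listed in `MODEL_SETTINGS`; it needs only litellm's bundled metadata, and the printed values follow from the table and defaults above:

```python
from aider.models import Model

m = Model("gpt-4-1106-preview")
print(m.edit_format, m.use_repo_map)   # udiff True, from MODEL_SETTINGS
print(m.weak_model.name)               # gpt-3.5-turbo (DEFAULT_WEAK_MODEL_NAME)
print(m.info.get("max_input_tokens"))  # from litellm.get_model_info()
print(m.token_count("hello world"))    # counted via litellm.encode()

# token_count_for_image() worked example: a 1024x768 image needs no 2048 downscale,
# the 768-pixel short side keeps scale_factor at 1.0, and it tiles into 2 x 2 = 4
# tiles of 512x512, so the cost is 4 * 170 + 85 = 765 tokens.
```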
@@ -1,17 +0,0 @@ (deleted file)
from .model import Model
from .openai import OpenAIModel
from .openrouter import OpenRouterModel

GPT4 = Model.create("gpt-4")
GPT35 = Model.create("gpt-3.5-turbo")
GPT35_0125 = Model.create("gpt-3.5-turbo-0125")

DEFAULT_MODEL_NAME = "gpt-4-1106-preview"

__all__ = [
    OpenAIModel,
    OpenRouterModel,
    GPT4,
    GPT35,
    GPT35_0125,
]
@@ -1,94 +0,0 @@ (deleted file)
import json
import math

from PIL import Image


class Model:
    name = None
    edit_format = None
    max_context_tokens = 0
    tokenizer = None
    max_chat_history_tokens = 1024

    always_available = False
    use_repo_map = False
    send_undo_reply = False

    prompt_price = None
    completion_price = None

    @classmethod
    def create(cls, name, client=None):
        from .openai import OpenAIModel
        from .openrouter import OpenRouterModel

        if client and client.base_url.host == "openrouter.ai":
            return OpenRouterModel(client, name)
        return OpenAIModel(name)

    def __str__(self):
        return self.name

    @staticmethod
    def strong_model():
        return Model.create("gpt-4-0613")

    @staticmethod
    def weak_model():
        return Model.create("gpt-3.5-turbo-0125")

    @staticmethod
    def commit_message_models():
        return [Model.weak_model()]

    def token_count(self, messages):
        if not self.tokenizer:
            return

        if type(messages) is str:
            msgs = messages
        else:
            msgs = json.dumps(messages)

        return len(self.tokenizer.encode(msgs))

    def token_count_for_image(self, fname):
        """
        Calculate the token cost for an image assuming high detail.
        The token cost is determined by the size of the image.
        :param fname: The filename of the image.
        :return: The token cost for the image.
        """
        width, height = self.get_image_size(fname)

        # If the image is larger than 2048 in any dimension, scale it down to fit within 2048x2048
        max_dimension = max(width, height)
        if max_dimension > 2048:
            scale_factor = 2048 / max_dimension
            width = int(width * scale_factor)
            height = int(height * scale_factor)

        # Scale the image such that the shortest side is 768 pixels long
        min_dimension = min(width, height)
        scale_factor = 768 / min_dimension
        width = int(width * scale_factor)
        height = int(height * scale_factor)

        # Calculate the number of 512x512 tiles needed to cover the image
        tiles_width = math.ceil(width / 512)
        tiles_height = math.ceil(height / 512)
        num_tiles = tiles_width * tiles_height

        # Each tile costs 170 tokens, and there's an additional fixed cost of 85 tokens
        token_cost = num_tiles * 170 + 85
        return token_cost

    def get_image_size(self, fname):
        """
        Retrieve the size of an image.
        :param fname: The filename of the image.
        :return: A tuple (width, height) representing the image size in pixels.
        """
        with Image.open(fname) as img:
            return img.size
@@ -1,158 +0,0 @@ (deleted file)
from dataclasses import dataclass, fields

import tiktoken

from aider.dump import dump  # noqa: F401

from .model import Model


@dataclass
class ModelInfo:
    name: str
    max_context_tokens: int
    prompt_price: float
    completion_price: float
    edit_format: str
    always_available: bool = False
    use_repo_map: bool = False
    send_undo_reply: bool = False


# https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
# https://platform.openai.com/docs/models/gpt-3-5-turbo
# https://openai.com/pricing

openai_models = [
    # gpt-3.5
    ModelInfo(
        "gpt-3.5-turbo-0125",
        16385,
        0.0005,
        0.0015,
        "whole",
        always_available=True,
    ),
    ModelInfo(
        "gpt-3.5-turbo-1106",
        16385,
        0.0010,
        0.0020,
        "whole",
        always_available=True,
    ),
    ModelInfo(
        "gpt-3.5-turbo-0613",
        4096,
        0.0015,
        0.0020,
        "whole",
        always_available=True,
    ),
    ModelInfo(
        "gpt-3.5-turbo-16k-0613",
        16385,
        0.0030,
        0.0040,
        "whole",
        always_available=True,
    ),
    # gpt-4
    ModelInfo(
        "gpt-4-turbo-2024-04-09",
        128000,
        0.01,
        0.03,
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelInfo(
        "gpt-4-0125-preview",
        128000,
        0.01,
        0.03,
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelInfo(
        "gpt-4-1106-preview",
        128000,
        0.01,
        0.03,
        "udiff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelInfo(
        "gpt-4-vision-preview",
        128000,
        0.01,
        0.03,
        "diff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelInfo(
        "gpt-4-0613",
        8192,
        0.03,
        0.06,
        "diff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
    ModelInfo(
        "gpt-4-32k-0613",
        32768,
        0.06,
        0.12,
        "diff",
        use_repo_map=True,
        send_undo_reply=True,
    ),
]

openai_aliases = {
    # gpt-3.5
    "gpt-3.5-turbo": "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo-16k": "gpt-3.5-turbo-16k-0613",
    # gpt-4
    "gpt-4-turbo": "gpt-4-turbo-2024-04-09",
    "gpt-4-turbo-preview": "gpt-4-0125-preview",
    "gpt-4": "gpt-4-0613",
    "gpt-4-32k": "gpt-4-32k-0613",
}


class OpenAIModel(Model):
    def __init__(self, name):
        true_name = openai_aliases.get(name, name)

        try:
            self.tokenizer = tiktoken.encoding_for_model(true_name)
        except KeyError:
            raise ValueError(f"No known tokenizer for model: {name}")

        model_info = self.lookup_model_info(true_name)
        if not model_info:
            raise ValueError(f"Unsupported model: {name}")

        for field in fields(ModelInfo):
            val = getattr(model_info, field.name)
            setattr(self, field.name, val)

        # restore the caller's specified name
        self.name = name

        # set the history token limit
        if self.max_context_tokens < 32 * 1024:
            self.max_chat_history_tokens = 1024
        else:
            self.max_chat_history_tokens = 2 * 1024

    def lookup_model_info(self, name):
        for mi in openai_models:
            if mi.name == name:
                return mi
@@ -1,40 +0,0 @@ (deleted file)
import tiktoken

from .model import Model

cached_model_details = None


class OpenRouterModel(Model):
    def __init__(self, client, name):
        if name.startswith("gpt-4") or name.startswith("gpt-3.5-turbo"):
            name = "openai/" + name

        self.name = name
        self.edit_format = edit_format_for_model(name)
        self.use_repo_map = self.edit_format == "diff"

        # TODO: figure out proper encodings for non openai models
        self.tokenizer = tiktoken.get_encoding("cl100k_base")

        global cached_model_details
        if cached_model_details is None:
            cached_model_details = client.models.list().data
        found = next(
            (details for details in cached_model_details if details.id == name), None
        )

        if found:
            self.max_context_tokens = int(found.context_length)
            self.prompt_price = round(float(found.pricing.get("prompt")) * 1000, 6)
            self.completion_price = round(float(found.pricing.get("completion")) * 1000, 6)

        else:
            raise ValueError(f"invalid openrouter model: {name}")


def edit_format_for_model(name):
    if any(str in name for str in ["gpt-4", "claude-2"]):
        return "diff"

    return "whole"
@@ -4,7 +4,8 @@ from pathlib import Path, PurePosixPath
 import git
 import pathspec

-from aider import models, prompts, utils
+from aider import prompts, utils
+from aider.models import DEFAULT_WEAK_MODEL_NAME, Model
 from aider.sendchat import simple_send_with_retries

 from .dump import dump  # noqa: F401

@@ -16,9 +17,18 @@ class GitRepo:
     aider_ignore_spec = None
     aider_ignore_ts = 0

-    def __init__(self, io, fnames, git_dname, aider_ignore_file=None, client=None):
-        self.client = client
+    def __init__(self, io, fnames, git_dname, aider_ignore_file=None, models=None):
         self.io = io
+        if models:
+            self.models = models
+        else:
+            self.models = [
+                Model(
+                    DEFAULT_WEAK_MODEL_NAME,
+                    weak_model=False,
+                    require_model_info=False,
+                )
+            ]

         if git_dname:
             check_fnames = [git_dname]

@@ -120,8 +130,8 @@ class GitRepo:
             dict(role="user", content=content),
         ]

-        for model in models.Model.commit_message_models():
-            commit_message = simple_send_with_retries(self.client, model.name, messages)
+        for model in self.models:
+            commit_message = simple_send_with_retries(model.name, messages)
             if commit_message:
                 break
@@ -2,22 +2,24 @@ import colorsys
 import os
 import random
 import sys
+import warnings
 from collections import Counter, defaultdict, namedtuple
+from importlib import resources
 from pathlib import Path

 import networkx as nx
-import pkg_resources
 from diskcache import Cache
 from grep_ast import TreeContext, filename_to_lang
 from pygments.lexers import guess_lexer_for_filename
 from pygments.token import Token
 from pygments.util import ClassNotFound
 from tqdm import tqdm
-from tree_sitter_languages import get_language, get_parser

-from aider import models
+# tree_sitter is throwing a FutureWarning
+warnings.simplefilter("ignore", category=FutureWarning)
+from tree_sitter_languages import get_language, get_parser  # noqa: E402

-from .dump import dump  # noqa: F402
+from aider.dump import dump  # noqa: F402,E402

 Tag = namedtuple("Tag", "rel_fname fname line name kind".split())

@@ -34,7 +36,7 @@ class RepoMap:
         self,
         map_tokens=1024,
         root=None,
-        main_model=models.Model.strong_model(),
+        main_model=None,
         io=None,
         repo_content_prefix=None,
         verbose=False,

@@ -50,7 +52,7 @@ class RepoMap:

         self.max_map_tokens = map_tokens

-        self.tokenizer = main_model.tokenizer
+        self.token_count = main_model.token_count
         self.repo_content_prefix = repo_content_prefix

     def get_repo_map(self, chat_files, other_files):

@@ -87,9 +89,6 @@ class RepoMap:

         return repo_content

-    def token_count(self, string):
-        return len(self.tokenizer.encode(string))
-
     def get_rel_fname(self, fname):
         return os.path.relpath(fname, self.root)

@@ -141,12 +140,12 @@ class RepoMap:

         # Load the tags queries
         try:
-            scm_fname = pkg_resources.resource_filename(
-                __name__, os.path.join("queries", f"tree-sitter-{lang}-tags.scm")
+            scm_fname = resources.files(__package__).joinpath(
+                "queries", f"tree-sitter-{lang}-tags.scm"
             )
         except KeyError:
             return
-        query_scm = Path(scm_fname)
+        query_scm = scm_fname
         if not query_scm.exists():
             return
         query_scm = query_scm.read_text()
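The move from `pkg_resources` to `importlib.resources` above can be tried in isolation; the `queries/` directory ships inside the `aider` package, and the language name here is only an illustration standing in for the detected `{lang}`:

```python
from importlib import resources

# equivalent to the new joinpath() lookup in the hunk above
scm = resources.files("aider") / "queries" / "tree-sitter-python-tags.scm"
if scm.exists():
    query_text = scm.read_text()
```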
@@ -3,12 +3,12 @@ import json

 import backoff
 import httpx
+import litellm
 import openai

 # from diskcache import Cache
 from openai import APIConnectionError, InternalServerError, RateLimitError

-from aider.utils import is_gpt4_with_openai_base_url
+from aider.dump import dump  # noqa: F401

 CACHE_PATH = "~/.aider.send.cache.v1"

@@ -29,10 +29,7 @@ CACHE = None
         f"{details.get('exception','Exception')}\nRetry in {details['wait']:.1f} seconds."
     ),
 )
-def send_with_retries(client, model_name, messages, functions, stream):
-    if not client:
-        raise ValueError("No openai client provided")
-
+def send_with_retries(model_name, messages, functions, stream):
     kwargs = dict(
         model=model_name,
         messages=messages,

@@ -42,14 +39,6 @@ def send_with_retries(client, model_name, messages, functions, stream):
     if functions is not None:
         kwargs["functions"] = functions

-    # Check conditions to switch to gpt-4-vision-preview or strip out image_url messages
-    if client and is_gpt4_with_openai_base_url(model_name, client):
-        if any(isinstance(msg.get("content"), list) and any("image_url" in item for item in msg.get("content") if isinstance(item, dict)) for msg in messages):
-            kwargs['model'] = "gpt-4-vision-preview"
-            # gpt-4-vision is limited to max tokens of 4096
-            kwargs["max_tokens"] = 4096
-
     key = json.dumps(kwargs, sort_keys=True).encode()

     # Generate SHA1 hash of kwargs and append it to chat_completion_call_hashes

@@ -58,7 +47,7 @@ def send_with_retries(client, model_name, messages, functions, stream):
     if not stream and CACHE is not None and key in CACHE:
         return hash_object, CACHE[key]

-    res = client.chat.completions.create(**kwargs)
+    res = litellm.completion(**kwargs)

     if not stream and CACHE is not None:
         CACHE[key] = res

@@ -66,10 +55,9 @@ def send_with_retries(client, model_name, messages, functions, stream):
     return hash_object, res


-def simple_send_with_retries(client, model_name, messages):
+def simple_send_with_retries(model_name, messages):
     try:
         _hash, response = send_with_retries(
-            client=client,
             model_name=model_name,
             messages=messages,
             functions=None,
@@ -104,16 +104,3 @@ def show_messages(messages, title=None, functions=None):

    if functions:
        dump(functions)


def is_gpt4_with_openai_base_url(model_name, client):
    """
    Check if the model_name starts with 'gpt-4' and the client base URL includes 'api.openai.com'.

    :param model_name: The name of the model to check.
    :param client: The OpenAI client instance.
    :return: True if conditions are met, False otherwise.
    """
    if client is None or not hasattr(client, "base_url"):
        return False
    return model_name.startswith("gpt-4") and "api.openai.com" in client.base_url.host

@@ -3,6 +3,7 @@ import queue
import tempfile
import time

import litellm
import numpy as np

try:
@@ -26,7 +27,7 @@ class Voice:

    threshold = 0.15

    def __init__(self, client):
    def __init__(self):
        if sf is None:
            raise SoundDeviceError
        try:
@@ -37,8 +38,6 @@ class Voice:
        except (OSError, ModuleNotFoundError):
            raise SoundDeviceError

        self.client = client

    def callback(self, indata, frames, time, status):
        """This is called (from a separate thread) for each audio block."""
        rms = np.sqrt(np.mean(indata**2))
@@ -89,7 +88,7 @@ class Voice:
                file.write(self.q.get())

        with open(filename, "rb") as fh:
            transcript = self.client.audio.transcriptions.create(
            transcript = litellm.transcription(
                model="whisper-1", file=fh, prompt=history, language=language
            )

@ -17,7 +17,6 @@ import git
|
|||
import lox
|
||||
import matplotlib.pyplot as plt
|
||||
import numpy as np
|
||||
import openai
|
||||
import pandas as pd
|
||||
import prompts
|
||||
import typer
|
||||
|
@ -956,22 +955,7 @@ def run_test(
|
|||
chat_history_file=history_fname,
|
||||
)
|
||||
|
||||
if "OPENAI_API_BASE" in os.environ and "openrouter.ai" in os.environ["OPENAI_API_BASE"]:
|
||||
client = openai.OpenAI(
|
||||
api_key=os.environ["OPENAI_API_KEY"],
|
||||
base_url=os.environ.get("OPENAI_API_BASE"),
|
||||
default_headers={
|
||||
"HTTP-Referer": "http://aider.chat",
|
||||
"X-Title": "Aider",
|
||||
},
|
||||
)
|
||||
else:
|
||||
client = openai.OpenAI(
|
||||
api_key=os.environ["OPENAI_API_KEY"],
|
||||
base_url=os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
|
||||
)
|
||||
|
||||
main_model = models.Model.create(model_name, client)
|
||||
main_model = models.Model(model_name)
|
||||
edit_format = edit_format or main_model.edit_format
|
||||
|
||||
dump(main_model)
|
||||
|
@ -983,7 +967,6 @@ def run_test(
|
|||
main_model,
|
||||
edit_format,
|
||||
io,
|
||||
client=client,
|
||||
fnames=fnames,
|
||||
use_git=False,
|
||||
stream=False,
|
||||
|
|
|
@ -4,13 +4,13 @@
|
|||
#
|
||||
# pip-compile --output-file=dev-requirements.txt dev-requirements.in
|
||||
#
|
||||
alabaster==0.7.13
|
||||
alabaster==0.7.16
|
||||
# via sphinx
|
||||
babel==2.14.0
|
||||
# via sphinx
|
||||
build==1.0.3
|
||||
build==1.2.1
|
||||
# via pip-tools
|
||||
certifi==2023.11.17
|
||||
certifi==2024.2.2
|
||||
# via requests
|
||||
cfgv==3.4.0
|
||||
# via pre-commit
|
||||
|
@ -20,11 +20,11 @@ click==8.1.7
|
|||
# via
|
||||
# pip-tools
|
||||
# typer
|
||||
contourpy==1.2.0
|
||||
contourpy==1.2.1
|
||||
# via matplotlib
|
||||
cycler==0.12.1
|
||||
# via matplotlib
|
||||
dill==0.3.7
|
||||
dill==0.3.8
|
||||
# via
|
||||
# multiprocess
|
||||
# pathos
|
||||
|
@ -34,13 +34,13 @@ docutils==0.20.1
|
|||
# via
|
||||
# sphinx
|
||||
# sphinx-rtd-theme
|
||||
filelock==3.13.1
|
||||
filelock==3.13.4
|
||||
# via virtualenv
|
||||
fonttools==4.47.0
|
||||
fonttools==4.51.0
|
||||
# via matplotlib
|
||||
identify==2.5.33
|
||||
identify==2.5.35
|
||||
# via pre-commit
|
||||
idna==3.6
|
||||
idna==3.7
|
||||
# via requests
|
||||
imagesize==1.4.1
|
||||
# via sphinx
|
||||
|
@ -48,107 +48,114 @@ imgcat==0.5.0
|
|||
# via -r dev-requirements.in
|
||||
iniconfig==2.0.0
|
||||
# via pytest
|
||||
jinja2==3.1.2
|
||||
jinja2==3.1.3
|
||||
# via sphinx
|
||||
kiwisolver==1.4.5
|
||||
# via matplotlib
|
||||
lox==0.11.0
|
||||
# via -r dev-requirements.in
|
||||
markupsafe==2.1.3
|
||||
markdown-it-py==3.0.0
|
||||
# via rich
|
||||
markupsafe==2.1.5
|
||||
# via jinja2
|
||||
matplotlib==3.8.2
|
||||
matplotlib==3.8.4
|
||||
# via -r dev-requirements.in
|
||||
multiprocess==0.70.15
|
||||
mdurl==0.1.2
|
||||
# via markdown-it-py
|
||||
multiprocess==0.70.16
|
||||
# via pathos
|
||||
nodeenv==1.8.0
|
||||
# via pre-commit
|
||||
numpy==1.26.3
|
||||
numpy==1.26.4
|
||||
# via
|
||||
# contourpy
|
||||
# matplotlib
|
||||
# pandas
|
||||
packaging==23.2
|
||||
packaging==24.0
|
||||
# via
|
||||
# build
|
||||
# matplotlib
|
||||
# pytest
|
||||
# sphinx
|
||||
pandas==2.1.4
|
||||
pandas==2.2.2
|
||||
# via -r dev-requirements.in
|
||||
pathos==0.3.1
|
||||
pathos==0.3.2
|
||||
# via lox
|
||||
pillow==10.2.0
|
||||
pillow==10.3.0
|
||||
# via matplotlib
|
||||
pip-tools==7.3.0
|
||||
pip-tools==7.4.1
|
||||
# via -r dev-requirements.in
|
||||
platformdirs==4.1.0
|
||||
platformdirs==4.2.0
|
||||
# via virtualenv
|
||||
pluggy==1.3.0
|
||||
pluggy==1.4.0
|
||||
# via pytest
|
||||
pox==0.3.3
|
||||
pox==0.3.4
|
||||
# via pathos
|
||||
ppft==1.7.6.7
|
||||
ppft==1.7.6.8
|
||||
# via pathos
|
||||
pre-commit==3.6.0
|
||||
pre-commit==3.7.0
|
||||
# via -r dev-requirements.in
|
||||
pygments==2.17.2
|
||||
# via sphinx
|
||||
pyparsing==3.1.1
|
||||
# via
|
||||
# rich
|
||||
# sphinx
|
||||
pyparsing==3.1.2
|
||||
# via matplotlib
|
||||
pyproject-hooks==1.0.0
|
||||
# via build
|
||||
pytest==7.4.4
|
||||
# via
|
||||
# build
|
||||
# pip-tools
|
||||
pytest==8.1.1
|
||||
# via -r dev-requirements.in
|
||||
python-dateutil==2.8.2
|
||||
python-dateutil==2.9.0.post0
|
||||
# via
|
||||
# matplotlib
|
||||
# pandas
|
||||
pytz==2023.3.post1
|
||||
pytz==2024.1
|
||||
# via pandas
|
||||
pyyaml==6.0.1
|
||||
# via pre-commit
|
||||
requests==2.31.0
|
||||
# via sphinx
|
||||
rich==13.7.1
|
||||
# via typer
|
||||
shellingham==1.5.4
|
||||
# via typer
|
||||
six==1.16.0
|
||||
# via python-dateutil
|
||||
snowballstemmer==2.2.0
|
||||
# via sphinx
|
||||
sphinx==7.2.6
|
||||
sphinx==7.3.6
|
||||
# via
|
||||
# sphinx-rtd-theme
|
||||
# sphinxcontrib-applehelp
|
||||
# sphinxcontrib-devhelp
|
||||
# sphinxcontrib-htmlhelp
|
||||
# sphinxcontrib-jquery
|
||||
# sphinxcontrib-qthelp
|
||||
# sphinxcontrib-serializinghtml
|
||||
sphinx-rtd-theme==2.0.0
|
||||
# via lox
|
||||
sphinxcontrib-applehelp==1.0.7
|
||||
sphinxcontrib-applehelp==1.0.8
|
||||
# via sphinx
|
||||
sphinxcontrib-devhelp==1.0.5
|
||||
sphinxcontrib-devhelp==1.0.6
|
||||
# via sphinx
|
||||
sphinxcontrib-htmlhelp==2.0.4
|
||||
sphinxcontrib-htmlhelp==2.0.5
|
||||
# via sphinx
|
||||
sphinxcontrib-jquery==4.1
|
||||
# via sphinx-rtd-theme
|
||||
sphinxcontrib-jsmath==1.0.1
|
||||
# via sphinx
|
||||
sphinxcontrib-qthelp==1.0.6
|
||||
sphinxcontrib-qthelp==1.0.7
|
||||
# via sphinx
|
||||
sphinxcontrib-serializinghtml==1.1.9
|
||||
sphinxcontrib-serializinghtml==1.1.10
|
||||
# via sphinx
|
||||
typer==0.9.0
|
||||
typer==0.12.3
|
||||
# via -r dev-requirements.in
|
||||
typing-extensions==4.9.0
|
||||
typing-extensions==4.11.0
|
||||
# via typer
|
||||
tzdata==2023.4
|
||||
tzdata==2024.1
|
||||
# via pandas
|
||||
urllib3==2.1.0
|
||||
urllib3==2.2.1
|
||||
# via requests
|
||||
virtualenv==20.25.0
|
||||
virtualenv==20.25.3
|
||||
# via pre-commit
|
||||
wheel==0.42.0
|
||||
wheel==0.43.0
|
||||
# via pip-tools
|
||||
|
||||
# The following packages are considered to be unsafe in a requirements file:
|
||||
|
|
170 docs/faq.md
@ -2,15 +2,13 @@
|
|||
# Frequently asked questions
|
||||
|
||||
- [How does aider use git?](#how-does-aider-use-git)
|
||||
- [GPT-4 vs GPT-3.5](#gpt-4-vs-gpt-35)
|
||||
- [Can I use aider with other LLMs, local LLMs, etc?](#can-i-use-aider-with-other-llms-local-llms-etc)
|
||||
- [Accessing other LLMs with OpenRouter](#accessing-other-llms-with-openrouter)
|
||||
- [Aider isn't editing my files?](#aider-isnt-editing-my-files)
|
||||
- [Can I use aider with other LLMs, local LLMs, etc?](https://aider.chat/docs/llms.html)
|
||||
- [Can I run aider in Google Colab?](#can-i-run-aider-in-google-colab)
|
||||
- [How can I run aider locally from source code?](#how-can-i-run-aider-locally-from-source-code)
|
||||
- [Can I script aider?](#can-i-script-aider)
|
||||
- [What code languages does aider support?](#what-code-languages-does-aider-support)
|
||||
- [How to use pipx to avoid python package conflicts?](#how-to-use-pipx-to-avoid-python-package-conflicts)
|
||||
- [Aider isn't editing my files?](#aider-isnt-editing-my-files)
|
||||
- [How can I add ALL the files to the chat?](#how-can-i-add-all-the-files-to-the-chat)
|
||||
- [Can I specify guidelines or conventions?](#can-i-specify-guidelines-or-conventions)
|
||||
- [Can I change the system prompts that aider uses?](#can-i-change-the-system-prompts-that-aider-uses)
|
||||
|
@ -40,145 +38,6 @@ While it is not recommended, you can disable aider's use of git in a few ways:
|
|||
- `--no-dirty-commits` will stop aider from committing dirty files before applying GPT's edits.
|
||||
- `--no-git` will completely stop aider from using git on your files. You should ensure you are keeping sensible backups of the files you are working with.
|
||||
|
||||
## GPT-4 vs GPT-3.5
|
||||
|
||||
Aider supports all of OpenAI's chat models,
|
||||
and uses GPT-4 Turbo by default.
|
||||
It has a large context window, good coding skills and
|
||||
generally obeys the instructions in the system prompt.
|
||||
|
||||
You can choose another model with the `--model` command line argument
|
||||
or one of these shortcuts:
|
||||
|
||||
```
|
||||
aider -4 # to use gpt-4-0613
|
||||
aider -3 # to use gpt-3.5-turbo-0125
|
||||
```
|
||||
|
||||
The older `gpt-4-0613` model is a great choice if GPT-4 Turbo is having
|
||||
trouble with your coding task, although it has a smaller context window
|
||||
which can be a real limitation.
|
||||
|
||||
All the GPT-4 models are able to structure code edits as "diffs"
|
||||
and use a
|
||||
[repository map](https://aider.chat/docs/repomap.html)
|
||||
to improve its ability to make changes in larger codebases.
|
||||
|
||||
GPT-3.5 is
|
||||
limited to editing somewhat smaller codebases.
|
||||
It is less able to follow instructions and
|
||||
so can't reliably return code edits as "diffs".
|
||||
Aider disables the
|
||||
repository map
|
||||
when using GPT-3.5.
|
||||
|
||||
For detailed quantitative comparisons of the various models, please see the
|
||||
[aider blog](https://aider.chat/blog/)
|
||||
which contains many benchmarking articles.
|
||||
|
||||
## Can I use aider with other LLMs, local LLMs, etc?
|
||||
|
||||
Aider provides experimental support for LLMs other than OpenAI's GPT-3.5 and GPT-4. The support is currently only experimental for two reasons:
|
||||
|
||||
- GPT-3.5 is just barely capable of *editing code* to provide aider's interactive "pair programming" style workflow. None of the other models seem to be as capable as GPT-3.5 yet.
|
||||
- Just "hooking up" aider to a new model by connecting to its API is almost certainly not enough to get it working in a useful way. Getting aider working well with GPT-3.5 and GPT-4 was a significant undertaking, involving [specific code editing prompts and backends for each model and extensive benchmarking](https://aider.chat/docs/benchmarks.html). Officially supporting each new LLM will probably require a similar effort to tailor the prompts and editing backends.
|
||||
|
||||
Numerous users have done experiments with numerous models. None of these experiments have yet identified other models that look like they are capable of working well with aider.
|
||||
|
||||
Once we see signs that a *particular* model is capable of code editing, it would be reasonable for aider to attempt to officially support such a model. Until then, aider will simply maintain experimental support for using alternative models.
|
||||
|
||||
There are ongoing discussions about [LLM integrations in the aider discord](https://discord.gg/yaUk7JqJ9G).
|
||||
|
||||
Here are some [GitHub issues which may contain relevant information](https://github.com/paul-gauthier/aider/issues?q=is%3Aissue+%23172).
|
||||
|
||||
### OpenAI API compatible LLMs
|
||||
|
||||
If you can make the model accessible via an OpenAI compatible API,
|
||||
you can use `--openai-api-base` to connect to a different API endpoint.
|
||||
|
||||
### Local LLMs
|
||||
|
||||
[LiteLLM](https://github.com/BerriAI/litellm) and
|
||||
[LocalAI](https://github.com/go-skynet/LocalAI)
|
||||
are relevant tools to serve local models via an OpenAI compatible API.
|
||||
|
||||
|
||||
### Azure
|
||||
|
||||
Aider can be configured to connect to the OpenAI models on Azure.
|
||||
Aider supports the configuration changes specified in the
|
||||
[official openai python library docs](https://github.com/openai/openai-python#microsoft-azure-endpoints).
|
||||
You should be able to run aider with the following arguments to connect to Azure:
|
||||
|
||||
```
|
||||
$ aider \
|
||||
--openai-api-type azure \
|
||||
--openai-api-key your-key-goes-here \
|
||||
--openai-api-base https://example-endpoint.openai.azure.com \
|
||||
--openai-api-version 2023-05-15 \
|
||||
--openai-api-deployment-id deployment-name \
|
||||
...
|
||||
```
|
||||
|
||||
You could also store those values in an `.aider.conf.yml` file in your home directory:
|
||||
|
||||
```
|
||||
openai-api-type: azure
|
||||
openai-api-key: your-key-goes-here
|
||||
openai-api-base: https://example-endpoint.openai.azure.com
|
||||
openai-api-version: 2023-05-15
|
||||
openai-api-deployment-id: deployment-name
|
||||
```
|
||||
|
||||
See the
|
||||
[official Azure documentation on using OpenAI models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/chatgpt-quickstart?tabs=command-line&pivots=programming-language-python)
|
||||
for more information on how to populate the above configuration values.
|
||||
|
||||
|
||||
## Accessing other LLMs with OpenRouter
|
||||
|
||||
[OpenRouter](https://openrouter.ai) provides an interface to [many models](https://openrouter.ai/models) which are not widely accessible, in particular Claude 3 Opus.
|
||||
|
||||
To access the OpenRouter models, simply:
|
||||
|
||||
```
|
||||
# Install aider
|
||||
pip install aider-chat
|
||||
|
||||
# Setup OpenRouter access
|
||||
export OPENAI_API_KEY=<your-openrouter-key>
|
||||
export OPENAI_API_BASE=https://openrouter.ai/api/v1
|
||||
|
||||
# For example, run aider with Claude 3 Opus using the diff editing format
|
||||
aider --model anthropic/claude-3-opus --edit-format diff
|
||||
```
|
||||
|
||||
|
||||
## Aider isn't editing my files?
|
||||
|
||||
Sometimes GPT will reply with some code changes that don't get applied to your local files.
|
||||
In these cases, aider might say something like "Failed to apply edit to *filename*".
|
||||
|
||||
This usually happens because GPT is not specifying the edits
|
||||
to make in the format that aider expects.
|
||||
GPT-3.5 is especially prone to disobeying the system prompt instructions in this manner, but it also happens with GPT-4.
|
||||
|
||||
Aider makes every effort to get GPT to conform, and works hard to deal with
|
||||
replies that are "almost" correctly formatted.
|
||||
If Aider detects an improperly formatted reply, it gives GPT feedback to try again.
|
||||
Also, before each release new versions of aider are
|
||||
[benchmarked](https://aider.chat/docs/benchmarks.html).
|
||||
This helps prevent regressions in the code editing
|
||||
performance of GPT that could have been inadvertently
|
||||
introduced.
|
||||
|
||||
But sometimes GPT just won't cooperate.
|
||||
In these cases, here are some things you might try:
|
||||
|
||||
- Try the older GPT-4 model `gpt-4-0613` not GPT-4 Turbo by running `aider --model gpt-4-0613`.
|
||||
- Use `/drop` to remove files from the chat session which aren't needed for the task at hand. This will reduce distractions and may help GPT produce properly formatted edits.
|
||||
- Use `/clear` to remove the conversation history, again to help GPT focus.
|
||||
|
||||
|
||||
## Can I run aider in Google Colab?
|
||||
|
||||
|
@ -353,6 +212,31 @@ Install [pipx](https://pipx.pypa.io/stable/) then just do:
|
|||
pipx install aider-chat
|
||||
```
|
||||
|
||||
## Aider isn't editing my files?
|
||||
|
||||
Sometimes GPT will reply with some code changes that don't get applied to your local files.
|
||||
In these cases, aider might say something like "Failed to apply edit to *filename*".
|
||||
|
||||
This usually happens because GPT is not specifying the edits
|
||||
to make in the format that aider expects.
|
||||
GPT-3.5 is especially prone to disobeying the system prompt instructions in this manner, but it also happens with GPT-4.
|
||||
|
||||
Aider makes every effort to get GPT to conform, and works hard to deal with
|
||||
replies that are "almost" correctly formatted.
|
||||
If Aider detects an improperly formatted reply, it gives GPT feedback to try again.
|
||||
Also, before each release new versions of aider are
|
||||
[benchmarked](https://aider.chat/docs/benchmarks.html).
|
||||
This helps prevent regressions in the code editing
|
||||
performance of GPT that could have been inadvertently
|
||||
introduced.
|
||||
|
||||
But sometimes GPT just won't cooperate.
|
||||
In these cases, here are some things you might try:
|
||||
|
||||
- Try the older GPT-4 model `gpt-4-0613` not GPT-4 Turbo by running `aider --model gpt-4-0613`.
|
||||
- Use `/drop` to remove files from the chat session which aren't needed for the task at hand. This will reduce distractions and may help GPT produce properly formatted edits.
|
||||
- Use `/clear` to remove the conversation history, again to help GPT focus.
|
||||
|
||||
## How can I add ALL the files to the chat?
|
||||
|
||||
People regularly ask about how to add **many or all of their repo's files** to the chat.
|
||||
|
|
|
@ -2,9 +2,10 @@
|
|||
# Installing aider
|
||||
|
||||
- [Install git](#install-git)
|
||||
- [Get your OpenAI API key](#get-your-openai-api-key)
|
||||
- [Get your API key](#get-your-api-key)
|
||||
- [Windows install](#windows-install)
|
||||
- [Mac/Linux install](#maclinux-install)
|
||||
- [Working with other LLMs](https://aider.chat/docs/llms.html)
|
||||
- [Tutorial videos](#tutorial-videos)
|
||||
|
||||
## Install git
|
||||
|
@ -13,33 +14,48 @@ Make sure you have git installed.
|
|||
Here are
|
||||
[instructions for installing git in various environments](https://github.com/git-guides/install-git).
|
||||
|
||||
## Get your OpenAI API key
|
||||
## Get your API key
|
||||
|
||||
You need a paid
|
||||
To work with OpenAI's GPT 3.5 or GPT-4 models you need a paid
|
||||
[OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key).
|
||||
Note that this is different than being a "ChatGPT Plus" subscriber.
|
||||
|
||||
To work with Anthropic's models like Claude 3 Opus you need a paid
|
||||
[Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api).
|
||||
|
||||
## Windows install
|
||||
|
||||
```
|
||||
# Install aider
|
||||
py -m pip install aider-chat
|
||||
|
||||
# Launch aider
|
||||
aider --openai-api-key sk-xxxxxxxxxxxxxxx
|
||||
# To work with GPT-4 Turbo:
|
||||
$ aider --openai-api-key sk-xxx... --4turbo
|
||||
|
||||
# To work with Claude 3 Opus:
|
||||
$ aider --anthropic-api-key sk-xxx... --opus
|
||||
```
|
||||
|
||||
## Mac/Linux install
|
||||
|
||||
|
||||
```
|
||||
# Install aider
|
||||
python -m pip install aider-chat
|
||||
|
||||
# Launch aider
|
||||
aider --openai-api-key sk-xxxxxxxxxxxxxxx
|
||||
# To work with GPT-4 Turbo:
|
||||
$ aider --openai-api-key sk-xxx... --4turbo
|
||||
|
||||
# To work with Claude 3 Opus:
|
||||
$ aider --anthropic-api-key sk-xxx... --opus
|
||||
```
|
||||
|
||||
## Working with other LLMs
|
||||
|
||||
Aider works well with GPT 3.5, GPT-4, GPT-4 Turbo with Vision,
|
||||
and Claude 3 Opus.
|
||||
It also has support for [connecting to almost any LLM](https://aider.chat/docs/llms.html).
|
||||
|
||||
|
||||
## Tutorial videos
|
||||
|
||||
Here are a few tutorial videos:
|
||||
|
|
152 docs/llms.md Normal file
@@ -0,0 +1,152 @@

# Aider can connect to most LLMs

Aider works well with OpenAI's GPT 3.5, GPT-4, GPT-4 Turbo with Vision, and
Anthropic's Claude 3 Opus and Sonnet.

GPT-4 Turbo and Claude 3 Opus are recommended for the best results.

Aider also has support for connecting to almost any LLM, but it may not work as well
depending on the capabilities of the model.
For context, GPT-3.5 is just barely capable of *editing code* to provide aider's
interactive "pair programming" style workflow.
Models that are less capable than GPT-3.5 may struggle to perform well with aider.

- [OpenAI](#openai)
- [Anthropic](#anthropic)
- [Azure](#azure)
- [OpenAI compatible APIs](#openai-compatible-apis)
- [Other LLMs](#other-llms)
- [Editing format](#editing-format)

## OpenAI

To work with OpenAI's models, you need to provide your
[OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key)
either in the `OPENAI_API_KEY` environment variable or
via the `--openai-api-key` command line switch.

Aider has some built-in shortcuts for the most popular OpenAI models and
has been tested and benchmarked to work well with them:

- OpenAI's GPT-4 Turbo: running `aider` with no args uses GPT-4 Turbo by default.
- OpenAI's GPT-4 Turbo with Vision: run `aider --4-turbo-vision` to use this vision-capable model, allowing you to share images with GPT by adding them to the chat with `/add` or by naming them on the command line.
- OpenAI's GPT-3.5 Turbo: run `aider --35-turbo`.

You can use `aider --model <model-name>` to use any other OpenAI model.
For example, if you want to use a specific version of GPT-4 Turbo
you could do `aider --model gpt-4-0125-preview`.

## Anthropic

To work with Anthropic's models, you need to provide your
[Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api)
either in the `ANTHROPIC_API_KEY` environment variable or
via the `--anthropic-api-key` command line switch.

Aider has some built-in shortcuts for the most popular Anthropic models and
has been tested and benchmarked to work well with them:

- Anthropic's Claude 3 Opus: `aider --opus`
- Anthropic's Claude 3 Sonnet: `aider --sonnet`

You can use `aider --model <model-name>` to use any other Anthropic model.
For example, if you want to use a specific version of Opus
you could do `aider --model claude-3-opus-20240229`.

## Azure

Aider can be configured to connect to the OpenAI models on Azure.
You can run aider with the following arguments to connect to Azure:

```
$ aider \
    --openai-api-type azure \
    --openai-api-key your-key-goes-here \
    --openai-api-base https://example-endpoint.openai.azure.com \
    --openai-api-version 2023-05-15 \
    --openai-api-deployment-id deployment-name \
    ...
```

You could also store those values in an `.aider.conf.yml` file in your home directory:

```
openai-api-type: azure
openai-api-key: your-key-goes-here
openai-api-base: https://example-endpoint.openai.azure.com
openai-api-version: 2023-05-15
openai-api-deployment-id: deployment-name
```

Or you can populate the following environment variables instead:

```
OPENAI_API_TYPE=azure
OPENAI_API_KEY=your-key-goes-here
OPENAI_API_BASE=https://example-endpoint.openai.azure.com
OPENAI_API_VERSION=2023-05-15
OPENAI_API_DEPLOYMENT_ID=deployment-name
```

See the
[official Azure documentation on using OpenAI models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/chatgpt-quickstart?tabs=command-line&pivots=programming-language-python)
for more information on how to populate the above configuration values.

## OpenAI compatible APIs

If you can make an LLM accessible via an OpenAI compatible API endpoint,
you can use `--openai-api-base` to have aider connect to it.

You might need to use `--no-require-model-info` if aider doesn't
recognize the model you want to use.
For unknown models, aider won't have normal metadata available like
the context window size, token costs, etc.
Some minor functionality will be limited when using such models.

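For example, here is a rough sketch of pointing aider at a locally hosted server that speaks the OpenAI API; the URL, key, and model name are placeholders for whatever your server actually expects:

```
# placeholder values -- substitute whatever your local server expects
aider --openai-api-base http://localhost:8080/v1 \
      --openai-api-key dummy-key \
      --model my-local-model \
      --no-require-model-info
```
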
## Other LLMs

Aider uses the [litellm](https://docs.litellm.ai/docs/providers) package
to connect to hundreds of other models.
You can use `aider --model <model-name>` to use any supported model.

To explore the list of supported models you can run `aider --model <name>`.
If the supplied name is not an exact match for a known model, aider will
return a list of possible matching models.
For example:

```
$ aider --model turbo

Unknown model turbo, did you mean one of these?
- gpt-4-turbo-preview
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-3.5-turbo
- gpt-3.5-turbo-0301
...
```

Depending on which model you access, you may need to provide an API key
or other configuration parameters by setting environment variables.
If any required variables are not set, aider will print an
error message listing which parameters are needed.

See the [list of providers supported by litellm](https://docs.litellm.ai/docs/providers)
for more details.

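As a rough sketch, litellm model names are usually prefixed with the provider, and each provider documents its own key variable; the model name and environment variable below follow the litellm provider docs and are only illustrative:

```
# illustrative only -- check the litellm provider docs for exact names
export OPENROUTER_API_KEY=your-key-goes-here

aider --model openrouter/anthropic/claude-3-opus
```
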
## Editing format

Aider uses 3 different "edit formats" to collect code edits from different LLMs:

- `whole` is a "whole file" editing format, where the model edits a file by returning a full new copy of the file with any changes included.
- `diff` is a more efficient diff style format, where the model specifies blocks of code to search and replace in order to make changes to files.
- `udiff` is the most efficient editing format, where the model returns unified diffs to apply changes to the file.

Different models work best with different editing formats.
Aider is configured to use the best edit format for all the popular OpenAI and Anthropic models.

For lesser-known models aider will default to using the "whole" editing format.
If you would like to experiment with the more advanced formats, you can
use these switches: `--edit-format diff` or `--edit-format udiff`.

@ -1,3 +1,4 @@
|
|||
[pytest]
|
||||
norecursedirs = tmp.* build benchmark
|
||||
addopts = -p no:warnings
|
||||
|
||||
|
|
|
@ -12,7 +12,7 @@ numpy
|
|||
scipy
|
||||
backoff
|
||||
pathspec
|
||||
networkx
|
||||
networkx<3.3 # 3.3 no longer works on python 3.9
|
||||
diskcache
|
||||
grep_ast
|
||||
packaging
|
||||
|
@ -24,3 +24,4 @@ Pillow
|
|||
diff-match-patch
|
||||
playwright
|
||||
pypandoc
|
||||
litellm
|
128 requirements.txt
@ -4,21 +4,26 @@
|
|||
#
|
||||
# pip-compile requirements.in
|
||||
#
|
||||
aiohttp==3.9.5
|
||||
# via litellm
|
||||
aiosignal==1.3.1
|
||||
# via aiohttp
|
||||
annotated-types==0.6.0
|
||||
# via pydantic
|
||||
anyio==4.2.0
|
||||
anyio==4.3.0
|
||||
# via
|
||||
# httpx
|
||||
# openai
|
||||
attrs==23.2.0
|
||||
# via
|
||||
# aiohttp
|
||||
# jsonschema
|
||||
# referencing
|
||||
backoff==2.2.1
|
||||
# via -r requirements.in
|
||||
beautifulsoup4==4.12.3
|
||||
# via -r requirements.in
|
||||
certifi==2023.11.17
|
||||
certifi==2024.2.2
|
||||
# via
|
||||
# httpcore
|
||||
# httpx
|
||||
|
@ -29,6 +34,8 @@ cffi==1.16.0
|
|||
# soundfile
|
||||
charset-normalizer==3.3.2
|
||||
# via requests
|
||||
click==8.1.7
|
||||
# via litellm
|
||||
configargparse==1.7
|
||||
# via -r requirements.in
|
||||
diff-match-patch==20230430
|
||||
|
@ -37,9 +44,17 @@ diskcache==5.6.3
|
|||
# via -r requirements.in
|
||||
distro==1.9.0
|
||||
# via openai
|
||||
filelock==3.13.4
|
||||
# via huggingface-hub
|
||||
frozenlist==1.4.1
|
||||
# via
|
||||
# aiohttp
|
||||
# aiosignal
|
||||
fsspec==2024.3.1
|
||||
# via huggingface-hub
|
||||
gitdb==4.0.11
|
||||
# via gitpython
|
||||
gitpython==3.1.40
|
||||
gitpython==3.1.43
|
||||
# via -r requirements.in
|
||||
greenlet==3.0.3
|
||||
# via playwright
|
||||
|
@ -47,76 +62,102 @@ grep-ast==0.2.4
|
|||
# via -r requirements.in
|
||||
h11==0.14.0
|
||||
# via httpcore
|
||||
httpcore==1.0.2
|
||||
httpcore==1.0.5
|
||||
# via httpx
|
||||
httpx==0.26.0
|
||||
httpx==0.27.0
|
||||
# via openai
|
||||
idna==3.6
|
||||
huggingface-hub==0.22.2
|
||||
# via tokenizers
|
||||
idna==3.7
|
||||
# via
|
||||
# anyio
|
||||
# httpx
|
||||
# requests
|
||||
jsonschema==4.20.0
|
||||
# yarl
|
||||
importlib-metadata==7.1.0
|
||||
# via litellm
|
||||
jinja2==3.1.3
|
||||
# via litellm
|
||||
jsonschema==4.21.1
|
||||
# via -r requirements.in
|
||||
jsonschema-specifications==2023.12.1
|
||||
# via jsonschema
|
||||
litellm==1.35.12
|
||||
# via -r requirements.in
|
||||
markdown-it-py==3.0.0
|
||||
# via rich
|
||||
markupsafe==2.1.5
|
||||
# via jinja2
|
||||
mdurl==0.1.2
|
||||
# via markdown-it-py
|
||||
multidict==6.0.5
|
||||
# via
|
||||
# aiohttp
|
||||
# yarl
|
||||
networkx==3.2.1
|
||||
# via -r requirements.in
|
||||
numpy==1.26.3
|
||||
numpy==1.26.4
|
||||
# via
|
||||
# -r requirements.in
|
||||
# scipy
|
||||
openai==1.6.1
|
||||
# via -r requirements.in
|
||||
packaging==23.2
|
||||
# via -r requirements.in
|
||||
openai==1.23.1
|
||||
# via
|
||||
# -r requirements.in
|
||||
# litellm
|
||||
packaging==24.0
|
||||
# via
|
||||
# -r requirements.in
|
||||
# huggingface-hub
|
||||
pathspec==0.12.1
|
||||
# via
|
||||
# -r requirements.in
|
||||
# grep-ast
|
||||
pillow==10.2.0
|
||||
pillow==10.3.0
|
||||
# via -r requirements.in
|
||||
playwright==1.41.2
|
||||
playwright==1.43.0
|
||||
# via -r requirements.in
|
||||
prompt-toolkit==3.0.43
|
||||
# via -r requirements.in
|
||||
pycparser==2.21
|
||||
pycparser==2.22
|
||||
# via cffi
|
||||
pydantic==2.5.3
|
||||
pydantic==2.7.0
|
||||
# via openai
|
||||
pydantic-core==2.14.6
|
||||
pydantic-core==2.18.1
|
||||
# via pydantic
|
||||
pyee==11.0.1
|
||||
pyee==11.1.0
|
||||
# via playwright
|
||||
pygments==2.17.2
|
||||
# via rich
|
||||
pypandoc==1.12
|
||||
pypandoc==1.13
|
||||
# via -r requirements.in
|
||||
python-dotenv==1.0.1
|
||||
# via litellm
|
||||
pyyaml==6.0.1
|
||||
# via -r requirements.in
|
||||
referencing==0.32.0
|
||||
# via
|
||||
# -r requirements.in
|
||||
# huggingface-hub
|
||||
referencing==0.34.0
|
||||
# via
|
||||
# jsonschema
|
||||
# jsonschema-specifications
|
||||
regex==2023.12.25
|
||||
regex==2024.4.16
|
||||
# via tiktoken
|
||||
requests==2.31.0
|
||||
# via tiktoken
|
||||
rich==13.7.0
|
||||
# via
|
||||
# huggingface-hub
|
||||
# litellm
|
||||
# tiktoken
|
||||
rich==13.7.1
|
||||
# via -r requirements.in
|
||||
rpds-py==0.16.2
|
||||
rpds-py==0.18.0
|
||||
# via
|
||||
# jsonschema
|
||||
# referencing
|
||||
scipy==1.11.4
|
||||
scipy==1.13.0
|
||||
# via -r requirements.in
|
||||
smmap==5.0.1
|
||||
# via gitdb
|
||||
sniffio==1.3.0
|
||||
sniffio==1.3.1
|
||||
# via
|
||||
# anyio
|
||||
# httpx
|
||||
|
@ -127,21 +168,32 @@ soundfile==0.12.1
|
|||
# via -r requirements.in
|
||||
soupsieve==2.5
|
||||
# via beautifulsoup4
|
||||
tiktoken==0.5.2
|
||||
# via -r requirements.in
|
||||
tqdm==4.66.1
|
||||
# via openai
|
||||
tree-sitter==0.20.4
|
||||
# via tree-sitter-languages
|
||||
tree-sitter-languages==1.9.1
|
||||
# via grep-ast
|
||||
typing-extensions==4.9.0
|
||||
tiktoken==0.6.0
|
||||
# via
|
||||
# -r requirements.in
|
||||
# litellm
|
||||
tokenizers==0.19.1
|
||||
# via litellm
|
||||
tqdm==4.66.2
|
||||
# via
|
||||
# huggingface-hub
|
||||
# openai
|
||||
tree-sitter==0.21.3
|
||||
# via tree-sitter-languages
|
||||
tree-sitter-languages==1.10.2
|
||||
# via grep-ast
|
||||
typing-extensions==4.11.0
|
||||
# via
|
||||
# huggingface-hub
|
||||
# openai
|
||||
# pydantic
|
||||
# pydantic-core
|
||||
# pyee
|
||||
urllib3==2.1.0
|
||||
urllib3==2.2.1
|
||||
# via requests
|
||||
wcwidth==0.2.12
|
||||
wcwidth==0.2.13
|
||||
# via prompt-toolkit
|
||||
yarl==1.9.4
|
||||
# via aiohttp
|
||||
zipp==3.18.1
|
||||
# via importlib-metadata
|
||||
|
|
|
@ -6,21 +6,16 @@ from unittest.mock import MagicMock, patch
|
|||
import git
|
||||
import openai
|
||||
|
||||
from aider import models
|
||||
from aider.coders import Coder
|
||||
from aider.dump import dump # noqa: F401
|
||||
from aider.io import InputOutput
|
||||
from aider.models import Model
|
||||
from aider.utils import ChdirTemporaryDirectory, GitTemporaryDirectory
|
||||
|
||||
|
||||
class TestCoder(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.patcher = patch("aider.coders.base_coder.check_model_availability")
|
||||
self.mock_check = self.patcher.start()
|
||||
self.mock_check.return_value = True
|
||||
|
||||
def tearDown(self):
|
||||
self.patcher.stop()
|
||||
self.GPT35 = Model("gpt-3.5-turbo")
|
||||
|
||||
def test_allowed_to_edit(self):
|
||||
with GitTemporaryDirectory():
|
||||
|
@ -38,7 +33,7 @@ class TestCoder(unittest.TestCase):
|
|||
|
||||
# YES!
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(models.GPT4, None, io, fnames=["added.txt"])
|
||||
coder = Coder.create(self.GPT35, None, io, fnames=["added.txt"])
|
||||
|
||||
self.assertTrue(coder.allowed_to_edit("added.txt"))
|
||||
self.assertTrue(coder.allowed_to_edit("repo.txt"))
|
||||
|
@ -66,7 +61,7 @@ class TestCoder(unittest.TestCase):
|
|||
# say NO
|
||||
io = InputOutput(yes=False)
|
||||
|
||||
coder = Coder.create(models.GPT4, None, io, fnames=["added.txt"])
|
||||
coder = Coder.create(self.GPT35, None, io, fnames=["added.txt"])
|
||||
|
||||
self.assertTrue(coder.allowed_to_edit("added.txt"))
|
||||
self.assertFalse(coder.allowed_to_edit("repo.txt"))
|
||||
|
@ -90,7 +85,7 @@ class TestCoder(unittest.TestCase):
|
|||
# say NO
|
||||
io = InputOutput(yes=False)
|
||||
|
||||
coder = Coder.create(models.GPT4, None, io, fnames=["added.txt"])
|
||||
coder = Coder.create(self.GPT35, None, io, fnames=["added.txt"])
|
||||
|
||||
self.assertTrue(coder.allowed_to_edit("added.txt"))
|
||||
self.assertFalse(coder.need_commit_before_edits)
|
||||
|
@ -111,7 +106,7 @@ class TestCoder(unittest.TestCase):
|
|||
repo.git.commit("-m", "new")
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, mock_io)
|
||||
coder = Coder.create(self.GPT35, None, mock_io)
|
||||
|
||||
mod = coder.get_last_modified()
|
||||
|
||||
|
@ -134,7 +129,7 @@ class TestCoder(unittest.TestCase):
|
|||
files = [file1, file2]
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, io=InputOutput(), fnames=files)
|
||||
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files)
|
||||
|
||||
content = coder.get_files_content().splitlines()
|
||||
self.assertIn("file1.txt", content)
|
||||
|
@ -157,7 +152,7 @@ class TestCoder(unittest.TestCase):
|
|||
repo.git.commit("-m", "new")
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, mock_io)
|
||||
coder = Coder.create(self.GPT35, None, mock_io)
|
||||
|
||||
# Call the check_for_file_mentions method
|
||||
coder.check_for_file_mentions("Please check file1.txt and file2.py")
|
||||
|
@ -175,7 +170,7 @@ class TestCoder(unittest.TestCase):
|
|||
def test_check_for_ambiguous_filename_mentions_of_longer_paths(self):
|
||||
with GitTemporaryDirectory():
|
||||
io = InputOutput(pretty=False, yes=True)
|
||||
coder = Coder.create(models.GPT4, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
|
||||
fname = Path("file1.txt")
|
||||
fname.touch()
|
||||
|
@ -196,7 +191,7 @@ class TestCoder(unittest.TestCase):
|
|||
def test_check_for_subdir_mention(self):
|
||||
with GitTemporaryDirectory():
|
||||
io = InputOutput(pretty=False, yes=True)
|
||||
coder = Coder.create(models.GPT4, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
|
||||
fname = Path("other") / "file1.txt"
|
||||
fname.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
@ -225,7 +220,7 @@ class TestCoder(unittest.TestCase):
|
|||
files = [file1, file2]
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, io=InputOutput(), fnames=files)
|
||||
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files)
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = "ok"
|
||||
|
@ -251,7 +246,7 @@ class TestCoder(unittest.TestCase):
|
|||
files = [file1, file2]
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, io=InputOutput(), fnames=files)
|
||||
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files)
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = "ok"
|
||||
|
@ -281,7 +276,7 @@ class TestCoder(unittest.TestCase):
|
|||
files = [file1]
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, io=InputOutput(), fnames=files)
|
||||
coder = Coder.create(self.GPT35, None, io=InputOutput(), fnames=files)
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = "ok"
|
||||
|
@ -306,7 +301,7 @@ class TestCoder(unittest.TestCase):
|
|||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(
|
||||
models.GPT4,
|
||||
self.GPT35,
|
||||
None,
|
||||
io=InputOutput(encoding=encoding),
|
||||
fnames=files,
|
||||
|
@ -336,21 +331,19 @@ class TestCoder(unittest.TestCase):
|
|||
# Mock the IO object
|
||||
mock_io = MagicMock()
|
||||
|
||||
mock_client = MagicMock()
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, None, mock_io, client=mock_client)
|
||||
|
||||
# Set up the mock to raise
|
||||
mock_client.chat.completions.create.side_effect = openai.BadRequestError(
|
||||
message="Invalid request",
|
||||
response=MagicMock(),
|
||||
body=None,
|
||||
)
|
||||
coder = Coder.create(self.GPT35, None, mock_io)
|
||||
|
||||
# Call the run method and assert that InvalidRequestError is raised
|
||||
with self.assertRaises(openai.BadRequestError):
|
||||
coder.run(with_message="hi")
|
||||
with patch("litellm.completion") as Mock:
|
||||
Mock.side_effect = openai.BadRequestError(
|
||||
message="Invalid request",
|
||||
response=MagicMock(),
|
||||
body=None,
|
||||
)
|
||||
|
||||
coder.run(with_message="hi")
|
||||
|
||||
def test_new_file_edit_one_commit(self):
|
||||
"""A new file shouldn't get pre-committed before the GPT edit commit"""
|
||||
|
@ -360,7 +353,7 @@ class TestCoder(unittest.TestCase):
|
|||
fname = Path("file.txt")
|
||||
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(models.GPT4, "diff", io=io, fnames=[str(fname)])
|
||||
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)])
|
||||
|
||||
self.assertTrue(fname.exists())
|
||||
|
||||
|
@ -416,7 +409,7 @@ new
|
|||
fname1.write_text("ONE\n")
|
||||
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(models.GPT4, "diff", io=io, fnames=[str(fname1), str(fname2)])
|
||||
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname1), str(fname2)])
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = f"""
|
||||
|
@ -468,7 +461,7 @@ TWO
|
|||
fname2.write_text("OTHER\n")
|
||||
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(models.GPT4, "diff", io=io, fnames=[str(fname)])
|
||||
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)])
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = f"""
|
||||
|
@ -545,7 +538,7 @@ three
|
|||
repo.git.commit("-m", "initial")
|
||||
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(models.GPT4, "diff", io=io, fnames=[str(fname)])
|
||||
coder = Coder.create(self.GPT35, "diff", io=io, fnames=[str(fname)])
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = f"""
|
||||
|
@ -595,7 +588,7 @@ two
|
|||
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(
|
||||
models.GPT4,
|
||||
self.GPT35,
|
||||
None,
|
||||
io,
|
||||
fnames=[fname1, fname2, fname3],
|
||||
|
|
|
@ -6,15 +6,14 @@ import tempfile
|
|||
from io import StringIO
|
||||
from pathlib import Path
|
||||
from unittest import TestCase
|
||||
from unittest.mock import patch
|
||||
|
||||
import git
|
||||
|
||||
from aider import models
|
||||
from aider.coders import Coder
|
||||
from aider.commands import Commands
|
||||
from aider.dump import dump # noqa: F401
|
||||
from aider.io import InputOutput
|
||||
from aider.models import Model
|
||||
from aider.utils import ChdirTemporaryDirectory, GitTemporaryDirectory, make_repo
|
||||
|
||||
|
||||
|
@ -24,9 +23,7 @@ class TestCommands(TestCase):
|
|||
self.tempdir = tempfile.mkdtemp()
|
||||
os.chdir(self.tempdir)
|
||||
|
||||
self.patcher = patch("aider.coders.base_coder.check_model_availability")
|
||||
self.mock_check = self.patcher.start()
|
||||
self.mock_check.return_value = True
|
||||
self.GPT35 = Model("gpt-3.5-turbo")
|
||||
|
||||
def tearDown(self):
|
||||
os.chdir(self.original_cwd)
|
||||
|
@ -37,7 +34,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# Call the cmd_add method with 'foo.txt' and 'bar.txt' as a single string
|
||||
|
@ -53,7 +50,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
commands.cmd_add("**.txt")
|
||||
|
@ -63,7 +60,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# Create some test files
|
||||
|
@ -89,7 +86,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# Call the cmd_add method with a non-existent file pattern
|
||||
|
@ -103,7 +100,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
fname = Path("[abc].nonexistent")
|
||||
|
@ -120,7 +117,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# Create a directory and add files to it using pathlib
|
||||
|
@ -171,7 +168,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
subdir = Path("subdir")
|
||||
|
@ -198,7 +195,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# Create a new file foo.bad which will fail to decode as utf-8
|
||||
|
@ -218,7 +215,7 @@ class TestCommands(TestCase):
|
|||
with open(f"{tempdir}/test.txt", "w") as f:
|
||||
f.write("test")
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# Run the cmd_git method with the arguments "commit -a -m msg"
|
||||
|
@ -234,7 +231,7 @@ class TestCommands(TestCase):
|
|||
# Initialize the Commands and InputOutput objects
|
||||
io = InputOutput(pretty=False, yes=True)
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
commands.cmd_add("foo.txt bar.txt")
|
||||
|
@ -275,7 +272,7 @@ class TestCommands(TestCase):
|
|||
os.chdir("subdir")
|
||||
|
||||
io = InputOutput(pretty=False, yes=True)
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# this should get added
|
||||
|
@ -293,7 +290,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
Path("side_dir").mkdir()
|
||||
|
@ -317,7 +314,7 @@ class TestCommands(TestCase):
|
|||
repo.git.commit("-m", "initial")
|
||||
|
||||
io = InputOutput(pretty=False, yes=True)
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
self.assertFalse(repo.is_dirty())
|
||||
|
@ -338,7 +335,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
outside_file = Path(tmp_dname) / "outside.txt"
|
||||
|
@ -361,7 +358,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
outside_file = Path(tmp_dname) / "outside.txt"
|
||||
|
@ -379,7 +376,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
fname = Path("with[brackets].txt")
|
||||
|
@ -394,7 +391,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
fname = Path("file.txt")
|
||||
|
@ -409,7 +406,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
fname = Path("file with spaces.txt")
|
||||
|
@ -437,7 +434,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
# There's no reason this /add should trigger a commit
|
||||
|
@ -460,7 +457,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=True)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
fname = "file.txt"
|
||||
|
@ -479,7 +476,7 @@ class TestCommands(TestCase):
|
|||
io = InputOutput(pretty=False, yes=False)
|
||||
from aider.coders import Coder
|
||||
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
fname = Path("test.txt")
|
||||
|
@ -502,7 +499,7 @@ class TestCommands(TestCase):
|
|||
with GitTemporaryDirectory() as repo_dir:
|
||||
repo = git.Repo(repo_dir)
|
||||
io = InputOutput(pretty=False, yes=True)
|
||||
coder = Coder.create(models.GPT35, None, io)
|
||||
coder = Coder.create(self.GPT35, None, io)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
other_path = Path(repo_dir) / "other_file.txt"
|
||||
|
@ -563,7 +560,7 @@ class TestCommands(TestCase):
|
|||
|
||||
io = InputOutput(yes=True)
|
||||
coder = Coder.create(
|
||||
models.GPT4, None, io, fnames=[fname1, fname2], aider_ignore_file=str(aignore)
|
||||
self.GPT35, None, io, fnames=[fname1, fname2], aider_ignore_file=str(aignore)
|
||||
)
|
||||
commands = Commands(io, coder)
|
||||
|
||||
|
|
|
@ -5,21 +5,16 @@ import unittest
|
|||
from pathlib import Path
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
from aider import models
|
||||
from aider.coders import Coder
|
||||
from aider.coders import editblock_coder as eb
|
||||
from aider.dump import dump # noqa: F401
|
||||
from aider.io import InputOutput
|
||||
from aider.models import Model
|
||||
|
||||
|
||||
class TestUtils(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.patcher = patch("aider.coders.base_coder.check_model_availability")
|
||||
self.mock_check = self.patcher.start()
|
||||
self.mock_check.return_value = True
|
||||
|
||||
def tearDown(self):
|
||||
self.patcher.stop()
|
||||
self.GPT35 = Model("gpt-3.5-turbo")
|
||||
|
||||
# fuzzy logic disabled v0.11.2-dev
|
||||
def __test_replace_most_similar_chunk(self):
|
||||
|
@ -302,7 +297,7 @@ These changes replace the `subprocess.run` patches with `subprocess.check_output
|
|||
files = [file1]
|
||||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(models.GPT4, "diff", io=InputOutput(), fnames=files)
|
||||
coder = Coder.create(self.GPT35, "diff", io=InputOutput(), fnames=files)
|
||||
|
||||
def mock_send(*args, **kwargs):
|
||||
coder.partial_response_content = f"""
|
||||
|
@ -339,7 +334,7 @@ new
|
|||
|
||||
# Initialize the Coder object with the mocked IO and mocked repo
|
||||
coder = Coder.create(
|
||||
models.GPT4,
|
||||
self.GPT35,
|
||||
"diff",
|
||||
io=InputOutput(dry_run=True),
|
||||
fnames=files,
|
||||
|
|
|
@ -22,14 +22,10 @@ class TestMain(TestCase):
|
|||
self.original_cwd = os.getcwd()
|
||||
self.tempdir = tempfile.mkdtemp()
|
||||
os.chdir(self.tempdir)
|
||||
self.patcher = patch("aider.coders.base_coder.check_model_availability")
|
||||
self.mock_check = self.patcher.start()
|
||||
self.mock_check.return_value = True
|
||||
|
||||
def tearDown(self):
|
||||
os.chdir(self.original_cwd)
|
||||
shutil.rmtree(self.tempdir, ignore_errors=True)
|
||||
self.patcher.stop()
|
||||
|
||||
def test_main_with_empty_dir_no_files_on_command(self):
|
||||
main(["--no-git"], input=DummyInput(), output=DummyOutput())
|
||||
|
|
|
@ -1,54 +1,27 @@
|
|||
import unittest
|
||||
from unittest.mock import MagicMock
|
||||
|
||||
from aider.models import Model, OpenRouterModel
|
||||
from aider.models import Model
|
||||
|
||||
|
||||
class TestModels(unittest.TestCase):
|
||||
def test_max_context_tokens(self):
|
||||
model = Model.create("gpt-3.5-turbo")
|
||||
self.assertEqual(model.max_context_tokens, 4 * 1024)
|
||||
model = Model("gpt-3.5-turbo")
|
||||
self.assertEqual(model.info["max_input_tokens"], 16385)
|
||||
|
||||
model = Model.create("gpt-3.5-turbo-16k")
|
||||
self.assertEqual(model.max_context_tokens, 16385)
|
||||
model = Model("gpt-3.5-turbo-16k")
|
||||
self.assertEqual(model.info["max_input_tokens"], 16385)
|
||||
|
||||
model = Model.create("gpt-3.5-turbo-1106")
|
||||
self.assertEqual(model.max_context_tokens, 16385)
|
||||
model = Model("gpt-3.5-turbo-1106")
|
||||
self.assertEqual(model.info["max_input_tokens"], 16385)
|
||||
|
||||
model = Model.create("gpt-4")
|
||||
self.assertEqual(model.max_context_tokens, 8 * 1024)
|
||||
model = Model("gpt-4")
|
||||
self.assertEqual(model.info["max_input_tokens"], 8 * 1024)
|
||||
|
||||
model = Model.create("gpt-4-32k")
|
||||
self.assertEqual(model.max_context_tokens, 32 * 1024)
|
||||
model = Model("gpt-4-32k")
|
||||
self.assertEqual(model.info["max_input_tokens"], 32 * 1024)
|
||||
|
||||
model = Model.create("gpt-4-0613")
|
||||
self.assertEqual(model.max_context_tokens, 8 * 1024)
|
||||
|
||||
def test_openrouter_model_properties(self):
|
||||
client = MagicMock()
|
||||
|
||||
class ModelData:
|
||||
def __init__(self, id, object, context_length, pricing):
|
||||
self.id = id
|
||||
self.object = object
|
||||
self.context_length = context_length
|
||||
self.pricing = pricing
|
||||
|
||||
model_data = ModelData(
|
||||
"openai/gpt-4", "model", "8192", {"prompt": "0.00006", "completion": "0.00012"}
|
||||
)
|
||||
|
||||
class ModelList:
|
||||
def __init__(self, data):
|
||||
self.data = data
|
||||
|
||||
client.models.list.return_value = ModelList([model_data])
|
||||
|
||||
model = OpenRouterModel(client, "gpt-4")
|
||||
self.assertEqual(model.name, "openai/gpt-4")
|
||||
self.assertEqual(model.max_context_tokens, 8192)
|
||||
self.assertEqual(model.prompt_price, 0.06)
|
||||
self.assertEqual(model.completion_price, 0.12)
|
||||
model = Model("gpt-4-0613")
|
||||
self.assertEqual(model.info["max_input_tokens"], 8 * 1024)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
|
|
@@ -1,17 +1,17 @@
-from collections import defaultdict
 import os
 import unittest
-from pathlib import Path
-import networkx as nx
 
 from aider.dump import dump  # noqa: F401
 from aider.io import InputOutput
+from aider.models import Model
 from aider.repomap import RepoMap
-from aider import models
 from aider.utils import IgnorantTemporaryDirectory
 
 
 class TestRepoMap(unittest.TestCase):
+    def setUp(self):
+        self.GPT35 = Model("gpt-3.5-turbo")
+
     def test_get_repo_map(self):
         # Create a temporary directory with sample files for testing
         test_files = [

@@ -27,7 +27,7 @@ class TestRepoMap(unittest.TestCase):
                     f.write("")
 
             io = InputOutput()
-            repo_map = RepoMap(root=temp_dir, io=io)
+            repo_map = RepoMap(main_model=self.GPT35, root=temp_dir, io=io)
             other_files = [os.path.join(temp_dir, file) for file in test_files]
             result = repo_map.get_repo_map([], other_files)
 

@@ -75,7 +75,7 @@ print(my_function(3, 4))
                 f.write(file_content3)
 
             io = InputOutput()
-            repo_map = RepoMap(root=temp_dir, io=io)
+            repo_map = RepoMap(main_model=self.GPT35, root=temp_dir, io=io)
             other_files = [
                 os.path.join(temp_dir, test_file1),
                 os.path.join(temp_dir, test_file2),

@@ -109,7 +109,7 @@ print(my_function(3, 4))
                 with open(os.path.join(temp_dir, file), "w") as f:
                     f.write("")
 
-            repo_map = RepoMap(root=temp_dir, io=InputOutput())
+            repo_map = RepoMap(main_model=self.GPT35, root=temp_dir, io=InputOutput())
 
             other_files = [os.path.join(temp_dir, file) for file in test_files]
             result = repo_map.get_repo_map([], other_files)

@@ -138,7 +138,7 @@ print(my_function(3, 4))
                     f.write("def foo(): pass\n")
 
             io = InputOutput()
-            repo_map = RepoMap(root=temp_dir, io=io)
+            repo_map = RepoMap(main_model=self.GPT35, root=temp_dir, io=io)
             test_files = [os.path.join(temp_dir, file) for file in test_files]
             result = repo_map.get_repo_map(test_files[:2], test_files[2:])
 

@@ -155,6 +155,9 @@ print(my_function(3, 4))
 
 
 class TestRepoMapTypescript(unittest.TestCase):
+    def setUp(self):
+        self.GPT35 = Model("gpt-3.5-turbo")
+
     def test_get_repo_map_typescript(self):
         # Create a temporary directory with a sample TypeScript file
         test_file_ts = "test_file.ts"

@@ -193,7 +196,7 @@ export function myFunction(input: number): number {
                 f.write(file_content_ts)
 
             io = InputOutput()
-            repo_map = RepoMap(root=temp_dir, io=io)
+            repo_map = RepoMap(main_model=self.GPT35, root=temp_dir, io=io)
             other_files = [os.path.join(temp_dir, test_file_ts)]
             result = repo_map.get_repo_map([], other_files)
 

@@ -209,5 +212,6 @@ export function myFunction(input: number): number {
             # close the open cache files, so Windows won't error
             del repo_map
 
 
 if __name__ == "__main__":
     unittest.main()
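
As a quick illustration of the constructor change running through these hunks, here is a small sketch of building a repo map the new way, assuming aider is installed; the paths are stand-ins, and `main_model` is presumably used to size the map against the model's limits:

```
# Sketch of the updated RepoMap construction shown in the hunks above.
# Assumptions: aider is installed; "." and "hello.py" are illustrative paths.
from aider.io import InputOutput
from aider.models import Model
from aider.repomap import RepoMap

gpt35 = Model("gpt-3.5-turbo")

repo_map = RepoMap(main_model=gpt35, root=".", io=InputOutput())
other_files = ["hello.py"]  # hypothetical files to include in the map
print(repo_map.get_repo_map([], other_files))
```
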
@@ -12,12 +12,11 @@ class PrintCalled(Exception):
 
 
 class TestSendChat(unittest.TestCase):
+    @patch("litellm.completion")
     @patch("builtins.print")
-    def test_send_with_retries_rate_limit_error(self, mock_print):
-        mock_client = MagicMock()
-
+    def test_send_with_retries_rate_limit_error(self, mock_print, mock_completion):
         # Set up the mock to raise
-        mock_client.chat.completions.create.side_effect = [
+        mock_completion.side_effect = [
             openai.RateLimitError(
                 "rate limit exceeded",
                 response=MagicMock(),

@@ -27,20 +26,18 @@ class TestSendChat(unittest.TestCase):
         ]
 
         # Call the send_with_retries method
-        send_with_retries(mock_client, "model", ["message"], None, False)
+        send_with_retries("model", ["message"], None, False)
         mock_print.assert_called_once()
 
-    @patch("aider.sendchat.openai.ChatCompletion.create")
+    @patch("litellm.completion")
     @patch("builtins.print")
-    def test_send_with_retries_connection_error(self, mock_print, mock_chat_completion_create):
-        mock_client = MagicMock()
-
+    def test_send_with_retries_connection_error(self, mock_print, mock_completion):
         # Set up the mock to raise
-        mock_client.chat.completions.create.side_effect = [
+        mock_completion.side_effect = [
             httpx.ConnectError("Connection error"),
             None,
         ]
 
         # Call the send_with_retries method
-        send_with_retries(mock_client, "model", ["message"], None, False)
+        send_with_retries("model", ["message"], None, False)
         mock_print.assert_called_once()
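
The pattern these hunks converge on, sketched below: because completion requests now go through litellm, the tests patch `litellm.completion` directly instead of wiring up a fake OpenAI client. This assumes the retry helper is importable from `aider.sendchat`, as in the tests above:

```
# Sketch of the new mocking pattern: patch litellm.completion rather than an
# injected OpenAI client. Assumes aider (with litellm and httpx) is installed.
from unittest.mock import patch

import httpx

from aider.sendchat import send_with_retries


@patch("litellm.completion")
@patch("builtins.print")
def check_retry_on_connection_error(mock_print, mock_completion):
    # First completion call fails; the retry then succeeds with a dummy result.
    mock_completion.side_effect = [httpx.ConnectError("Connection error"), None]
    send_with_retries("model", ["message"], None, False)
    mock_print.assert_called_once()


check_retry_on_connection_error()
```
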
@@ -3,13 +3,13 @@ import shutil
 import tempfile
 import unittest
 from pathlib import Path
-from unittest.mock import MagicMock, patch
+from unittest.mock import MagicMock
 
-from aider import models
 from aider.coders import Coder
 from aider.coders.wholefile_coder import WholeFileCoder
 from aider.dump import dump  # noqa: F401
 from aider.io import InputOutput
+from aider.models import Model
 
 
 class TestWholeFileCoder(unittest.TestCase):
@@ -18,21 +18,17 @@ class TestWholeFileCoder(unittest.TestCase):
         self.tempdir = tempfile.mkdtemp()
         os.chdir(self.tempdir)
 
-        self.patcher = patch("aider.coders.base_coder.check_model_availability")
-        self.mock_check = self.patcher.start()
-        self.mock_check.return_value = True
+        self.GPT35 = Model("gpt-3.5-turbo")
 
     def tearDown(self):
         os.chdir(self.original_cwd)
         shutil.rmtree(self.tempdir, ignore_errors=True)
 
-        self.patcher.stop()
-
     def test_no_files(self):
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
 
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[])
         coder.partial_response_content = (
             'To print "Hello, World!" in most programming languages, you can use the following'
             ' code:\n\n```python\nprint("Hello, World!")\n```\n\nThis code will output "Hello,'
@@ -44,7 +40,7 @@ class TestWholeFileCoder(unittest.TestCase):
 
     def test_no_files_new_file_should_ask(self):
         io = InputOutput(yes=False)  # <- yes=FALSE
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[])
         coder.partial_response_content = (
             'To print "Hello, World!" in most programming languages, you can use the following'
             ' code:\n\nfoo.js\n```python\nprint("Hello, World!")\n```\n\nThis code will output'

@@ -61,7 +57,7 @@ class TestWholeFileCoder(unittest.TestCase):
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[sample_file])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[sample_file])
 
         # Set the partial response content with the updated content
         coder.partial_response_content = f"{sample_file}\n```\nUpdated content\n```"

@@ -85,7 +81,7 @@ class TestWholeFileCoder(unittest.TestCase):
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[sample_file])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[sample_file])
 
         # Set the partial response content with the updated content
         coder.partial_response_content = f"{sample_file}\n```\n0\n\1\n2\n"

@@ -109,7 +105,7 @@ Quote!
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[sample_file])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[sample_file])
 
         coder.choose_fence()
 

@@ -139,7 +135,7 @@ Quote!
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[sample_file])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[sample_file])
 
         # Set the partial response content with the updated content
         # With path/to/ prepended onto the filename

@@ -164,7 +160,7 @@ Quote!
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io)
+        coder = WholeFileCoder(main_model=self.GPT35, io=io)
 
         # Set the partial response content with the updated content
         coder.partial_response_content = f"{sample_file}\n```\nUpdated content\n```"

@@ -192,7 +188,7 @@ Quote!
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[sample_file])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[sample_file])
 
         # Set the partial response content with the updated content
         coder.partial_response_content = (

@@ -235,7 +231,7 @@ after b
 """
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[fname_a, fname_b])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[fname_a, fname_b])
 
         # Set the partial response content with the updated content
         coder.partial_response_content = response

@@ -259,7 +255,7 @@ after b
 
         # Initialize WholeFileCoder with the temporary directory
         io = InputOutput(yes=True)
-        coder = WholeFileCoder(None, main_model=models.GPT35, io=io, fnames=[sample_file])
+        coder = WholeFileCoder(main_model=self.GPT35, io=io, fnames=[sample_file])
 
         # Set the partial response content with the updated content
         coder.partial_response_content = (

@@ -292,7 +288,7 @@ after b
         files = [file1]
 
         # Initialize the Coder object with the mocked IO and mocked repo
-        coder = Coder.create(models.GPT4, "whole", io=InputOutput(), fnames=files)
+        coder = Coder.create(self.GPT35, "whole", io=InputOutput(), fnames=files)
 
         # no trailing newline so the response content below doesn't add ANOTHER newline
         new_content = "new\ntwo\nthree"
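
A sketch of the constructor shape the rewritten tests converge on: the leading client argument is gone and the main model object is passed explicitly, either directly or through the factory. This assumes aider is installed; the file lists are illustrative placeholders:

```
# Sketch of the post-change coder construction used throughout the tests above.
# Assumptions: aider is installed; the empty fnames lists are placeholders.
from aider.coders import Coder
from aider.coders.wholefile_coder import WholeFileCoder
from aider.io import InputOutput
from aider.models import Model

gpt35 = Model("gpt-3.5-turbo")
io = InputOutput(yes=True)

# Direct construction, as in the WholeFileCoder tests:
coder = WholeFileCoder(main_model=gpt35, io=io, fnames=[])

# Or via the factory, as in the final hunk:
coder = Coder.create(gpt35, "whole", io=InputOutput(), fnames=[])
```
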