Merge branch 'main' into dvf_llm_log

Commit 3e9f6dcca2 — Daniel Vainsencher, 2024-06-12 18:32:48 -04:00, committed by GitHub
(GPG key ID B5690EEEBB952194; no known key found for this signature in database)
68 changed files with 755 additions and 356 deletions


@ -42,4 +42,4 @@ jobs:
- name: Run tests
run: |
python -m unittest discover -s tests
python -m unittest discover -s aider/tests


@ -41,4 +41,4 @@ jobs:
- name: Run tests
run: |
python -m unittest discover -s tests
python -m unittest discover -s aider/tests

.gitignore — 4 changed lines

@ -1,3 +1,6 @@
.DS_Store
.vscode/
aider.code-workspace
*.pyc
.aider*
aider_chat.egg-info/
@ -5,3 +8,4 @@ build
Gemfile.lock
_site
.jekyll-cache/
.jekyll-metadata


@ -1,4 +1,6 @@
<!-- Edit README.md, not index.md -->
# Aider is AI pair programming in your terminal
Aider lets you pair program with LLMs,
@ -24,47 +26,57 @@ and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
</p>
## Getting started
<!--[[[cog
# We can't do this here: {% include get-started.md %}
# Because this page is rendered by GitHub as the repo README
cog.out(open("website/_includes/get-started.md").read())
]]]-->
You can get started quickly like this:
```
$ pip install aider-chat
# To work with GPT-4o
# Change directory into a git repo
$ cd /to/your/git/repo
# Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# To work with Claude 3 Opus:
# Or, work with Claude 3 Opus on your repo
$ export ANTHROPIC_API_KEY=your-key-goes-here
$ aider --opus
```
<!--[[[end]]]-->
**See the
See the
[installation instructions](https://aider.chat/docs/install.html)
and other
[documentation](https://aider.chat/docs/usage.html)
for more details.**
for more details.
## Features
- Chat with aider about your code: `aider <file1> <file2> ...`
- Run aider with the files you want to edit: `aider <file1> <file2> ...`
- Ask for changes:
- New features, test cases, improvements.
- Bug fixes, updated docs or code refactors.
- Paste in a GitHub issue that needs to be solved.
- Add new features or test cases.
- Describe a bug.
- Paste in an error message or a GitHub issue URL.
- Refactor code.
- Update docs.
- Aider will edit your files to complete your request.
- Aider [automatically git commits](https://aider.chat/docs/git.html) changes with a sensible commit message.
- Aider works with [most popular languages](https://aider.chat/docs/languages.html): python, javascript, typescript, php, html, css, and more...
- Aider works best with GPT-4o and Claude 3 Opus
and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
- Aider can make coordinated changes across multiple files at once.
- Aider can edit multiple files at once for complex requests.
- Aider uses a [map of your entire git repo](https://aider.chat/docs/repomap.html), which helps it work well in larger codebases.
- You can also edit files in your editor while chatting with aider.
Aider will notice and always use the latest version.
So you can bounce back and forth between aider and your editor, to collaboratively code with AI.
- Images can be added to the chat (GPT-4o, GPT-4 Turbo, etc).
- URLs can be added to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html) using speech recognition.
- Edit files in your editor while chatting with aider,
and it will always use the latest version.
Pair program with AI.
- Add images to the chat (GPT-4o, GPT-4 Turbo, etc).
- Add URLs to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html).
## State of the art
@ -81,22 +93,25 @@ projects like django, scikitlearn, matplotlib, etc.
</a>
</p>
## Documentation
## More info
- [Documentation](https://aider.chat/)
- [Installation](https://aider.chat/docs/install.html)
- [Usage](https://aider.chat/docs/usage.html)
- [Tutorial videos](https://aider.chat/docs/tutorials.html)
- [Connecting to LLMs](https://aider.chat/docs/llms.html)
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [FAQ](https://aider.chat/docs/faq.html)
- [GitHub](https://github.com/paul-gauthier/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)
## Kind words from users
- *The best free open source AI coding assistant.* -- [IndyDevDan](https://youtu.be/YALpX8oOn78)
- *The best AI coding assistant so far.* -- [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
- *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *Aider ... has easily quadrupled my coding productivity.* -- [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
- *It's a cool workflow... Aider's ergonomics are perfect for me.* -- [qup](https://news.ycombinator.com/item?id=38185326)
- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/paul-gauthier/aider/issues/124)
@ -111,4 +126,5 @@ projects like django, scikitlearn, matplotlib, etc.
- *I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.* -- [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.* -- [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *Best agent for actual dev work in existing codebases.* -- [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)


@ -42,7 +42,7 @@ def get_parser(default_config_files, git_root):
"--anthropic-api-key",
metavar="ANTHROPIC_API_KEY",
env_var="ANTHROPIC_API_KEY",
help="Specify the OpenAI API key",
help="Specify the Anthropic API key",
)
default_model = models.DEFAULT_MODEL_NAME
group.add_argument(
@ -141,6 +141,12 @@ def get_parser(default_config_files, git_root):
env_var="OPENAI_ORGANIZATION_ID",
help="Specify the OpenAI organization ID",
)
group.add_argument(
"--model-metadata-file",
metavar="MODEL_FILE",
default=None,
help="Specify a file with context window and costs for unknown models",
)
group.add_argument(
"--edit-format",
metavar="EDIT_FORMAT",
@ -363,6 +369,12 @@ def get_parser(default_config_files, git_root):
##########
group = parser.add_argument_group("Other Settings")
group.add_argument(
"--vim",
action="store_true",
help="Use VI editing mode in the terminal (default: False)",
default=False,
)
group.add_argument(
"--voice-language",
metavar="VOICE_LANGUAGE",


@ -18,7 +18,7 @@ from jsonschema import Draft7Validator
from rich.console import Console, Text
from rich.markdown import Markdown
from aider import __version__, models, prompts, utils
from aider import __version__, models, prompts, urls, utils
from aider.commands import Commands
from aider.history import ChatSummary
from aider.io import InputOutput
@ -587,6 +587,9 @@ class Coder:
while new_user_message:
self.reflected_message = None
list(self.send_new_user_message(new_user_message))
new_user_message = None
if self.reflected_message:
if self.num_reflections < self.max_reflections:
self.num_reflections += 1
new_user_message = self.reflected_message
@ -594,7 +597,6 @@ class Coder:
self.io.tool_error(
f"Only {self.max_reflections} reflections allowed, stopping."
)
new_user_message = None
if with_message:
return self.partial_response_content
@ -1221,9 +1223,7 @@ class Coder:
return
self.io.tool_error("Warning: it's best to only add files that need changes to the chat.")
self.io.tool_error(
"https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat"
)
self.io.tool_error(urls.edit_errors)
self.warning_given = True
def prepare_to_edit(self, edits):
@ -1263,9 +1263,7 @@ class Coder:
err = err.args[0]
self.io.tool_error("The LLM did not conform to the edit format.")
self.io.tool_error(
"For more info see: https://aider.chat/docs/faq.html#aider-isnt-editing-my-files"
)
self.io.tool_error(urls.edit_errors)
self.io.tool_error()
self.io.tool_error(str(err), strip=False)
@ -1330,8 +1328,8 @@ class Coder:
return context
def auto_commit(self, edited):
context = self.get_context_from_history(self.cur_messages)
res = self.repo.commit(fnames=edited, context=context, prefix="aider: ")
# context = self.get_context_from_history(self.cur_messages)
res = self.repo.commit(fnames=edited, prefix="aider: ")
if res:
commit_hash, commit_message = res
self.last_aider_commit_hash = commit_hash
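The reflection wiring in the hunks above is split across two diff chunks; here is a minimal standalone sketch of the control flow. The names `reflected_message`, `num_reflections`, and `max_reflections` come from the diff, while the simplified send function (returning the reflected feedback or `None`) is an assumption for illustration.

```python
# Sketch of the reflection loop shown above (illustrative, not aider's actual class).
MAX_REFLECTIONS = 3  # mirrors self.max_reflections in the diff


def run_with_reflections(send_new_user_message, first_message):
    """Resend the LLM's own 'reflected' feedback until it stops or we hit the cap."""
    num_reflections = 0
    new_user_message = first_message
    while new_user_message:
        # Assumed to return reflected feedback, or None when nothing needs a retry.
        reflected_message = send_new_user_message(new_user_message)
        new_user_message = None
        if reflected_message:
            if num_reflections < MAX_REFLECTIONS:
                num_reflections += 1
                new_user_message = reflected_message
            else:
                print(f"Only {MAX_REFLECTIONS} reflections allowed, stopping.")
```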


@ -4,140 +4,12 @@ from .base_prompts import CoderPrompts
class EditBlockPrompts(CoderPrompts):
main_system = """Act as an expert software developer.
Always use best practices when coding.
Respect and use existing conventions, libraries, etc that are already present in the code base.
{lazy_prompt}
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.
main_system = """Act as a software expert and answer user questions about how to use the aider program.
You never write code, just answer questions.
Always reply to the user in the same language they are using.
Once you understand the request you MUST:
1. Decide if you need to propose *SEARCH/REPLACE* edits to any files that haven't been added to the chat. You can create new files without asking. But if you need to propose edits to existing files not already added to the chat, you *MUST* tell the user their full path names and ask them to *add the files to the chat*. End your reply and wait for their approval. You can keep asking if you then decide you need to edit more files.
2. Think step-by-step and explain the needed changes with a numbered list of short sentences.
3. Describe each change with a *SEARCH/REPLACE block* per the examples below. All changes to files must use this *SEARCH/REPLACE block* format. ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
All changes to files must use the *SEARCH/REPLACE block* format.
Decide if you need to see any files that haven't been added to the chat. If so, you *MUST* tell the user their full path names and ask them to *add the files to the chat*. End your reply and wait for their approval. You can keep asking if you then decide you need to see more files.
"""
example_messages = [
dict(
role="user",
content="Change get_factorial() to use math.factorial",
),
dict(
role="assistant",
content="""To make this change we need to modify `mathweb/flask/app.py` to:
example_messages = []
1. Import the math package.
2. Remove the existing factorial() function.
3. Update get_factorial() to call math.factorial instead.
Here are the *SEARCH/REPLACE* blocks:
mathweb/flask/app.py
{fence[0]}python
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
{fence[1]}
mathweb/flask/app.py
{fence[0]}python
<<<<<<< SEARCH
def factorial(n):
"compute factorial"
if n == 0:
return 1
else:
return n * factorial(n-1)
=======
>>>>>>> REPLACE
{fence[1]}
mathweb/flask/app.py
{fence[0]}python
<<<<<<< SEARCH
return str(factorial(n))
=======
return str(math.factorial(n))
>>>>>>> REPLACE
{fence[1]}
""",
),
dict(
role="user",
content="Refactor hello() into its own file.",
),
dict(
role="assistant",
content="""To make this change we need to modify `main.py` and make a new file `hello.py`:
1. Make a new hello.py file with hello() in it.
2. Remove hello() from main.py and replace it with an import.
Here are the *SEARCH/REPLACE* blocks:
hello.py
{fence[0]}python
<<<<<<< SEARCH
=======
def hello():
"print a greeting"
print("hello")
>>>>>>> REPLACE
{fence[1]}
main.py
{fence[0]}python
<<<<<<< SEARCH
def hello():
"print a greeting"
print("hello")
=======
from hello import hello
>>>>>>> REPLACE
{fence[1]}
""",
),
]
system_reminder = """# *SEARCH/REPLACE block* Rules:
Every *SEARCH/REPLACE block* must use this format:
1. The file path alone on a line, verbatim. No bold asterisks, no quotes around it, no escaping of characters, etc.
2. The opening fence and code language, eg: {fence[0]}python
3. The start of search block: <<<<<<< SEARCH
4. A contiguous chunk of lines to search for in the existing source code
5. The dividing line: =======
6. The lines to replace into the source code
7. The end of the replace block: >>>>>>> REPLACE
8. The closing fence: {fence[1]}
Every *SEARCH* section must *EXACTLY MATCH* the existing source code, character for character, including all comments, docstrings, etc.
*SEARCH/REPLACE* blocks will replace *all* matching occurrences.
Include enough lines to make the SEARCH blocks unique.
Include *ALL* the code being searched and replaced!
Only create *SEARCH/REPLACE* blocks for files that the user has added to the chat!
To move code within a file, use 2 *SEARCH/REPLACE* blocks: 1 to delete it from its current location, 1 to insert it in the new location.
If you want to put code in a new file, use a *SEARCH/REPLACE block* with:
- A new file path, including dir name if needed
- An empty `SEARCH` section
- The new file's contents in the `REPLACE` section
{lazy_prompt}
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
"""
system_reminder = ""


@ -121,7 +121,6 @@ class Commands:
def run(self, inp):
if inp.startswith("!"):
return self.do_run("run", inp[1:])
return
res = self.matching_commands(inp)
if res is None:


@ -6,6 +6,7 @@ import sys
import streamlit as st
from aider import urls
from aider.coders import Coder
from aider.dump import dump # noqa: F401
from aider.io import InputOutput
@ -18,9 +19,11 @@ class CaptureIO(InputOutput):
def tool_output(self, msg):
self.lines.append(msg)
super().tool_output(msg)
def tool_error(self, msg):
self.lines.append(msg)
super().tool_error(msg)
def get_captured_lines(self):
lines = self.lines
@ -75,6 +78,9 @@ def get_coder():
# coder.io = io # this breaks the input_history
coder.commands.io = io
for line in coder.get_announcements():
coder.io.tool_output(line)
return coder
@ -159,12 +165,12 @@ class GUI:
pass
def do_recommended_actions(self):
text = "Aider works best when your code is stored in a git repo. \n"
text += f"[See the FAQ for more info]({urls.git})"
with st.expander("Recommended actions", expanded=True):
with st.popover("Create a git repo to track changes"):
st.write(
"Aider works best when your code is stored in a git repo. \n[See the FAQ"
" for more info](https://aider.chat/docs/git.html)"
)
st.write(text)
self.button("Create git repo", key=random.random(), help="?")
with st.popover("Update your `.gitignore` file"):
@ -405,12 +411,20 @@ class GUI:
prompt = self.state.prompt
self.state.prompt = None
# This duplicates logic from within Coder
self.num_reflections = 0
self.max_reflections = 3
while prompt:
with self.messages.chat_message("assistant"):
res = st.write_stream(self.coder.run_stream(prompt))
self.state.messages.append({"role": "assistant", "content": res})
# self.cost()
prompt = None
if self.coder.reflected_message:
if self.num_reflections < self.max_reflections:
self.num_reflections += 1
self.info(self.coder.reflected_message)
prompt = self.coder.reflected_message
@ -513,9 +527,9 @@ def gui_main():
st.set_page_config(
layout="wide",
page_title="Aider",
page_icon="https://aider.chat/assets/favicon-32x32.png",
page_icon=urls.favicon,
menu_items={
"Get Help": "https://aider.chat/",
"Get Help": urls.website,
"Report a bug": "https://github.com/paul-gauthier/aider/issues",
"About": "# Aider\nAI pair programming in your browser.",
},


@ -5,6 +5,7 @@ from datetime import datetime
from pathlib import Path
from prompt_toolkit.completion import Completer, Completion
from prompt_toolkit.enums import EditingMode
from prompt_toolkit.history import FileHistory
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.lexers import PygmentsLexer
@ -107,7 +108,9 @@ class InputOutput:
encoding="utf-8",
dry_run=False,
llm_history_file=None,
editingmode=EditingMode.EMACS,
):
self.editingmode = editingmode
no_color = os.environ.get("NO_COLOR")
if no_color is not None and no_color != "":
pretty = False
@ -237,7 +240,9 @@ class InputOutput:
def _(event):
event.current_buffer.insert_text("\n")
session = PromptSession(key_bindings=kb, **session_kwargs)
session = PromptSession(
key_bindings=kb, editing_mode=self.editingmode, **session_kwargs
)
line = session.prompt()
if line and line[0] == "{" and not multiline_input:


@ -6,6 +6,7 @@ from pathlib import Path
import git
from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode
from streamlit.web import cli
from aider import __version__, models, utils
@ -66,7 +67,7 @@ def setup_git(git_root, io):
with repo.config_reader() as config:
try:
user_name = config.get_value("user", "name", None)
except configparser.NoSectionError:
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
user_email = config.get_value("user", "email", None)
@ -246,6 +247,8 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
if return_coder and args.yes is None:
args.yes = True
editing_mode = EditingMode.VI if args.vim else EditingMode.EMACS
io = InputOutput(
args.pretty,
args.yes,
@ -259,6 +262,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
dry_run=args.dry_run,
encoding=args.encoding,
llm_history_file=args.llm_history_file,
editingmode=editing_mode,
)
fnames = [str(Path(fn).resolve()) for fn in args.files]
@ -333,6 +337,26 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
if args.openai_organization_id:
os.environ["OPENAI_ORGANIZATION"] = args.openai_organization_id
model_def_files = []
model_def_fname = Path(".aider.models.json")
model_def_files.append(Path.home() / model_def_fname) # homedir
if git_root:
model_def_files.append(Path(git_root) / model_def_fname) # git root
if args.model_metadata_file:
model_def_files.append(args.model_metadata_file)
model_def_files.append(model_def_fname.resolve())
model_def_files = list(map(str, model_def_files))
model_def_files = list(dict.fromkeys(model_def_files))
try:
model_metadata_files_loaded = models.register_models(model_def_files)
if len(model_metadata_files_loaded) > 0:
io.tool_output(f"Loaded {len(model_metadata_files_loaded)} model file(s)")
for model_metadata_file in model_metadata_files_loaded:
io.tool_output(f" - {model_metadata_file}")
except Exception as e:
io.tool_error(f"Error loading model info/cost: {e}")
return 1
main_model = models.Model(args.model, weak_model=args.weak_model)
lint_cmds = parse_lint_cmds(args.lint_cmd, io)
@ -389,6 +413,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
return
if args.commit:
if args.dry_run:
io.tool_output("Dry run enabled, skipping commit.")
else:
coder.commands.cmd_commit()
return


@ -8,6 +8,7 @@ from typing import Optional
from PIL import Image
from aider import urls
from aider.dump import dump # noqa: F401
from aider.litellm import litellm
@ -426,6 +427,23 @@ class Model:
return res
def register_models(model_def_fnames):
model_metadata_files_loaded = []
for model_def_fname in model_def_fnames:
if not os.path.exists(model_def_fname):
continue
model_metadata_files_loaded.append(model_def_fname)
try:
with open(model_def_fname, "r") as model_def_file:
model_def = json.load(model_def_file)
except json.JSONDecodeError as e:
raise Exception(f"Error loading model definition from {model_def_fname}: {e}")
litellm.register_model(model_def)
return model_metadata_files_loaded
def validate_variables(vars):
missing = []
for var in vars:
@ -452,17 +470,17 @@ def sanity_check_model(io, model):
io.tool_error(f"- {key}")
elif not model.keys_in_environment:
show = True
io.tool_error(f"Model {model}: Unknown which environment variables are required.")
io.tool_output(f"Model {model}: Unknown which environment variables are required.")
if not model.info:
show = True
io.tool_error(
io.tool_output(
f"Model {model}: Unknown model, context window size and token costs unavailable."
)
possible_matches = fuzzy_match_models(model.name)
if possible_matches:
io.tool_error("Did you mean one of these?")
io.tool_output("Did you mean one of these?")
for match in possible_matches:
fq, m = match
if fq == m:
@ -471,7 +489,7 @@ def sanity_check_model(io, model):
io.tool_error(f"- {m} ({fq})")
if show:
io.tool_error("For more info see https://aider.chat/docs/llms/warnings.html")
io.tool_error(urls.model_warnings)
def fuzzy_match_models(name):


@ -4,22 +4,24 @@ import re
import sys
import httpx
import playwright
import pypandoc
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright
from aider import __version__
from aider import __version__, urls
from aider.dump import dump
aider_user_agent = f"Aider/{__version__} +https://aider.chat"
aider_user_agent = f"Aider/{__version__} +{urls.website}"
# Playwright is nice because it has a simple way to install dependencies on most
# platforms.
PLAYWRIGHT_INFO = """
PLAYWRIGHT_INFO = f"""
For better web scraping, install Playwright chromium with this command in your terminal:
playwright install --with-deps chromium
See https://aider.chat/docs/install/optional.html#enable-playwright for more info.
See {urls.enable_playwrite} for more info.
"""
@ -51,6 +53,7 @@ class Scraper:
else:
content = self.scrape_with_httpx(url)
dump(content)
if not content:
return
@ -79,7 +82,10 @@ class Scraper:
user_agent += " " + aider_user_agent
page = browser.new_page(user_agent=user_agent)
page.goto(url)
try:
page.goto(url, wait_until="networkidle", timeout=5000)
except playwright._impl._errors.TimeoutError:
pass
content = page.content()
browser.close()
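The timeout handling above lets a slow page still yield whatever content has loaded after 5 seconds. A self-contained sketch of the same pattern, using Playwright's public `TimeoutError` export in place of the private `playwright._impl._errors.TimeoutError` path used in the diff (URL is just an example):

```python
from playwright.sync_api import TimeoutError as PlaywrightTimeoutError, sync_playwright


def scrape(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        try:
            # Wait for the network to go idle, but give up after 5s and keep what we have.
            page.goto(url, wait_until="networkidle", timeout=5000)
        except PlaywrightTimeoutError:
            pass
        content = page.content()
        browser.close()
    return content


if __name__ == "__main__":
    print(scrape("https://aider.chat/")[:200])
```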

aider/tests/test_urls.py — new file, 15 lines

@ -0,0 +1,15 @@
import requests
from aider import urls
def test_urls():
url_attributes = [
attr
for attr in dir(urls)
if not callable(getattr(urls, attr)) and not attr.startswith("__")
]
for attr in url_attributes:
url = getattr(urls, attr)
response = requests.get(url)
assert response.status_code == 200, f"URL {url} returned status code {response.status_code}"

aider/urls.py — new file, 7 lines

@ -0,0 +1,7 @@
website = "https://aider.chat/"
add_all_files = "https://aider.chat/docs/faq.html#how-can-i-add-all-the-files-to-the-chat"
edit_errors = "https://aider.chat/docs/troubleshooting/edit-errors.html"
git = "https://aider.chat/docs/git.html"
enable_playwrite = "https://aider.chat/docs/install/optional.html#enable-playwright"
favicon = "https://aider.chat/assets/icons/favicon-32x32.png"
model_warnings = "https://aider.chat/docs/llms/warnings.html"


@ -26,6 +26,7 @@ pypandoc
litellm
google-generativeai
streamlit
watchdog
flake8
# v3.3 no longer works on python 3.9


@ -321,6 +321,8 @@ uritemplate==4.1.1
# via google-api-python-client
urllib3==2.2.1
# via requests
watchdog==4.0.1
# via -r requirements.in
wcwidth==0.2.13
# via prompt-toolkit
yarl==1.9.4


@ -9,7 +9,9 @@ else
ARG=$1
fi
# README.md before index.md, because index.md uses cog to include README.md
cog $ARG \
README.md \
website/index.md \
website/docs/commands.md \
website/docs/languages.md \


@ -475,3 +475,140 @@
seconds_per_case: 17.6
total_cost: 1.6205
- dirname: 2024-06-08-22-37-55--qwen2-72b-instruct-whole
test_cases: 133
model: Qwen2 72B Instruct
edit_format: whole
commit_hash: 02c7335-dirty, 1a97498-dirty
pass_rate_1: 44.4
pass_rate_2: 55.6
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 3
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model together_ai/qwen/Qwen2-72B-Instruct
date: 2024-06-08
versions: 0.37.1-dev
seconds_per_case: 14.3
total_cost: 0.0000
- dirname: 2024-06-08-23-45-41--gemini-1.5-flash-latest-whole
test_cases: 133
model: gemini-1.5-flash-latest
edit_format: whole
commit_hash: 86ea47f-dirty
pass_rate_1: 33.8
pass_rate_2: 44.4
percent_cases_well_formed: 100.0
error_outputs: 16
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 12
lazy_comments: 0
syntax_errors: 9
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model gemini/gemini-1.5-flash-latest
date: 2024-06-08
versions: 0.37.1-dev
seconds_per_case: 7.2
total_cost: 0.0000
- dirname: 2024-06-09-03-28-21--codestral-whole
test_cases: 133
model: codestral-2405
edit_format: whole
commit_hash: effc88a
pass_rate_1: 35.3
pass_rate_2: 51.1
percent_cases_well_formed: 100.0
error_outputs: 4
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 4
lazy_comments: 1
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 4
command: aider --model mistral/codestral-2405
date: 2024-06-09
versions: 0.37.1-dev
seconds_per_case: 7.5
total_cost: 0.6805
- dirname: 2024-06-08-19-25-26--codeqwen:7b-chat-v1.5-q8_0-whole
test_cases: 133
model: codeqwen:7b-chat-v1.5-q8_0
edit_format: whole
commit_hash: be0520f-dirty
pass_rate_1: 32.3
pass_rate_2: 34.6
percent_cases_well_formed: 100.0
error_outputs: 8
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 8
lazy_comments: 0
syntax_errors: 1
indentation_errors: 2
exhausted_context_windows: 0
test_timeouts: 1
command: aider --model ollama/codeqwen:7b-chat-v1.5-q8_0
date: 2024-06-08
versions: 0.37.1-dev
seconds_per_case: 15.6
total_cost: 0.0000
- dirname: 2024-06-08-16-12-31--codestral:22b-v0.1-q8_0-whole
test_cases: 133
model: codestral:22b-v0.1-q8_0
edit_format: whole
commit_hash: be0520f-dirty
pass_rate_1: 35.3
pass_rate_2: 48.1
percent_cases_well_formed: 100.0
error_outputs: 8
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 8
lazy_comments: 2
syntax_errors: 0
indentation_errors: 1
exhausted_context_windows: 0
test_timeouts: 3
command: aider --model ollama/codestral:22b-v0.1-q8_0
date: 2024-06-08
versions: 0.37.1-dev
seconds_per_case: 46.4
total_cost: 0.0000
- dirname: 2024-06-08-17-54-04--qwen2:72b-instruct-q8_0-whole
test_cases: 133
model: qwen2:72b-instruct-q8_0
edit_format: whole
commit_hash: 74e51d5-dirty
pass_rate_1: 43.6
pass_rate_2: 49.6
percent_cases_well_formed: 100.0
error_outputs: 27
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 27
lazy_comments: 0
syntax_errors: 5
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
command: aider --model ollama/qwen2:72b-instruct-q8_0
date: 2024-06-08
versions: 0.37.1-dev
seconds_per_case: 280.6
total_cost: 0.0000
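As a quick way to eyeball the new leaderboard entries above, here is a small sketch that ranks the records by second-pass rate. It assumes the YAML lives at `website/_data/edit_leaderboard.yml` (path not confirmed by this diff) and that PyYAML is installed; the field names come from the entries shown above.

```python
# Illustrative only: load the leaderboard records and rank by pass_rate_2.
import yaml

with open("website/_data/edit_leaderboard.yml") as f:
    rows = yaml.safe_load(f)  # a list of dicts, one per benchmark run

for row in sorted(rows, key=lambda r: r["pass_rate_2"], reverse=True):
    print(f'{row["pass_rate_2"]:5.1f}%  {row["model"]}  ({row["edit_format"]})')
```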


@ -0,0 +1,16 @@
You can get started quickly like this:
```
$ pip install aider-chat
# Change directory into a git repo
$ cd /to/your/git/repo
# Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# Or, work with Claude 3 Opus on your repo
$ export ANTHROPIC_API_KEY=your-key-goes-here
$ aider --opus
```

website/_includes/help.md — new file, 22 lines

@ -0,0 +1,22 @@
If you need more help, please check our
[GitHub issues](https://github.com/paul-gauthier/aider/issues)
and file a new issue if your problem isn't discussed.
Or drop into our
[Discord](https://discord.gg/Tv2uQnR88V)
to chat with us.
When reporting problems, it is very helpful if you can provide:
- Aider version
- LLM model you are using
Including the "announcement" lines that
aider prints at startup
is an easy way to share this helpful info.
```
Aider v0.37.1-dev
Models: gpt-4o with diff edit format, weak model gpt-3.5-turbo
Git repo: .git with 243 files
Repo-map: using 1024 tokens
```


@ -0,0 +1,57 @@
Aider tries to sanity check that it is configured correctly
to work with the LLM you specified:
- It checks to see that all required environment variables are set for the model. These variables are required to configure things like API keys, API base URLs, etc.
These settings are required to be correct.
- It checks a metadata database to look up the context window size and token costs for the model.
It's usually OK if this extra metadata isn't available.
Sometimes one or both of these checks will fail, so aider will issue
some of the following warnings.
## Missing environment variables
```
Model azure/gpt-4-turbo: Missing these environment variables:
- AZURE_API_BASE
- AZURE_API_VERSION
- AZURE_API_KEY
```
You need to set the listed environment variables.
Otherwise you will get error messages when you start chatting with the model.
## Unknown which environment variables are required
```
Model gpt-5: Unknown which environment variables are required.
```
Aider is unable to verify the environment because it doesn't know
which variables are required for the model.
If required variables are missing,
you may get errors when you attempt to chat with the model.
You can look in the
[litellm provider documentation](https://docs.litellm.ai/docs/providers)
to see if the required variables are listed there.
## Context window size and token costs unavailable.
```
Model foobar: Unknown model, context window size and token costs unavailable.
```
If you specify a model that aider has never heard of, you will get an
"unknown model" warning.
This means aider doesn't know the context window size and token costs
for that model.
Some minor functionality will be limited when using such models, but
it's not really a significant problem.
Aider will also try to suggest similarly named models,
in case you made a typo or mistake when specifying the model name.
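Both checks described above can be reproduced with litellm, which aider builds on. A rough sketch follows; the model name is just an example, and the exact shape of litellm's return values may vary by version.

```python
# Rough illustration of the two sanity checks described above, using litellm directly.
import litellm

model = "azure/gpt-4-turbo"  # example model name

# Check 1: are the required environment variables (API keys, base URLs, ...) set?
env_check = litellm.validate_environment(model)
if env_check.get("missing_keys"):
    print(f"Model {model}: Missing these environment variables:")
    for key in env_check["missing_keys"]:
        print(f"- {key}")

# Check 2: does the metadata database know this model's context window and costs?
try:
    info = litellm.get_model_info(model)
    print(f"Context window: {info.get('max_input_tokens')} tokens")
except Exception:
    print(f"Model {model}: Unknown model, context window size and token costs unavailable.")
```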


@ -0,0 +1,4 @@
You can send long, multi-line messages in the chat in a few ways:
- Paste a multi-line message directly into the chat.
- Enter `{` alone on the first line to start a multiline message and `}` alone on the last line to end it.
- Use Meta-ENTER to start a new line without sending the message (Esc+ENTER in some environments).

BIN — binary file not shown (new image, 136 KiB; filename not captured in this dump)
BIN — binary file not shown (filename not captured in this dump)
BIN — website/assets/install.jpg, new file (image, 139 KiB; not shown)
BIN — website/assets/install.mp4, new file (not shown)


@ -15,7 +15,7 @@
## Specify the OpenAI API key
#openai-api-key:
## Specify the OpenAI API key
## Specify the Anthropic API key
#anthropic-api-key:
## Specify the model to use for the main chat (default: gpt-4o)
@ -60,6 +60,9 @@
## Specify the OpenAI organization ID
#openai-organization-id:
## Specify a file with context window and costs for unknown models
#model-metadata-file:
## Specify what edit format the LLM should use (default depends on model)
#edit-format:
@ -171,6 +174,9 @@
#################
# Other Settings:
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en


@ -1,11 +1,12 @@
---
parent: Configuration
nav_order: 15
description: How to configure aider with a yaml config file.
---
# YAML config file
Most options can also be set in an `.aider.conf.yml` file
Most of aider's options can be set in an `.aider.conf.yml` file,
which can be placed in your home directory or at the root of
your git repo.
@ -40,7 +41,7 @@ cog.outl("```")
## Specify the OpenAI API key
#openai-api-key:
## Specify the OpenAI API key
## Specify the Anthropic API key
#anthropic-api-key:
## Specify the model to use for the main chat (default: gpt-4o)
@ -85,6 +86,9 @@ cog.outl("```")
## Specify the OpenAI organization ID
#openai-organization-id:
## Specify a file with context window and costs for unknown models
#model-metadata-file:
## Specify what edit format the LLM should use (default depends on model)
#edit-format:
@ -196,6 +200,9 @@ cog.outl("```")
#################
# Other Settings:
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en


@ -3,6 +3,7 @@ title: Aider in your browser
highlight_image: /assets/browser.jpg
parent: Usage
nav_order: 800
description: Aider can run in your browser, not just on the command line.
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>


@ -1,6 +1,7 @@
---
parent: Usage
nav_order: 50
description: Control aider with in-chat commands like /add, /model, etc.
---
# In-chat commands
@ -31,9 +32,15 @@ cog.out(get_help_md())
- **/web** Use headless selenium to scrape a webpage and add the content to the chat
<!--[[[end]]]-->
# Entering multi-line chat messages
{% include multi-line.md %}
# Keybindings
The interactive prompt is built with [prompt-toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit) which provides a lot of Emacs and Vi-style keyboard. Some emacs bindings you may find useful are
The interactive prompt is built with [prompt-toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit) which provides emacs and vi keybindings.
## Emacs
- `Ctrl-A` : Move cursor to the start of the line.
- `Ctrl-B` : Move cursor back one character.
@ -46,5 +53,27 @@ The interactive prompt is built with [prompt-toolkit](https://github.com/prompt-
- `Ctrl-P` : Move up to the previous history entry.
- `Ctrl-R` : Reverse search in command history.
Note: aider currently exits vi normal mode after a single command (maybe something to do with the esc keybinding?).
Feel free to investigate and make a PR if you would like to see it fully supported.
## Vi
To use vi/vim keybindings, run aider with the `--vim` switch.
- `Esc` : Switch to command mode.
- `i` : Switch to insert mode.
- `a` : Move cursor one character to the right and switch to insert mode.
- `A` : Move cursor to the end of the line and switch to insert mode.
- `I` : Move cursor to the beginning of the line and switch to insert mode.
- `h` : Move cursor one character to the left.
- `j` : Move cursor down one line.
- `k` : Move cursor up one line.
- `l` : Move cursor one character to the right.
- `w` : Move cursor forward one word.
- `b` : Move cursor backward one word.
- `0` : Move cursor to the beginning of the line.
- `$` : Move cursor to the end of the line.
- `x` : Delete the character under the cursor.
- `dd` : Delete the current line.
- `u` : Undo the last change.
- `Ctrl-R` : Redo the last undone change.
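Under the hood, the `--vim` switch simply selects prompt-toolkit's built-in vi editing mode, as the `InputOutput` and `main()` hunks earlier in this diff show. A minimal sketch of the same mechanism:

```python
# Minimal illustration of how --vim maps to prompt-toolkit's editing modes.
from prompt_toolkit import PromptSession
from prompt_toolkit.enums import EditingMode

use_vim = True  # aider sets this from the --vim flag
editing_mode = EditingMode.VI if use_vim else EditingMode.EMACS

session = PromptSession(editing_mode=editing_mode)
line = session.prompt("> ")
print(line)
```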


@ -1,6 +1,7 @@
---
nav_order: 55
has_children: true
description: Information on all of aider's settings and how to use them.
---
# Configuration


@ -1,6 +1,7 @@
---
parent: Usage
nav_order: 800
description: Tell aider to follow your coding conventions when it works on your code.
---
# Specifying coding conventions


@ -1,11 +1,12 @@
---
parent: Configuration
nav_order: 900
description: Using a .env file to store LLM API keys for aider.
---
# Storing LLM params in .env
You can use a `.env` file to store various keys and other settings for the
You can use a `.env` file to store API keys and other settings for the
models you use with aider.
You currently can not set general aider options
in the `.env` file, only LLM environment variables.


@ -1,5 +1,6 @@
---
nav_order: 85
description: Frequently asked questions about aider.
---
# Frequently asked questions
@ -63,34 +64,6 @@ has provided this
[Colab notebook](https://colab.research.google.com/drive/1J9XynhrCqekPL5PR6olHP6eE--rnnjS9?usp=sharing).
## Aider isn't editing my files?
Sometimes the LLM will reply with some code changes that don't get applied to your local files.
In these cases, aider might say something like "Failed to apply edit to *filename*".
This usually happens because the LLM is not specifying the edits
to make in the format that aider expects.
GPT-3.5 is especially prone to disobeying the system prompt instructions in this manner, but it also happens with stronger models.
Aider makes every effort to get the LLM
to conform, and works hard to deal with
replies that are "almost" correctly formatted.
If Aider detects an improperly formatted reply, it gives
the LLM feedback to try again.
Also, before each release new versions of aider are
[benchmarked](https://aider.chat/docs/benchmarks.html).
This helps prevent regressions in the code editing
performance of an LLM that could have been inadvertantly
introduced.
But sometimes the LLM just won't cooperate.
In these cases, here are some things you might try:
- Use `/drop` to remove files from the chat session which aren't needed for the task at hand. This will reduce distractions and may help GPT produce properly formatted edits.
- Use `/clear` to remove the conversation history, again to help GPT focus.
- Try the a different LLM.
## Can I change the system prompts that aider uses?
Aider is set up to support different system prompts and edit formats


@ -1,5 +1,6 @@
---
nav_order: 800
description: Aider is tightly integrated with git.
---
# Git integration


@ -2,29 +2,42 @@
title: Installation
has_children: true
nav_order: 20
description: How to install and get started pair programming with aider.
---
# Quick start
You can get started quickly like this:
{% include get-started.md %}
```
$ pip install aider-chat
# To work with GPT-4o
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# To work with Claude 3 Opus:
$ export ANTHROPIC_API_KEY=your-key-goes-here
$ aider --opus
```
Or see
Or see the
[full installation instructions](/docs/install/install.html)
for more details,
or the
[usage instructions](https://aider.chat/docs/usage.html) to start coding with aider.
<div class="video-container">
<video controls poster="/assets/install.jpg">
<source src="/assets/install.mp4" type="video/mp4">
<a href="/assets/install.mp4">Installing aider</a>
</video>
</div>
<style>
.video-container {
position: relative;
padding-bottom: 76.2711864407%;
height: 0;
overflow: hidden;
}
.video-container video {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
</style>


@ -0,0 +1,39 @@
---
title: GitHub Codespaces
parent: Installation
nav_order: 900
---
# GitHub Codespaces
You can use aider in GitHub Codespaces via the built-in Terminal pane.
See below for an example,
but you can see the
[main install instructions](/docs/install.html)
for all the details.
<div class="video-container">
<video controls poster="/assets/codespaces.jpg">
<source src="/assets/codespaces.mp4" type="video/mp4">
<a href="/assets/codespaces.mp4">Install aider in GitHub Codespaces</a>
</video>
</div>
<style>
.video-container {
position: relative;
padding-bottom: 101.89%; /* 1080 / 1060 = 1.0189 */
height: 0;
overflow: hidden;
}
.video-container video {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
</style>


@ -26,6 +26,7 @@ Put a line in it like this to specify your api key:
openai-api-key: sk-...
```
## Enable Playwright
Aider supports adding web pages to the chat with the `/web <url>` command.


@ -1,5 +1,6 @@
---
nav_order: 900
description: Aider supports pretty much all popular coding languages.
---
# Supported languages


@ -15,19 +15,6 @@ The leaderboards below report the results from a number of popular LLMs.
While [aider can connect to almost any LLM](/docs/llms.html),
it works best with models that score well on the benchmarks.
## GPT-4o takes the #1 & #2 spots
GPT-4o tops the aider LLM code editing leaderboard at 72.9%, versus 68.4% for Opus. GPT-4o takes second on aider's refactoring leaderboard with 62.9%, versus Opus at 72.3%.
GPT-4o did much better than the 4-turbo models, and seems *much* less lazy.
GPT-4o is also able to use aider's established "diff" edit format that uses
`SEARCH/REPLACE` blocks.
This diff format is used by all the other capable models, including Opus and
the original GPT-4 models
The GPT-4 Turbo models have all required the "udiff" edit format, due to their
tendancy to lazy coding.
## Code editing leaderboard


@ -2,6 +2,7 @@
title: Connecting to LLMs
nav_order: 40
has_children: true
description: Aider can connect to most LLMs for AI pair programming.
---
# Aider can connect to most LLMs


@ -5,66 +5,38 @@ nav_order: 900
# Model warnings
Aider supports connecting to almost any LLM,
but it may not work well with less capable models.
If you see the model returning code, but aider isn't able to edit your files
and commit the changes...
this is usually because the model isn't capable of properly
returning "code edits".
Models weaker than GPT 3.5 may have problems working well with aider.
{% include model-warnings.md %}
Aider tries to sanity check that it is configured correctly
to work with the specified model:
- It checks to see that all required environment variables are set for the model. These variables are required to configure things like API keys, API base URLs, etc.
- It checks a metadata database to look up the context window size and token costs for the model.
## Specifying context window size and token costs
Sometimes one or both of these checks will fail, so aider will issue
some of the following warnings.
You can register context window limits and costs for models that aren't known
to aider. Create a `.aider.models.json` file in one of these locations:
## Missing environment variables
- Your home directory.
- The root of your git repo.
- The current directory where you launch aider.
- Or specify a specific file with the `--model-metadata-file <filename>` switch.
If the files above exist, they will be loaded in that order.
Files loaded last will take priority.
The json file should be a dictionary with an entry for each model, as follows:
```
Model azure/gpt-4-turbo: Missing these environment variables:
- AZURE_API_BASE
- AZURE_API_VERSION
- AZURE_API_KEY
{
"deepseek-chat": {
"max_tokens": 4096,
"max_input_tokens": 32000,
"max_output_tokens": 4096,
"input_cost_per_token": 0.00000014,
"output_cost_per_token": 0.00000028,
"litellm_provider": "deepseek",
"mode": "chat"
}
}
```
You need to set the listed environment variables.
Otherwise you will get error messages when you start chatting with the model.
## Unknown which environment variables are required
```
Model gpt-5: Unknown which environment variables are required.
```
Aider is unable verify the environment because it doesn't know
which variables are required for the model.
If required variables are missing,
you may get errors when you attempt to chat with the model.
You can look in the
[litellm provider documentation](https://docs.litellm.ai/docs/providers)
to see if the required variables are listed there.
## Unknown model, did you mean?
```
Model gpt-5: Unknown model, context window size and token costs unavailable.
Did you mean one of these?
- gpt-4
```
If you specify a model that aider has never heard of, you will get an
"unknown model" warning.
This means aider doesn't know the context window size and token costs
for that model.
Some minor functionality will be limited when using such models, but
it's not really a significant problem.
Aider will also try to suggest similarly named models,
in case you made a typo or mistake when specifying the model name.
See
[litellm's model_prices_and_context_window.json file](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json) for more examples.
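The loading logic added to `register_models()` earlier in this diff boils down to reading each JSON file and handing it to litellm. A trimmed sketch (the file path is just an example):

```python
import json

import litellm

# Read one metadata file and register it with litellm, which is roughly what
# aider's new register_models() helper does for each file it finds in the
# search order listed above.
with open(".aider.models.json") as f:
    model_def = json.load(f)  # e.g. the deepseek-chat entry shown above

litellm.register_model(model_def)  # litellm now knows the context window and token costs
```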


@ -1,6 +1,7 @@
---
parent: Configuration
nav_order: 10
description: Details about all of aider's settings.
---
# Options reference
@ -18,7 +19,7 @@ usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
[--35turbo] [--models] [--openai-api-base]
[--openai-api-type] [--openai-api-version]
[--openai-api-deployment-id] [--openai-organization-id]
[--edit-format] [--weak-model]
[--model-metadata-file] [--edit-format] [--weak-model]
[--show-model-warnings | --no-show-model-warnings]
[--map-tokens] [--max-chat-history-tokens] [--env-file]
[--input-history-file] [--chat-history-file]
@ -34,7 +35,7 @@ usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
[--dry-run | --no-dry-run] [--commit] [--lint]
[--lint-cmd] [--auto-lint | --no-auto-lint]
[--test-cmd] [--auto-test | --no-auto-test] [--test]
[--voice-language] [--version] [--check-update]
[--vim] [--voice-language] [--version] [--check-update]
[--skip-check-update] [--apply] [--yes] [-v]
[--show-repo-map] [--show-prompts] [--message]
[--message-file] [--encoding] [-c] [--gui]
@ -56,7 +57,7 @@ Specify the OpenAI API key
Environment variable: `OPENAI_API_KEY`
### `--anthropic-api-key ANTHROPIC_API_KEY`
Specify the OpenAI API key
Specify the Anthropic API key
Environment variable: `ANTHROPIC_API_KEY`
### `--model MODEL`
@ -122,6 +123,10 @@ Environment variable: `OPENAI_API_DEPLOYMENT_ID`
Specify the OpenAI organization ID
Environment variable: `OPENAI_ORGANIZATION_ID`
### `--model-metadata-file MODEL_FILE`
Specify a file with context window and costs for unknown models
Environment variable: `AIDER_MODEL_METADATA_FILE`
### `--edit-format EDIT_FORMAT`
Specify what edit format the LLM should use (default depends on model)
Environment variable: `AIDER_EDIT_FORMAT`
@ -321,6 +326,11 @@ Environment variable: `AIDER_TEST`
## Other Settings:
### `--vim`
Use VI editing mode in the terminal (default: False)
Default: False
Environment variable: `AIDER_VIM`
### `--voice-language VOICE_LANGUAGE`
Specify the language for voice using ISO 639-1 code (default: auto)
Default: en


@ -1,6 +1,7 @@
---
highlight_image: /assets/robot-ast.png
nav_order: 900
description: Aider uses a map of your git repository to provide code context to LLMs.
---
# Repository map
@ -8,26 +9,25 @@ nav_order: 900
![robot flowchat](/assets/robot-ast.png)
Aider
sends the LLM a **concise map of your whole git repository**
uses a **concise map of your whole git repository**
that includes
the most important classes and functions along with their types and call signatures.
This helps the LLM understand the code it needs to change,
This helps aider understand the code it's editing
and how it relates to the other parts of the codebase.
The repo map also helps the LLM write new code
The repo map also helps aider write new code
that respects and utilizes existing libraries, modules and abstractions
found elsewhere in the codebase.
## Using a repo map to provide context
Aider sends a **repo map** to the LLM along with
each request from the user to make a code change.
The map contains a list of the files in the
each change request from the user.
The repo map contains a list of the files in the
repo, along with the key symbols which are defined in each file.
It shows how each of these symbols are defined in the
source code, by including the critical lines of code for each definition.
It shows how each of these symbols are defined, by including the critical lines of code for each definition.
Here's a
sample of the map of the aider repo, just showing the maps of
Here's a part of
the repo map of aider's repo, for
[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
and
[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
@ -70,7 +70,7 @@ aider/commands.py:
Mapping out the repo like this provides some key benefits:
- The LLM can see classes, methods and function signatures from everywhere in the repo. This alone may give it enough context to solve many tasks. For example, it can probably figure out how to use the API exported from a module just based on the details shown in the map.
- If it needs to see more code, the LLM can use the map to figure out by itself which files it needs to look at in more detail. The LLM will then ask to see these specific files, and aider will automatically add them to the chat context.
- If it needs to see more code, the LLM can use the map to figure out which files it needs to look at. The LLM can ask to see these specific files, and aider will offer to add them to the chat context.
## Optimizing the map
@ -98,7 +98,7 @@ the overall codebase.
## More info
Please the
Please check the
[repo map article on aider's blog](https://aider.chat/2023/10/22/repomap.html)
for more information on aider's repository map
and how it is constructed.
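As a toy illustration of the idea — not aider's implementation, which builds its map differently (see the blog article above) — here is a sketch that lists top-level class and function signatures from a single Python file using the standard `ast` module:

```python
# Toy repo-map: list top-level definitions and their signatures from one Python file.
import ast


def file_map(path: str) -> list[str]:
    with open(path) as f:
        tree = ast.parse(f.read())
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}:")
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    args = ", ".join(a.arg for a in item.args.args)
                    lines.append(f"    def {item.name}({args})")
    return lines


print("\n".join(file_map("aider/commands.py")))
```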


@ -1,5 +1,6 @@
---
nav_order: 900
description: You can script aider via the command line or python.
---
# Scripting aider
@ -62,7 +63,7 @@ from aider.models import Model
# This is a list of files to add to the chat
fnames = ["greeting.py"]
model = Model("gpt-4-turbo", weak_model="gpt-3.5-turbo")
model = Model("gpt-4-turbo")
# Create a coder object
coder = Coder.create(main_model=model, fnames=fnames)
It can also be helpful to set the equivalent of `--yes` by doing this:
from aider.io import InputOutput
io = InputOutput(yes=True)
# ...
coder = Coder.create(client=client, fnames=fnames, io=io)
coder = Coder.create(model=model, fnames=fnames, io=io)
```


@ -1,6 +1,7 @@
---
parent: Usage
nav_order: 25
description: Tips for AI pair programming with aider.
---
# Tips
Or just paste the errors into the chat. Let aider figure out and fix the bug.
- If tests are failing, use the `/test` [in-chat command](/docs/commands.html)
to run tests and
share the error output with aider.
- You can send long, multi-line messages in the chat:
- In most environments, you can paste multi-line messages directly into the chat.
- Enter `{` alone on the first line to start a multiline message and `}` alone on the last line to end and send it.
- You can use Meta-ENTER (Esc+ENTER in some environments) to start a new line without sending the message.
- {% include multi-line.md %}
- LLMs know about a lot of standard tools and libraries, but may get some of the fine details wrong about API versions and function arguments.
You can paste doc snippets into the chat to resolve these issues.


@ -0,0 +1,11 @@
---
nav_order: 60
has_children: true
description: How to troubleshoot problems with aider and get help.
---
# Troubleshooting
Below are some approaches for troubleshooting problems with aider.
{% include help.md %}


@ -0,0 +1,46 @@
---
parent: Troubleshooting
nav_order: 10
---
# File editing problems
Sometimes the LLM will reply with some code changes
that don't get applied to your local files.
In these cases, aider might say something like "Failed to apply edit to *filename*"
or other error messages.
This usually happens because the LLM is disobeying the system prompts
and trying to make edits in a format that aider doesn't expect.
Aider makes every effort to get the LLM
to conform, and works hard to deal with
LLM edits that are "almost" correctly formatted.
But sometimes the LLM just won't cooperate.
In these cases, here are some things you might try.
## Use a capable model
If possible, try using GPT-4o or Opus, as they are the strongest and most
capable models.
Weaker models
are more prone to
disobeying the system prompt instructions.
Most local models are just barely capable of working with aider,
so editing errors are probably unavoidable.
## Reduce distractions
Many LLMs now have very large context windows,
but filling them with irrelevant code often
confuses the model.
- Don't add too many files to the chat, *just* add the files you think need to be edited.
Aider also sends the LLM a [map of your entire git repo](https://aider.chat/docs/repomap.html), so other relevant code will be included automatically.
- Use `/drop` to remove files from the chat session which aren't needed for the task at hand. This will reduce distractions and may help GPT produce properly formatted edits.
- Use `/clear` to remove the conversation history, again to help GPT focus.
## More help
{% include help.md %}


@ -0,0 +1,8 @@
---
parent: Troubleshooting
nav_order: 30
---
# Getting help
{% include help.md %}


@ -0,0 +1,12 @@
---
parent: Troubleshooting
nav_order: 20
---
# Model warnings
{% include model-warnings.md %}
## More help
{% include help.md %}


@ -1,6 +1,7 @@
---
parent: Usage
nav_order: 75
description: Intro and tutorial videos made by aider users.
---
# Tutorial videos


@ -1,6 +1,7 @@
---
nav_order: 30
has_children: true
description: How to use aider to pair program with AI and edit code in your local git repo.
---
# Usage
@ -9,11 +10,31 @@ Run `aider` with the source code files you want to edit.
These files will be "added to the chat session", so that
aider can see their
contents and edit them for you.
They can be existing files or the name of files you want
aider to create for you.
```
aider <file1> <file2> ...
```
At the aider `>` prompt, ask for code changes and aider
will edit those files to accomplish your request.
```
$ aider factorial.py
Aider v0.37.1-dev
Models: gpt-4o with diff edit format, weak model gpt-3.5-turbo
Git repo: .git with 258 files
Repo-map: using 1024 tokens
Use /help to see in-chat commands, run with --help to see cmd line args
───────────────────────────────────────────────────────────────────────
> Make a program that asks for a number and prints its factorial
...
```
## Adding files
Just add the files that aider will need to *edit*.
@ -47,7 +68,7 @@ Or, during your chat you can switch models with the in-chat
Ask aider to make changes to your code.
It will show you some diffs of the changes it is making to
complete your request.
It will git commit all the changes it makes,
Aider will git commit all of its changes,
so they are easy to track and undo.
You can always use the `/undo` command to undo changes you don't


@ -1,22 +1,23 @@
---
parent: Usage
nav_order: 100
description: Speak with aider about your code!
---
# Voice-to-code with aider
Speak with GPT about your code! Request new features, test cases or bug fixes using your voice and let GPT do the work of editing the files in your local git repo. As with all of aider's capabilities, you can use voice-to-code with an existing repo or to start a new project.
Speak with aider about your code! Request new features, test cases or bug fixes using your voice and let aider do the work of editing the files in your local git repo. As with all of aider's capabilities, you can use voice-to-code with an existing repo or to start a new project.
Voice support fits quite naturally into aider's AI pair programming
chat interface. Now you can fluidly switch between voice and text chat
when you ask GPT to edit your code.
when you ask aider to edit your code.
## How to use voice-to-code
Use the in-chat `/voice` command to start recording,
and press `ENTER` when you're done speaking.
Your voice coding instructions will be transcribed
and sent to GPT, as if you had typed them into
Your voice coding instructions will be transcribed,
as if you had typed them into
the aider chat session.
See the [installation instructions](https://aider.chat/docs/install/optional.html#enable-voice-coding) for


@ -4,9 +4,17 @@ nav_order: 1
---
<!--[[[cog
cog.out(open("README.md").read())
# This page is a copy of README.md, adding the front matter above.
# Remove any cog markup before inserting the README text.
text = open("README.md").read()
text = text.replace('['*3 + 'cog', ' NOOP ')
text = text.replace('['*3 + 'end', ' NOOP ')
text = text.replace(']'*3, '')
cog.out(text)
]]]-->
<!-- Edit README.md, not index.md -->
# Aider is AI pair programming in your terminal
Aider lets you pair program with LLMs,
@ -32,47 +40,57 @@ and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
</p>
## Getting started
<!-- NOOP
# We can't do this here: {% include get-started.md %}
# Because this page is rendered by GitHub as the repo README
cog.out(open("website/_includes/get-started.md").read())
-->
You can get started quickly like this:
```
$ pip install aider-chat
# To work with GPT-4o
# Change directory into a git repo
$ cd /to/your/git/repo
# Work with GPT-4o on your repo
$ export OPENAI_API_KEY=your-key-goes-here
$ aider
# To work with Claude 3 Opus:
# Or, work with Claude 3 Opus on your repo
$ export ANTHROPIC_API_KEY=your-key-goes-here
$ aider --opus
```
<!-- NOOP -->
**See the
See the
[installation instructions](https://aider.chat/docs/install.html)
and other
[documentation](https://aider.chat/docs/usage.html)
for more details.**
for more details.
## Features
- Chat with aider about your code: `aider <file1> <file2> ...`
- Run aider with the files you want to edit: `aider <file1> <file2> ...`
- Ask for changes:
- New features, test cases, improvements.
- Bug fixes, updated docs or code refactors.
- Paste in a GitHub issue that needs to be solved.
- Add new features or test cases.
- Describe a bug.
- Paste in an error message or a GitHub issue URL.
- Refactor code.
- Update docs.
- Aider will edit your files to complete your request.
- Aider [automatically git commits](https://aider.chat/docs/git.html) changes with a sensible commit message.
- Aider works with [most popular languages](https://aider.chat/docs/languages.html): python, javascript, typescript, php, html, css, and more...
- Aider works best with GPT-4o and Claude 3 Opus
and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
- Aider can make coordinated changes across multiple files at once.
- Aider can edit multiple files at once for complex requests.
- Aider uses a [map of your entire git repo](https://aider.chat/docs/repomap.html), which helps it work well in larger codebases.
- You can also edit files in your editor while chatting with aider.
Aider will notice and always use the latest version.
So you can bounce back and forth between aider and your editor, to collaboratively code with AI.
- Images can be added to the chat (GPT-4o, GPT-4 Turbo, etc).
- URLs can be added to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html) using speech recognition.
- Edit files in your editor while chatting with aider,
and it will always use the latest version.
Pair program with AI.
- Add images to the chat (GPT-4o, GPT-4 Turbo, etc).
- Add URLs to the chat and aider will read their content.
- [Code with your voice](https://aider.chat/docs/voice.html).
## State of the art
@ -89,22 +107,25 @@ projects like django, scikitlearn, matplotlib, etc.
</a>
</p>
## Documentation
## More info
- [Documentation](https://aider.chat/)
- [Installation](https://aider.chat/docs/install.html)
- [Usage](https://aider.chat/docs/usage.html)
- [Tutorial videos](https://aider.chat/docs/tutorials.html)
- [Connecting to LLMs](https://aider.chat/docs/llms.html)
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [FAQ](https://aider.chat/docs/faq.html)
- [GitHub](https://github.com/paul-gauthier/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)
## Kind words from users
- *The best free open source AI coding assistant.* -- [IndyDevDan](https://youtu.be/YALpX8oOn78)
- *The best AI coding assistant so far.* -- [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
- *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *Aider ... has easily quadrupled my coding productivity.* -- [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
- *It's a cool workflow... Aider's ergonomics are perfect for me.* -- [qup](https://news.ycombinator.com/item?id=38185326)
- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/paul-gauthier/aider/issues/124)
@ -119,5 +140,6 @@ projects like django, scikitlearn, matplotlib, etc.
- *I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.* -- [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.* -- [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *Hands down, this is the best AI coding assistant tool so far.* -- [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *Best agent for actual dev work in existing codebases.* -- [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
<!--[[[end]]]-->