Update docs github.com/paul-gauthier -> github.com/Aider-AI

Paul Gauthier 2024-10-04 12:53:45 -07:00
parent d17bd6ebcc
commit 516e637c39
30 changed files with 47 additions and 47 deletions
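The change itself is a mechanical rename of the GitHub organization in documentation URLs, from `paul-gauthier` to `Aider-AI`. As a rough sketch of how such a repo-wide rewrite could be done (illustration only; the file extensions and directory layout below are assumptions, and this is not necessarily how this commit was actually produced):

```
# Illustrative sketch only: rewrite the old GitHub org in doc files under the
# current directory. Assumes the docs are markdown/YAML/HTML files; not
# necessarily the exact process used to produce this commit.
from pathlib import Path

OLD = "github.com/paul-gauthier"
NEW = "github.com/Aider-AI"

for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in {".md", ".yml", ".html"}:
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
```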

View file

@@ -107,7 +107,7 @@ projects like django, scikitlearn, matplotlib, etc.
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
-- [GitHub](https://github.com/paul-gauthier/aider)
+- [GitHub](https://github.com/Aider-AI/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)

View file

@@ -24,7 +24,7 @@ exclude:
aux_links:
  "GitHub":
-    - "https://github.com/paul-gauthier/aider"
+    - "https://github.com/Aider-AI/aider"
  "Discord":
    - "https://discord.gg/Tv2uQnR88V"
  "Blog":
@@ -32,7 +32,7 @@ aux_links:
nav_external_links:
  - title: "GitHub"
-    url: "https://github.com/paul-gauthier/aider"
+    url: "https://github.com/Aider-AI/aider"
  - title: "Discord"
    url: "https://discord.gg/Tv2uQnR88V"

View file

@@ -1,5 +1,5 @@
If you need more help, please check our
-[GitHub issues](https://github.com/paul-gauthier/aider/issues)
+[GitHub issues](https://github.com/Aider-AI/aider/issues)
and file a new issue if your problem isn't discussed.
Or drop into our
[Discord](https://discord.gg/Tv2uQnR88V)

View file

@@ -1,7 +1,7 @@
<footer class="site-footer">
Aider is AI pair programming in your terminal.
Aider is on
-<a href="https://github.com/paul-gauthier/aider">GitHub</a>
+<a href="https://github.com/Aider-AI/aider">GitHub</a>
and
<a href="https://discord.gg/Tv2uQnR88V">Discord</a>.
</footer>

View file

@@ -110,9 +110,9 @@ source code, by including the critical lines of code for each definition.
Here's a
sample of the map of the aider repo, just showing the maps of
-[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
+[base_coder.py](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
and
-[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
+[commands.py](https://github.com/Aider-AI/aider/blob/main/aider/commands.py)
:
```
@@ -188,7 +188,7 @@ It specifically uses the
[py-tree-sitter-languages](https://github.com/grantjenks/py-tree-sitter-languages)
python module,
which provides simple, pip-installable binary wheels for
-[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
+[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
Tree-sitter parses source code into an Abstract Syntax Tree (AST) based
on the syntax of the programming language.
@@ -245,7 +245,7 @@ just install [aider](https://aider.chat/docs/install.html).
## Credits
Aider uses
-[modified versions of the tags.scm files](https://github.com/paul-gauthier/aider/tree/main/aider/queries)
+[modified versions of the tags.scm files](https://github.com/Aider-AI/aider/tree/main/aider/queries)
from these
open source tree-sitter language implementations:

View file

@@ -30,7 +30,7 @@ aider --opus
## Aider's code editing benchmark
-[Aider](https://github.com/paul-gauthier/aider)
+[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you
pair program with AI on code in your local git repo.

View file

@@ -52,7 +52,7 @@ def some_complex_method(foo, bar):
# ... implement method here ...
```
-Aider uses a ["laziness" benchmark suite](https://github.com/paul-gauthier/refactor-benchmark)
+Aider uses a ["laziness" benchmark suite](https://github.com/Aider-AI/refactor-benchmark)
which is designed to both provoke and quantify lazy coding.
It consists of
89 python refactoring tasks

View file

@@ -15,7 +15,7 @@ nav_exclude: true
I recently wanted to draw a graph showing how LLM code editing skill has been
changing over time as new models have been released by OpenAI, Anthropic and others.
I have all the
-[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
+[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
[aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
Below is the aider chat transcript, which shows:

View file

@@ -25,7 +25,7 @@ This increases the ability of the LLM to understand the problem and
make the correct changes to resolve it.
Aider ships with basic linters built with tree-sitter that support
-[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
+[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
These built in linters will detect syntax errors and other fatal problems with the code.
You can also configure aider to use your preferred linters.

View file

@@ -76,7 +76,7 @@ The held out "acceptance tests" were *only* used
after benchmarking to compute statistics on which problems aider
correctly resolved.
-The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/paul-gauthier/aider-swe-bench).
+The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/Aider-AI/aider-swe-bench).
The benchmarking process was similar to how a developer might use aider to
resolve a GitHub issue:

View file

@@ -13,7 +13,7 @@ nav_exclude: true
[![self assembly](/assets/self-assembly.jpg)](https://aider.chat/assets/self-assembly.jpg)
The
-[aider git repo](https://github.com/paul-gauthier/aider)
+[aider git repo](https://github.com/Aider-AI/aider)
currently contains about 4K commits and 14K lines of code.
Aider made 15% of the commits, inserting 4.8K and deleting 1.5K lines of code.

View file

@@ -64,7 +64,7 @@ with the problem statement
submitted as the opening chat message from "the user".
- After that aider ran as normal, except all of aider's
suggestions were always accepted without user approval.
-- A [simple harness](https://github.com/paul-gauthier/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
+- A [simple harness](https://github.com/Aider-AI/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
Plausibly correct means that aider reported that it had successfully edited the repo
without causing syntax errors or breaking any *pre-existing* tests.
- If the solution from aider with GPT-4o wasn't plausible, the harness launched aider to try again from scratch using Claude 3 Opus.
@@ -90,7 +90,7 @@ For a detailed discussion of the benchmark
methodology, see the
[article about aider's SWE Bench Lite results](https://aider.chat/2024/05/22/swe-bench-lite.html).
Also, the
-[aider SWE Bench repository on GitHub](https://github.com/paul-gauthier/aider-swe-bench)
+[aider SWE Bench repository on GitHub](https://github.com/Aider-AI/aider-swe-bench)
contains the harness and statistics code used for the benchmarks.
The benchmarking process was similar to how a developer might use aider to

View file

@@ -37,8 +37,8 @@ Users who tested Sonnet with a preview of
[aider's latest release](https://aider.chat/HISTORY.html#aider-v0410)
were thrilled:
-- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2200338971)
-- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2196026656)
+- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/Aider-AI/aider/issues/705#issuecomment-2200338971)
+- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/Aider-AI/aider/issues/705#issuecomment-2196026656)
- *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143)
## Hitting the 4k token output limit

View file

@@ -19,7 +19,7 @@ and there's a lot
of interest about their ability to code compared to the previous versions.
With that in mind, I've been benchmarking the new models.
-[Aider](https://github.com/paul-gauthier/aider)
+[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you work with GPT to edit
code in your local git repo.
To do this, aider needs to be able to reliably recognize when GPT wants to edit

View file

@@ -20,7 +20,7 @@ and there's a lot
of interest about their capabilities and performance.
With that in mind, I've been benchmarking the new models.
-[Aider](https://github.com/paul-gauthier/aider)
+[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you work with GPT to edit
code in your local git repo.
Aider relies on a

View file

@@ -168,7 +168,7 @@ requests:
### whole
The
-[whole](https://github.com/paul-gauthier/aider/blob/main/aider/coders/wholefile_prompts.py)
+[whole](https://github.com/Aider-AI/aider/blob/main/aider/coders/wholefile_prompts.py)
format asks GPT to return an updated copy of the entire file, including any changes.
The file should be
formatted with normal markdown triple-backtick fences, inlined with the rest of its response text.
@@ -187,7 +187,7 @@ def main():
### diff
-The [diff](https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_prompts.py)
+The [diff](https://github.com/Aider-AI/aider/blob/main/aider/coders/editblock_prompts.py)
format also asks GPT to return edits as part of the normal response text,
in a simple diff format.
Each edit is a fenced code block that
@@ -209,7 +209,7 @@ demo.py
### whole-func
-The [whole-func](https://github.com/paul-gauthier/aider/blob/main/aider/coders/wholefile_func_coder.py)
+The [whole-func](https://github.com/Aider-AI/aider/blob/main/aider/coders/wholefile_func_coder.py)
format requests updated copies of whole files to be returned using the function call API.
@@ -227,7 +227,7 @@ format requests updated copies of whole files to be returned using the function
### diff-func
The
-[diff-func](https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_func_coder.py)
+[diff-func](https://github.com/Aider-AI/aider/blob/main/aider/coders/editblock_func_coder.py)
format requests a list of
original/updated style edits to be returned using the function call API.

View file

@@ -71,7 +71,7 @@ For example, below are all the pre-configured model settings
to give a sense for the settings which are supported.
You can also look at the `ModelSettings` class in
-[models.py](https://github.com/paul-gauthier/aider/blob/main/aider/models.py)
+[models.py](https://github.com/Aider-AI/aider/blob/main/aider/models.py)
file for more details about all of the model setting that aider supports.
<!--[[[cog

View file

@@ -42,7 +42,7 @@ read: [CONVENTIONS.md, anotherfile.txt, thirdfile.py]
Below is a sample of the YAML config file, which you
can also
-[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/aider/website/assets/sample.aider.conf.yml).
+[download from GitHub](https://github.com/Aider-AI/aider/blob/main/aider/website/assets/sample.aider.conf.yml).
<!--[[[cog
from aider.args import get_sample_yaml

View file

@@ -28,7 +28,7 @@ If the files above exist, they will be loaded in that order. Files loaded last w
Below is a sample `.env` file, which you
can also
-[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/aider/website/assets/sample.env).
+[download from GitHub](https://github.com/Aider-AI/aider/blob/main/aider/website/assets/sample.env).
<!--[[[cog
from aider.args import get_sample_dotenv

View file

@@ -112,9 +112,9 @@ like functions and methods also include their signatures.
Here's a
sample of the map of the aider repo, just showing the maps of
-[main.py](https://github.com/paul-gauthier/aider/blob/main/aider/main.py)
+[main.py](https://github.com/Aider-AI/aider/blob/main/aider/main.py)
and
-[io.py](https://github.com/paul-gauthier/aider/blob/main/aider/io.py)
+[io.py](https://github.com/Aider-AI/aider/blob/main/aider/io.py)
:
```

View file

@@ -198,7 +198,7 @@ Yes, you can now share aider chat logs in a pretty way.
1. Copy the markdown logs you want to share from `.aider.chat.history.md` and make a github gist. Or publish the raw markdown logs on the web any way you'd like.
```
-https://gist.github.com/paul-gauthier/2087ab8b64034a078c0a209440ac8be0
+https://gist.github.com/Aider-AI/2087ab8b64034a078c0a209440ac8be0
```
2. Take the gist URL and append it to:
@@ -210,5 +210,5 @@ Yes, you can now share aider chat logs in a pretty way.
This will give you a URL like this, which shows the chat history like you'd see in a terminal:
```
-https://aider.chat/share/?mdurl=https://gist.github.com/paul-gauthier/2087ab8b64034a078c0a209440ac8be0
+https://aider.chat/share/?mdurl=https://gist.github.com/Aider-AI/2087ab8b64034a078c0a209440ac8be0
```

View file

@@ -40,7 +40,7 @@ By default, aider creates commit messages which follow
[Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/).
You can customize the
-[commit prompt](https://github.com/paul-gauthier/aider/blob/main/aider/prompts.py#L5)
+[commit prompt](https://github.com/Aider-AI/aider/blob/main/aider/prompts.py#L5)
with the `--commit-prompt` option.
You can place that on the command line, or
[configure it via a config file or environment variables](https://aider.chat/docs/config.html).

View file

@@ -78,7 +78,7 @@ since the VSCode terminal doesn't seem to support it well.
If you are interested in creating an aider plugin for your favorite editor,
please let me know by opening a
-[GitHub issue](https://github.com/paul-gauthier/aider/issues).
+[GitHub issue](https://github.com/Aider-AI/aider/issues).
## Install the development version of aider
@@ -87,7 +87,7 @@ If you want the very latest development version of aider
you can install directly from GitHub:
```
-python -m pip install --upgrade git+https://github.com/paul-gauthier/aider.git
+python -m pip install --upgrade git+https://github.com/Aider-AI/aider.git
```
If you've git cloned the aider repository already, you can install "live" from your local copy. This is mostly useful if you are developing aider and want your current modifications to take effect immediately.

View file

@@ -33,7 +33,7 @@ then it should be possible to add repo map support.
To build a repo map, aider needs the `tags.scm` file
from the given language's tree-sitter grammar.
If you can find and share that file in a
-[GitHub issue](https://github.com/paul-gauthier/aider/issues),
+[GitHub issue](https://github.com/Aider-AI/aider/issues),
then it may be possible to add repo map support.
If aider doesn't support linting, it will be complicated to

View file

@@ -150,7 +150,7 @@ The model also has to successfully apply all its changes to the source file with
## Code refactoring leaderboard
-[Aider's refactoring benchmark](https://github.com/paul-gauthier/refactor-benchmark) asks the LLM to refactor 89 large methods from large python classes. This is a more challenging benchmark, which tests the model's ability to output long chunks of code without skipping sections or making mistakes. It was developed to provoke and measure [GPT-4 Turbo's "lazy coding" habit](/2023/12/21/unified-diffs.html).
+[Aider's refactoring benchmark](https://github.com/Aider-AI/refactor-benchmark) asks the LLM to refactor 89 large methods from large python classes. This is a more challenging benchmark, which tests the model's ability to output long chunks of code without skipping sections or making mistakes. It was developed to provoke and measure [GPT-4 Turbo's "lazy coding" habit](/2023/12/21/unified-diffs.html).
The refactoring benchmark requires a large context window to
work with large source files.
@@ -291,10 +291,10 @@ since it is the easiest format for an LLM to use.
Contributions of benchmark results are welcome!
See the
-[benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
+[benchmark README](https://github.com/Aider-AI/aider/blob/main/benchmark/README.md)
for information on running aider's code editing benchmarks.
Submit results by opening a PR with edits to the
-[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/aider/website/_data/).
+[benchmark results data files](https://github.com/Aider-AI/aider/blob/main/aider/website/_data/).
<p class="post-date">

View file

@@ -29,9 +29,9 @@ It shows how each of these symbols are defined, by including the critical lines
Here's a part of
the repo map of aider's repo, for
-[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
+[base_coder.py](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
and
-[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
+[commands.py](https://github.com/Aider-AI/aider/blob/main/aider/commands.py)
:
```

View file

@@ -83,7 +83,7 @@ coder.run("/tokens")
```
See the
-[Coder.create() and Coder.__init__() methods](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
+[Coder.create() and Coder.__init__() methods](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
for all the supported arguments.
It can also be helpful to set the equivalent of `--yes` by doing this:

View file

@@ -55,7 +55,7 @@ aider uses.
In particular, be careful with the packages with pinned versions
noted at the end of
-[aider's requirements.in file](https://github.com/paul-gauthier/aider/blob/main/requirements/requirements.in).
+[aider's requirements.in file](https://github.com/Aider-AI/aider/blob/main/requirements/requirements.in).
These versions are pinned because aider is known not to work with the
latest versions of these libraries.

View file

@@ -317,7 +317,7 @@ the ones with the most code and which involve refactoring.
Based on this observation, I set out to build a benchmark based on refactoring
a non-trivial amount of code found in fairly large files.
To do this, I used python's `ast` module to analyze
-[9 popular open source python repositories](https://github.com/paul-gauthier/refactor-benchmark)
+[9 popular open source python repositories](https://github.com/Aider-AI/refactor-benchmark)
to identify challenging refactoring tasks.
The goal was to find:
@@ -332,7 +332,7 @@ where we ask GPT to do something like:
> Name the new function `_set_csrf_cookie`, exactly the same name as the existing method.
> Update any existing `self._set_csrf_cookie` calls to work with the new `_set_csrf_cookie` function.
-A [simple python AST scanning script](https://github.com/paul-gauthier/aider/blob/main/benchmark/refactor_tools.py)
+A [simple python AST scanning script](https://github.com/Aider-AI/aider/blob/main/benchmark/refactor_tools.py)
found 89 suitable files
and packaged them up as benchmark tasks.
Each task has a test
@@ -351,7 +351,7 @@ gathered during benchmarking like the
introduction of new comments that contain "...".
The result is a pragmatic
-[benchmark suite that provokes, detects and quantifies GPT coding laziness](https://github.com/paul-gauthier/refactor-benchmark).
+[benchmark suite that provokes, detects and quantifies GPT coding laziness](https://github.com/Aider-AI/refactor-benchmark).

View file

@@ -134,7 +134,7 @@ projects like django, scikitlearn, matplotlib, etc.
- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
-- [GitHub](https://github.com/paul-gauthier/aider)
+- [GitHub](https://github.com/Aider-AI/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)