Update docs github.com/paul-gauthier -> github.com/Aider-AI

Paul Gauthier 2024-10-04 12:53:45 -07:00
parent d17bd6ebcc
commit 516e637c39
30 changed files with 47 additions and 47 deletions
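The change itself is mechanical: every documentation link under the old personal namespace `github.com/paul-gauthier` is rewritten to the new `github.com/Aider-AI` organization. As a minimal sketch of how such a bulk rename could be scripted (the `website` docs root and the `*.md` glob are illustrative assumptions, not details taken from this commit):

```python
#!/usr/bin/env python3
"""Sketch: rewrite github.com/paul-gauthier links to github.com/Aider-AI in the docs."""
from pathlib import Path

OLD = "github.com/paul-gauthier"
NEW = "github.com/Aider-AI"

def rewrite_links(root: Path) -> int:
    """Replace OLD with NEW in every markdown file under root; return files changed."""
    changed = 0
    for path in root.rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            changed += 1
    return changed

if __name__ == "__main__":
    # Assumes the docs live under a top-level `website/` directory (an assumption).
    print(f"{rewrite_links(Path('website'))} files updated")
```

After a pass like this, `git diff` should show only URL substitutions, matching the per-file hunks below.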

View file

@@ -110,9 +110,9 @@ source code, by including the critical lines of code for each definition.
Here's a
sample of the map of the aider repo, just showing the maps of
-[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
+[base_coder.py](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
and
-[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
+[commands.py](https://github.com/Aider-AI/aider/blob/main/aider/commands.py)
:
```
@@ -188,7 +188,7 @@ It specifically uses the
[py-tree-sitter-languages](https://github.com/grantjenks/py-tree-sitter-languages)
python module,
which provides simple, pip-installable binary wheels for
-[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
+[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
Tree-sitter parses source code into an Abstract Syntax Tree (AST) based
on the syntax of the programming language.
@@ -245,7 +245,7 @@ just install [aider](https://aider.chat/docs/install.html).
## Credits
Aider uses
-[modified versions of the tags.scm files](https://github.com/paul-gauthier/aider/tree/main/aider/queries)
+[modified versions of the tags.scm files](https://github.com/Aider-AI/aider/tree/main/aider/queries)
from these
open source tree-sitter language implementations:

View file

@@ -30,7 +30,7 @@ aider --opus
## Aider's code editing benchmark
-[Aider](https://github.com/paul-gauthier/aider)
+[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you
pair program with AI on code in your local git repo.

View file

@@ -52,7 +52,7 @@ def some_complex_method(foo, bar):
# ... implement method here ...
```
Aider uses a ["laziness" benchmark suite](https://github.com/paul-gauthier/refactor-benchmark)
Aider uses a ["laziness" benchmark suite](https://github.com/Aider-AI/refactor-benchmark)
which is designed to both provoke and quantify lazy coding.
It consists of
89 python refactoring tasks

View file

@@ -15,7 +15,7 @@ nav_exclude: true
I recently wanted to draw a graph showing how LLM code editing skill has been
changing over time as new models have been released by OpenAI, Anthropic and others.
I have all the
-[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
+[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
[aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
Below is the aider chat transcript, which shows:

View file

@@ -25,7 +25,7 @@ This increases the ability of the LLM to understand the problem and
make the correct changes to resolve it.
Aider ships with basic linters built with tree-sitter that support
-[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
+[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
These built in linters will detect syntax errors and other fatal problems with the code.
You can also configure aider to use your preferred linters.

View file

@@ -76,7 +76,7 @@ The held out "acceptance tests" were *only* used
after benchmarking to compute statistics on which problems aider
correctly resolved.
-The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/paul-gauthier/aider-swe-bench).
+The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/Aider-AI/aider-swe-bench).
The benchmarking process was similar to how a developer might use aider to
resolve a GitHub issue:

View file

@@ -13,7 +13,7 @@ nav_exclude: true
[![self assembly](/assets/self-assembly.jpg)](https://aider.chat/assets/self-assembly.jpg)
The
-[aider git repo](https://github.com/paul-gauthier/aider)
+[aider git repo](https://github.com/Aider-AI/aider)
currently contains about 4K commits and 14K lines of code.
Aider made 15% of the commits, inserting 4.8K and deleting 1.5K lines of code.

View file

@@ -64,7 +64,7 @@ with the problem statement
submitted as the opening chat message from "the user".
- After that aider ran as normal, except all of aider's
suggestions were always accepted without user approval.
-- A [simple harness](https://github.com/paul-gauthier/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
+- A [simple harness](https://github.com/Aider-AI/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
Plausibly correct means that aider reported that it had successfully edited the repo
without causing syntax errors or breaking any *pre-existing* tests.
- If the solution from aider with GPT-4o wasn't plausible, the harness launched aider to try again from scratch using Claude 3 Opus.
@@ -90,7 +90,7 @@ For a detailed discussion of the benchmark
methodology, see the
[article about aider's SWE Bench Lite results](https://aider.chat/2024/05/22/swe-bench-lite.html).
Also, the
-[aider SWE Bench repository on GitHub](https://github.com/paul-gauthier/aider-swe-bench)
+[aider SWE Bench repository on GitHub](https://github.com/Aider-AI/aider-swe-bench)
contains the harness and statistics code used for the benchmarks.
The benchmarking process was similar to how a developer might use aider to

View file

@@ -37,8 +37,8 @@ Users who tested Sonnet with a preview of
[aider's latest release](https://aider.chat/HISTORY.html#aider-v0410)
were thrilled:
-- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2200338971)
-- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2196026656)
+- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/Aider-AI/aider/issues/705#issuecomment-2200338971)
+- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/Aider-AI/aider/issues/705#issuecomment-2196026656)
- *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143)
## Hitting the 4k token output limit