fixing broken links

Paul Gauthier 2024-06-06 16:00:17 -07:00
parent f760eacfd6
commit 435e9a0d86
7 changed files with 8 additions and 8 deletions

View file

@@ -15,7 +15,7 @@ nav_exclude: true
 I recently wanted to draw a graph showing how LLM code editing skill has been
 changing over time as new models have been released by OpenAI, Anthropic and others.
 I have all the
-[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/_data/edit_leaderboard.yml) that is used to render
+[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
 [aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
 Below is the aider chat transcript, which shows:
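
For anyone who wants to draw a similar graph from that yaml file, here is a minimal sketch. It assumes each leaderboard entry carries fields along the lines of `model`, `released`, and `pass_rate_2`; those field names are assumptions for illustration, not something this commit confirms.

```python
# Sketch: plot code editing skill over time from edit_leaderboard.yml.
# The field names (model, released, pass_rate_2) are assumed, not
# confirmed by this commit -- adjust to match the actual yaml schema.
import yaml
import matplotlib.pyplot as plt

with open("website/_data/edit_leaderboard.yml") as f:
    entries = yaml.safe_load(f)

# Keep only entries that have both a release date and a benchmark score.
entries = [e for e in entries if e.get("released") and e.get("pass_rate_2")]
entries.sort(key=lambda e: e["released"])

dates = [e["released"] for e in entries]
scores = [e["pass_rate_2"] for e in entries]

plt.scatter(dates, scores)
for e in entries:
    plt.annotate(e["model"], (e["released"], e["pass_rate_2"]), fontsize=7)
plt.xlabel("model release date")
plt.ylabel("percent of benchmark tasks completed correctly")
plt.title("LLM code editing skill over time")
plt.show()
```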

View file

@@ -28,7 +28,7 @@ the new `gpt-4-0125-preview` model seems lazier
 than the November `gpt-4-1106-preview` model:
 - It gets worse benchmark scores when using the [unified diffs](https://aider.chat/docs/unified-diffs.html) code editing format.
-- Using aider's older [SEARCH/REPLACE block](https://github.com/paul-gauthier/aider/blob/9033be74bf74ae70459013e54b2ae6a97c47c2e6/aider/coders/editblock_prompts.py#L75-L80) editing format, the new January model outperforms the older November model. But it still performs worse than both models using unified diffs.
+- Using aider's older SEARCH/REPLACE block editing format, the new January model outperforms the older November model. But it still performs worse than both models using unified diffs.
 ## Related reports

View file

@@ -61,7 +61,7 @@ To code with GPT-4 using the techniques discussed here:
 - Install [aider](https://aider.chat/docs/install.html).
-- Install [universal ctags](https://aider.chat/docs/install.html#install-universal-ctags-optional).
+- Install universal ctags.
 - Run `aider` inside your repo, and it should say "Repo-map: universal-ctags using 1024 tokens".
 ## The problem: code context
@@ -246,5 +246,5 @@ specific language(s) of interest.
 To use this experimental repo map feature:
 - Install [aider](https://aider.chat/docs/install.html).
-- Install [universal ctags](https://aider.chat/docs/install.html#install-universal-ctags-optional).
+- Install ctags.
 - Run `aider` inside your repo, and it should say "Repo-map: universal-ctags using 1024 tokens".
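
To give a feel for what the repo map step is doing, here is a minimal sketch of extracting symbols with universal-ctags. It shells out to `ctags` with JSON output (which requires a universal-ctags build with JSON support); this illustrates the general technique only, and is not aider's actual repo map implementation.

```python
# Sketch: build a crude {file: [symbols]} map with universal-ctags.
# Assumes a universal-ctags binary with JSON support is on PATH.
# Illustrates the general technique -- not aider's actual implementation.
import json
import subprocess

def crude_repo_map(filenames):
    result = subprocess.run(
        ["ctags", "--output-format=json", *filenames],
        capture_output=True, text=True, check=True,
    )
    repo_map = {}
    for line in result.stdout.splitlines():
        tag = json.loads(line)
        if tag.get("_type") == "tag":
            repo_map.setdefault(tag["path"], []).append(tag["name"])
    return repo_map

# Pass whichever source files you want mapped.
print(crude_repo_map(["main.py"]))
```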

View file

@@ -128,4 +128,4 @@ When experimenting with coder backends, it helps to run aider with `--verbose --
 all the raw information being sent to/from the LLM in the conversation.
 You can also refer to the
-[instructions for installing a development version of aider](https://aider.chat/docs/install.html#install-development-versions-of-aider-optional).
+[instructions for installing a development version of aider](https://aider.chat/docs/install/optional.html#install-the-development-version-of-aider).

View file

@@ -221,4 +221,4 @@ See the
 [benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
 for information on running aider's code editing benchmarks.
 Submit results by opening a PR with edits to the
-[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/_data/).
+[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/website/_data/).

View file

@@ -307,7 +307,7 @@ radically increases the number of hunks which fail to apply.
 ## Refactoring benchmark
 Aider has long used a
-[benchmark suite based on 133 Exercism python exercises]().
+[benchmark suite based on 133 Exercism python exercises](https://aider.chat/2023/07/02/benchmarks.html).
 But these are mostly small coding problems,
 usually requiring only a few dozen lines of code.
 GPT-4 Turbo is typically only lazy on 2-3 of these exercises:

View file

@@ -19,7 +19,7 @@ Your voice coding instructions will be transcribed
 and sent to GPT, as if you had typed them into
 the aider chat session.
-See the [installation instructions](https://aider.chat/docs/install.html#install-portaudio-optional) for
+See the [installation instructions](https://aider.chat/docs/install/optional.html#enable-voice-coding) for
 information on how to enable the `/voice` command.
 <br/>