From b5f2dcaeae0c21edbd31ef1d9098839838a15268 Mon Sep 17 00:00:00 2001
From: Paul Gauthier
Date: Mon, 6 May 2024 12:06:24 -0700
Subject: [PATCH] copy

---
 docs/leaderboards/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/leaderboards/index.md b/docs/leaderboards/index.md
index 6ec9614a4..5a5bcfe70 100644
--- a/docs/leaderboards/index.md
+++ b/docs/leaderboards/index.md
@@ -16,7 +16,7 @@ to measure an LLM's code editing ability:
 
 The leaderboards below report the results from a number of popular LLMs,
 to help users select which models to use with aider.
-While [aider can connect to almost any LLM](/docs/llms.html)
+While [aider can connect to almost any LLM](/docs/llms.html),
 it will work best with models that score well on the benchmarks.
 
 The key benchmarking results are:
@@ -198,6 +198,6 @@ since it is the easiest format for an LLM to use.
 Contributions of benchmark results are welcome!
 See the
 [benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
-for information on running aider's code editing benchmark.
+for information on running aider's code editing benchmarks.
 Submit results by opening a PR with edits to the
 [benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/_data/).