From 3dc3ebe25ffc42d46059992eb1e921d473f6b824 Mon Sep 17 00:00:00 2001
From: Paul Gauthier
Date: Mon, 17 Jun 2024 11:28:32 -0700
Subject: [PATCH] copy

---
 website/docs/leaderboards/index.md | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/website/docs/leaderboards/index.md b/website/docs/leaderboards/index.md
index 2130e9903..de1221cc2 100644
--- a/website/docs/leaderboards/index.md
+++ b/website/docs/leaderboards/index.md
@@ -3,8 +3,20 @@ highlight_image: /assets/leaderboard.jpg
 nav_order: 950
 ---
 
+# Aider LLM Leaderboards
 
-# Deepseek Coder V2 beats GPT-4o, Opus on Aider Code Editing Leaderboard
+Aider works best with LLMs which are good at *editing* code, not just good at writing
+code.
+To evaluate an LLM's editing skill, aider uses a pair of benchmarks that
+assess a model's ability to consistently follow the system prompt
+to successfully edit code.
+
+The leaderboards below report the results from a number of popular LLMs.
+While [aider can connect to almost any LLM](/docs/llms.html),
+it works best with models that score well on the benchmarks.
+
+
+## Deepseek Coder V2 beats GPT-4o, Opus
 
 The new
 [Deepseek Coder V2](https://aider.chat/docs/llms/deepseek.html)
@@ -21,19 +33,6 @@ These output limits are often as low as 4k tokens,
 even for models with
 very large context windows.
 
-## Aider LLM Leaderboards
-
-Aider works best with LLMs which are good at *editing* code, not just good at writing
-code.
-To evaluate an LLM's editing skill, aider uses a pair of benchmarks that
-assess a model's ability to consistently follow the system prompt
-to successfully edit code.
-
-The leaderboards below report the results from a number of popular LLMs.
-While [aider can connect to almost any LLM](/docs/llms.html),
-it works best with models that score well on the benchmarks.
-
-
 ## Code editing leaderboard
 
 [Aider's code editing benchmark](/docs/benchmarks.html#the-benchmark) asks the LLM to edit python source files to complete 133 small coding exercises. This benchmark measures the LLM's coding ability, but also whether it can consistently emit code edits in the format specified in the system prompt.