diff --git a/docs/leaderboards/index.md b/docs/leaderboards/index.md
index 3e64f0226..8ce72b7b5 100644
--- a/docs/leaderboards/index.md
+++ b/docs/leaderboards/index.md
@@ -19,13 +19,19 @@ to help users select which models to use with aider.
 While [aider can connect to almost any LLM](/docs/llms.html)
 it will work best with models that score well on the benchmarks.
 
+The key benchmarking results are:
+
+- **Percent completed** - Measures the percentage of coding tasks that the LLM completed successfully. To complete a task, the LLM must solve the programming assignment *and* edit the code to implement that solution.
+- **Percent without edit errors** - Measures the percentage of coding tasks that the LLM completed without making any mistakes in the code editing format. If the LLM makes edit mistakes, aider gives it feedback and asks for a fixed copy of the edit. The best models can reliably conform to the edit format without making errors.
+
 ## Code editing leaderboard
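Both metrics are simple ratios over the set of benchmark tasks. As a rough sketch only (not aider's actual benchmark harness; the `TaskResult` record and its fields below are hypothetical), the two percentages could be computed like this:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    # Hypothetical per-task record; field names are illustrative only.
    solved: bool       # the final code passed the task's tests
    edit_errors: int   # number of malformed edits the model produced

def leaderboard_percentages(results: list[TaskResult]) -> tuple[float, float]:
    """Return (percent_completed, percent_without_edit_errors)."""
    total = len(results)
    completed = sum(1 for r in results if r.solved)
    well_formed = sum(1 for r in results if r.edit_errors == 0)
    return 100 * completed / total, 100 * well_formed / total

# Example: with 133 tasks, 80 solved and 120 free of edit-format mistakes,
# this yields roughly 60.2% completed and 90.2% without edit errors.
```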
       <th>Model</th>
-      <th>Percent correct</th>
+      <th>Percent completed</th>
+      <th>Percent without edit errors</th>
       <th>Command</th>
       <th>Edit format</th>
       <td>{{ row.model }}</td>
       <td>{{ row.pass_rate_2 }}%</td>
+      <td>{{ row.percent_cases_well_formed }}%</td>
       <td>{{ row.command }}</td>
       <td>{{ row.edit_format }}</td>
       <th>Model</th>
-      <th>Percent correct</th>
+      <th>Percent completed</th>
+      <th>Percent without edit errors</th>
       <th>Command</th>
       <th>Edit format</th>
       <td>{{ row.model }}</td>
       <td>{{ row.pass_rate_1 }}%</td>
+      <td>{{ row.percent_cases_well_formed }}%</td>
       <td>{{ row.command }}</td>
       <td>{{ row.edit_format }}</td>
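The new `{{ row.percent_cases_well_formed }}` column reads a field from the same per-model rows that the Liquid loop already iterates over for the other cells. As a hedged illustration, one such row would need to carry something like the mapping below; the values are invented, and it is shown as a Python dict only for readability, not as the Jekyll data file the page actually loads.

```python
# Hypothetical leaderboard row; values are invented for illustration.
row = {
    "model": "gpt-4-1106-preview",
    "pass_rate_2": 65.4,                # rendered in the "Percent completed" column
    "percent_cases_well_formed": 92.5,  # rendered in the new "Percent without edit errors" column
    "command": "aider --model gpt-4-1106-preview",
    "edit_format": "udiff",
}
```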