diff --git a/aider/website/_posts/2024-12-21-polyglot.md b/aider/website/_posts/2024-12-21-polyglot.md
new file mode 100644
index 000000000..639b8041d
--- /dev/null
+++ b/aider/website/_posts/2024-12-21-polyglot.md
@@ -0,0 +1,203 @@
+---
+excerpt: TBD
+highlight_image: /assets/polyglot.jpg
+draft: false
+nav_exclude: true
+---
+{% if page.date %}
{{ page.date | date: "%B %d, %Y" }}
+{% endif %}
+
+# o1 tops new aider polyglot leaderboard
+{: .no_toc }
+
+OpenAI's new o1 model with "high" reasoning effort
+gets the top score on the
+new
+[aider polyglot leaderboard](/docs/leaderboard/), significantly ahead of
+other top LLMs.
+The new polyglot benchmark was designed to be
+*much more challenging* than aider's old
+[code editing benchmark](/docs/leaderboard/edit.html).
+This more clearly distinguishes
+the performance of
+today's strongest coding models and
+leaves headroom for future LLMs.
+
+## The polyglot benchmark
+
+Like aider's original code editing benchmark,
+the new polyglot benchmark is based on Exercism
+coding exercises.
+
+The new polyglot benchmark:
+
+- Contains coding problems in C++, Go, Java, JavaScript, Python and Rust.
+The old benchmark was solely based on Python exercises.
+- Focuses on the *most difficult* 225 exercises out of the 697 that
+Exercism provides for those languages.
+The old benchmark simply included all 133 Python exercises,
+regardless of difficulty.
+
+## Motivation and goals
+
+Aider's original code editing benchmark was
+saturating as the top scores approached and then surpassed 80%.
+Sonnet's score of 84.2% was based on solving 112 of the 133
+exercises, leaving only 21 unsolved exercises.
+New champions were advancing the top score by
+solving just 1-2 more problems than the previous record.
+This made it hard to clearly
+measure the
+difference in code editing skill between these top models.
+
+Part of the problem is that many of the original
+133 Python problems are very easy
+and provide
+little challenge to today's frontier LLMs.
+Models as old as GPT-3.5 Turbo were able to solve half of the
+133 problems.
+Such easy problems simply inflate the benchmark scores
+of modern LLMs without
+providing any data about which models are better or worse.
+
+The main goal for a new benchmark
+was to re-calibrate the scale so that
+today's top coding LLMs
+would occupy a wide range of scores between about 5% and 50%.
+A 50% top score from today's best models
+should leave lots of headroom for future LLMs.
+And by spreading models across a wide 5-50% range, we
+can more clearly compare relative performance.
+
+## Designing the polyglot benchmark
+
+The new benchmark:
+
+- Tests LLMs with more coding languages, to increase diversity and source a larger pool of problems.
+- Includes just the most challenging coding problems and excludes easy problems that are solvable by most of today's top coding LLMs.
+- Includes more total coding problems, to enable more granular comparisons.
+
+The new benchmark is based on Exercism coding problems
+from 6 of the most popular programming languages:
+
+- C++
+- Go
+- Java
+- JavaScript
+- Python
+- Rust
+
+Exercism provides a total of 697 coding problems in those 6 languages,
+although many of them are adaptations of the same conceptual problem,
+just ported into the different languages.
+
+A set of 7 of today's top coding models each attempted all 697 of
+the Exercism problems:
+
+- Sonnet
+- Haiku
+- o1 Mini
+- DeepSeek
+- GPT-4o
+- Qwen 32B Coder Instruct
+- GPT-4o Mini
+
+Based on their results,
+the 697 coding problems were sorted by how many
+solutions were found to each problem.
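The sorting step described above can be sketched in a few lines of Python. This is a hypothetical illustration, not aider's actual benchmark code; `solve_counts` and `select_hardest` are invented names, and the sketch assumes a mapping from each exercise to how many of the 7 models solved it:

```python
# Hypothetical sketch of selecting the hardest exercises, assuming
# `solve_counts` maps each exercise to how many of the 7 models solved it.
# Illustrative only -- not aider's actual benchmark code.

def select_hardest(solve_counts, keep=225):
    """Return the `keep` exercises solved by the fewest models."""
    ranked = sorted(solve_counts.items(), key=lambda item: item[1])
    return [exercise for exercise, count in ranked[:keep]]

solve_counts = {
    "python/zebra-puzzle": 0,  # no model solved it
    "rust/poker": 1,
    "go/leap": 7,              # every model solved it
}
print(select_hardest(solve_counts, keep=2))
# → ['python/zebra-puzzle', 'rust/poker']
```

Keeping only the exercises with the fewest solutions is what filters the 697 problems down to the 225 hardest ones.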
+| Model | Percent completed correctly | Percent using correct edit format | Command | Edit format |
+|---|---|---|---|---|
+| {{ row.model }} | {{ row.pass_rate_2 }}% | {{ row.percent_cases_well_formed }}% | {{ row.command }} | {{ row.edit_format }} |
+
+By Paul Gauthier,
+last updated
+
+December 16, 2024.
diff --git a/aider/website/docs/leaderboards/notes.md b/aider/website/docs/leaderboards/notes.md
new file mode 100644
index 000000000..01264a76e
--- /dev/null
+++ b/aider/website/docs/leaderboards/notes.md
@@ -0,0 +1,29 @@
+---
+parent: Aider LLM Leaderboards
+nav_order: 800
+---
+
+# Benchmark notes
+
+## Notes on benchmarking results
+
+The key benchmarking results are:
+
+- **Percent completed correctly** - Measures the percentage of coding tasks that the LLM completed successfully. To complete a task, the LLM must solve the programming assignment *and* edit the code to implement that solution.
+- **Percent using correct edit format** - Measures the percentage of coding tasks where the LLM complied with the edit format specified in the system prompt. If the LLM makes edit mistakes, aider will give it feedback and ask for a fixed copy of the edit. The best models can reliably conform to the edit format, without making errors.
+
+## Notes on the edit format
+
+Aider uses different "edit formats" to collect code edits from different LLMs.
+The "whole" format is the easiest for an LLM to use, but it uses a lot of tokens
+and may limit how large a file can be edited.
+Models which can use one of the diff formats are much more efficient,
+using far fewer tokens, and can
+edit larger files with less cost and without hitting token limits.
+
+Aider is configured to use the best edit format for the popular OpenAI and Anthropic models
+and the [other models recommended on the LLM page](/docs/llms.html).
+For lesser-known models aider will default to using the "whole" editing format
+since it is the easiest format for an LLM to use.
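The token-efficiency point above can be made concrete with a small sketch. This is a hypothetical illustration, not aider's implementation; `apply_whole` and `apply_diff` are invented names, and real diff-style formats carry more structure than a single search/replace pair:

```python
# Minimal sketch contrasting the two approaches -- not aider's actual code.
# With the "whole" format the LLM re-emits the entire file; with a
# diff-style format it emits only the changed region.

def apply_whole(original: str, whole_reply: str) -> str:
    # The reply *is* the new file; reply tokens scale with file size.
    return whole_reply

def apply_diff(original: str, search: str, replace: str) -> str:
    # The reply names only the span to change; tokens scale with the edit.
    assert search in original, "search text must match the file exactly"
    return original.replace(search, replace, 1)

file_text = "def greet():\n    print('hi')\n"
print(apply_diff(file_text, "print('hi')", "print('hello')"))
```

The `apply_diff` path shows why diff-style formats stay cheap on large files: the reply size depends on the edit, not on the file being edited.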