From 6bc18b9591a6160d17560a22e24a77138a8997c0 Mon Sep 17 00:00:00 2001
From: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Date: Mon, 6 May 2024 17:15:26 -0700
Subject: [PATCH] Update index.md

---
 docs/leaderboards/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/leaderboards/index.md b/docs/leaderboards/index.md
index d251f9285..3fb6627c3 100644
--- a/docs/leaderboards/index.md
+++ b/docs/leaderboards/index.md
@@ -8,7 +8,7 @@ highlight_image: /assets/leaderboard.jpg
 Aider works best with LLMs which are good at *editing* code,
 not just good at writing code.
 To evaluate an LLM's editing skill, aider uses a pair of benchmarks that
-assess their ability to consistently follow the system instructions
+assess a model's ability to consistently follow the system instructions
 to successfully edit code.
 
 The leaderboards below report the results from a number of popular LLMs.