From 18761b70be66063fbff23f3099258c02d5a09fc2 Mon Sep 17 00:00:00 2001
From: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Date: Mon, 6 May 2024 17:16:29 -0700
Subject: [PATCH] Update index.md

---
 docs/leaderboards/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/leaderboards/index.md b/docs/leaderboards/index.md
index 3fb6627c3..998dc78e8 100644
--- a/docs/leaderboards/index.md
+++ b/docs/leaderboards/index.md
@@ -8,7 +8,7 @@ highlight_image: /assets/leaderboard.jpg
 Aider works best with LLMs which are good at *editing* code, not just
 good at writing code.
 To evaluate an LLM's editing skill, aider uses a pair of benchmarks that
-assess a model's ability to consistently follow the system instructions
+assess a model's ability to consistently follow the system prompt
 to successfully edit code.
 The leaderboards below report the results from a number of popular LLMs.