mirror of
https://github.com/Aider-AI/aider.git
synced 2025-05-31 09:44:59 +00:00
Update benchmarks.md
This commit is contained in:
parent 1abbba2831
commit 0062f69d9e

1 changed file with 1 addition and 1 deletion
@@ -40,7 +40,7 @@ This produced some interesting observations:

 - The new June (`0613`) versions of `gpt-3.5-turbo` are worse at code editing than the older Feb (`0301`) version. This was unexpected.
 - The GPT-4 models are much better at code editing than the GPT-3.5 models. This was expected.

-These results agree with an intuition that I've been
+The quantitative benchmark results agree with an intuition that I've been
 developing about how to prompt GPT for complex tasks like coding.
 You want to minimize the "cognitive overhead" of formatting the response, so that
 GPT can focus on the task at hand.