# Lazy coding benchmark for gpt-4-0125-preview

[Benchmark results chart]

OpenAI just released a new version of GPT-4 Turbo. This new model is intended to reduce the "lazy coding" that has been widely observed with the previous gpt-4-1106-preview model:

> Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of “laziness” where the model doesn't complete a task.

With that in mind, I've been benchmarking the new model using aider's existing lazy coding benchmark.

## Benchmark results

These results are preliminary and will be updated as additional benchmark runs complete.

Overall, the new gpt-4-0125-preview model performs worse on the lazy coding benchmark than the November gpt-4-1106-preview model:

  • It performs much worse when using the unified diffs code editing format.
  • Using aider's older SEARCH/REPLACE block editing format, the new January model outperforms the older November model. But it still performs worse than either model using unified diffs. (A rough sketch of both edit formats follows below.)
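
For readers unfamiliar with the two edit formats: a SEARCH/REPLACE block asks the model to quote the existing lines it wants to change verbatim and then supply the replacement lines, while a unified diff expresses the same edit as standard diff hunks. The snippet below is only an illustrative sketch of that idea, with a simplified exact-match applier; it is not aider's actual implementation.

```python
# Illustrative sketch only: a minimal applier for a SEARCH/REPLACE style edit.
# The apply logic here (first exact occurrence, fail if not found) is a
# simplifying assumption, not aider's real behavior.

def apply_search_replace(original: str, search: str, replace: str) -> str:
    """Replace the first exact occurrence of `search` in `original`."""
    if search not in original:
        raise ValueError("SEARCH text not found; the edit cannot be applied")
    return original.replace(search, replace, 1)


source = "def greet():\n    pass\n"

# A SEARCH/REPLACE block quotes the existing lines, then the new lines:
#
#   <<<<<<< SEARCH
#   def greet():
#       pass
#   =======
#   def greet():
#       print("hello")
#   >>>>>>> REPLACE
#
# A unified diff expresses the same edit as a diff hunk instead:
#
#   --- a/greet.py
#   +++ b/greet.py
#   @@ -1,2 +1,2 @@
#    def greet():
#   -    pass
#   +    print("hello")

patched = apply_search_replace(
    source,
    "def greet():\n    pass\n",
    'def greet():\n    print("hello")\n',
)
print(patched)
```

The "laziness" being benchmarked shows up when the model elides work in its edits, for example replacing a function body with a comment like "... rest of the code unchanged ..." rather than writing it out in full.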

This is one in a series of reports that use the aider benchmarking suite to assess and compare the code editing capabilities of OpenAI's GPT models. You can review the other reports for additional information: