diff --git a/README.md b/README.md
index 79e923616..4c148133a 100644
--- a/README.md
+++ b/README.md
@@ -34,9 +34,12 @@ Aider is unique in that it [works well with pre-existing, larger codebases](http
 
 ## New GPT-4 model with 128k context window
 
 Aider supports OpenAI's new GPT-4 model that has the massive 128k context window.
-[Early benchmark results](https://aider.chat/docs/benchmarks-1106.html)
-indicate that it is very fast and
-a bit better at coding than previous GPT-4 models.
+[Early benchmark results](https://aider.chat/docs/benchmarks-1106.html)
+indicate that it is
+[very fast](https://aider.chat/docs/benchmarks-speed-1106.html)
+and a bit
+[better at coding](https://aider.chat/docs/benchmarks-1106.html)
+than previous GPT-4 models.
 
 To use it, run aider like this:

diff --git a/docs/benchmarks-speed-1106.md b/docs/benchmarks-speed-1106.md
index cc974f086..f14b5353d 100644
--- a/docs/benchmarks-speed-1106.md
+++ b/docs/benchmarks-speed-1106.md
@@ -40,8 +40,6 @@ Some observations:
 
 - **GPT-4 Turbo is 4-5x faster.** The new `gpt-4-1106-preview` model is 4-5x faster than the June (0613) version which has been the default `gpt-4` model.
 - The old March (0301) version of GPT-3.5 is actually faster than the June (0613) version. This was a surprising discovery.
 
-### Preliminary results
-
 **These are preliminary results.** OpenAI is enforcing very low rate limits on the new GPT-4 model.