Mirror of https://github.com/Aider-AI/aider.git, synced 2025-05-28 16:25:00 +00:00

Commit cb63b61411 (parent be0296318f): copy

2 changed files with 6 additions and 5 deletions
@@ -34,9 +34,12 @@ Aider is unique in that it [works well with pre-existing, larger codebases](http
## New GPT-4 model with 128k context window

Aider supports OpenAI's new GPT-4 model that has the massive 128k context window.
-[Early benchmark results](https://aider.chat/docs/benchmarks-1106.html)
-indicate that it is very fast and
-a bit better at coding than previous GPT-4 models.
+[Early benchmark results](https://aider.chat/docs/benchmarks-1106.html)
+indicate that it is
+[very fast](https://aider.chat/docs/benchmarks-speed-1106.html)
+and a bit
+[better at coding](https://aider.chat/docs/benchmarks-1106.html)
+than previous GPT-4 models.

To use it, run aider like this:
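The README hunk above ends at the "To use it, run aider like this:" line; the command itself falls outside the hunk. As a rough sketch of the kind of invocation meant there, using aider's generic `--model` flag rather than whatever exact command the README showed, selecting the new 128k-context model could look like this:

```sh
# Install aider and point it at the new GPT-4 Turbo (1106 preview) model.
# Illustrative only: the README's actual command is cut off by the hunk above.
pip install aider-chat
export OPENAI_API_KEY=sk-...        # your OpenAI API key
aider --model gpt-4-1106-preview    # --model is aider's general model selector
```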
@@ -40,8 +40,6 @@ Some observations:
- **GPT-4 Turbo is 4-5x faster.** The new `gpt-4-1106-preview` model is 4-5x faster than the June (0613) version which has been the default `gpt-4` model.
- The old March (0301) version of GPT-3.5 is actually faster than the June (0613) version. This was a surprising discovery.

### Preliminary results

**These are preliminary results.**
OpenAI is enforcing very low
rate limits on the new GPT-4 model.