Mirror of https://github.com/Aider-AI/aider.git, synced 2025-05-29 08:44:59 +00:00
Commit 058c237a28: Use gpt4 turbo as the default model
Parent: f565057ad4
4 changed files with 28 additions and 31 deletions
README.md (17 changes)

@@ -23,6 +23,7 @@ Aider is unique in that it lets you ask for changes to [pre-existing, larger cod
 - [Example chat transcripts](#example-chat-transcripts)
 - [Features](#features)
 - [Usage](#usage)
+- [Tutorial videos](https://aider.chat/docs/install.html#tutorial-videos)
 - [In-chat commands](#in-chat-commands)
 - [Tips](#tips)
 - [Installation](https://aider.chat/docs/install.html)
@@ -31,22 +32,6 @@ Aider is unique in that it lets you ask for changes to [pre-existing, larger cod
 - [Discord](https://discord.gg/Tv2uQnR88V)
 - [Blog](https://aider.chat/blog/)
 
-## GPT-4 Turbo with 128k context and unified diffs
-
-Aider supports OpenAI's new GPT-4 model that has the massive 128k context window.
-Benchmark results indicate that it is
-[very fast](https://aider.chat/2023/11/06/benchmarks-speed-1106.html),
-and a bit [better at coding](https://aider.chat/docs/benchmarks-1106.html) than previous GPT-4 models.
-
-Aider now supports a [unified diff editing format, which reduces GPT-4 Turbo's "lazy" coding](https://aider.chat/docs/unified-diffs.html).
-
-To use it, run aider like this:
-
-```
-aider --4turbo
-```
-
-For more discussion about the [different OpenAI models see the FAQ](https://aider.chat/docs/faq.html#gpt-4-vs-gpt-35).
-
 ## Getting started
aider/main.py

@@ -150,11 +150,12 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         env_var="OPENAI_API_KEY",
         help="Specify the OpenAI API key",
     )
+    default_model = "gpt-4-1106-preview"
     core_group.add_argument(
         "--model",
         metavar="MODEL",
-        default=models.GPT4_0613.name,
-        help=f"Specify the model to use for the main chat (default: {models.GPT4_0613.name})",
+        default=default_model,
+        help=f"Specify the model to use for the main chat (default: {default_model})",
     )
     core_group.add_argument(
         "--skip-model-availability-check",
@@ -162,11 +163,19 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         default=False,
         help="Override to skip model availability check (default: False)",
     )
+    default_4_model = "gpt-4-0613"
+    core_group.add_argument(
+        "--4",
+        "-4",
+        action="store_const",
+        dest="model",
+        const=default_4_model,
+        help=f"Use {default_4_model} model for the main chat",
+    )
     default_4_turbo_model = "gpt-4-1106-preview"
     core_group.add_argument(
         "--4turbo",
         "--4-turbo",
-        "--4",
         action="store_const",
         dest="model",
         const=default_4_turbo_model,
@@ -181,7 +190,7 @@ def main(argv=None, input=None, output=None, force_git_root=None):
         action="store_const",
         dest="model",
         const=default_3_model.name,
-        help=f"Use {default_3_model.name} model for the main chat (gpt-4 is better)",
+        help=f"Use {default_3_model.name} model for the main chat",
     )
     core_group.add_argument(
         "--voice-language",
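The `-4` and `--4turbo` shortcuts in this commit are built on argparse's `store_const` action: each shortcut flag writes a fixed model name into the same `model` destination that `--model` uses. A minimal standalone sketch of the pattern (illustrative only, not aider's actual code; the model-name strings are taken from the diff):

```python
import argparse

# Minimal sketch of the shortcut-flag pattern used for model selection.
parser = argparse.ArgumentParser()
core_group = parser.add_argument_group("Main")

# gpt-4-1106-preview becomes the default main-chat model.
default_model = "gpt-4-1106-preview"
core_group.add_argument(
    "--model",
    metavar="MODEL",
    default=default_model,
    help=f"Specify the model to use for the main chat (default: {default_model})",
)

# Shortcut flags: store_const overwrites the shared `model` dest
# with a fixed constant instead of consuming a value.
core_group.add_argument(
    "--4", "-4", action="store_const", dest="model", const="gpt-4-0613"
)
core_group.add_argument(
    "--4turbo", "--4-turbo", action="store_const", dest="model",
    const="gpt-4-1106-preview",
)

print(parser.parse_args([]).model)      # gpt-4-1106-preview
print(parser.parse_args(["-4"]).model)  # gpt-4-0613
```

Because all three arguments share `dest="model"`, argparse fills `namespace.model` from the first action that defines the attribute, so the `None` defaults of the shortcut flags never clobber `default_model`.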
aider/models/__init__.py

@@ -3,7 +3,6 @@ from .openai import OpenAIModel
 from .openrouter import OpenRouterModel
 
 GPT4 = Model.create("gpt-4")
-GPT4_0613 = Model.create("gpt-4-0613")
 GPT35 = Model.create("gpt-3.5-turbo")
 GPT35_0125 = Model.create("gpt-3.5-turbo-0125")
 
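The `Model.create(...)` lines in this file are module-level registry entries; `OpenAIModel` and `OpenRouterModel` are the concrete types imported above. As a rough, hypothetical sketch of what such a string-keyed factory can look like (simplified for illustration; not aider's actual implementation):

```python
class Model:
    """Hypothetical, simplified stand-in for aider's Model factory."""

    def __init__(self, name: str):
        self.name = name

    @classmethod
    def create(cls, name: str) -> "Model":
        # The real factory inspects `name` and returns a provider-specific
        # model (e.g. OpenAI vs OpenRouter); this sketch just records
        # the model name.
        return cls(name)


# Module-level singletons, mirroring the registry style above.
GPT4 = Model.create("gpt-4")
GPT35_0125 = Model.create("gpt-3.5-turbo-0125")
print(GPT4.name)  # gpt-4
```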
docs/faq.md (24 changes)

@@ -42,14 +42,19 @@ While it is not recommended, you can disable aider's use of git in a few ways:
 
 ## GPT-4 vs GPT-3.5
 
-Aider supports all of OpenAI's chat models.
-You can choose a model with the `--model` command line argument.
+Aider supports all of OpenAI's chat models,
+and uses GPT-4 Turbo by default.
+It has a large context window, good coding skills and
+generally obeys the instructions in the system prompt.
+
+You can choose another model with the `--model` command line argument
+or one of these shortcuts:
+
+```
+aider -4 # to use gpt-4-0613
+aider -3 # to use gpt-3.5-turbo-0125
+```
 
-You will probably get the best results with one of the GPT-4 Turbo models,
-which you can use by running `aider --4turbo`
-(this is a convenient shortcut for `--model gpt-4-1106-preview`).
-They have large context windows, good coding skills and
-they generally obey the instructions in the system prompt.
+The older `gpt-4-0613` model is a great choice if GPT-4 Turbo is having
+trouble with your coding task, although it has a smaller context window
+which can be a real limitation.
 
@@ -62,13 +67,12 @@ to improve its ability to make changes in larger codebases.
 GPT-3.5 is
 limited to editing somewhat smaller codebases.
 It is less able to follow instructions and
-can't reliably return code edits as "diffs".
+so can't reliably return code edits as "diffs".
 Aider disables the
 repository map
 when using GPT-3.5.
+To use GPT-3.5, you can run `aider --35turbo` as a shortcut for `--model gpt-3.5-turbo-0125`.
 
-For detailed quantitative comparisons, please see the
+For detailed quantitative comparisons of the various models, please see the
 [aider blog](https://aider.chat/blog/)
 which contains many benchmarking articles.