This commit is contained in:
Paul Gauthier 2025-02-04 09:33:56 -08:00
parent e17c29c258
commit 384ff3484c
5 changed files with 35 additions and 29 deletions

View file

@@ -13,7 +13,7 @@ Model aliases allow you to create shorthand names for models you frequently use.
You can define aliases when launching aider using the `--alias` option:
```bash
-aider --alias "fast:gpt-3.5-turbo" --alias "smart:gpt-4"
+aider --alias "fast:gpt-4o-mini" --alias "smart:o3-mini"
```
Multiple aliases can be defined by using the `--alias` option multiple times. Each alias definition should be in the format `alias:model-name`.
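As an aside, an `alias:model-name` definition presumably splits on the first colon, so model names that themselves contain colons still resolve; a minimal Python sketch of that parsing (the helper name is hypothetical, not aider's actual code):

```python
def parse_alias(definition: str) -> tuple[str, str]:
    """Split an "alias:model-name" definition on the first colon.

    Splitting on the first colon (not the last) lets the model name
    itself contain colons, e.g. ollama-style tags.
    """
    alias, sep, model = definition.partition(":")
    if not sep or not alias or not model:
        raise ValueError(f"expected alias:model-name, got {definition!r}")
    return alias, model
```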
@@ -24,8 +24,8 @@ You can also define aliases in your [`.aider.conf.yml` file](https://aider.chat/
```yaml
alias:
-- "fast:gpt-3.5-turbo"
-- "smart:gpt-4"
+- "fast:gpt-4o-mini"
+- "smart:o3-mini"
- "hacker:claude-3-sonnet-20240229"
```
@@ -34,8 +34,8 @@ alias:
Once defined, you can use the alias instead of the full model name:
```bash
-aider --model fast # Uses gpt-3.5-turbo
-aider --model smart # Uses gpt-4
+aider --model fast # Uses gpt-4o-mini
+aider --model smart # Uses o3-mini
```
## Built-in Aliases

View file

@@ -8,7 +8,7 @@ nav_order: 100
To work with OpenAI's models, you need to provide your
[OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key)
either in the `OPENAI_API_KEY` environment variable or
-via the `--openai-api-key` command line switch.
+via the `--api-key openai=<key>` command line switch.
Aider has some built-in shortcuts for the most popular OpenAI models and
has been tested and benchmarked to work well with them:
@@ -16,28 +16,34 @@ has been tested and benchmarked to work well with them:
```
python -m pip install -U aider-chat
-export OPENAI_API_KEY=<key> # Mac/Linux
-setx OPENAI_API_KEY <key> # Windows, restart shell after setx
-# Aider uses gpt-4o by default (or use --4o)
-aider
-# GPT-4o
-aider --4o
-# GPT-3.5 Turbo
-aider --35-turbo
+# o3-mini
+aider --model o3-mini --api-key openai=<key>
# o1-mini
-aider --model o1-mini
+aider --model o1-mini --api-key openai=<key>
-# o1-preview
-aider --model o1-preview
+# GPT-4o
+aider --4o --api-key openai=<key>
# List models available from OpenAI
aider --list-models openai/
+# You can also store your API key in environment variables (or .env)
+export OPENAI_API_KEY=<key> # Mac/Linux
+setx OPENAI_API_KEY <key> # Windows, restart shell after setx
```
You can use `aider --model <model-name>` to use any other OpenAI model.
For example, if you want to use a specific version of GPT-4 Turbo
you could do `aider --model gpt-4-0125-preview`.
## o1 models from other providers
Many of OpenAI's o1
"reasoning" models have restrictions on streaming and setting the temperature parameter.
Aider is configured to work properly with these models
when served through major provider APIs.
You may need to [configure reasoning model settings](/docs/config/reasoning.html)
if you are using them through another provider
and see errors related to temperature or system prompt.

View file

@@ -16,13 +16,13 @@ aider --model deepseek --api-key deepseek=your-key-goes-here
# Work with Claude 3.5 Sonnet via Anthropic's API
aider --model sonnet --api-key anthropic=your-key-goes-here
-# Work with GPT-4o via OpenAI's API
-aider --model gpt-4o --api-key openai=your-key-goes-here
+# Work with o3-mini via OpenAI's API
+aider --model o3-mini --api-key openai=your-key-goes-here
# Work with Sonnet via OpenRouter's API
aider --model openrouter/anthropic/claude-3.5-sonnet --api-key openrouter=your-key-goes-here
-# Work with DeepSeek via OpenRouter's API
+# Work with DeepSeek Chat V3 via OpenRouter's API
aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=your-key-goes-here
```
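Per the aider docs, each `--api-key provider=key` pair stores the key in the matching `PROVIDER_API_KEY` environment variable; a rough Python sketch of that mapping (the function name is hypothetical, not aider's implementation):

```python
import os

def apply_api_key(setting: str) -> str:
    """Map a "provider=key" pair to its PROVIDER_API_KEY env var name.

    Splits on the first "=", uppercases the provider, and sets the
    environment variable so the provider's client library can find it.
    """
    provider, sep, key = setting.partition("=")
    if not sep or not provider or not key:
        raise ValueError(f"expected provider=key, got {setting!r}")
    var = f"{provider.upper()}_API_KEY"
    os.environ[var] = key
    return var
```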

View file

@@ -29,7 +29,7 @@ Total tokens: 4864 of 16385
To reduce output tokens:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
-- Try using a stronger model like gpt-4o or opus that can return diffs.
+- Try using a stronger model like DeepSeek V3 or Sonnet that can return diffs.
For more info: https://aider.chat/docs/token-limits.html
```
@@ -47,7 +47,7 @@ overflowing its context window.
Technically you can exhaust the context window if the input is
too large or if the input plus output are too large.
-Strong models like GPT-4o and Opus have quite
+Strong models like GPT-4o and Sonnet have quite
large context windows, so this sort of error is
typically only an issue when working with weaker models.
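The overflow condition amounts to a simple token-budget check; a small illustrative sketch (the numbers echo the 4864-of-16385 example above):

```python
def exceeds_context(input_tokens: int, max_output_tokens: int,
                    context_window: int) -> bool:
    """True when the prompt plus the reply budget cannot fit the window."""
    return input_tokens + max_output_tokens > context_window

# With a 16385-token window and 4864 input tokens, the reply budget
# must stay at or under 16385 - 4864 = 11521 tokens.
```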
@@ -73,7 +73,7 @@ To avoid hitting output token limits:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
-- Use a strong model like gpt-4o, sonnet or opus that can return diffs.
+- Use a strong model like gpt-4o, sonnet or DeepSeek V3 that can return diffs.
- Use a model that supports [infinite output](/docs/more/infinite-output.html).
## Other causes

View file

@@ -68,11 +68,11 @@ relevant context from the rest of your repo.
{% include works-best.md %}
```
-# GPT-4o
-$ aider --4o
+# o3-mini
+$ aider --model o3-mini --api-key openai=<key>
# Claude 3.5 Sonnet
-$ aider --sonnet
+$ aider --model sonnet --api-key anthropic=<key>
```
Or you can run `aider --model XXX` to launch aider with