docs: Update reasoning model compatibility and settings documentation

Author: Paul Gauthier 2025-03-18 12:36:27 -07:00
Committer: Paul Gauthier (aider)
parent 7c10600044
commit 8d7300a522


@@ -26,7 +26,7 @@ This switch is useful for Sonnet 3.7.

### Model compatibility warnings

-Not all models support both settings. Aider will warn you when you use a setting that may not be supported by your chosen model:
+Not all models support these settings. Aider will warn you when you use a setting that may not be supported by your chosen model:

```
Warning: The model claude-3-sonnet@20240229 may not support the 'reasoning_effort' setting.
@@ -37,8 +37,8 @@ You can disable these warnings with the `--no-check-model-accepts-settings` flag

Each model has a predefined list of supported settings in its configuration. For example:

-- OpenAI models generally support `reasoning_effort`
-- Anthropic models (Claude) generally support `thinking_tokens`
+- OpenAI reasoning models generally support `reasoning_effort`
+- Anthropic reasoning models generally support `thinking_tokens`

## Thinking tokens in XML tags
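For context on the two settings this hunk distinguishes, both can also be set in aider's YAML config file instead of on the command line. A minimal sketch, assuming the standard `.aider.conf.yml` key names (which mirror the CLI flags):

```yaml
# .aider.conf.yml -- key names assumed to mirror the CLI flags
model: o3-mini
reasoning-effort: high      # only honored by models whose accepts_settings include reasoning_effort

# For an Anthropic reasoning model you would instead set, e.g.:
# model: sonnet
# thinking-tokens: 8k
```

Either way, the same compatibility warnings described above apply if the chosen model does not accept the setting.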
@@ -65,13 +65,6 @@ Aider will rely on the non-thinking output for instructions on how to make code

### Model-specific reasoning tags

Different models use different XML tags for their reasoning:

-| Model | Reasoning Tag |
-|-------|---------------|
-| DeepSeek R1 | `think` |
-| Deepseek V3 | `thinking` |
-| Google Gemini | `reasoning` |

When using custom or self-hosted models, you may need to specify the appropriate reasoning tag in your configuration.

```yaml
@@ -95,18 +88,6 @@ they sometimes prohibit streaming, use of temperature and/or the system prompt.

Aider is configured to work properly with these models
when served through major provider APIs.

-### Model settings compatibility matrix
-
-Here's a summary of common reasoning settings compatibility:
-
-| Model Type | `reasoning_effort` | `thinking_tokens` | Notes |
-|------------|-------------------|-------------------|-------|
-| OpenAI reasoning models | ✅ | ❌ | Supports values 0-1 |
-| Claude 3.7+ | ❌ | ✅ | Supports integer values |
-| Claude 3.0-3.5 | ❌ | ❌ | Uses built-in reasoning |
-| Google Gemini | ❌ | ❌ | Uses built-in reasoning |
-| DeepSeek | ❌ | ❌ | Uses built-in reasoning |

If you're using a model through a different provider (like Azure or custom deployment),
you may need to [configure model settings](/docs/config/adv-model-settings.html)
if you see errors related to temperature or system prompt.
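The reasoning-tag configuration mentioned earlier for custom or self-hosted models also lives in these per-model settings. A hedged sketch, assuming the `.aider.model.settings.yml` format and a hypothetical locally served model name:

```yaml
# .aider.model.settings.yml -- the model name and server prefix are hypothetical
- name: openai/my-local-r1
  edit_format: diff
  use_repo_map: true
  # The XML tag this model wraps its reasoning in, so aider can
  # separate it from the code-editing output (DeepSeek R1 uses <think>):
  reasoning_tag: think
```

With this entry, aider would strip `<think>...</think>` content from the model's replies and act only on the remaining output.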
@@ -135,6 +116,7 @@ for the model you are interested in, say o3-mini:

  use_temperature: false # <---
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff
+  accepts_settings: ["reasoning_effort"]
```

Pay attention to these settings, which must be set to `false`
@@ -159,6 +141,7 @@ settings for a different provider.

  use_temperature: false # <---
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff
+  accepts_settings: ["reasoning_effort"]
```

### Accepting settings configuration
@@ -166,14 +149,12 @@ settings for a different provider.

Models define which reasoning settings they accept using the `accepts_settings` property:

```yaml
-- name: gpt-4o
+- name: a-fancy-reasoning-model
  edit_format: diff
-  weak_model_name: gpt-4o-mini
  use_repo_map: true
-  editor_model_name: gpt-4o
-  editor_edit_format: editor-diff
  accepts_settings: # <---
    - reasoning_effort # <---
```

-This tells Aider that the model accepts the `reasoning_effort` setting but not `thinking_tokens`, which is why you'll get a warning if you try to use `--thinking-tokens` with this model.
+This tells Aider that the model accepts the `reasoning_effort` setting but not `thinking_tokens`.
+So you would get a warning if you try to use `--thinking-tokens` with this model.
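By the same pattern, a model that takes `thinking_tokens` rather than `reasoning_effort` would declare that in its own entry. A sketch, with a hypothetical model name:

```yaml
# Hypothetical entry: accepts thinking_tokens, not reasoning_effort
- name: a-fancy-thinking-model
  edit_format: diff
  use_repo_map: true
  accepts_settings:
    - thinking_tokens
```

With this entry, `--thinking-tokens 8k` would be accepted, while `--reasoning-effort` would trigger the compatibility warning described above.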