Mirror of https://github.com/Aider-AI/aider.git, synced 2025-05-28 16:25:00 +00:00
refactor: Update Ollama model detection and context window documentation
parent 7d14d4ade9
commit 325cdfcf57

3 changed files with 16 additions and 26 deletions
````diff
@@ -938,7 +938,7 @@ class Model(ModelSettings):
         self.edit_format = "diff"
         self.editor_edit_format = "editor-diff"
         self.use_repo_map = True
-        if "ollama" in model:
+        if model.startswith("ollama/") or model.startswith("ollama_chat/"):
             self.extra_params = dict(num_ctx=8 * 1024)
             return  # <--
 
````
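The detection change in this hunk is worth a note: the old `"ollama" in model` test matches the substring anywhere in the model name, so any model name that merely contains "ollama" would also receive the 8k `num_ctx` default. A minimal sketch of the tightened check (the model names below are hypothetical, chosen only to show the difference):

```python
# Sketch of the prefix-based provider check from the hunk above.
def is_ollama(model: str) -> bool:
    return model.startswith("ollama/") or model.startswith("ollama_chat/")

assert is_ollama("ollama/llama3")                  # served via Ollama
assert is_ollama("ollama_chat/qwen2.5-coder:32b")  # Ollama chat endpoint
assert not is_ollama("myhost/collamax-7b")         # hypothetical name containing "ollama"
assert "ollama" in "myhost/collamax-7b"            # the old substring check would match it
```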
````diff
@@ -116,19 +116,12 @@ in the chat to make it fit within the context window.
 
 All of the Ollama results above were collected with at least an 8k context window, which
 is large enough to attempt all the coding problems in the benchmark.
+Aider sets Ollama's context window to 8k by default.
 
-You can set the Ollama server's context window with a
+You can change the Ollama server's context window with a
 [`.aider.model.settings.yml` file](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
 like this:
 
 ```
 - name: aider/extra_params
   extra_params:
     num_ctx: 8192
 ```
-
-That uses the special model name `aider/extra_params` to set it for *all* models. You should probably use a specific model name like:
-
-```
-- name: ollama/qwen2.5-coder:32b-instruct-fp16
-  extra_params:
-    num_ctx: 8192
-```
````
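For context, `num_ctx` is the per-request context window size understood by the Ollama server, and the `extra_params` entry above is how aider forwards it. The same option can be exercised directly against Ollama's REST API, as documented in the Ollama FAQ linked in the next hunk; a hedged sketch, assuming a local server on the default port 11434 with a model named `llama3` already pulled:

```python
# Sketch: set num_ctx per request via Ollama's /api/chat endpoint.
import json
import urllib.request

payload = {
    "model": "llama3",  # assumption: this model has been pulled locally
    "messages": [{"role": "user", "content": "Say hello."}],
    "options": {"num_ctx": 8192},  # the same knob aider sets via extra_params
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"]["content"])
```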
````diff
@@ -44,28 +44,25 @@ setx OLLAMA_API_KEY <api-key> # Windows, restart shell after setx
 
 [Ollama uses a 2k context window by default](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size),
 which is very small for working with aider.
-Unlike most other LLM servers, Ollama does not throw an error if you submit
-a request that exceeds the context window.
-Instead, it just silently truncates the request by discarding the "oldest" messages
-in the chat to make it fit within the context window.
-
-All of the Ollama results above were collected with at least an 8k context window, which
-is large enough to attempt all the coding problems in the benchmark.
-
-You can set the Ollama server's context window with a
+Aider sets Ollama's context window to 8k by default.
+If you would like
+a large context window
+you can use a
 [`.aider.model.settings.yml` file](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
 like this:
 
 ```
 - name: aider/extra_params
   extra_params:
     num_ctx: 8192
 ```
-
-That uses the special model name `aider/extra_params` to set it for *all* models. You should probably use a specific model name like:
-
-```
-- name: ollama/qwen2.5-coder:32b-instruct-fp16
-  extra_params:
-    num_ctx: 8192
-```
+
+Unlike most other LLM servers, Ollama does not throw an error if you submit
+a request that exceeds the context window.
+Instead, it just silently truncates the request by discarding the "oldest" messages
+in the chat to make it fit within the context window.
+So if your context window is too small, you won't get an error.
+Aider will probably just fail to work well and experience
+a lot of
+[file editing problems](https://aider.chat/docs/troubleshooting/edit-errors.html).
````
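Since Ollama truncates silently rather than raising an error, a client-side sanity check is the only way to notice that a request will not fit. A minimal sketch of such a guard, assuming a rough 4-characters-per-token estimate (the helpers below are hypothetical, not part of aider):

```python
# Hedged sketch of guarding against the silent truncation described above:
# estimate the request size before sending and warn if it likely exceeds
# num_ctx, since the server itself will not raise an error.

NUM_CTX = 8192  # must match the num_ctx you configured


def estimate_tokens(messages: list[dict]) -> int:
    chars = sum(len(m["content"]) for m in messages)
    return chars // 4  # crude heuristic; real tokenizers vary by model


def check_fits(messages: list[dict], reply_budget: int = 1024) -> None:
    used = estimate_tokens(messages)
    if used + reply_budget > NUM_CTX:
        print(
            f"warning: ~{used} prompt tokens + {reply_budget} reply budget "
            f"exceeds num_ctx={NUM_CTX}; Ollama will silently drop the "
            "oldest messages instead of raising an error"
        )


check_fits([{"role": "user", "content": "x" * 50_000}])  # triggers the warning
```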