mirror of https://github.com/Aider-AI/aider.git
synced 2025-06-15 00:54:59 +00:00

moved website/ -> aider/website/

This commit is contained in:
parent eb80b32915
commit 22a494bb59

155 changed files with 9 additions and 9 deletions
46 aider/website/docs/troubleshooting/edit-errors.md (Normal file)

@@ -0,0 +1,46 @@
---
parent: Troubleshooting
nav_order: 10
---

# File editing problems

Sometimes the LLM will reply with some code changes
that don't get applied to your local files.
In these cases, aider might say something like "Failed to apply edit to *filename*"
or show other error messages.

This usually happens because the LLM is disobeying the system prompts
and trying to make edits in a format that aider doesn't expect.
Aider makes every effort to get the LLM
to conform, and works hard to deal with
LLM edits that are "almost" correctly formatted.

But sometimes the LLM just won't cooperate.
In these cases, here are some things you might try.

## Use a capable model

If possible, try using GPT-4o, Claude 3.5 Sonnet or Claude 3 Opus,
as they are the strongest and most capable models.

Weaker models
are more prone to
disobeying the system prompt instructions.
Most local models are just barely capable of working with aider,
so editing errors are probably unavoidable.

## Reduce distractions

Many LLMs now have very large context windows,
but filling them with irrelevant code or conversation
can confuse the model.

- Don't add too many files to the chat; *just* add the files you think need to be edited.
Aider also sends the LLM a [map of your entire git repo](https://aider.chat/docs/repomap.html), so other relevant code will be included automatically.
- Use `/drop` to remove files from the chat session which aren't needed for the task at hand. This will reduce distractions and may help the LLM produce properly formatted edits.
- Use `/clear` to remove the conversation history, again to help the LLM focus.

## More help

{% include help.md %}
8 aider/website/docs/troubleshooting/support.md (Normal file)

@@ -0,0 +1,8 @@
---
parent: Troubleshooting
nav_order: 30
---

# Getting help

{% include help.md %}
86 aider/website/docs/troubleshooting/token-limits.md (Normal file)

@@ -0,0 +1,86 @@
---
parent: Troubleshooting
nav_order: 25
---

# Token limits

Every LLM has limits on how many tokens it can process for each request:

- The model's **context window** limits how many total tokens of
*input and output* it can process.
- Each model has a limit on how many **output tokens** it can
produce.

Aider will report an error if a model responds indicating that
it has exceeded a token limit.
The error will include suggested actions to try and
avoid hitting token limits.
Here's an example error:

```
Model gpt-3.5-turbo has hit a token limit!

Input tokens: 768 of 16385
Output tokens: 4096 of 4096 -- exceeded output limit!
Total tokens: 4864 of 16385

To reduce output tokens:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Try using a stronger model like gpt-4o or opus that can return diffs.

For more info: https://aider.chat/docs/token-limits.html
```
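The two checks behind an error like this can be sketched in a few lines of Python. This is an illustrative model only, not aider's actual code; the function name is made up, and the numbers come from the example error above:

```python
def check_token_limits(input_tokens: int, output_tokens: int,
                       context_window: int, max_output: int) -> list[str]:
    """Return which limits a hypothetical request has exceeded."""
    errors = []
    # Output tokens are capped separately from the context window;
    # producing exactly max_output tokens counts as hitting the limit.
    if output_tokens >= max_output:
        errors.append("exceeded output limit")
    # Input and output together must also fit in the context window.
    if input_tokens + output_tokens > context_window:
        errors.append("exceeded context window")
    return errors

# The numbers from the example error above (gpt-3.5-turbo):
print(check_token_limits(768, 4096, 16385, 4096))
# -> ['exceeded output limit']
```

Note that in the example only the output limit is hit: the 4864 total tokens still fit comfortably inside the 16385-token context window.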

## Input tokens & context window size

The most common problem is trying to send too much data to a
model,
overflowing its context window.
Technically you can exhaust the context window if the input is
too large or if the input plus output are too large.

Strong models like GPT-4o and Opus have quite
large context windows, so this sort of error is
typically only an issue when working with weaker models.

The easiest solution is to try and reduce the input tokens
by removing files from the chat.
It's best to only add the files that aider will need to *edit*
to complete your request.

- Use `/tokens` to see token usage.
- Use `/drop` to remove unneeded files from the chat session.
- Use `/clear` to clear the chat history.
- Break your code into smaller source files.

## Output token limits

Most models have quite small output limits, often as low
as 4k tokens.
If you ask aider to make a large change that affects a lot
of code, the LLM may hit output token limits
as it tries to send back all the changes.

To avoid hitting output token limits:

- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Use a strong model like gpt-4o, sonnet or opus that can return diffs.

## Other causes

Sometimes token limit errors are caused by
non-compliant API proxy servers
or bugs in the API server you are using to host a local model.
Aider has been well tested when directly connecting to
major
[LLM provider cloud APIs](https://aider.chat/docs/llms.html).
For serving local models,
[Ollama](https://aider.chat/docs/llms/ollama.html) is known to work well with aider.

Try using aider without an API proxy server,
or directly with one of the recommended cloud APIs,
and see if your token limit problems resolve.
12 aider/website/docs/troubleshooting/warnings.md (Normal file)

@@ -0,0 +1,12 @@
---
parent: Troubleshooting
nav_order: 20
---

# Model warnings

{% include model-warnings.md %}

## More help

{% include help.md %}