This commit is contained in:
parent c24c9d33a5
commit 1af38e943d
1 changed file with 31 additions and 19 deletions
docs/faq.md (50 lines changed)
@@ -109,14 +109,41 @@ In these cases, here are some things you might try:

Aider does not officially support use with LLMs other than OpenAI's gpt-3.5-turbo and gpt-4
and their variants.
This is because while it's "easy" to connect aider to a new LLM, it's "hard"
to actually teach new LLMs to *edit* code.

It seems to require model-specific tuning to get prompts and
editing formats working well with a new model. For example, GPT-3.5 and GPT-4 use very
different prompts and editing formats in aider right now.
GPT-3.5 is just barely able to understand how to modify existing source code files,
and GPT-4 is quite good at it.
Getting them working that well was a significant undertaking, involving
[specific code editing prompts and backends for each model and extensive benchmarking](https://aider.chat/docs/benchmarks.html).
Adopting new LLMs will probably require a similar effort to tailor the
prompts and editing backends.

That said, aider does provide some features to experiment with other models.
Numerous users have already experimented with a wide range of other models.
So far, no one has reported much success in working with them the way aider
can work with GPT-3.5 and GPT-4.

Once we see signs that a *particular* model is capable of code editing,
it would be reasonable for aider to attempt to officially support such a model.
Until then, aider will simply maintain experimental support for using alternative models
as described below.

### OpenAI API compatible LLMs

If you can make the model accessible via an OpenAI compatible API,
you can use `--openai-api-base` to connect to a different API endpoint.
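
For example, with a compatible server already listening locally, you could point aider at it like this (the URL is a placeholder for whatever base URL your server actually exposes):

```
# hypothetical endpoint -- substitute the base URL of your OpenAI-compatible server
aider --openai-api-base http://localhost:8080/v1
```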

Here are some
[GitHub issues which may contain relevant information](https://github.com/paul-gauthier/aider/issues?q=is%3Aissue+%22openai-api-base%22+).

### Local LLMs

[LocalAI](https://github.com/go-skynet/LocalAI)
and
[SimpleAI](https://github.com/lhenault/simpleAI)
look like relevant tools to serve local models via a compatible API:
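
As a rough sketch of the workflow (the host, port, and `/v1` path are assumptions about your local server's setup, not something aider dictates):

```
# 1. Start the local server (LocalAI, SimpleAI, etc.) per that project's docs so it
#    exposes an OpenAI-compatible API -- assumed here to be http://localhost:8080/v1.
# 2. Point aider at that endpoint instead of api.openai.com.
aider --openai-api-base http://localhost:8080/v1
```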

### Azure

@@ -149,21 +176,6 @@ See the
[official Azure documentation on using OpenAI models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/chatgpt-quickstart?tabs=command-line&pivots=programming-language-python)
for more information on how to populate the above configuration values.
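
For context, an Azure invocation of this era combined values along these lines; every value below is a placeholder for a hypothetical Azure OpenAI resource, and the exact switch names should be verified against `aider --help` for your version:

```
# all values are placeholders for a hypothetical Azure OpenAI resource
aider \
    --openai-api-type azure \
    --openai-api-base https://example-endpoint.openai.azure.com \
    --openai-api-version 2023-05-15 \
    --openai-api-deployment-id my-deployment-name
```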

### Other LLMs

If you can make the model accessible via an OpenAI compatible API,
you can use `--openai-api-base` to connect to a different API endpoint.

Here are some
[GitHub issues which may contain relevant information](https://github.com/paul-gauthier/aider/issues?q=is%3Aissue+%22openai-api-base%22+).

### Local LLMs

[LocalAI](https://github.com/go-skynet/LocalAI)
and
[SimpleAI](https://github.com/lhenault/simpleAI)
look like relevant tools to serve local models via a compatible API: