From ff3cfa816b77e13ecd0e823442818db026598c3d Mon Sep 17 00:00:00 2001
From: linsun12
Date: Thu, 8 May 2025 22:35:09 +0000
Subject: [PATCH] add example to OpenAI compatible APIs document

---
 aider/website/docs/llms/openai-compat.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/aider/website/docs/llms/openai-compat.md b/aider/website/docs/llms/openai-compat.md
index ea45a574f..8da1b5f6f 100644
--- a/aider/website/docs/llms/openai-compat.md
+++ b/aider/website/docs/llms/openai-compat.md
@@ -37,3 +37,25 @@ aider --model openai/
 See the [model warnings](warnings.html)
 section for information on warnings which will occur
 when working with models that aider is not familiar with.
+
+Example:
+Here's a simplified example demonstrating how to connect to an LLM server deployed with SGLang via an OpenAI-compatible API endpoint.
+
+Step 1: Launch the SGLang server
+In this example, we use the [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model. For demonstration purposes, the SGLang server is launched locally, either after installing SGLang or inside an [SGLang docker container](https://hub.docker.com/r/lmsysorg/sglang/tags). Before starting the server, ensure that the model checkpoints are available in your Hugging Face cache directory, or that you have authenticated with a Hugging Face token. Then start the server with:
+```bash
+python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
+```
+
+
+Step 2: Modify the API endpoint and run aider
+```bash
+# Mac/Linux:
+export OPENAI_API_BASE=http://localhost:30000/v1
+export OPENAI_API_KEY=None
+
+# Change directory into your codebase
+cd /to/your/project
+
+aider --model openai/meta-llama/Llama-3.1-8B-Instruct
+```
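
A note on the model name used in Step 2 of the patch: aider routes requests through its OpenAI-compatible provider when the model name carries the `openai/` prefix, so the value passed to `--model` is simply the served model path with `openai/` prepended. A minimal sketch of that convention (the `MODEL_PATH` variable is illustrative, not part of the patch):

```bash
# The model path as passed to sglang.launch_server in Step 1
MODEL_PATH="meta-llama/Llama-3.1-8B-Instruct"

# Prefixing with "openai/" tells aider to talk to the server at
# $OPENAI_API_BASE instead of the official OpenAI API
AIDER_MODEL="openai/${MODEL_PATH}"

echo "aider --model ${AIDER_MODEL}"
```

The same naming convention applies to any other model served this way; only `MODEL_PATH` changes.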