diff --git a/aider/website/docs/llms.md b/aider/website/docs/llms.md
index 939bbef87..1e30795f8 100644
--- a/aider/website/docs/llms.md
+++ b/aider/website/docs/llms.md
@@ -19,7 +19,7 @@ Aider works best with these models, which are skilled at editing code:
 - [GPT-4o](/docs/llms/openai.html)
 - [Claude 3.5 Sonnet](/docs/llms/anthropic.html)
 - [Claude 3 Opus](/docs/llms/anthropic.html)
-- [DeepSeek Coder V2](/docs/llms/deepseek.html)
+- [DeepSeek V3](/docs/llms/deepseek.html)
 
 ## Free models
 
diff --git a/aider/website/docs/usage/caching.md b/aider/website/docs/usage/caching.md
index f79bc6d9c..3173a3e83 100644
--- a/aider/website/docs/usage/caching.md
+++ b/aider/website/docs/usage/caching.md
@@ -4,14 +4,13 @@ highlight_image: /assets/prompt-caching.jpg
 parent: Usage
 nav_order: 750
 description: Aider supports prompt caching for cost savings and faster coding.
-
 ---
 
 # Prompt caching
 
 Aider supports prompt caching for cost savings and faster coding.
 Currently Anthropic provides caching for Sonnet and Haiku,
-and DeepSeek provides caching for Chat.
+and DeepSeek provides caching for Chat.
 
 Aider organizes the chat history to try and cache:
 
@@ -48,4 +47,3 @@ every 5 minutes to keep the cache warm.
 Aider will ping up to `N` times over a period
 of `N*5` minutes
 after each message you send.
-
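
The keepalive behavior described in the updated caching page (up to `N` pings over `N*5` minutes after each message) is configured from the command line. A minimal sketch of such an invocation, assuming the `--cache-prompts` and `--cache-keepalive-pings` options documented on the full caching page, might look like:

```bash
# Enable prompt caching and send up to 3 keepalive pings after each message,
# keeping the provider's cache warm for roughly 15 minutes.
# Flag names are taken from the aider caching docs; check your installed
# version's --help output to confirm them.
aider --cache-prompts --cache-keepalive-pings 3
```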