From 43e938367c4d6ec24c7d7c5efe2c038a82dd230e Mon Sep 17 00:00:00 2001
From: Paul Gauthier
Date: Fri, 14 Jun 2024 16:52:43 -0700
Subject: [PATCH] copy

---
 website/docs/troubleshooting/token-limits.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/website/docs/troubleshooting/token-limits.md b/website/docs/troubleshooting/token-limits.md
index ec4bb632e..dfd624549 100644
--- a/website/docs/troubleshooting/token-limits.md
+++ b/website/docs/troubleshooting/token-limits.md
@@ -5,9 +5,9 @@ nav_order: 25
 
 # Token limits
 
-Every LLM has limits on how many tokens it can process:
+Every LLM has limits on how many tokens it can process for each request:
 
-- The model's **context window** limits how many tokens of
+- The model's **context window** limits how many total tokens of
   *input and output* it can process.
 - Each model has limit on how many **output tokens** it can produce.