From cd6a278684f8e22254be0a35b0919c14bf1029e3 Mon Sep 17 00:00:00 2001
From: Paul Gauthier
Date: Fri, 30 Jun 2023 13:45:40 -0700
Subject: [PATCH] copy

---
 docs/benchmarks.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/benchmarks.md b/docs/benchmarks.md
index 2d4fd235f..635e9e102 100644
--- a/docs/benchmarks.md
+++ b/docs/benchmarks.md
@@ -214,7 +214,7 @@ It feels like it is getting confused with training done for ChatGPT plugins?
 The OpenAI chat APIs are not deterministic, even at `temperature=0`.
 The same identical request will produce multiple distinct responses,
 usually on the order of 3-6 different variations. This feels
-like they are load balancing across a number of different
+like they are load balancing across a number of slightly different
 instances of the model.
 For some exercises, some of this variable responses pass the unit tests and other