From 587186d96cf3ede6b3b610cc798fb7c7f57a66a3 Mon Sep 17 00:00:00 2001
From: "AJ (@techfren)"
Date: Mon, 31 Mar 2025 17:05:53 -0700
Subject: [PATCH] Update benchmark README.md to specify how to config other settings

---
 benchmark/README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/benchmark/README.md b/benchmark/README.md
index 3f3f2db85..d4f580c03 100644
--- a/benchmark/README.md
+++ b/benchmark/README.md
@@ -82,6 +82,7 @@ You can run `./benchmark/benchmark.py --help` for a list of all the arguments, b
 - `--threads` specifies how many exercises to benchmark in parallel. Start with a single thread if you are working out the kinks on your benchmarking setup or working with a new model, etc. Once you are getting reliable results, you can speed up the process by running with more threads. 10 works well against the OpenAI APIs.
 - `--num-tests` specifies how many of the tests to run before stopping. This is another way to start gently as you debug your benchmarking setup.
 - `--keywords` filters the tests to run to only the ones whose name match the supplied argument (similar to `pytest -k xxxx`).
+- `--read-model-settings=` specifies any other settings in the aider YAML config format; see https://aider.chat/docs/config/aider_conf.html

 ### Benchmark report
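
For context (not part of the patch), here is a sketch of how the flags documented in this hunk might be combined on the command line. The run name `example-run`, the keyword `hangman`, and the path `model-settings.yml` are placeholder values, and the positional run-name argument is assumed from the surrounding README's usage; the settings file is assumed to follow the aider YAML config format linked above.

```bash
# Hypothetical benchmark invocation combining the flags documented in this patch;
# all values shown are placeholders, not recommendations.
./benchmark/benchmark.py example-run \
  --threads 1 \
  --num-tests 10 \
  --keywords hangman \
  --read-model-settings=model-settings.yml
```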