diff --git a/benchmark/README.md b/benchmark/README.md
index 3f3f2db85..d4f580c03 100644
--- a/benchmark/README.md
+++ b/benchmark/README.md
@@ -82,6 +82,7 @@ You can run `./benchmark/benchmark.py --help` for a list of all the arguments, b
 - `--threads` specifies how many exercises to benchmark in parallel. Start with a single thread if you are working out the kinks on your benchmarking setup or working with a new model, etc. Once you are getting reliable results, you can speed up the process by running with more threads. 10 works well against the OpenAI APIs.
 - `--num-tests` specifies how many of the tests to run before stopping. This is another way to start gently as you debug your benchmarking setup.
 - `--keywords` filters the tests to run to only the ones whose name match the supplied argument (similar to `pytest -k xxxx`).
+- `--read-model-settings=` specifies a file of additional settings in aider's YAML config format; see https://aider.chat/docs/config/aider_conf.html

 ### Benchmark report
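The new `--read-model-settings=` flag above takes a file in aider's YAML config format, documented at the linked page. A minimal sketch of such a file (the specific keys and values below are illustrative, not prescribed by this patch):

```yaml
# Illustrative aider YAML config fragment.
# Full list of supported options: https://aider.chat/docs/config/aider_conf.html
model: gpt-4o        # which model to benchmark against
auto-commits: false  # disable automatic git commits
map-tokens: 1024     # token budget for the repo map
```

Option names generally mirror aider's command-line flags with the leading `--` dropped.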