Mirror of https://github.com/Aider-AI/aider.git, synced 2025-05-31 01:35:00 +00:00
avoid stomping extra_params with max_tokens=1 in cache warming, which causes a loop of 1-token infinite-output responses with prefill models #1842 #1841
parent 8fb0362b47
commit f2e1e17741
1 changed file with 1 addition and 1 deletion
@@ -1073,7 +1073,7 @@ class Coder:
                 self.warming_pings_left -= 1
                 self.next_cache_warm = time.time() + delay

-                kwargs = self.main_model.extra_params or dict()
+                kwargs = dict(self.main_model.extra_params) or dict()
                 kwargs["max_tokens"] = 1

                 try:
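The one-line change works because the old expression aliased the model's shared extra_params dict, so the cache-warming ping's max_tokens=1 leaked into every later request; copying the dict first confines the override to the ping. A minimal sketch of the difference, using a hypothetical FakeModel stand-in rather than aider's real Model class:

# Sketch of the aliasing pitfall; FakeModel is a stand-in for aider's main_model.
class FakeModel:
    def __init__(self):
        # Shared parameters merged into every completion request.
        self.extra_params = {"temperature": 0}

main_model = FakeModel()

# Before the fix: kwargs *is* extra_params, so the mutation stomps the shared dict.
kwargs = main_model.extra_params or dict()
kwargs["max_tokens"] = 1
assert main_model.extra_params["max_tokens"] == 1  # later requests now capped at 1 token

# After the fix: mutate a copy, leaving the shared dict untouched.
main_model.extra_params = {"temperature": 0}
kwargs = dict(main_model.extra_params) or dict()
kwargs["max_tokens"] = 1
assert "max_tokens" not in main_model.extra_params

The copy is what matters here; the trailing "or dict()" only kicks in for an empty dict, and dict(...) assumes extra_params is not None.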