aider/aider
Paul Gauthier df91e9ab1c aider: Implemented backoff library to handle rate limit issues with exponential backoff and a maximum of 5 retries.
# Context:
USER: Hello there,

thanks for making this open source!
I have a request: since I'm still on the free credits of the OpenAI API, I run into timeouts.

fetch_starred_repos.py
Add these files to the chat? y
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
...
Is it possible for you to implement this library to avoid that?
https://pypi.org/project/backoff/

Thanks in advance.
Greetings from Germany

ASSISTANT: Sure, I can help you implement the `backoff` library to handle the rate limit issue. First, please make the `aider/coder.py` file *read-write* so I can propose changes to it.
USER: I added these *read-write* files: aider/coder.py
ASSISTANT: Great! First, you need to install the `backoff` library. You can do this by running:

```
pip install backoff
```

Next, I will show you the changes needed in the `aider/coder.py` file to implement the `backoff` library for handling rate limit issues.

1. Import the `backoff` library at the beginning of the file.
2. Replace the `send_with_retries` method with a new method that uses the `backoff` library.

Here are the changes:

```python
aider/coder.py
<<<<<<< ORIGINAL
import os
import sys
import time
import traceback
=======
import os
import sys
import traceback
import backoff
>>>>>>> UPDATED
```

```python
aider/coder.py
<<<<<<< ORIGINAL
    def send_with_retries(self, model, messages):
        while True:
            try:
                return openai.ChatCompletion.create(
                    model=model,
                    messages=messages,
                    temperature=0,
                    stream=True,
                )
            except RateLimitError as err:
                self.io.tool_error(f"RateLimitError: {err}")
            except requests.exceptions.ConnectionError as err:
                self.io.tool_error(f"ConnectionError: {err}")

            retry_after = 1
            self.io.tool_error(f"Retry in {retry_after} seconds.")
            time.sleep(retry_after)
=======
    @backoff.on_exception(
        backoff.expo,
        (RateLimitError, requests.exceptions.ConnectionError),
        max_tries=5,
        # The decorator is evaluated at class-definition time, where `self`
        # does not exist, so log via print rather than self.io.tool_error.
        on_backoff=lambda details: print(f"Retry in {details['wait']:.1f} seconds."),
    )
    def send_with_retries(self, model, messages):
        return openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=0,
            stream=True,
        )
>>>>>>> UPDATED
```

These changes will implement the `backoff` library to handle rate limit issues with exponential backoff and a maximum of 5 retries. If you want to adjust the number of retries, you can change the `max_tries` parameter in the `backoff.on_exception` decorator.
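To make the retry semantics concrete, here is a minimal hand-rolled sketch of what `backoff.on_exception(backoff.expo, ..., max_tries=5)` does. This is an illustrative stand-in, not the `backoff` library's actual implementation; the decorator name, the `factor` parameter, and the `TransientError` class are invented for the example:

```python
import functools
import time


def on_exception_expo(exceptions, max_tries=5, base=2, factor=1.0, on_backoff=None):
    """Retry the wrapped function when it raises one of `exceptions`,
    waiting factor * base**(tries - 1) seconds between attempts.
    Illustrative stand-in for backoff.on_exception(backoff.expo, ...)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for tries in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if tries == max_tries:
                        raise  # out of retries: re-raise the last error
                    wait = factor * base ** (tries - 1)
                    if on_backoff:
                        on_backoff({"tries": tries, "wait": wait})
                    time.sleep(wait)
        return wrapper
    return decorator


class TransientError(Exception):
    pass


calls = []


@on_exception_expo(TransientError, max_tries=5, factor=0.001)
def flaky():
    # Fails twice, then succeeds on the third attempt.
    calls.append(1)
    if len(calls) < 3:
        raise TransientError("rate limited")
    return "ok"


print(flaky(), len(calls))  # ok 3
```

The waits grow as 1x, 2x, 4x, ... of `factor`, and after `max_tries` attempts the last exception propagates to the caller, which is the same contract the decorated `send_with_retries` relies on.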
2023-06-04 05:53:06 -07:00