feat(coder): Handle LLM response interruption

Introduce a mechanism to cleanly interrupt the LLM's streaming response, which is particularly useful for GUI and other non-terminal interfaces.

- Add `AiderAbortException` to signal interruption from the IO layer.
- Implement `io.is_interrupted()` as a check within the streaming loop (`show_send_output_stream`).
- Update `send_message` to catch `AiderAbortException` and `KeyboardInterrupt`.
- Ensure partial response content is captured and marked as "(interrupted)" in the message history upon interruption.
- Skip post-processing steps (applying edits, running tests/lint, etc.) if the response was interrupted.
- Add tests for the new interruption handling logic.
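The polling-based interruption described above can be sketched roughly as follows. `AiderAbortException`, `InputOutput.is_interrupted()`, and `show_send_output_stream` are named in this commit; the internal `_interrupted` flag, the `interrupt()` setter, and the loop body are illustrative assumptions, not the actual implementation:

```python
class AiderAbortException(Exception):
    """Signals that the IO layer requested the streaming response stop."""


class InputOutput:
    def __init__(self):
        # Assumed internal flag; a GUI would set it from another thread.
        self._interrupted = False

    def interrupt(self):
        # Hypothetical setter, called from a GUI or signal handler.
        self._interrupted = True

    def is_interrupted(self):
        # Polled inside the streaming loop; False by default.
        return self._interrupted


def show_send_output_stream(io, chunks):
    """Accumulate streamed text, aborting cleanly if the IO layer asks."""
    text = ""
    for chunk in chunks:
        if io.is_interrupted():
            raise AiderAbortException()
        text += chunk
    return text
```

The key design point is that the check runs once per streamed chunk, so an interruption takes effect at the next chunk boundary rather than mid-token.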
Author: Derrick Hammer (aider), 2025-05-19 04:19:14 -04:00 (committed by Derrick Hammer)
Parent: 3caab85931
Commit: 40b75bc57f
4 changed files with 352 additions and 45 deletions


@@ -341,6 +341,11 @@ class TestInputOutput(unittest.TestCase):
         self.assertEqual(mock_input.call_count, 2)
         self.assertNotIn(("Do you want to proceed?", None), io.never_prompts)
 
+    def test_is_interrupted_default(self):
+        """Test that is_interrupted returns False by default."""
+        io = InputOutput()
+        self.assertFalse(io.is_interrupted())
+
 class TestInputOutputMultilineMode(unittest.TestCase):
     def setUp(self):
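On the `send_message` side, the commit description implies catching both `AiderAbortException` and `KeyboardInterrupt`, recording the partial content with an "(interrupted)" marker, and skipping post-processing. A minimal sketch of that flow, where the `Coder` attributes (`partial_response_content`, `done_messages`) and the chunk-iteration shape are assumptions for illustration:

```python
class AiderAbortException(Exception):
    """Signals that the IO layer requested the streaming response stop."""


class Coder:
    def __init__(self, io):
        self.io = io
        self.partial_response_content = ""
        self.done_messages = []

    def send_message(self, chunks):
        interrupted = False
        try:
            for chunk in chunks:
                if self.io.is_interrupted():
                    raise AiderAbortException()
                self.partial_response_content += chunk
        except (AiderAbortException, KeyboardInterrupt):
            interrupted = True

        content = self.partial_response_content
        if interrupted:
            # Partial content is kept and explicitly marked in history.
            content += " (interrupted)"
        self.done_messages.append({"role": "assistant", "content": content})

        if interrupted:
            return  # skip applying edits, running tests/lint, etc.
        # ... normal post-processing would continue here ...
```

Catching the exception outside the loop but inside `send_message` is what lets the partial response survive: the chunks accumulated before the abort are still appended to the message history.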