Commit graph

1913 commits

Author SHA1 Message Date
Paul Gauthier
cab7460f94 catch 404s from azure on models.list 2023-12-07 07:44:21 -08:00
Paul Gauthier
ad092152d0 Revert "max_apply_update_errors = 1 for quick benchmark"
This reverts commit fd71b6a4a5.
2023-12-07 07:43:06 -08:00
Paul Gauthier
fd71b6a4a5 max_apply_update_errors = 1 for quick benchmark 2023-12-06 14:35:57 -08:00
Paul Gauthier
ff96369ed6 cleanup 2023-12-06 14:09:32 -08:00
Paul Gauthier
922a56b194 Merge branch 'main' into openai-upgrade 2023-12-06 09:26:05 -08:00
Paul Gauthier
b85ed45504 add deployment_id to AzureOpenAI call per @itlackey 2023-12-06 09:22:04 -08:00
Paul Gauthier
57ab2cc9da Revert "implement deployment id"
This reverts commit b107db98fa.
2023-12-06 09:20:53 -08:00
Your Name
4692a689fe Improvements: flag -f (not -mf); unittest improvements. 2023-12-05 23:55:10 +01:00
Your Name
743179a4bd aider: Add --message-file flag to specify a file containing the message to send GPT, process reply, then exit (disables chat mode).
# Aider chat conversation:

USER: Am I correct that currently, if one wants to launch aider from the command line, the way to go is the `--message` flag, which allows providing the prompt as a command-line parameter?

ASSISTANT: Yes, you are correct. The `--message` or `-m` flag can be used to provide a single message to send to GPT, process the reply, and then exit the application. This flag disables the interactive chat mode and is suitable for scripting or when you want to issue a single command without entering the interactive chat session.

USER: Is there a way to run a prompt from the command line like `--message` or `-m`, but by providing the prompt as a file?

ASSISTANT: Currently, there is no direct command-line option in the `aider` tool to provide a prompt from a file. The `--message` or `-m` flag only accepts a string as an argument. If you want to provide a prompt from a file, you would need to read the content of the file and pass it as a string to the `--message` flag.

However, this functionality can be added to the `aider` tool by implementing a new command-line option that reads the content of a specified file and uses it as the prompt message.

Would you like me to implement this feature for you? If so, please provide the `aider/main.py` file, and I will suggest the necessary changes.

USER: How would you name such flag to match current convention and how would you describe it?

ASSISTANT: Following the current naming convention used in the `aider` tool, a suitable name for the flag that allows providing a prompt from a file could be `--message-file` or `-mf`. This flag would be described as:

"Specify a file containing the message to send to GPT, process the reply, then exit (disables chat mode)."

This description aligns with the existing `--message` flag while clearly indicating that the input is expected to be a file containing the message rather than a direct string input.

USER: could you please add such flag?

ASSISTANT: To add the `--message-file` flag to the `aider` tool, we need to modify the `aider/main.py` file. Please add the `aider/main.py` file to the chat so I can provide you with the necessary changes.

USER: I added these *read-write* files: /home/gw-t490/github/VariousForks/aider-patching-for-pull-requests/aider/main.py

ASSISTANT: Great, now that you've added the `aider/main.py` file, I will provide you with the necessary changes to implement the `--message-file` flag.

We will need to:

1. Add a new command-line argument `--message-file` to the argument parser.
2. Read the content of the file specified by `--message-file` and pass it to the `coder.run()` method.

Here are the *SEARCH/REPLACE* blocks for the changes:

aider/main.py
```python
<<<<<<< SEARCH
    other_group.add_argument(
        "--message",
        "--msg",
        "-m",
        metavar="COMMAND",
        help="Specify a single message to send GPT, process reply then exit (disables chat mode)",
    )
=======
    other_group.add_argument(
        "--message",
        "--msg",
        "-m",
        metavar="COMMAND",
        help="Specify a single message to send GPT, process reply then exit (disables chat mode)",
    )
    other_group.add_argument(
        "--message-file",
        "-mf",
        metavar="MESSAGE_FILE",
        help="Specify a file containing the message to send GPT, process reply, then exit (disables chat mode)",
    )
>>>>>>> REPLACE
```

aider/main.py
```python
<<<<<<< SEARCH
    if args.message:
        io.tool_output()
        coder.run(with_message=args.message)
    else:
        coder.run()
=======
    if args.message:
        io.tool_output()
        coder.run(with_message=args.message)
    elif args.message_file:
        try:
            with open(args.message_file, 'r', encoding='utf-8') as file:
                message_from_file = file.read()
            io.tool_output()
            coder.run(with_message=message_from_file)
        except FileNotFoundError:
            io.tool_error(f"Message file not found: {args.message_file}")
            return 1
        except IOError as e:
            io.tool_error(f"Error reading message file: {e}")
            return 1
    else:
        coder.run()
>>>>>>> REPLACE
```

These changes will add the `--message-file` flag to the argument parser and handle the case where a message file is provided. If the file is found and can be read, its content will be used as the message to send to GPT. If there is an error reading the file, an error message will be displayed, and the program will exit with a status code of 1.
2023-12-05 23:55:10 +01:00
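The precedence logic in the patch above (`--message` wins, otherwise `--message-file` is read from disk) can be exercised in isolation. A minimal, hypothetical sketch: the flag names come from the SEARCH/REPLACE blocks, while the `get_message` helper and everything around it is illustrative, not aider's actual code.

```python
import argparse

# Flag names mirror the patch above; the rest of this sketch is assumed.
parser = argparse.ArgumentParser()
parser.add_argument("--message", "--msg", "-m")
parser.add_argument("--message-file", "-mf", dest="message_file")


def get_message(args):
    """Return the one-shot message to send, or None to stay in chat mode."""
    if args.message:
        return args.message
    if args.message_file:
        # May raise FileNotFoundError / IOError; the real patch catches
        # these and reports via io.tool_error before exiting with status 1.
        with open(args.message_file, encoding="utf-8") as f:
            return f.read()
    return None
```

For example, `aider --message-file prompt.txt` would read the whole of `prompt.txt` and run it as a single non-interactive message, just as `--message` does with an inline string.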
Paul Gauthier
b107db98fa implement deployment id 2023-12-05 11:31:17 -08:00
Paul Gauthier
bf03f43b44 fixed benchmark 2023-12-05 11:21:11 -08:00
Paul Gauthier
5b21d5704a fixed test_sendchat 2023-12-05 11:08:14 -08:00
Paul Gauthier
2ed0c8fb66 fixed test_repo 2023-12-05 10:58:44 -08:00
Paul Gauthier
23e6c4ee55 fixed test_coder 2023-12-05 10:51:50 -08:00
Paul Gauthier
fb07b784f6 move to gpt-3.5-turbo-1106 2023-12-05 10:28:27 -08:00
Paul Gauthier
6ebc142377 roughed in openai 1.x 2023-12-05 07:37:05 -08:00
Your Name
015ffa13e1 Fixes #374 - logging add_to_input_history if args.message is used.
Fixes https://github.com/paul-gauthier/aider/issues/374
2023-11-30 13:19:54 +01:00
Joshua Vial
91bbb0a02c TODOs for making image code robust 2023-11-30 11:09:56 +13:00
Joshua Vial
dc6aeb638d gpt-4-vision-preview can receive 128k tokens 2023-11-29 23:26:02 +13:00
Joshua Vial
d8f33a8124 Auto switch to gpt-4-vision-preview if image files added to context 2023-11-29 22:48:45 +13:00
Paul Gauthier
ec95cd2d76 allow longer conversation histories for models with larger context windows 2023-11-27 10:55:36 -08:00
Paul Gauthier
e771adbff5 Refactor model_ids to be sorted in check_model_availability function. 2023-11-21 14:08:17 -08:00
Paul Gauthier
3d65217bed set version to 0.18.1-dev 2023-11-17 15:51:31 -08:00
Paul Gauthier
93f45a6d6b version bump to 0.18.0 2023-11-17 15:51:07 -08:00
Paul Gauthier
bf6cd8294e handle submodules (or other dirs) which are git tracked #336 2023-11-15 13:34:29 -08:00
Paul Gauthier
5d0e92296a number -> numbered 2023-11-15 13:06:58 -08:00
Paul Gauthier
d54f883698 do not include added files in the repo map #315 2023-11-15 13:06:22 -08:00
Paul Gauthier
95fe1be4c5 /add and /drop should work on untracked files in a repo (without committing them) 2023-11-15 13:06:22 -08:00
Paul Gauthier
4aba307ebd defend against no tokenizer 2023-11-14 13:09:50 -08:00
Paul Gauthier
d2f924d6d1 fixed /tokens 2023-11-14 13:03:25 -08:00
Paul Gauthier
9cbe114a74 positivity 2023-11-14 11:34:36 -08:00
Paul Gauthier
253354da98 stronger elide prompt 2023-11-14 11:16:39 -08:00
Paul Gauthier
60ca8f8c0d properly render the main sys prompt 2023-11-14 11:07:35 -08:00
Paul Gauthier
bbc174a8fb just use the Rules as the reminder 2023-11-14 11:01:29 -08:00
Paul Gauthier
e0fea58276 Put back 2nd reminder 2023-11-14 10:46:16 -08:00
Paul Gauthier
26a340e21f move the single format reminder into the top sys prompt 2023-11-14 10:38:39 -08:00
Paul Gauthier
df236e4dec stronger prompt to include comments, docstrings 2023-11-14 10:30:02 -08:00
Paul Gauthier
3986da7210 just use a single S/R format system prompt 2023-11-14 10:25:08 -08:00
Paul Gauthier
c24a4a4392 refactor into Model.token_count() 2023-11-14 10:11:13 -08:00
Paul Gauthier
90f57664a6 aider: Refactor cmd_tokens to use main_model.token_count() instead of calling tokenizer directly.
# Aider chat conversation:

USER: in `cmd_tokens` make use of main_model.token_count() rather than calling tokenizer directly

ASSISTANT: To make this change, we need to modify `aider/commands.py`. Specifically, we will update the `cmd_tokens` method to use `self.coder.main_model.token_count()` instead of calling `self.tokenizer.encode()` directly.

Here is the *SEARCH/REPLACE* block for the change:

aider/commands.py
```python
<<<<<<< SEARCH
        tokens = len(self.tokenizer.encode(json.dumps(msgs)))
=======
        tokens = self.coder.main_model.token_count(json.dumps(msgs))
>>>>>>> REPLACE
```

aider/commands.py
```python
<<<<<<< SEARCH
            tokens = len(self.tokenizer.encode(msgs))
=======
            tokens = self.coder.main_model.token_count(msgs)
>>>>>>> REPLACE
```

aider/commands.py
```python
<<<<<<< SEARCH
                tokens = len(self.tokenizer.encode(repo_content))
=======
                tokens = self.coder.main_model.token_count(repo_content)
>>>>>>> REPLACE
```

aider/commands.py
```python
<<<<<<< SEARCH
            tokens = len(self.tokenizer.encode(content))
=======
            tokens = self.coder.main_model.token_count(content)
>>>>>>> REPLACE
```

These changes will ensure that the `cmd_tokens` method uses the `token_count` method from the `main_model` to calculate the number of tokens, which is a more abstracted and potentially model-specific way of counting tokens.
2023-11-14 10:04:00 -08:00
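The refactor above (see also the later "defend against no tokenizer" commit) reads as a thin wrapper that centralizes tokenizer access behind `Model.token_count()`. A hypothetical sketch, with the `Model` and `token_count` names taken from the diff and the fallback behavior assumed:

```python
import json


class Model:
    """Illustrative sketch of a token_count() wrapper; not aider's actual class."""

    def __init__(self, tokenizer=None):
        # tokenizer is anything with an .encode(str) -> list method
        # (e.g. a tiktoken encoding); None means no tokenizer is available.
        self.tokenizer = tokenizer

    def token_count(self, text):
        if not isinstance(text, str):
            # callers pass dicts/lists of chat messages as well as plain strings
            text = json.dumps(text)
        if self.tokenizer is None:
            # crude ~4-chars-per-token estimate when no tokenizer is available
            return len(text) // 4
        return len(self.tokenizer.encode(text))
```

Centralizing this in one method means callers like `cmd_tokens` no longer need to know which tokenizer (if any) the active model uses.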
Paul Gauthier
c63885fe8c More rules, instructions 2023-11-13 15:28:01 -08:00
Paul Gauthier
8a3d4bf14c added second example showing new file 2023-11-13 15:07:15 -08:00
Paul Gauthier
50f457d753 prompt with clear example using multiple S/R blocks on the same file 2023-11-13 14:59:19 -08:00
Paul Gauthier
0253588616 set version to 0.17.1-dev 2023-11-06 20:27:37 -08:00
Paul Gauthier
393878120e version bump to 0.17.0 2023-11-06 20:27:12 -08:00
Paul Gauthier
5098be6172 Add context windows and pricing for 1106 models 2023-11-06 20:18:40 -08:00
Paul Gauthier
af71638b06 less simple, but docker image builds 2023-11-03 14:53:22 -07:00
Paul Gauthier
208fc7ae78 simpler 2023-11-03 14:45:49 -07:00
Paul Gauthier
87ab5495a7 Simplify scripting aider 2023-11-03 14:29:45 -07:00
Paul Gauthier
1fe48fd161 Updated /help messages 2023-11-03 09:23:04 -07:00