Mirror of https://github.com/Aider-AI/aider.git
(synced 2025-06-01 10:14:59 +00:00)

## Compare commits: `v0.82.3.de...main` (497 commits)
[Commit table: 497 commits, `0bb0f169d2` … `c980fd0e77`; the author and date columns were not captured in this snapshot, only the bare SHAs.]
136 changed files with 6925 additions and 2082 deletions
`.github/workflows/check_pypi_version.yml` (vendored, 2 changed lines)

```diff
@@ -15,7 +15,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.10", "3.11", "3.12"]
 
     steps:
       - name: Set up Python ${{ matrix.python-version }}
```
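The matrix change above drops Python 3.9 from CI, matching the release notes' "Dropped support for Python 3.9." A runtime guard mirroring that policy could look like this (an illustrative sketch; `MIN_SUPPORTED` and `check_python` are hypothetical names, not from the aider codebase):

```python
import sys

# Minimum version still exercised by the CI matrix after this change.
MIN_SUPPORTED = (3, 10)


def check_python(version=None):
    """Return True if the given (major, minor) tuple meets the minimum.

    Defaults to the running interpreter's version.
    """
    version = version or sys.version_info[:2]
    return tuple(version) >= MIN_SUPPORTED
```

Such a guard fails fast on 3.9 instead of surfacing obscure syntax or dependency errors later.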
`.github/workflows/pre-commit.yml` (vendored, new file, 48 lines)

```diff
@@ -0,0 +1,48 @@
+---
+name: pre-commit
+on:
+  pull_request:
+  push:
+  workflow_dispatch:
+jobs:
+  pre-commit:
+    runs-on: ubuntu-latest
+    env:
+      RAW_LOG: pre-commit.log
+      CS_XML: pre-commit.xml
+    steps:
+      - run: sudo apt-get update && sudo apt-get install cppcheck uncrustify
+        if: false
+      - uses: actions/checkout@v4
+      - run: python -m pip install pre-commit
+      - uses: actions/cache/restore@v4
+        with:
+          path: ~/.cache/pre-commit/
+          key: pre-commit-4|${{ env.pythonLocation }}|${{ hashFiles('.pre-commit-config.yaml') }}
+      - name: Run pre-commit hooks
+        env:
+          SKIP: no-commit-to-branch
+        run: |
+          set -o pipefail
+          pre-commit gc
+          pre-commit run --show-diff-on-failure --color=always --all-files | tee ${RAW_LOG}
+      - name: Convert Raw Log to Checkstyle format (launch action)
+        uses: mdeweerd/logToCheckStyle@v2025.1.1
+        if: ${{ failure() }}
+        with:
+          in: ${{ env.RAW_LOG }}
+          # out: ${{ env.CS_XML }}
+      - uses: actions/cache/save@v4
+        if: ${{ ! cancelled() }}
+        with:
+          path: ~/.cache/pre-commit/
+          key: pre-commit-4|${{ env.pythonLocation }}|${{ hashFiles('.pre-commit-config.yaml') }}
+      - name: Provide log as artifact
+        uses: actions/upload-artifact@v4
+        if: ${{ ! cancelled() }}
+        with:
+          name: precommit-logs
+          path: |
+            ${{ env.RAW_LOG }}
+            ${{ env.CS_XML }}
+          retention-days: 2
```

(Indentation is reconstructed; the original extraction flattened it.)
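The cache key in the workflow above combines a literal prefix, the toolchain location, and `hashFiles('.pre-commit-config.yaml')`, so the pre-commit cache is invalidated whenever the hook configuration changes. Conceptually the key derivation works like this (a rough Python sketch of the idea; `cache_key` is a hypothetical helper, not GitHub's implementation):

```python
import hashlib
from pathlib import Path


def cache_key(prefix, python_location, *files):
    """Build a cache key of the form prefix|pythonLocation|<content hash>.

    Hashing the *contents* (not the mtime or path) of the config files means
    the key only changes when the configuration actually changes.
    """
    h = hashlib.sha256()
    for f in files:
        h.update(Path(f).read_bytes())
    return f"{prefix}|{python_location}|{h.hexdigest()}"
```

Identical file contents always yield the same key, which is what makes cache restore hits deterministic across runs.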
`.github/workflows/ubuntu-tests.yml` (vendored, 2 changed lines)

```diff
@@ -25,7 +25,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.10", "3.11", "3.12"]
 
     steps:
       - name: Check out repository
```
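A `strategy.matrix` like the one above fans out into one job per combination of values, i.e. a cartesian product over the matrix keys. A minimal sketch of that expansion (`expand_matrix` is a hypothetical helper, not part of any CI runner):

```python
from itertools import product


def expand_matrix(matrix):
    """Expand a CI-style matrix dict into the list of job combinations.

    {"python-version": ["3.10", "3.11"]} -> one job dict per version.
    """
    keys = list(matrix)
    return [dict(zip(keys, combo)) for combo in product(*(matrix[k] for k in keys))]
```

So trimming `"3.9"` from the list removes exactly one job from each workflow that uses this matrix.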
`.github/workflows/windows-tests.yml` (vendored, 2 changed lines)

```diff
@@ -25,7 +25,7 @@ jobs:
     runs-on: windows-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.10", "3.11", "3.12"]
 
     steps:
       - name: Check out repository
```

(The file header for the following hunk was lost in extraction; it belongs to another Windows workflow.)

```diff
@@ -15,7 +15,7 @@ jobs:
     runs-on: windows-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.10", "3.11", "3.12"]
     defaults:
       run:
         shell: pwsh # Use PowerShell for all run steps
```
`HISTORY.md` (91 changed lines)

```diff
@@ -1,6 +1,89 @@
 # Release history
 
+### Aider v0.84.0
+
+- Added support for new Claude models including the Sonnet 4 and Opus 4 series (e.g., `claude-sonnet-4-20250514`, `claude-opus-4-20250514`) across various providers. The default `sonnet` and `opus` aliases were updated to these newer versions.
+- Added support for the `vertex_ai/gemini-2.5-flash-preview-05-20` model.
+- Fixed OpenRouter token cost calculation for improved accuracy.
+- Updated default OpenRouter models during onboarding to `deepseek/deepseek-r1:free` for the free tier and `anthropic/claude-sonnet-4` for paid tiers.
+- Automatically refresh GitHub Copilot tokens when used as OpenAI API keys, by Lih Chen.
+- Aider wrote 79% of the code in this release.
+
+### Aider v0.83.2
+
+- Bumped configargparse to 1.7.1 as 1.7 was pulled.
+- Added shell tab completion for file path arguments (by saviour) and for `--edit-format`/`--editor-edit-format` options.
+- Improved OpenRouter model metadata handling by introducing a local cache, increasing reliability and performance.
+- The `/settings` command now displays detailed metadata for active main, editor, and weak models.
+- Fixed an issue where files explicitly added via the command line were not correctly ignored if listed in `.gitignore`.
+- Improved automatic commit messages by providing more context during their generation, by wangboxue.
+
+### Aider v0.83.1
+
+- Improved user language detection by correctly normalizing hyphenated language codes (e.g., `en-US` to `en`) and enhancing the validation of locale results.
+- Prevented Aider from instructing the LLM to reply in 'C' or 'POSIX' when these are detected as the system locale.
+- Displayed a spinner with the model name when generating commit messages.
+
+### Aider v0.83.0
+
+- Added support for `gemini-2.5-pro-preview-05-06` models.
+- Added support for `qwen3-235b` models.
+- Added repo-map support for OCaml and OCaml interface files, by Andrey Popp.
+- Added a spinner animation while waiting for the LLM to start streaming its response.
+- Updated the spinner animation to a Knight Rider style.
+- Introduced `--attribute-co-authored-by` option to add co-author trailer to commit messages, by Andrew Grigorev.
+- Updated Gemini model aliases (e.g., `gemini`, `gemini-2.5-pro`) to point to the `05-06` preview versions.
+- Marked Gemini 2.5 Pro preview models as `overeager` by default.
+- Commit message prompt specifies the user's language.
+- Updated the default weak model for Gemini 2.5 Pro models to `gemini/gemini-2.5-flash-preview-04-17`.
+- Corrected `gemini-2.5-pro-exp-03-25` model settings to reflect its lack of support for `thinking_budget`.
+- Ensured model-specific system prompt prefixes are placed on a new line before the main system prompt.
+- Added tracking of total tokens sent and received, now included in benchmark statistics.
+- Automatically fetch model parameters (context window, pricing) for OpenRouter models directly from their website, by Stefan Hladnik.
+- Enabled support for `thinking_tokens` and `reasoning_effort` parameters for OpenRouter models.
+- Improved cost calculation using `litellm.completion_cost` where available.
+- Added model settings for `openrouter/google/gemini-2.5-pro-preview-03-25`.
+- Added `--disable-playwright` flag to prevent Playwright installation prompts and usage, by Andrew Grigorev.
+- The `aider scrape` command-line tool will now use Playwright for web scraping if it is available, by Jon Keys.
+- Fixed linter command execution on Windows by adopting `oslex` for argument quoting, by Titusz Pan.
+- Improved cross-platform display of shell commands by using `oslex` for robust argument quoting, by Titusz Pan.
+- Improved `/ask` mode to instruct the LLM to elide unchanging code in its responses.
+- Ensured web scraping in the GUI also respects Playwright availability and the `--disable-playwright` flag.
+- Improved display of filenames in the prompt header using rich Text formatting.
+- Enabled `reasoning_effort` for Gemini 2.5 Flash models.
+- Added a `--shell-completions` argument to generate shell completion scripts (e.g., for bash, zsh).
+- Explicit `--attribute-author` or `--attribute-committer` flags now override the default behavior when `--attribute-co-authored-by` is used, allowing finer control over commit attribution, by Andrew Grigorev.
+- Fixed an issue where read-only status of files might not be preserved correctly by some commands (e.g. `/drop` after adding a read-only file).
+- The `aider-args` utility (or `python -m aider.args`) now defaults to printing a sample YAML configuration if no arguments are provided.
+- Displayed token count progress and the name of the file or identifier being processed during repo map updates.
+- Extended the waiting spinner to also show for non-streaming responses and further enhanced its animation with console width clipping, cursor hiding, and a more continuous appearance.
+- Dropped support for Python 3.9.
+- Aider wrote 55% of the code in this release.
+
+### Aider v0.82.3
+
+- Add support for `gemini-2.5-flash-preview-04-17` models.
+- Improved robustness of edit block parsing when filenames start with backticks or fences.
+- Add new `udiff-simple` edit format, for Gemini 2.5 Pro.
+- Update default weak/editor models for Gemini 2.5 Pro models to use `gemini-2.5-flash-preview-04-17`.
+- Instruct models to reply in the user's detected system language.
+- Fix parsing of diffs for newly created files (`--- /dev/null`).
+- Add markdown syntax highlighting support when editing multi-line commit messages via `/commit`, by Kay Gosho.
+- Set Gemini 2.5 Pro models to use the `overeager` prompt setting by default.
+- Add common file types (`.svg`, `.pdf`) to the default list of ignored files for AI comment scanning (`--watch`).
+- Skip scanning files larger than 1MB for AI comments (`--watch`).
+
+### Aider v0.82.2
+
+- Fix editing shell files with diff-fenced, by zjy1412.
+- Improve robustness of patch application by allowing multiple update/delete actions for the same file within a single response.
+- Update prompts to instruct LLMs to consolidate all edits for a given file into a single block within the patch.
+
 ### Aider v0.82.1
 
 - Added support for `o3` and `o4-mini` including provider-specific versions for OpenAI, OpenRouter, and Azure.
 - Added support for Azure specific `gpt-4.1` and `gpt-4.1-mini` models.
 - Disabled streaming for `o3` models since you need identity verification to stream.
@@ -348,7 +431,7 @@
 - [Aider works with LLM web chat UIs](https://aider.chat/docs/usage/copypaste.html).
 - New `--copy-paste` mode.
 - New `/copy-context` command.
-- [Set API keys and other environment variables for all providers from command line or yaml conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
+- [Set API keys and other environment variables for all providers from command line or YAML conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
 - New `--api-key provider=key` setting.
 - New `--set-env VAR=value` setting.
 - Added bash and zsh support to `--watch-files`.
@@ -516,7 +599,7 @@
 
 ### Aider v0.59.1
 
-- Check for obsolete `yes: true` in yaml config, show helpful error.
+- Check for obsolete `yes: true` in YAML config, show helpful error.
 - Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
 
 ### Aider v0.59.0
@@ -526,7 +609,7 @@
 - Still auto-completes the full paths of the repo files like `/add`.
 - Now supports globs like `src/**/*.py`
 - Renamed `--yes` to `--yes-always`.
-- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
+- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` YAML key.
 - Existing YAML and .env files will need to be updated.
 - Can still abbreviate to `--yes` on the command line.
 - Config file now uses standard YAML list syntax with ` - list entries`, one per line.
@@ -733,7 +816,7 @@
 - Use `--map-refresh <always|files|manual|auto>` to configure.
 - Improved cost estimate logic for caching.
 - Improved editing performance on Jupyter Notebook `.ipynb` files.
-- Show which config yaml file is loaded with `--verbose`.
+- Show which config YAML file is loaded with `--verbose`.
 - Bumped dependency versions.
 - Bugfix: properly load `.aider.models.metadata.json` data.
 - Bugfix: Using `--msg /ask ...` caused an exception.
```
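The v0.83.1 entries above describe normalizing hyphenated language codes (e.g., `en-US` to `en`) and refusing to treat the `C`/`POSIX` pseudo-locales as a reply language. A minimal sketch of that kind of normalization (illustrative only; `normalize_language` is a hypothetical helper, not aider's actual code):

```python
def normalize_language(code):
    """Reduce a locale code to its bare language tag.

    'en-US' -> 'en', 'pt_BR' -> 'pt'; the 'C' and 'POSIX' pseudo-locales
    carry no language information, so return None for them.
    """
    if not code or code.upper() in ("C", "POSIX"):
        return None
    return code.replace("_", "-").split("-")[0].lower()
```

Returning `None` for pseudo-locales lets the caller fall back to a default rather than telling the model to "reply in C".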
76
README.md
76
README.md
|
@ -27,13 +27,13 @@ cog.out(text)
|
||||||
<a href="https://github.com/Aider-AI/aider/stargazers"><img alt="GitHub Stars" title="Total number of GitHub stars the Aider project has received"
|
<a href="https://github.com/Aider-AI/aider/stargazers"><img alt="GitHub Stars" title="Total number of GitHub stars the Aider project has received"
|
||||||
src="https://img.shields.io/github/stars/Aider-AI/aider?style=flat-square&logo=github&color=f1c40f&labelColor=555555"/></a>
|
src="https://img.shields.io/github/stars/Aider-AI/aider?style=flat-square&logo=github&color=f1c40f&labelColor=555555"/></a>
|
||||||
<a href="https://pypi.org/project/aider-chat/"><img alt="PyPI Downloads" title="Total number of installations via pip from PyPI"
|
<a href="https://pypi.org/project/aider-chat/"><img alt="PyPI Downloads" title="Total number of installations via pip from PyPI"
|
||||||
src="https://img.shields.io/badge/📦%20Installs-2.0M-2ecc71?style=flat-square&labelColor=555555"/></a>
|
src="https://img.shields.io/badge/📦%20Installs-2.4M-2ecc71?style=flat-square&labelColor=555555"/></a>
|
||||||
<img alt="Tokens per week" title="Number of tokens processed weekly by Aider users"
|
<img alt="Tokens per week" title="Number of tokens processed weekly by Aider users"
|
||||||
src="https://img.shields.io/badge/📈%20Tokens%2Fweek-15B-3498db?style=flat-square&labelColor=555555"/>
|
src="https://img.shields.io/badge/📈%20Tokens%2Fweek-15B-3498db?style=flat-square&labelColor=555555"/>
|
||||||
<a href="https://openrouter.ai/#options-menu"><img alt="OpenRouter Ranking" title="Aider's ranking among applications on the OpenRouter platform"
|
<a href="https://openrouter.ai/#options-menu"><img alt="OpenRouter Ranking" title="Aider's ranking among applications on the OpenRouter platform"
|
||||||
src="https://img.shields.io/badge/🏆%20OpenRouter-Top%2020-9b59b6?style=flat-square&labelColor=555555"/></a>
|
src="https://img.shields.io/badge/🏆%20OpenRouter-Top%2020-9b59b6?style=flat-square&labelColor=555555"/></a>
|
||||||
<a href="https://aider.chat/HISTORY.html"><img alt="Singularity" title="Percentage of the new code in Aider's last release written by Aider itself"
|
<a href="https://aider.chat/HISTORY.html"><img alt="Singularity" title="Percentage of the new code in Aider's last release written by Aider itself"
|
||||||
src="https://img.shields.io/badge/🔄%20Singularity-92%25-e74c3c?style=flat-square&labelColor=555555"/></a>
|
src="https://img.shields.io/badge/🔄%20Singularity-79%25-e74c3c?style=flat-square&labelColor=555555"/></a>
|
||||||
<!--[[[end]]]-->
|
<!--[[[end]]]-->
|
||||||
</p>
|
</p>
|
||||||
|
|
||||||
|
@ -135,43 +135,45 @@ See the [installation instructions](https://aider.chat/docs/install.html) and [u
|
||||||
### Community & Resources
|
### Community & Resources
|
||||||
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
|
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
|
||||||
- [GitHub Repository](https://github.com/Aider-AI/aider)
|
- [GitHub Repository](https://github.com/Aider-AI/aider)
|
||||||
- [Discord Community](https://discord.gg/Tv2uQnR88V)
|
- [Discord Community](https://discord.gg/Y7X7bhMQFV)
|
||||||
|
- [Release notes](https://aider.chat/HISTORY.html)
|
||||||
- [Blog](https://aider.chat/blog/)
|
- [Blog](https://aider.chat/blog/)
|
||||||
|
|
||||||
## Kind Words From Users
|
## Kind Words From Users

- *"My life has changed... Aider... It's going to rock your world."* — [Eric S. Raymond on X](https://x.com/esrtweet/status/1910809356381413593)
- *"The best free open source AI coding assistant."* — [IndyDevDan on YouTube](https://youtu.be/YALpX8oOn78)
- *"The best AI coding assistant so far."* — [Matthew Berman on YouTube](https://www.youtube.com/watch?v=df8afeb1FY8)
- *"Aider ... has easily quadrupled my coding productivity."* — [SOLAR_FIELDS on Hacker News](https://news.ycombinator.com/item?id=36212100)
- *"It's a cool workflow... Aider's ergonomics are perfect for me."* — [qup on Hacker News](https://news.ycombinator.com/item?id=38185326)
- *"It's really like having your senior developer live right in your Git repo - truly amazing!"* — [rappster on GitHub](https://github.com/Aider-AI/aider/issues/124)
- *"What an amazing tool. It's incredible."* — [valyagolev on GitHub](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *"Aider is such an astounding thing!"* — [cgrothaus on GitHub](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *"It was WAY faster than I would be getting off the ground and making the first few working versions."* — [Daniel Feldman on X](https://twitter.com/d_feldman/status/1662295077387923456)
- *"THANK YOU for Aider! It really feels like a glimpse into the future of coding."* — [derwiki on Hacker News](https://news.ycombinator.com/item?id=38205643)
- *"It's just amazing. It is freeing me to do things I felt were out my comfort zone before."* — [Dougie on Discord](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *"This project is stellar."* — [funkytaco on GitHub](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *"Amazing project, definitely the best AI coding assistant I've used."* — [joshuavial on GitHub](https://github.com/Aider-AI/aider/issues/84)
- *"I absolutely love using Aider ... It makes software development feel so much lighter as an experience."* — [principalideal0 on Discord](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *"I have been recovering from ... surgeries ... aider ... has allowed me to continue productivity."* — [codeninja on Reddit](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *"I am an aider addict. I'm getting so much more work done, but in less time."* — [dandandan on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *"Aider... blows everything else out of the water hands down, there's no competition whatsoever."* — [SystemSculpt on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *"Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing."* — [Josh Dingus on Discord](https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548)
- *"Hands down, this is the best AI coding assistant tool so far."* — [IndyDevDan on YouTube](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *"[Aider] changed my daily coding workflows. It's mind-blowing how ...(it)... can change your life."* — [maledorak on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *"Best agent for actual dev work in existing codebases."* — [Nick Dobos on X](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
- *"One of my favorite pieces of software. Blazing trails on new paradigms!"* — [Chris Wall on X](https://x.com/chris65536/status/1905053299251798432)
- *"Aider has been revolutionary for me and my work."* — [Starry Hope on X](https://x.com/starryhopeblog/status/1904985812137132056)
- *"Try aider! One of the best ways to vibe code."* — [Chris Wall on X](https://x.com/Chris65536/status/1905053418961391929)
- *"Aider is hands down the best. And it's free and opensource."* — [AriyaSavakaLurker on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/)
- *"Aider is also my best friend."* — [jzn21 on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/)
- *"Try Aider, it's worth it."* — [jorgejhms on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/)
- *"I like aider :)"* — [Chenwei Cui on X](https://x.com/ccui42/status/1904965344999145698)
- *"Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes ... while keeping the developer in control."* — [Reilly Sweetland on X](https://x.com/rsweetland/status/1904963807237259586)
- *"Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot."* — [autopoietist on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101)
- *"Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone."* — [Joshua D Vander Hook on X](https://x.com/jodavaho/status/1911154899057795218)
- *"thanks to aider, i have started and finished three personal projects within the last two days"* — [joseph stalzyn on X](https://x.com/anitaheeder/status/1908338609645904160)
- *"Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words."* — [koleok on Discord](https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783)
- *"Aider ... is the tool to benchmark against."* — [BeetleB on Hacker News](https://news.ycombinator.com/item?id=43930201)
- *"aider is really cool"* — [kache on X](https://x.com/yacineMTB/status/1911224442430124387)

@@ -1,6 +1,6 @@
 from packaging import version

-__version__ = "0.82.3.dev"
+__version__ = "0.84.1.dev"

 safe_version = __version__

 try:

aider/args.py (119 changes)
@@ -6,6 +6,7 @@ import sys
 from pathlib import Path

 import configargparse
+import shtab

 from aider import __version__
 from aider.args_formatter import (
@@ -39,10 +40,22 @@ def get_parser(default_config_files, git_root):
         config_file_parser_class=configargparse.YAMLConfigFileParser,
         auto_env_var_prefix="AIDER_",
     )
+    # List of valid edit formats for argparse validation & shtab completion.
+    # Dynamically gather them from the registered coder classes so the list
+    # stays in sync if new formats are added.
+    from aider import coders as _aider_coders
+
+    edit_format_choices = sorted(
+        {
+            c.edit_format
+            for c in _aider_coders.__all__
+            if hasattr(c, "edit_format") and c.edit_format is not None
+        }
+    )
     group = parser.add_argument_group("Main model")
     group.add_argument(
         "files", metavar="FILE", nargs="*", help="files to edit with an LLM (optional)"
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--model",
         metavar="MODEL",
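The `edit_format_choices` hunk above gathers argparse choices dynamically from the registered coder classes instead of hard-coding a list. A minimal sketch of that pattern, using stand-in coder classes (the real ones live in `aider.coders`):

```python
# Stand-in coder classes; in aider these come from aider.coders.__all__.
class EditBlockCoder:
    edit_format = "diff"

class WholeFileCoder:
    edit_format = "whole"

class HelpCoder:
    edit_format = None  # not a user-selectable format

registered_coders = [EditBlockCoder, WholeFileCoder, HelpCoder]

# Same shape as the diff: a set comprehension skips classes without a usable
# edit_format, and sorted() yields a stable, de-duplicated choices list.
edit_format_choices = sorted(
    {
        c.edit_format
        for c in registered_coders
        if hasattr(c, "edit_format") and c.edit_format is not None
    }
)
print(edit_format_choices)  # → ['diff', 'whole']
```

Because the set comprehension iterates over whatever classes are registered, adding a new coder with an `edit_format` attribute updates the `--edit-format` choices automatically.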
@@ -109,13 +122,13 @@ def get_parser(default_config_files, git_root):
         metavar="MODEL_SETTINGS_FILE",
         default=".aider.model.settings.yml",
         help="Specify a file with aider model settings for unknown models",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--model-metadata-file",
         metavar="MODEL_METADATA_FILE",
         default=".aider.model.metadata.json",
         help="Specify a file with context window and costs for unknown models",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--alias",
         action="append",
@@ -148,6 +161,7 @@ def get_parser(default_config_files, git_root):
         "--edit-format",
         "--chat-mode",
         metavar="EDIT_FORMAT",
+        choices=edit_format_choices,
         default=None,
         help="Specify what edit format the LLM should use (default depends on model)",
     )
@@ -182,6 +196,7 @@ def get_parser(default_config_files, git_root):
     group.add_argument(
         "--editor-edit-format",
         metavar="EDITOR_EDIT_FORMAT",
+        choices=edit_format_choices,
         default=None,
         help="Specify the edit format for the editor model (default: depends on editor model)",
     )
@@ -261,13 +276,13 @@ def get_parser(default_config_files, git_root):
         metavar="INPUT_HISTORY_FILE",
         default=default_input_history_file,
         help=f"Specify the chat input history file (default: {default_input_history_file})",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--chat-history-file",
         metavar="CHAT_HISTORY_FILE",
         default=default_chat_history_file,
         help=f"Specify the chat history file (default: {default_chat_history_file})",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--restore-chat-history",
         action=argparse.BooleanOptionalAction,
@@ -279,7 +294,7 @@ def get_parser(default_config_files, git_root):
         metavar="LLM_HISTORY_FILE",
         default=None,
         help="Log the conversation with the LLM to this file (for example, .aider.llm.history)",
-    )
+    ).complete = shtab.FILE

     ##########
     group = parser.add_argument_group("Output settings")
@@ -405,7 +420,7 @@ def get_parser(default_config_files, git_root):
         type=lambda path_str: resolve_aiderignore_path(path_str, git_root),
         default=default_aiderignore_file,
         help="Specify the aider ignore file (default: .aiderignore in git root)",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--subtree-only",
         action="store_true",
@@ -427,14 +442,20 @@ def get_parser(default_config_files, git_root):
     group.add_argument(
         "--attribute-author",
         action=argparse.BooleanOptionalAction,
-        default=True,
-        help="Attribute aider code changes in the git author name (default: True)",
+        default=None,
+        help=(
+            "Attribute aider code changes in the git author name (default: True). If explicitly set"
+            " to True, overrides --attribute-co-authored-by precedence."
+        ),
     )
     group.add_argument(
         "--attribute-committer",
         action=argparse.BooleanOptionalAction,
-        default=True,
-        help="Attribute aider commits in the git committer name (default: True)",
+        default=None,
+        help=(
+            "Attribute aider commits in the git committer name (default: True). If explicitly set"
+            " to True, overrides --attribute-co-authored-by precedence for aider edits."
+        ),
     )
     group.add_argument(
         "--attribute-commit-message-author",
@@ -448,6 +469,16 @@ def get_parser(default_config_files, git_root):
         default=False,
         help="Prefix all commit messages with 'aider: ' (default: False)",
     )
+    group.add_argument(
+        "--attribute-co-authored-by",
+        action=argparse.BooleanOptionalAction,
+        default=False,
+        help=(
+            "Attribute aider edits using the Co-authored-by trailer in the commit message"
+            " (default: False). If True, this takes precedence over default --attribute-author and"
+            " --attribute-committer behavior unless they are explicitly set to True."
+        ),
+    )
     group.add_argument(
         "--git-commit-verify",
         action=argparse.BooleanOptionalAction,
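The attribution hunks above change `--attribute-author` and `--attribute-committer` from `default=True` to `default=None`, giving each flag three states: explicit True, explicit False, and unset. That is what lets later code apply the precedence rules the help text describes. A minimal sketch of how such three-state flags can be resolved; `resolve_attribution` is a hypothetical helper for illustration, not aider's actual function:

```python
import argparse

parser = argparse.ArgumentParser()
# default=None distinguishes "user explicitly passed the flag" from "left at default".
parser.add_argument("--attribute-author", action=argparse.BooleanOptionalAction, default=None)
parser.add_argument("--attribute-co-authored-by", action=argparse.BooleanOptionalAction, default=False)

def resolve_attribution(args):
    # Hypothetical helper mirroring the documented precedence: the
    # Co-authored-by trailer wins unless --attribute-author was explicitly True.
    if args.attribute_co_authored_by and args.attribute_author is not True:
        return "co-authored-by"
    # Unset (None) falls back to the documented default of True.
    return "author" if args.attribute_author in (True, None) else "none"

print(resolve_attribution(parser.parse_args([])))  # → author
print(resolve_attribution(parser.parse_args(["--attribute-co-authored-by"])))  # → co-authored-by
print(resolve_attribution(parser.parse_args(["--attribute-co-authored-by", "--attribute-author"])))  # → author
```

With `default=True` the last two cases would be indistinguishable, since `attribute_author` would be True whether or not the user typed the flag.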
@@ -535,7 +566,7 @@ def get_parser(default_config_files, git_root):
         "--analytics-log",
         metavar="ANALYTICS_LOG_FILE",
         help="Specify a file to log analytics events",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--analytics-disable",
         action="store_true",
@@ -602,7 +633,7 @@ def get_parser(default_config_files, git_root):
             "Specify a file containing the message to send the LLM, process reply, then exit"
             " (disables chat mode)"
         ),
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--gui",
         "--browser",
@@ -620,7 +651,7 @@ def get_parser(default_config_files, git_root):
         "--apply",
         metavar="FILE",
         help="Apply the changes from the given file instead of running the chat (debug)",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--apply-clipboard-edits",
         action="store_true",
@@ -670,18 +701,24 @@ def get_parser(default_config_files, git_root):

     ######
     group = parser.add_argument_group("Other settings")
+    group.add_argument(
+        "--disable-playwright",
+        action="store_true",
+        help="Never prompt for or attempt to install Playwright for web scraping (default: False).",
+        default=False,
+    )
     group.add_argument(
         "--file",
         action="append",
         metavar="FILE",
         help="specify a file to edit (can be used multiple times)",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--read",
         action="append",
         metavar="FILE",
         help="specify a read-only file (can be used multiple times)",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--vim",
         action="store_true",
@@ -711,7 +748,7 @@ def get_parser(default_config_files, git_root):
         "--load",
         metavar="LOAD_FILE",
         help="Load and execute /commands from a file on launch",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--encoding",
         default="utf-8",
@@ -732,7 +769,7 @@ def get_parser(default_config_files, git_root):
             "Specify the config file (default: search for .aider.conf.yml in git root, cwd"
             " or home directory)"
         ),
-    )
+    ).complete = shtab.FILE
     # This is a duplicate of the argument in the preparser and is a no-op by this time of
     # argument parsing, but it's here so that the help is displayed as expected.
     group.add_argument(
@@ -740,7 +777,7 @@ def get_parser(default_config_files, git_root):
         metavar="ENV_FILE",
         default=default_env_file(git_root),
         help="Specify the .env file to load (default: .env in git root)",
-    )
+    ).complete = shtab.FILE
     group.add_argument(
         "--suggest-shell-commands",
         action=argparse.BooleanOptionalAction,
@@ -788,6 +825,17 @@ def get_parser(default_config_files, git_root):
         help="Specify which editor to use for the /editor command",
     )

+    supported_shells_list = sorted(list(shtab.SUPPORTED_SHELLS))
+    group.add_argument(
+        "--shell-completions",
+        metavar="SHELL",
+        choices=supported_shells_list,
+        help=(
+            "Print shell completion script for the specified SHELL and exit. Supported shells:"
+            f" {', '.join(supported_shells_list)}. Example: aider --shell-completions bash"
+        ),
+    )
+
     ##########
     group = parser.add_argument_group("Deprecated model settings")
     # Add deprecated model shortcut arguments
@@ -836,13 +884,34 @@ def get_sample_dotenv():


 def main():
-    arg = sys.argv[1] if len(sys.argv[1:]) else None
-
-    if arg == "md":
-        print(get_md_help())
-    elif arg == "dotenv":
-        print(get_sample_dotenv())
+    if len(sys.argv) > 1:
+        command = sys.argv[1]
     else:
+        command = "yaml"  # Default to yaml if no command is given
+
+    if command == "md":
+        print(get_md_help())
+    elif command == "dotenv":
+        print(get_sample_dotenv())
+    elif command == "yaml":
+        print(get_sample_yaml())
+    elif command == "completion":
+        if len(sys.argv) > 2:
+            shell = sys.argv[2]
+            if shell not in shtab.SUPPORTED_SHELLS:
+                print(f"Error: Unsupported shell '{shell}'.", file=sys.stderr)
+                print(f"Supported shells are: {', '.join(shtab.SUPPORTED_SHELLS)}", file=sys.stderr)
+                sys.exit(1)
+            parser = get_parser([], None)
+            parser.prog = "aider"  # Set the program name on the parser
+            print(shtab.complete(parser, shell=shell))
+        else:
+            print("Error: Please specify a shell for completion.", file=sys.stderr)
+            print(f"Usage: python {sys.argv[0]} completion <shell_name>", file=sys.stderr)
+            print(f"Supported shells are: {', '.join(shtab.SUPPORTED_SHELLS)}", file=sys.stderr)
+            sys.exit(1)
+    else:
+        # Default to YAML for any other unrecognized argument, or if 'yaml' was explicitly passed
         print(get_sample_yaml())
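The rewritten `main()` above replaces a two-branch `if/elif` with a dispatch on an explicit `command` variable that defaults to `"yaml"` when no argument is given. The selection logic can be sketched on its own, testable without running the CLI:

```python
def pick_command(argv):
    # Mirrors the dispatch in main() above: the first CLI argument selects the
    # output mode (md, dotenv, yaml, completion), defaulting to yaml when absent.
    return argv[1] if len(argv) > 1 else "yaml"

print(pick_command(["args.py"]))            # → yaml
print(pick_command(["args.py", "dotenv"]))  # → dotenv
```

Pulling the default into a named `command` value (rather than branching on `arg is None`) is what makes it cheap to add the new `yaml` and `completion` branches.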
@@ -96,7 +96,7 @@ class YamlHelpFormatter(argparse.HelpFormatter):
 # Place in your home dir, or at the root of your git repo.
 ##########################################################

-# Note: You can only put OpenAI and Anthropic API keys in the yaml
+# Note: You can only put OpenAI and Anthropic API keys in the YAML
 # config file. Keys for all APIs can be stored in a .env file
 # https://aider.chat/docs/config/dotenv.html

@@ -10,6 +10,7 @@ from .editor_whole_coder import EditorWholeFileCoder
 from .help_coder import HelpCoder
 from .patch_coder import PatchCoder
 from .udiff_coder import UnifiedDiffCoder
+from .udiff_simple import UnifiedDiffSimpleCoder
 from .wholefile_coder import WholeFileCoder

 # from .single_wholefile_func_coder import SingleWholeFileFunctionCoder
@@ -23,6 +24,7 @@ __all__ = [
     WholeFileCoder,
     PatchCoder,
     UnifiedDiffCoder,
+    UnifiedDiffSimpleCoder,
     # SingleWholeFileFunctionCoder,
     ArchitectCoder,
     EditorEditBlockCoder,

@@ -8,7 +8,7 @@ class AskPrompts(CoderPrompts):
 Answer questions about the supplied code.
 Always reply to the user in {language}.

-Describe code changes however you like. Don't use SEARCH/REPLACE blocks!
+If you need to describe code changes, do so *briefly*.
 """

     example_messages = []
@@ -32,4 +32,4 @@ Here are summaries of some files present in my git repo.
 If you need to see the full contents of any files to answer my questions, ask me to *add them to the chat*.
 """

-    system_reminder = ""
+    system_reminder = "{final_reminders}"

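The `AskPrompts` change above swaps a hard-coded empty `system_reminder` for a `{final_reminders}` placeholder, the same late-binding pattern already used for `{language}`: the template stores a slot, and the concrete text is injected at send time with `str.format()`. A minimal sketch, assuming the template strings shown in the diff:

```python
# Prompt templates hold {language} and {final_reminders} slots that are
# filled in when the message is actually sent to the model.
main_system = """Answer questions about the supplied code.
Always reply to the user in {language}.
"""
system_reminder = "{final_reminders}"

rendered = main_system.format(language="English")
reminder = system_reminder.format(final_reminders="Be brief.")
print(reminder)  # → Be brief.
```

Making the reminder a placeholder (instead of `""`) lets the coder push per-session reminders into ask mode without subclassing the prompt.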
@@ -15,10 +15,19 @@ import time
 import traceback
 from collections import defaultdict
 from datetime import datetime

+# Optional dependency: used to convert locale codes (eg ``en_US``)
+# into human-readable language names (eg ``English``).
+try:
+    from babel import Locale  # type: ignore
+except ImportError:  # Babel not installed – we will fall back to a small mapping
+    Locale = None
 from json.decoder import JSONDecodeError
 from pathlib import Path
 from typing import List

+from rich.console import Console
+
 from aider import __version__, models, prompts, urls, utils
 from aider.analytics import Analytics
 from aider.commands import Commands
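The try/except import above is the standard optional-dependency pattern: if Babel is available use it, otherwise bind the name to `None` and fall back to simpler behavior. A sketch of how the fallback side can work; `language_name` and the `FALLBACK_NAMES` mapping are illustrative, not aider's actual helper:

```python
try:
    from babel import Locale  # type: ignore
except ImportError:
    Locale = None  # fall back to a small built-in mapping

# Hypothetical subset of a locale-to-name fallback table.
FALLBACK_NAMES = {"en": "English", "fr": "French", "de": "German"}

def language_name(lang_code, use_babel=Locale is not None):
    # Normalize "en-US" / "en_US" down to the base language tag "en".
    base = lang_code.replace("-", "_").split("_")[0].lower()
    if use_babel:
        # Babel path: full locale database, human-readable English names.
        return Locale.parse(lang_code.replace("-", "_")).get_language_name("en")
    # Fallback path: works without Babel, returns the code if unknown.
    return FALLBACK_NAMES.get(base, lang_code)

print(language_name("en_US", use_babel=False))  # → English
```

Guarding on `Locale is not None` at call time (rather than import time) keeps the module importable on systems where Babel is absent.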
@@ -38,6 +47,7 @@ from aider.repo import ANY_GIT_ERROR, GitRepo
 from aider.repomap import RepoMap
 from aider.run_cmd import run_cmd
 from aider.utils import format_content, format_messages, format_tokens, is_image_file
+from aider.waiting import WaitingSpinner

 from ..dump import dump  # noqa: F401
 from .chat_chunks import ChatChunks
@@ -101,8 +111,6 @@ class Coder:
     partial_response_content = ""
     commit_before_message = []
     message_cost = 0.0
-    message_tokens_sent = 0
-    message_tokens_received = 0
     add_cache_headers = False
     cache_warming_thread = None
     num_cache_warming_pings = 0
@@ -168,6 +176,8 @@ class Coder:
             commands=from_coder.commands.clone(),
             total_cost=from_coder.total_cost,
             ignore_mentions=from_coder.ignore_mentions,
+            total_tokens_sent=from_coder.total_tokens_sent,
+            total_tokens_received=from_coder.total_tokens_received,
             file_watcher=from_coder.file_watcher,
         )
         use_kwargs.update(update)  # override to complete the switch
@@ -320,6 +330,8 @@ class Coder:
         chat_language=None,
         detect_urls=True,
         ignore_mentions=None,
+        total_tokens_sent=0,
+        total_tokens_received=0,
         file_watcher=None,
         auto_copy_context=False,
         auto_accept_architect=True,
@@ -366,6 +378,10 @@ class Coder:
         self.need_commit_before_edits = set()

         self.total_cost = total_cost
+        self.total_tokens_sent = total_tokens_sent
+        self.total_tokens_received = total_tokens_received
+        self.message_tokens_sent = 0
+        self.message_tokens_received = 0

         self.verbose = verbose
         self.abs_fnames = set()
@@ -429,6 +445,7 @@ class Coder:
             fname = Path(fname)
             if self.repo and self.repo.git_ignored_file(fname):
                 self.io.tool_warning(f"Skipping {fname} that matches gitignore spec.")
+                continue

             if self.repo and self.repo.ignored_file(fname):
                 self.io.tool_warning(f"Skipping {fname} that matches aiderignore spec.")
@@ -564,6 +581,15 @@ class Coder:

         return True

+    def _stop_waiting_spinner(self):
+        """Stop and clear the waiting spinner if it is running."""
+        spinner = getattr(self, "waiting_spinner", None)
+        if spinner:
+            try:
+                spinner.stop()
+            finally:
+                self.waiting_spinner = None
+
     def get_abs_fnames_content(self):
         for fname in list(self.abs_fnames):
             content = self.io.read_text(fname)
|
@ -953,6 +979,9 @@ class Coder:
|
||||||
return inp
|
return inp
|
||||||
|
|
||||||
def keyboard_interrupt(self):
|
def keyboard_interrupt(self):
|
||||||
|
# Ensure cursor is visible on exit
|
||||||
|
Console().show_cursor(True)
|
||||||
|
|
||||||
now = time.time()
|
now = time.time()
|
||||||
|
|
||||||
thresh = 2 # seconds
|
thresh = 2 # seconds
|
||||||
|
@ -1011,23 +1040,82 @@ class Coder:
|
||||||
]
|
]
|
||||||
self.cur_messages = []
|
self.cur_messages = []
|
||||||
|
|
||||||
def get_user_language(self):
|
def normalize_language(self, lang_code):
|
||||||
if self.chat_language:
|
"""
|
||||||
return self.chat_language
|
Convert a locale code such as ``en_US`` or ``fr`` into a readable
|
||||||
|
language name (e.g. ``English`` or ``French``). If Babel is
|
||||||
|
available it is used for reliable conversion; otherwise a small
|
||||||
|
built-in fallback map handles common languages.
|
||||||
|
"""
|
||||||
|
if not lang_code:
|
||||||
|
return None
|
||||||
|
|
||||||
|
if lang_code.upper() in ("C", "POSIX"):
|
||||||
|
return None
|
||||||
|
|
||||||
|
# Probably already a language name
|
||||||
|
if (
|
||||||
|
len(lang_code) > 3
|
||||||
|
and "_" not in lang_code
|
||||||
|
and "-" not in lang_code
|
||||||
|
and lang_code[0].isupper()
|
||||||
|
):
|
||||||
|
return lang_code
|
||||||
|
|
||||||
|
# Preferred: Babel
|
||||||
|
if Locale is not None:
|
||||||
|
try:
|
||||||
|
loc = Locale.parse(lang_code.replace("-", "_"))
|
||||||
|
return loc.get_display_name("en").capitalize()
|
||||||
|
except Exception:
|
||||||
|
pass # Fall back to manual mapping
|
||||||
|
|
||||||
|
# Simple fallback for common languages
|
||||||
|
fallback = {
|
||||||
|
"en": "English",
|
||||||
|
"fr": "French",
|
||||||
|
"es": "Spanish",
|
||||||
|
"de": "German",
|
||||||
|
"it": "Italian",
|
||||||
|
"pt": "Portuguese",
|
||||||
|
"zh": "Chinese",
|
||||||
|
"ja": "Japanese",
|
||||||
|
"ko": "Korean",
|
||||||
|
"ru": "Russian",
|
||||||
|
}
|
||||||
|
primary_lang_code = lang_code.replace("-", "_").split("_")[0].lower()
|
||||||
|
return fallback.get(primary_lang_code, lang_code)
|
||||||
|
|
||||||
|
def get_user_language(self):
|
||||||
|
"""
|
||||||
|
Detect the user's language preference and return a human-readable
|
||||||
|
language name such as ``English``. Detection order:
|
||||||
|
|
||||||
|
1. ``self.chat_language`` if explicitly set
|
||||||
|
2. ``locale.getlocale()``
|
||||||
|
3. ``LANG`` / ``LANGUAGE`` / ``LC_ALL`` / ``LC_MESSAGES`` environment variables
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Explicit override
|
||||||
|
if self.chat_language:
|
||||||
|
return self.normalize_language(self.chat_language)
|
||||||
|
|
||||||
|
# System locale
|
||||||
try:
|
try:
|
||||||
lang = locale.getlocale()[0]
|
lang = locale.getlocale()[0]
|
||||||
if lang:
|
if lang:
|
||||||
return lang # Return the full language code, including country
|
lang = self.normalize_language(lang)
|
||||||
|
if lang:
|
||||||
|
return lang
|
||||||
except Exception:
|
except Exception:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
for env_var in ["LANG", "LANGUAGE", "LC_ALL", "LC_MESSAGES"]:
|
# Environment variables
|
||||||
|
for env_var in ("LANG", "LANGUAGE", "LC_ALL", "LC_MESSAGES"):
|
||||||
lang = os.environ.get(env_var)
|
lang = os.environ.get(env_var)
|
||||||
if lang:
|
if lang:
|
||||||
return lang.split(".")[
|
lang = lang.split(".")[0] # Strip encoding if present
|
||||||
0
|
return self.normalize_language(lang)
|
||||||
] # Return language and country, but remove encoding if present
|
|
||||||
|
|
||||||
return None
|
return None
|
||||||
|
|
||||||
|
@ -1079,12 +1167,15 @@ class Coder:
|
||||||
return platform_text
|
return platform_text
|
||||||
|
|
||||||
def fmt_system_prompt(self, prompt):
|
def fmt_system_prompt(self, prompt):
|
||||||
|
final_reminders = []
|
||||||
if self.main_model.lazy:
|
if self.main_model.lazy:
|
||||||
lazy_prompt = self.gpt_prompts.lazy_prompt
|
final_reminders.append(self.gpt_prompts.lazy_prompt)
|
||||||
elif self.main_model.overeager:
|
if self.main_model.overeager:
|
||||||
lazy_prompt = self.gpt_prompts.overeager_prompt
|
final_reminders.append(self.gpt_prompts.overeager_prompt)
|
||||||
else:
|
|
||||||
lazy_prompt = ""
|
user_lang = self.get_user_language()
|
||||||
|
if user_lang:
|
||||||
|
final_reminders.append(f"Reply in {user_lang}.\n")
|
||||||
|
|
||||||
platform_text = self.get_platform_info()
|
platform_text = self.get_platform_info()
|
||||||
|
|
||||||
|
@ -1099,10 +1190,10 @@ class Coder:
|
||||||
)
|
)
|
||||||
rename_with_shell = ""
|
rename_with_shell = ""
|
||||||
|
|
||||||
if self.chat_language:
|
if user_lang: # user_lang is the result of self.get_user_language()
|
||||||
language = self.chat_language
|
language = user_lang
|
||||||
else:
|
else:
|
||||||
language = "the same language they are using"
|
language = "the same language they are using" # Default if no specific lang detected
|
||||||
|
|
||||||
if self.fence[0] == "`" * 4:
|
if self.fence[0] == "`" * 4:
|
||||||
quad_backtick_reminder = (
|
quad_backtick_reminder = (
|
||||||
|
@ -1111,10 +1202,12 @@ class Coder:
|
||||||
else:
|
else:
|
||||||
quad_backtick_reminder = ""
|
quad_backtick_reminder = ""
|
||||||
|
|
||||||
|
final_reminders = "\n\n".join(final_reminders)
|
||||||
|
|
||||||
prompt = prompt.format(
|
prompt = prompt.format(
|
||||||
fence=self.fence,
|
fence=self.fence,
|
||||||
quad_backtick_reminder=quad_backtick_reminder,
|
quad_backtick_reminder=quad_backtick_reminder,
|
||||||
lazy_prompt=lazy_prompt,
|
final_reminders=final_reminders,
|
||||||
platform=platform_text,
|
platform=platform_text,
|
||||||
shell_cmd_prompt=shell_cmd_prompt,
|
shell_cmd_prompt=shell_cmd_prompt,
|
||||||
rename_with_shell=rename_with_shell,
|
rename_with_shell=rename_with_shell,
|
||||||
|
@ -1123,14 +1216,13 @@ class Coder:
|
||||||
language=language,
|
language=language,
|
||||||
)
|
)
|
||||||
|
|
||||||
if self.main_model.system_prompt_prefix:
|
|
||||||
prompt = self.main_model.system_prompt_prefix + prompt
|
|
||||||
|
|
||||||
return prompt
|
return prompt
|
||||||
|
|
||||||
def format_chat_chunks(self):
|
def format_chat_chunks(self):
|
||||||
self.choose_fence()
|
self.choose_fence()
|
||||||
main_sys = self.fmt_system_prompt(self.gpt_prompts.main_system)
|
main_sys = self.fmt_system_prompt(self.gpt_prompts.main_system)
|
||||||
|
if self.main_model.system_prompt_prefix:
|
||||||
|
main_sys = self.main_model.system_prompt_prefix + "\n" + main_sys
|
||||||
|
|
||||||
example_messages = []
|
example_messages = []
|
||||||
if self.main_model.examples_as_sys_msg:
|
if self.main_model.examples_as_sys_msg:
|
||||||
|
@ -1339,8 +1431,13 @@ class Coder:
|
||||||
utils.show_messages(messages, functions=self.functions)
|
utils.show_messages(messages, functions=self.functions)
|
||||||
|
|
||||||
self.multi_response_content = ""
|
self.multi_response_content = ""
|
||||||
if self.show_pretty() and self.stream:
|
if self.show_pretty():
|
||||||
self.mdstream = self.io.get_assistant_mdstream()
|
self.waiting_spinner = WaitingSpinner("Waiting for " + self.main_model.name)
|
||||||
|
self.waiting_spinner.start()
|
||||||
|
if self.stream:
|
||||||
|
self.mdstream = self.io.get_assistant_mdstream()
|
||||||
|
else:
|
||||||
|
self.mdstream = None
|
||||||
else:
|
else:
|
||||||
self.mdstream = None
|
self.mdstream = None
|
||||||
|
|
||||||
|
@ -1413,6 +1510,9 @@ class Coder:
|
||||||
self.live_incremental_response(True)
|
self.live_incremental_response(True)
|
||||||
self.mdstream = None
|
self.mdstream = None
|
||||||
|
|
||||||
|
# Ensure any waiting spinner is stopped
|
||||||
|
self._stop_waiting_spinner()
|
||||||
|
|
||||||
self.partial_response_content = self.get_multi_response_content_in_progress(True)
|
self.partial_response_content = self.get_multi_response_content_in_progress(True)
|
||||||
self.remove_reasoning_content()
|
self.remove_reasoning_content()
|
||||||
self.multi_response_content = ""
|
self.multi_response_content = ""
|
||||||
|
@ -1729,6 +1829,9 @@ class Coder:
|
||||||
self.io.ai_output(json.dumps(args, indent=4))
|
self.io.ai_output(json.dumps(args, indent=4))
|
||||||
|
|
||||||
def show_send_output(self, completion):
|
def show_send_output(self, completion):
|
||||||
|
# Stop spinner once we have a response
|
||||||
|
self._stop_waiting_spinner()
|
||||||
|
|
||||||
if self.verbose:
|
if self.verbose:
|
||||||
print(completion)
|
print(completion)
|
||||||
|
|
||||||
|
@ -1843,6 +1946,8 @@ class Coder:
|
||||||
except AttributeError:
|
except AttributeError:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
if received_content:
|
||||||
|
self._stop_waiting_spinner()
|
||||||
self.partial_response_content += text
|
self.partial_response_content += text
|
||||||
|
|
||||||
if self.show_pretty():
|
if self.show_pretty():
|
||||||
|
@ -1922,6 +2027,44 @@ class Coder:
|
||||||
self.usage_report = tokens_report
|
self.usage_report = tokens_report
|
||||||
return
|
return
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Try and use litellm's built in cost calculator. Seems to work for non-streaming only?
|
||||||
|
cost = litellm.completion_cost(completion_response=completion)
|
||||||
|
except Exception:
|
||||||
|
cost = 0
|
||||||
|
|
||||||
|
if not cost:
|
||||||
|
cost = self.compute_costs_from_tokens(
|
||||||
|
prompt_tokens, completion_tokens, cache_write_tokens, cache_hit_tokens
|
||||||
|
)
|
||||||
|
|
||||||
|
self.total_cost += cost
|
||||||
|
self.message_cost += cost
|
||||||
|
|
||||||
|
def format_cost(value):
|
||||||
|
if value == 0:
|
||||||
|
return "0.00"
|
||||||
|
magnitude = abs(value)
|
||||||
|
if magnitude >= 0.01:
|
||||||
|
return f"{value:.2f}"
|
||||||
|
else:
|
||||||
|
return f"{value:.{max(2, 2 - int(math.log10(magnitude)))}f}"
|
||||||
|
|
||||||
|
cost_report = (
|
||||||
|
f"Cost: ${format_cost(self.message_cost)} message,"
|
||||||
|
f" ${format_cost(self.total_cost)} session."
|
||||||
|
)
|
||||||
|
|
||||||
|
if cache_hit_tokens and cache_write_tokens:
|
||||||
|
sep = "\n"
|
||||||
|
else:
|
||||||
|
sep = " "
|
||||||
|
|
||||||
|
self.usage_report = tokens_report + sep + cost_report
|
||||||
|
|
||||||
|
def compute_costs_from_tokens(
|
||||||
|
self, prompt_tokens, completion_tokens, cache_write_tokens, cache_hit_tokens
|
||||||
|
):
|
||||||
cost = 0
|
cost = 0
|
||||||
|
|
||||||
input_cost_per_token = self.main_model.info.get("input_cost_per_token") or 0
|
input_cost_per_token = self.main_model.info.get("input_cost_per_token") or 0
|
||||||
|
@ -1949,35 +2092,15 @@ class Coder:
|
||||||
cost += prompt_tokens * input_cost_per_token
|
cost += prompt_tokens * input_cost_per_token
|
||||||
|
|
||||||
cost += completion_tokens * output_cost_per_token
|
cost += completion_tokens * output_cost_per_token
|
||||||
|
return cost
|
||||||
self.total_cost += cost
|
|
||||||
self.message_cost += cost
|
|
||||||
|
|
||||||
def format_cost(value):
|
|
||||||
if value == 0:
|
|
||||||
return "0.00"
|
|
||||||
magnitude = abs(value)
|
|
||||||
if magnitude >= 0.01:
|
|
||||||
return f"{value:.2f}"
|
|
||||||
else:
|
|
||||||
return f"{value:.{max(2, 2 - int(math.log10(magnitude)))}f}"
|
|
||||||
|
|
||||||
cost_report = (
|
|
||||||
f"Cost: ${format_cost(self.message_cost)} message,"
|
|
||||||
f" ${format_cost(self.total_cost)} session."
|
|
||||||
)
|
|
||||||
|
|
||||||
if cache_hit_tokens and cache_write_tokens:
|
|
||||||
sep = "\n"
|
|
||||||
else:
|
|
||||||
sep = " "
|
|
||||||
|
|
||||||
self.usage_report = tokens_report + sep + cost_report
|
|
||||||
|
|
||||||
def show_usage_report(self):
|
def show_usage_report(self):
|
||||||
if not self.usage_report:
|
if not self.usage_report:
|
||||||
return
|
return
|
||||||
|
|
||||||
|
self.total_tokens_sent += self.message_tokens_sent
|
||||||
|
self.total_tokens_received += self.message_tokens_received
|
||||||
|
|
||||||
self.io.tool_output(self.usage_report)
|
self.io.tool_output(self.usage_report)
|
||||||
|
|
||||||
prompt_tokens = self.message_tokens_sent
|
prompt_tokens = self.message_tokens_sent
|
||||||
|
@ -2252,7 +2375,7 @@ class Coder:
|
||||||
context = self.get_context_from_history(self.cur_messages)
|
context = self.get_context_from_history(self.cur_messages)
|
||||||
|
|
||||||
try:
|
try:
|
||||||
res = self.repo.commit(fnames=edited, context=context, aider_edits=True)
|
res = self.repo.commit(fnames=edited, context=context, aider_edits=True, coder=self)
|
||||||
if res:
|
if res:
|
||||||
self.show_auto_commit_outcome(res)
|
self.show_auto_commit_outcome(res)
|
||||||
commit_hash, commit_message = res
|
commit_hash, commit_message = res
|
||||||
|
@ -2288,7 +2411,7 @@ class Coder:
|
||||||
if not self.repo:
|
if not self.repo:
|
||||||
return
|
return
|
||||||
|
|
||||||
self.repo.commit(fnames=self.need_commit_before_edits)
|
self.repo.commit(fnames=self.need_commit_before_edits, coder=self)
|
||||||
|
|
||||||
# files changed, move cur messages back behind the files messages
|
# files changed, move cur messages back behind the files messages
|
||||||
# self.move_back_cur_messages(self.gpt_prompts.files_content_local_edits)
|
# self.move_back_cur_messages(self.gpt_prompts.files_content_local_edits)
|
||||||
|
|
|
@ -15,7 +15,9 @@ You always COMPLETELY IMPLEMENT the needed code!
|
||||||
"""
|
"""
|
||||||
|
|
||||||
overeager_prompt = """Pay careful attention to the scope of the user's request.
|
overeager_prompt = """Pay careful attention to the scope of the user's request.
|
||||||
Do what they ask, but no more."""
|
Do what they ask, but no more.
|
||||||
|
Do not improve, comment, fix or modify unrelated parts of the code in any way!
|
||||||
|
"""
|
||||||
|
|
||||||
example_messages = []
|
example_messages = []
|
||||||
|
|
||||||
|
|
|
@ -412,7 +412,16 @@ def strip_filename(filename, fence):
|
||||||
return
|
return
|
||||||
|
|
||||||
start_fence = fence[0]
|
start_fence = fence[0]
|
||||||
if filename.startswith(start_fence) or filename.startswith(triple_backticks):
|
if filename.startswith(start_fence):
|
||||||
|
candidate = filename[len(start_fence) :]
|
||||||
|
if candidate and ("." in candidate or "/" in candidate):
|
||||||
|
return candidate
|
||||||
|
return
|
||||||
|
|
||||||
|
if filename.startswith(triple_backticks):
|
||||||
|
candidate = filename[len(triple_backticks) :]
|
||||||
|
if candidate and ("." in candidate or "/" in candidate):
|
||||||
|
return candidate
|
||||||
return
|
return
|
||||||
|
|
||||||
filename = filename.rstrip(":")
|
filename = filename.rstrip(":")
|
||||||
|
@ -456,8 +465,12 @@ def find_original_update_blocks(content, fence=DEFAULT_FENCE, valid_fnames=None)
|
||||||
]
|
]
|
||||||
|
|
||||||
# Check if the next line or the one after that is an editblock
|
# Check if the next line or the one after that is an editblock
|
||||||
next_is_editblock = (i + 1 < len(lines) and head_pattern.match(lines[i + 1].strip())
|
next_is_editblock = (
|
||||||
or i + 2 < len(lines) and head_pattern.match(lines[i + 2].strip()))
|
i + 1 < len(lines)
|
||||||
|
and head_pattern.match(lines[i + 1].strip())
|
||||||
|
or i + 2 < len(lines)
|
||||||
|
and head_pattern.match(lines[i + 2].strip())
|
||||||
|
)
|
||||||
|
|
||||||
if any(line.strip().startswith(start) for start in shell_starts) and not next_is_editblock:
|
if any(line.strip().startswith(start) for start in shell_starts) and not next_is_editblock:
|
||||||
shell_content = []
|
shell_content = []
|
||||||
|
|
|
@ -5,5 +5,6 @@ from .editblock_fenced_prompts import EditBlockFencedPrompts
|
||||||
|
|
||||||
class EditBlockFencedCoder(EditBlockCoder):
|
class EditBlockFencedCoder(EditBlockCoder):
|
||||||
"""A coder that uses fenced search/replace blocks for code modifications."""
|
"""A coder that uses fenced search/replace blocks for code modifications."""
|
||||||
|
|
||||||
edit_format = "diff-fenced"
|
edit_format = "diff-fenced"
|
||||||
gpt_prompts = EditBlockFencedPrompts()
|
gpt_prompts = EditBlockFencedPrompts()
|
||||||
|
|
|
@ -137,7 +137,7 @@ To rename files which have been added to the chat, use shell commands at the end
|
||||||
If the user just says something like "ok" or "go ahead" or "do that" they probably want you to make SEARCH/REPLACE blocks for the code changes you just proposed.
|
If the user just says something like "ok" or "go ahead" or "do that" they probably want you to make SEARCH/REPLACE blocks for the code changes you just proposed.
|
||||||
The user will say when they've applied your edits. If they haven't explicitly confirmed the edits have been applied, they probably want proper SEARCH/REPLACE blocks.
|
The user will say when they've applied your edits. If they haven't explicitly confirmed the edits have been applied, they probably want proper SEARCH/REPLACE blocks.
|
||||||
|
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
||||||
{shell_cmd_reminder}
|
{shell_cmd_reminder}
|
||||||
"""
|
"""
|
||||||
|
|
|
@ -1,5 +1,6 @@
|
||||||
# flake8: noqa: E501
|
# flake8: noqa: E501
|
||||||
|
|
||||||
|
from . import shell
|
||||||
from .base_prompts import CoderPrompts
|
from .base_prompts import CoderPrompts
|
||||||
|
|
||||||
|
|
||||||
|
@ -7,7 +8,7 @@ class EditBlockPrompts(CoderPrompts):
|
||||||
main_system = """Act as an expert software developer.
|
main_system = """Act as an expert software developer.
|
||||||
Always use best practices when coding.
|
Always use best practices when coding.
|
||||||
Respect and use existing conventions, libraries, etc that are already present in the code base.
|
Respect and use existing conventions, libraries, etc that are already present in the code base.
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
Take requests for changes to the supplied code.
|
Take requests for changes to the supplied code.
|
||||||
If the request is ambiguous, ask questions.
|
If the request is ambiguous, ask questions.
|
||||||
|
|
||||||
|
@ -28,32 +29,6 @@ You can keep asking if you then decide you need to edit more files.
|
||||||
All changes to files must use this *SEARCH/REPLACE block* format.
|
All changes to files must use this *SEARCH/REPLACE block* format.
|
||||||
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
||||||
{shell_cmd_prompt}
|
{shell_cmd_prompt}
|
||||||
"""
|
|
||||||
|
|
||||||
shell_cmd_prompt = """
|
|
||||||
4. *Concisely* suggest any shell commands the user might want to run in ```bash blocks.
|
|
||||||
|
|
||||||
Just suggest shell commands this way, not example code.
|
|
||||||
Only suggest complete shell commands that are ready to execute, without placeholders.
|
|
||||||
Only suggest at most a few shell commands at a time, not more than 1-3, one per line.
|
|
||||||
Do not suggest multi-line shell commands.
|
|
||||||
All shell commands will run from the root directory of the user's project.
|
|
||||||
|
|
||||||
Use the appropriate shell based on the user's system info:
|
|
||||||
{platform}
|
|
||||||
Examples of when to suggest shell commands:
|
|
||||||
|
|
||||||
- If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
|
|
||||||
- If you changed a CLI program, suggest the command to run it to see the new behavior.
|
|
||||||
- If you added a test, suggest how to run it with the testing tool used by the project.
|
|
||||||
- Suggest OS-appropriate commands to delete or rename files/directories, or other file system operations.
|
|
||||||
- If your code changes add new dependencies, suggest the command to install them.
|
|
||||||
- Etc.
|
|
||||||
"""
|
|
||||||
|
|
||||||
no_shell_cmd_prompt = """
|
|
||||||
Keep in mind these details about the user's platform and environment:
|
|
||||||
{platform}
|
|
||||||
"""
|
"""
|
||||||
example_messages = [
|
example_messages = [
|
||||||
dict(
|
dict(
|
||||||
|
@ -181,7 +156,7 @@ If you want to put code in a new file, use a *SEARCH/REPLACE block* with:
|
||||||
- An empty `SEARCH` section
|
- An empty `SEARCH` section
|
||||||
- The new file's contents in the `REPLACE` section
|
- The new file's contents in the `REPLACE` section
|
||||||
|
|
||||||
{rename_with_shell}{go_ahead_tip}{lazy_prompt}ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
{rename_with_shell}{go_ahead_tip}{final_reminders}ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
||||||
{shell_cmd_reminder}
|
{shell_cmd_reminder}
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -194,14 +169,6 @@ The user will say when they've applied your edits. If they haven't explicitly co
|
||||||
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
shell_cmd_reminder = """
|
shell_cmd_prompt = shell.shell_cmd_prompt
|
||||||
Examples of when to suggest shell commands:
|
no_shell_cmd_prompt = shell.no_shell_cmd_prompt
|
||||||
|
shell_cmd_reminder = shell.shell_cmd_reminder
|
||||||
- If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
|
|
||||||
- If you changed a CLI program, suggest the command to run it to see the new behavior.
|
|
||||||
- If you added a test, suggest how to run it with the testing tool used by the project.
|
|
||||||
- Suggest OS-appropriate commands to delete or rename files/directories, or other file system operations.
|
|
||||||
- If your code changes add new dependencies, suggest the command to install them.
|
|
||||||
- Etc.
|
|
||||||
|
|
||||||
"""
|
|
||||||
|
|
|
@ -5,7 +5,7 @@ from .editblock_prompts import EditBlockPrompts
|
||||||
|
|
||||||
class EditorEditBlockPrompts(EditBlockPrompts):
|
class EditorEditBlockPrompts(EditBlockPrompts):
|
||||||
main_system = """Act as an expert software developer who edits source code.
|
main_system = """Act as an expert software developer who edits source code.
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
Describe each change with a *SEARCH/REPLACE block* per the examples below.
|
Describe each change with a *SEARCH/REPLACE block* per the examples below.
|
||||||
All changes to files must use this *SEARCH/REPLACE block* format.
|
All changes to files must use this *SEARCH/REPLACE block* format.
|
||||||
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
|
||||||
|
|
|
@ -5,6 +5,6 @@ from .wholefile_prompts import WholeFilePrompts
|
||||||
|
|
||||||
class EditorWholeFilePrompts(WholeFilePrompts):
|
class EditorWholeFilePrompts(WholeFilePrompts):
|
||||||
main_system = """Act as an expert software developer and make changes to source code.
|
main_system = """Act as an expert software developer and make changes to source code.
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
Output a copy of each file that needs changes.
|
Output a copy of each file that needs changes.
|
||||||
"""
|
"""
|
||||||
|
|
|
@ -5,6 +5,7 @@ from .help_prompts import HelpPrompts
|
||||||
|
|
||||||
class HelpCoder(Coder):
|
class HelpCoder(Coder):
|
||||||
"""Interactive help and documentation about aider."""
|
"""Interactive help and documentation about aider."""
|
||||||
|
|
||||||
edit_format = "help"
|
edit_format = "help"
|
||||||
gpt_prompts = HelpPrompts()
|
gpt_prompts = HelpPrompts()
|
||||||
|
|
||||||
|
|
|
@ -11,7 +11,7 @@ class PatchPrompts(EditBlockPrompts):
|
||||||
main_system = """Act as an expert software developer.
|
main_system = """Act as an expert software developer.
|
||||||
Always use best practices when coding.
|
Always use best practices when coding.
|
||||||
Respect and use existing conventions, libraries, etc that are already present in the code base.
|
Respect and use existing conventions, libraries, etc that are already present in the code base.
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
Take requests for changes to the supplied code.
|
Take requests for changes to the supplied code.
|
||||||
If the request is ambiguous, ask questions.
|
If the request is ambiguous, ask questions.
|
||||||
|
|
||||||
|
@ -156,6 +156,6 @@ For `Add` actions, use the `*** Add File: [path/to/new/file]` marker, followed b
|
||||||
|
|
||||||
For `Delete` actions, use the `*** Delete File: [path/to/file]` marker. No other lines are needed for the deletion.
|
For `Delete` actions, use the `*** Delete File: [path/to/file]` marker. No other lines are needed for the deletion.
|
||||||
|
|
||||||
{rename_with_shell}{go_ahead_tip}{lazy_prompt}ONLY EVER RETURN CODE IN THE SPECIFIED V4A DIFF FORMAT!
|
{rename_with_shell}{go_ahead_tip}{final_reminders}ONLY EVER RETURN CODE IN THE SPECIFIED V4A DIFF FORMAT!
|
||||||
{shell_cmd_reminder}
|
{shell_cmd_reminder}
|
||||||
"""
|
"""
|
||||||
|
|
37
aider/coders/shell.py
Normal file
37
aider/coders/shell.py
Normal file
|
@ -0,0 +1,37 @@
|
||||||
|
shell_cmd_prompt = """
|
||||||
|
4. *Concisely* suggest any shell commands the user might want to run in ```bash blocks.
|
||||||
|
|
||||||
|
Just suggest shell commands this way, not example code.
|
||||||
|
Only suggest complete shell commands that are ready to execute, without placeholders.
|
||||||
|
Only suggest at most a few shell commands at a time, not more than 1-3, one per line.
|
||||||
|
Do not suggest multi-line shell commands.
|
||||||
|
All shell commands will run from the root directory of the user's project.
|
||||||
|
|
||||||
|
Use the appropriate shell based on the user's system info:
|
||||||
|
{platform}
|
||||||
|
Examples of when to suggest shell commands:
|
||||||
|
|
||||||
|
- If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
|
||||||
|
- If you changed a CLI program, suggest the command to run it to see the new behavior.
|
||||||
|
- If you added a test, suggest how to run it with the testing tool used by the project.
|
||||||
|
- Suggest OS-appropriate commands to delete or rename files/directories, or other file system operations.
|
||||||
|
- If your code changes add new dependencies, suggest the command to install them.
|
||||||
|
- Etc.
|
||||||
|
""" # noqa
|
||||||
|
|
||||||
|
no_shell_cmd_prompt = """
|
||||||
|
Keep in mind these details about the user's platform and environment:
|
||||||
|
{platform}
|
||||||
|
""" # noqa
|
||||||
|
|
||||||
|
shell_cmd_reminder = """
|
||||||
|
Examples of when to suggest shell commands:
|
||||||
|
|
||||||
|
- If you changed a self-contained html file, suggest an OS-appropriate command to open a browser to view it to see the updated content.
|
||||||
|
- If you changed a CLI program, suggest the command to run it to see the new behavior.
|
||||||
|
- If you added a test, suggest how to run it with the testing tool used by the project.
|
||||||
|
- Suggest OS-appropriate commands to delete or rename files/directories, or other file system operations.
|
||||||
|
- If your code changes add new dependencies, suggest the command to install them.
|
||||||
|
- Etc.
|
||||||
|
|
||||||
|
""" # noqa
|
|
@ -348,8 +348,8 @@ def process_fenced_block(lines, start_line_num):
|
||||||
a_fname = block[0][4:].strip()
|
a_fname = block[0][4:].strip()
|
||||||
b_fname = block[1][4:].strip()
|
b_fname = block[1][4:].strip()
|
||||||
|
|
||||||
# Check if standard git diff prefixes are present and strip them
|
# Check if standard git diff prefixes are present (or /dev/null) and strip them
|
||||||
if a_fname.startswith("a/") and b_fname.startswith("b/"):
|
if (a_fname.startswith("a/") or a_fname == "/dev/null") and b_fname.startswith("b/"):
|
||||||
fname = b_fname[2:]
|
fname = b_fname[2:]
|
||||||
else:
|
else:
|
||||||
# Otherwise, assume the path is as intended
|
# Otherwise, assume the path is as intended
|
||||||
|
|
|
@ -1,11 +1,12 @@
|
||||||
# flake8: noqa: E501
|
# flake8: noqa: E501
|
||||||
|
|
||||||
|
from . import shell
|
||||||
from .base_prompts import CoderPrompts
|
from .base_prompts import CoderPrompts
|
||||||
|
|
||||||
|
|
||||||
class UnifiedDiffPrompts(CoderPrompts):
|
class UnifiedDiffPrompts(CoderPrompts):
|
||||||
main_system = """Act as an expert software developer.
|
main_system = """Act as an expert software developer.
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
Always use best practices when coding.
|
Always use best practices when coding.
|
||||||
Respect and use existing conventions, libraries, etc that are already present in the code base.
|
Respect and use existing conventions, libraries, etc that are already present in the code base.
|
||||||
|
|
||||||
|
@ -106,5 +107,9 @@ To move code within a file, use 2 hunks: 1 to delete it from its current locatio
|
||||||
|
|
||||||
To make a new file, show a diff from `--- /dev/null` to `+++ path/to/new/file.ext`.
|
To make a new file, show a diff from `--- /dev/null` to `+++ path/to/new/file.ext`.
|
||||||
|
|
||||||
{lazy_prompt}
|
{final_reminders}
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
shell_cmd_prompt = shell.shell_cmd_prompt
|
||||||
|
no_shell_cmd_prompt = shell.no_shell_cmd_prompt
|
||||||
|
shell_cmd_reminder = shell.shell_cmd_reminder
|
||||||
|
|
14
aider/coders/udiff_simple.py
Normal file
14
aider/coders/udiff_simple.py
Normal file
|
@ -0,0 +1,14 @@
|
||||||
|
from .udiff_coder import UnifiedDiffCoder
|
||||||
|
from .udiff_simple_prompts import UnifiedDiffSimplePrompts
|
||||||
|
|
||||||
|
|
||||||
|
class UnifiedDiffSimpleCoder(UnifiedDiffCoder):
|
||||||
|
"""
|
||||||
|
A coder that uses unified diff format for code modifications.
|
||||||
|
This variant uses a simpler prompt that doesn't mention specific
|
||||||
|
diff rules like using `@@ ... @@` lines or avoiding line numbers.
|
||||||
|
"""
|
||||||
|
|
||||||
|
edit_format = "udiff-simple"
|
||||||
|
|
||||||
|
gpt_prompts = UnifiedDiffSimplePrompts()
|
aider/coders/udiff_simple_prompts.py (new file, 25 lines)
@@ -0,0 +1,25 @@
+from .udiff_prompts import UnifiedDiffPrompts
+
+
+class UnifiedDiffSimplePrompts(UnifiedDiffPrompts):
+    """
+    Prompts for the UnifiedDiffSimpleCoder.
+    Inherits from UnifiedDiffPrompts and can override specific prompts
+    if a simpler wording is desired for this edit format.
+    """
+
+    example_messages = []
+
+    system_reminder = """# File editing rules:
+
+Return edits similar to unified diffs that `diff -U0` would produce.
+
+The user's patch tool needs CORRECT patches that apply cleanly against the current contents of the file!
+Think carefully and make sure you include and mark all lines that need to be removed or changed as `-` lines.
+Make sure you mark all new or modified lines with `+`.
+Don't leave out any lines or the diff patch won't apply correctly.
+
+To make a new file, show a diff from `--- /dev/null` to `+++ path/to/new/file.ext`.
+
+{final_reminders}
+"""  # noqa
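For illustration, the editing rules in this prompt can be exercised with a tiny, hypothetical patch applier (this is a sketch, not aider's actual patch tool): every `-` line must match the current file contents exactly, and the `+` lines replace them in place.

```python
# Minimal sketch of applying a `diff -U0`-style hunk. `apply_hunk` is an
# illustrative helper, not part of aider; it shows why the prompt insists
# that `-` lines match the file exactly.

def apply_hunk(lines, hunk):
    """Replace the first occurrence of the hunk's `-` lines with its `+` lines."""
    old = [h[1:] for h in hunk if h.startswith("-")]
    new = [h[1:] for h in hunk if h.startswith("+")]
    for i in range(len(lines) - len(old) + 1):
        if lines[i : i + len(old)] == old:
            return lines[:i] + new + lines[i + len(old) :]
    raise ValueError("hunk does not apply cleanly")

source = ["def greet():", "    print('hi')"]
patched = apply_hunk(source, ["-    print('hi')", "+    print('hello')"])
```

If the `-` line had been paraphrased instead of copied verbatim, the match would fail and the hunk would be rejected, which is exactly the failure mode the prompt warns the model about.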
@@ -10,7 +10,7 @@ If the request is ambiguous, ask questions.

 Always reply to the user in {language}.

-{lazy_prompt}
+{final_reminders}
 Once you understand the request you MUST:
 1. Determine if any code changes are needed.
 2. Explain any needed changes.

@@ -61,7 +61,7 @@ To suggest changes to a file you MUST return a *file listing* that contains the
 *NEVER* skip, omit or elide content from a *file listing* using "..." or by adding comments like "... rest of code..."!
 Create a new file you MUST return a *file listing* which includes an appropriate filename, including any appropriate path.

-{lazy_prompt}
+{final_reminders}
 """

 redacted_edit_message = "No changes are needed."
@@ -47,6 +47,7 @@ class Commands:
             parser=self.parser,
             verbose=self.verbose,
             editor=self.editor,
+            original_read_only_fnames=self.original_read_only_fnames,
         )

     def __init__(

@@ -220,12 +221,18 @@ class Commands:

         self.io.tool_output(f"Scraping {url}...")
         if not self.scraper:
-            res = install_playwright(self.io)
-            if not res:
-                self.io.tool_warning("Unable to initialize playwright.")
+            disable_playwright = getattr(self.args, "disable_playwright", False)
+            if disable_playwright:
+                res = False
+            else:
+                res = install_playwright(self.io)
+                if not res:
+                    self.io.tool_warning("Unable to initialize playwright.")

             self.scraper = Scraper(
-                print_error=self.io.tool_error, playwright_available=res, verify_ssl=self.verify_ssl
+                print_error=self.io.tool_error,
+                playwright_available=res,
+                verify_ssl=self.verify_ssl,
             )

         content = self.scraper.scrape(url) or ""

@@ -339,7 +346,7 @@ class Commands:
             return

         commit_message = args.strip() if args else None
-        self.coder.repo.commit(message=commit_message)
+        self.coder.repo.commit(message=commit_message, coder=self.coder)

     def cmd_lint(self, args="", fnames=None):
         "Lint and fix in-chat files or all dirty files if none in chat"

@@ -1385,7 +1392,30 @@ class Commands:
         "Print out the current settings"
         settings = format_settings(self.parser, self.args)
         announcements = "\n".join(self.coder.get_announcements())
+
+        # Build metadata for the active models (main, editor, weak)
+        model_sections = []
+        active_models = [
+            ("Main model", self.coder.main_model),
+            ("Editor model", getattr(self.coder.main_model, "editor_model", None)),
+            ("Weak model", getattr(self.coder.main_model, "weak_model", None)),
+        ]
+        for label, model in active_models:
+            if not model:
+                continue
+            info = getattr(model, "info", {}) or {}
+            if not info:
+                continue
+            model_sections.append(f"{label} ({model.name}):")
+            for k, v in sorted(info.items()):
+                model_sections.append(f"  {k}: {v}")
+            model_sections.append("")  # blank line between models
+
+        model_metadata = "\n".join(model_sections)
+
         output = f"{announcements}\n{settings}"
+        if model_metadata:
+            output += "\n" + model_metadata
         self.io.tool_output(output)

     def completions_raw_load(self, document, complete_event):
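The `/settings` hunk above turns each model's info dict into a labeled, indented section. The same formatting step, pulled out as a standalone sketch (the helper name and sample data are illustrative, not aider's API):

```python
# Standalone sketch of the metadata formatting used by the `/settings`
# change: each labeled model's info dict becomes an indented
# "key: value" section, with models lacking metadata skipped.

def format_model_sections(labeled_infos):
    sections = []
    for label, name, info in labeled_infos:
        if not info:
            continue  # skip models with no metadata, as the diff does
        sections.append(f"{label} ({name}):")
        for k, v in sorted(info.items()):
            sections.append(f"  {k}: {v}")
        sections.append("")  # blank line between models
    return "\n".join(sections)

report = format_model_sections(
    [("Main model", "gpt-4o", {"max_tokens": 4096}), ("Weak model", "gpt-4o-mini", {})]
)
```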
@@ -11,7 +11,7 @@ from aider.coders import Coder
 from aider.dump import dump  # noqa: F401
 from aider.io import InputOutput
 from aider.main import main as cli_main
-from aider.scrape import Scraper
+from aider.scrape import Scraper, has_playwright


 class CaptureIO(InputOutput):

@@ -484,7 +484,7 @@ class GUI:
         url = self.web_content

         if not self.state.scraper:
-            self.scraper = Scraper(print_error=self.info)
+            self.scraper = Scraper(print_error=self.info, playwright_available=has_playwright())

         content = self.scraper.scrape(url) or ""
         if content.strip():
aider/io.py (11 changed lines)

@@ -595,7 +595,7 @@ class InputOutput:
         current_text = buffer.text

         # Open the editor with the current text
-        edited_text = pipe_editor(input_data=current_text)
+        edited_text = pipe_editor(input_data=current_text, suffix="md")

         # Replace the buffer with the edited text, strip any trailing newlines
         buffer.text = edited_text.rstrip("\n")

@@ -1144,18 +1144,19 @@ class InputOutput:
             ro_paths = []
             for rel_path in read_only_files:
                 abs_path = os.path.abspath(os.path.join(self.root, rel_path))
-                ro_paths.append(abs_path if len(abs_path) < len(rel_path) else rel_path)
+                ro_paths.append(Text(abs_path if len(abs_path) < len(rel_path) else rel_path))

-            files_with_label = ["Readonly:"] + ro_paths
+            files_with_label = [Text("Readonly:")] + ro_paths
             read_only_output = StringIO()
             Console(file=read_only_output, force_terminal=False).print(Columns(files_with_label))
             read_only_lines = read_only_output.getvalue().splitlines()
             console.print(Columns(files_with_label))

         if editable_files:
-            files_with_label = editable_files
+            text_editable_files = [Text(f) for f in editable_files]
+            files_with_label = text_editable_files
             if read_only_files:
-                files_with_label = ["Editable:"] + editable_files
+                files_with_label = [Text("Editable:")] + text_editable_files
             editable_output = StringIO()
             Console(file=editable_output, force_terminal=False).print(Columns(files_with_label))
             editable_lines = editable_output.getvalue().splitlines()
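A side detail worth noting in the io.py hunk: the read-only paths are displayed as the absolute path only when it is shorter than the relative form. That rule is easy to isolate (the helper name here is hypothetical, for illustration only):

```python
# Sketch of the display rule from the hunk above: prefer the absolute
# path only when normalization makes it shorter than the relative path.

import os

def display_path(root, rel_path):
    abs_path = os.path.abspath(os.path.join(root, rel_path))
    return abs_path if len(abs_path) < len(rel_path) else rel_path

# A deeply "../"-relative path normalizes to a much shorter absolute one
shown = display_path("/repo", "../../../../repo/src/app.py")
```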
@@ -4,10 +4,10 @@ import subprocess
 import sys
 import traceback
 import warnings
-import shlex
 from dataclasses import dataclass
 from pathlib import Path

+import oslex
 from grep_ast import TreeContext, filename_to_lang
 from grep_ast.tsl import get_parser  # noqa: E402

@@ -45,7 +45,7 @@ class Linter:
         return fname

     def run_cmd(self, cmd, rel_fname, code):
-        cmd += " " + shlex.quote(rel_fname)
+        cmd += " " + oslex.quote(rel_fname)

         returncode = 0
         stdout = ""
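The linter change swaps `shlex.quote` (POSIX-only) for `oslex.quote`, a third-party helper that quotes for the current platform's shell. A rough stdlib-only approximation of the same idea, for illustration (oslex handles more Windows edge cases than this):

```python
# Stdlib sketch of platform-aware shell quoting, approximating what the
# diff gains by adopting oslex: shlex.quote on POSIX, cmd.exe-style
# joining on Windows.

import shlex
import subprocess
import sys

def quote_for_shell(arg):
    if sys.platform == "win32":
        # cmd.exe-style quoting; oslex covers more edge cases than this
        return subprocess.list2cmdline([arg])
    return shlex.quote(arg)

quoted = quote_for_shell("my file.py")  # path with a space needs quoting
```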
@@ -14,6 +14,7 @@ except ImportError:
     git = None

 import importlib_resources
+import shtab
 from dotenv import load_dotenv
 from prompt_toolkit.enums import EditingMode

@@ -502,6 +503,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     # Parse again to include any arguments that might have been defined in .env
     args = parser.parse_args(argv)

+    if args.shell_completions:
+        # Ensure parser.prog is set for shtab, though it should be by default
+        parser.prog = "aider"
+        print(shtab.complete(parser, shell=args.shell_completions))
+        sys.exit(0)
+
     if git is None:
         args.git = False

@@ -904,6 +911,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
             commit_prompt=args.commit_prompt,
             subtree_only=args.subtree_only,
             git_commit_verify=args.git_commit_verify,
+            attribute_co_authored_by=args.attribute_co_authored_by,  # Pass the arg
         )
     except FileNotFoundError:
         pass
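The shell-completions hunk is a print-and-exit early path: if the flag is set, emit the completion script and never start the chat. The control flow can be sketched without shtab installed (the stub below stands in for `shtab.complete`, which is what the real code calls):

```python
# Sketch of the `--shell-completions` early-exit flow. The stub return
# value stands in for shtab's generated script so the flow is runnable
# without the third-party dependency.

import argparse

parser = argparse.ArgumentParser(prog="aider")
parser.add_argument("--shell-completions", metavar="SHELL", default=None)

def handle_completions(argv):
    args, _ = parser.parse_known_args(argv)
    if args.shell_completions:
        # Real code: print(shtab.complete(parser, shell=args.shell_completions))
        # and then sys.exit(0)
        return f"<completion script for {args.shell_completions}>"
    return None

script = handle_completions(["--shell-completions", "bash"])
```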
@@ -115,9 +115,9 @@ class MarkdownStream:
         else:
             self.mdargs = dict()

-        # Initialize rich Live display with empty text
-        self.live = Live(Text(""), refresh_per_second=1.0 / self.min_delay)
-        self.live.start()
+        # Defer Live creation until the first update.
+        self.live = None
+        self._live_started = False

     def _render_markdown_to_lines(self, text):
         """Render markdown text to a list of lines.

@@ -163,6 +163,12 @@ class MarkdownStream:
         Markdown going to the console works better in terminal scrollback buffers.
         The live window doesn't play nice with terminal scrollback.
         """
+        # On the first call, stop the spinner and start the Live renderer
+        if not getattr(self, "_live_started", False):
+            self.live = Live(Text(""), refresh_per_second=1.0 / self.min_delay)
+            self.live.start()
+            self._live_started = True
+
         now = time.time()
         # Throttle updates to maintain smooth rendering
         if not final and now - self.when < self.min_delay:
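The mdstream change is a lazy-start pattern: the rich `Live` display used to be created in `__init__`, and now it is created on the first real update. The pattern in miniature, with a stand-in for the expensive resource (class and attribute names here mirror the diff but the code is illustrative):

```python
# Lazy-start pattern from the MarkdownStream change: defer creating an
# expensive renderer until the first update, and create it only once.

class LazyRenderer:
    def __init__(self):
        self.live = None
        self._live_started = False
        self.starts = 0  # instrumentation for this sketch only

    def update(self, text):
        # Start the (expensive) renderer only on the first real update
        if not self._live_started:
            self.live = object()  # stand-in for Live(...); .start() would go here
            self._live_started = True
            self.starts += 1
        return text

r = LazyRenderer()
r.update("a")
r.update("b")
```

The benefit is that a stream which never produces output (or errors out first) never pays the cost of starting the live display.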
aider/models.py (176 changed lines)

@@ -8,6 +8,7 @@ import platform
 import sys
 import time
 from dataclasses import dataclass, fields
+from datetime import datetime
 from pathlib import Path
 from typing import Optional, Union

@@ -17,6 +18,7 @@ from PIL import Image

 from aider.dump import dump  # noqa: F401
 from aider.llm import litellm
+from aider.openrouter import OpenRouterModelManager
 from aider.sendchat import ensure_alternating_roles, sanity_check_messages
 from aider.utils import check_pip_install_extra

@@ -69,6 +71,8 @@ claude-3-opus-20240229
 claude-3-sonnet-20240229
 claude-3-5-sonnet-20240620
 claude-3-5-sonnet-20241022
+claude-sonnet-4-20250514
+claude-opus-4-20250514
 """

 ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.strip()]

@@ -76,9 +80,9 @@ ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.str
 # Mapping of model aliases to their canonical names
 MODEL_ALIASES = {
     # Claude models
-    "sonnet": "anthropic/claude-3-7-sonnet-20250219",
+    "sonnet": "anthropic/claude-sonnet-4-20250514",
     "haiku": "claude-3-5-haiku-20241022",
-    "opus": "claude-3-opus-20240229",
+    "opus": "claude-opus-4-20250514",
     # GPT models
     "4": "gpt-4-0613",
     "4o": "gpt-4o",

@@ -88,11 +92,11 @@ MODEL_ALIASES = {
     "3": "gpt-3.5-turbo",
     # Other models
     "deepseek": "deepseek/deepseek-chat",
-    "flash": "gemini/gemini-2.0-flash-exp",
+    "flash": "gemini/gemini-2.5-flash-preview-04-17",
     "quasar": "openrouter/openrouter/quasar-alpha",
     "r1": "deepseek/deepseek-reasoner",
-    "gemini-2.5-pro": "gemini/gemini-2.5-pro-exp-03-25",
+    "gemini-2.5-pro": "gemini/gemini-2.5-pro-preview-05-06",
-    "gemini": "gemini/gemini-2.5-pro-preview-03-25",
+    "gemini": "gemini/gemini-2.5-pro-preview-05-06",
     "gemini-exp": "gemini/gemini-2.5-pro-exp-03-25",
     "grok3": "xai/grok-3-beta",
     "optimus": "openrouter/openrouter/optimus-alpha",
@@ -149,8 +153,13 @@ class ModelInfoManager:
         self.verify_ssl = True
         self._cache_loaded = False

+        # Manager for the cached OpenRouter model database
+        self.openrouter_manager = OpenRouterModelManager()
+
     def set_verify_ssl(self, verify_ssl):
         self.verify_ssl = verify_ssl
+        if hasattr(self, "openrouter_manager"):
+            self.openrouter_manager.set_verify_ssl(verify_ssl)

     def _load_cache(self):
         if self._cache_loaded:
@@ -231,8 +240,68 @@ class ModelInfoManager:
         if litellm_info:
             return litellm_info

+        if not cached_info and model.startswith("openrouter/"):
+            # First try using the locally cached OpenRouter model database
+            openrouter_info = self.openrouter_manager.get_model_info(model)
+            if openrouter_info:
+                return openrouter_info
+
+            # Fallback to legacy web-scraping if the API cache does not contain the model
+            openrouter_info = self.fetch_openrouter_model_info(model)
+            if openrouter_info:
+                return openrouter_info
+
         return cached_info

+    def fetch_openrouter_model_info(self, model):
+        """
+        Fetch model info by scraping the openrouter model page.
+        Expected URL: https://openrouter.ai/<model_route>
+        Example: openrouter/qwen/qwen-2.5-72b-instruct:free
+        Returns a dict with keys: max_tokens, max_input_tokens, max_output_tokens,
+        input_cost_per_token, output_cost_per_token.
+        """
+        url_part = model[len("openrouter/") :]
+        url = "https://openrouter.ai/" + url_part
+        try:
+            import requests
+
+            response = requests.get(url, timeout=5, verify=self.verify_ssl)
+            if response.status_code != 200:
+                return {}
+            html = response.text
+            import re
+
+            if re.search(
+                rf"The model\s*.*{re.escape(url_part)}.* is not available", html, re.IGNORECASE
+            ):
+                print(f"\033[91mError: Model '{url_part}' is not available\033[0m")
+                return {}
+            text = re.sub(r"<[^>]+>", " ", html)
+            context_match = re.search(r"([\d,]+)\s*context", text)
+            if context_match:
+                context_str = context_match.group(1).replace(",", "")
+                context_size = int(context_str)
+            else:
+                context_size = None
+            input_cost_match = re.search(r"\$\s*([\d.]+)\s*/M input tokens", text, re.IGNORECASE)
+            output_cost_match = re.search(r"\$\s*([\d.]+)\s*/M output tokens", text, re.IGNORECASE)
+            input_cost = float(input_cost_match.group(1)) / 1000000 if input_cost_match else None
+            output_cost = float(output_cost_match.group(1)) / 1000000 if output_cost_match else None
+            if context_size is None or input_cost is None or output_cost is None:
+                return {}
+            params = {
+                "max_input_tokens": context_size,
+                "max_tokens": context_size,
+                "max_output_tokens": context_size,
+                "input_cost_per_token": input_cost,
+                "output_cost_per_token": output_cost,
+            }
+            return params
+        except Exception as e:
+            print("Error fetching openrouter info:", str(e))
+            return {}
+

 model_info_manager = ModelInfoManager()
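The fallback scraper above recovers three numbers from page text with regexes: the context size ("131,072 context") and the per-million-token prices ("$0.90/M input tokens"), which it divides down to per-token costs. The parsing step on its own, with made-up sample text:

```python
# Self-contained sketch of the regex parsing in fetch_openrouter_model_info
# above. The sample text is fabricated for illustration.

import re

text = "qwen-2.5 72b 131,072 context $0.90/M input tokens $1.20/M output tokens"

context_match = re.search(r"([\d,]+)\s*context", text)
context_size = int(context_match.group(1).replace(",", "")) if context_match else None

inp = re.search(r"\$\s*([\d.]+)\s*/M input tokens", text, re.IGNORECASE)
out = re.search(r"\$\s*([\d.]+)\s*/M output tokens", text, re.IGNORECASE)
input_cost = float(inp.group(1)) / 1_000_000 if inp else None
output_cost = float(out.group(1)) / 1_000_000 if out else None
```

If any of the three values fails to parse, the real method returns `{}` so the caller falls back to other metadata sources rather than trusting a partial result.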
@@ -332,6 +401,15 @@ class Model(ModelSettings):
             # For non-dict values, simply update
             self.extra_params[key] = value

+        # Ensure OpenRouter models accept thinking_tokens and reasoning_effort
+        if self.name.startswith("openrouter/"):
+            if self.accepts_settings is None:
+                self.accepts_settings = []
+            if "thinking_tokens" not in self.accepts_settings:
+                self.accepts_settings.append("thinking_tokens")
+            if "reasoning_effort" not in self.accepts_settings:
+                self.accepts_settings.append("reasoning_effort")
+
     def apply_generic_model_settings(self, model):
         if "/o3-mini" in model:
             self.edit_format = "diff"

@@ -460,6 +538,14 @@ class Model(ModelSettings):
             self.extra_params = dict(top_p=0.95)
             return  # <--

+        if "qwen3" in model and "235b" in model:
+            self.edit_format = "diff"
+            self.use_repo_map = True
+            self.system_prompt_prefix = "/no_think"
+            self.use_temperature = 0.7
+            self.extra_params = {"top_p": 0.8, "top_k": 20, "min_p": 0.0}
+            return  # <--
+
         # use the defaults
         if self.edit_format == "diff":
             self.use_repo_map = True
@@ -659,11 +745,18 @@ class Model(ModelSettings):
     def set_reasoning_effort(self, effort):
         """Set the reasoning effort parameter for models that support it"""
         if effort is not None:
-            if not self.extra_params:
-                self.extra_params = {}
-            if "extra_body" not in self.extra_params:
-                self.extra_params["extra_body"] = {}
-            self.extra_params["extra_body"]["reasoning_effort"] = effort
+            if self.name.startswith("openrouter/"):
+                if not self.extra_params:
+                    self.extra_params = {}
+                if "extra_body" not in self.extra_params:
+                    self.extra_params["extra_body"] = {}
+                self.extra_params["extra_body"]["reasoning"] = {"effort": effort}
+            else:
+                if not self.extra_params:
+                    self.extra_params = {}
+                if "extra_body" not in self.extra_params:
+                    self.extra_params["extra_body"] = {}
+                self.extra_params["extra_body"]["reasoning_effort"] = effort

     def parse_token_value(self, value):
         """

@@ -709,7 +802,9 @@ class Model(ModelSettings):

         # OpenRouter models use 'reasoning' instead of 'thinking'
         if self.name.startswith("openrouter/"):
-            self.extra_params["reasoning"] = {"max_tokens": num_tokens}
+            if "extra_body" not in self.extra_params:
+                self.extra_params["extra_body"] = {}
+            self.extra_params["extra_body"]["reasoning"] = {"max_tokens": num_tokens}
         else:
             self.extra_params["thinking"] = {"type": "enabled", "budget_tokens": num_tokens}

@@ -719,8 +814,13 @@ class Model(ModelSettings):

         if self.extra_params:
             # Check for OpenRouter reasoning format
-            if "reasoning" in self.extra_params and "max_tokens" in self.extra_params["reasoning"]:
-                budget = self.extra_params["reasoning"]["max_tokens"]
+            if self.name.startswith("openrouter/"):
+                if (
+                    "extra_body" in self.extra_params
+                    and "reasoning" in self.extra_params["extra_body"]
+                    and "max_tokens" in self.extra_params["extra_body"]["reasoning"]
+                ):
+                    budget = self.extra_params["extra_body"]["reasoning"]["max_tokens"]
             # Check for standard thinking format
             elif (
                 "thinking" in self.extra_params and "budget_tokens" in self.extra_params["thinking"]

@@ -750,12 +850,21 @@ class Model(ModelSettings):

     def get_reasoning_effort(self):
         """Get reasoning effort value if available"""
-        if (
-            self.extra_params
-            and "extra_body" in self.extra_params
-            and "reasoning_effort" in self.extra_params["extra_body"]
-        ):
-            return self.extra_params["extra_body"]["reasoning_effort"]
+        if self.extra_params:
+            # Check for OpenRouter reasoning format
+            if self.name.startswith("openrouter/"):
+                if (
+                    "extra_body" in self.extra_params
+                    and "reasoning" in self.extra_params["extra_body"]
+                    and "effort" in self.extra_params["extra_body"]["reasoning"]
+                ):
+                    return self.extra_params["extra_body"]["reasoning"]["effort"]
+            # Check for standard reasoning_effort format (e.g. in extra_body)
+            elif (
+                "extra_body" in self.extra_params
+                and "reasoning_effort" in self.extra_params["extra_body"]
+            ):
+                return self.extra_params["extra_body"]["reasoning_effort"]
         return None

     def is_deepseek_r1(self):
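The reasoning hunks above all hinge on one distinction: OpenRouter wants a nested `{"reasoning": {"effort": ...}}` (or `{"reasoning": {"max_tokens": ...}}`) inside `extra_body`, while other providers take a flat `extra_body["reasoning_effort"]`. The two shapes, isolated into a pair of hypothetical helpers that mirror `set_reasoning_effort`/`get_reasoning_effort`:

```python
# Sketch of the two parameter shapes distinguished in the hunks above.
# Helper names are illustrative; the dict shapes match the diff.

def build_reasoning_params(model_name, effort):
    extra_body = {}
    if model_name.startswith("openrouter/"):
        extra_body["reasoning"] = {"effort": effort}  # nested OpenRouter form
    else:
        extra_body["reasoning_effort"] = effort  # flat form for other providers
    return {"extra_body": extra_body}

def read_reasoning_effort(model_name, extra_params):
    body = extra_params.get("extra_body", {})
    if model_name.startswith("openrouter/"):
        return body.get("reasoning", {}).get("effort")
    return body.get("reasoning_effort")

p = build_reasoning_params("openrouter/deepseek/deepseek-r1", "high")
```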
@@ -767,6 +876,28 @@ class Model(ModelSettings):
     def is_ollama(self):
         return self.name.startswith("ollama/") or self.name.startswith("ollama_chat/")

+    def github_copilot_token_to_open_ai_key(self):
+        # check to see if there's an openai api key
+        # If so, check to see if it's expire
+        openai_api_key = "OPENAI_API_KEY"
+
+        if openai_api_key not in os.environ or (
+            int(dict(x.split("=") for x in os.environ[openai_api_key].split(";"))["exp"])
+            < int(datetime.now().timestamp())
+        ):
+            import requests
+
+            headers = {
+                "Authorization": f"Bearer {os.environ['GITHUB_COPILOT_TOKEN']}",
+                "Editor-Version": self.extra_params["extra_headers"]["Editor-Version"],
+                "Copilot-Integration-Id": self.extra_params["extra_headers"][
+                    "Copilot-Integration-Id"
+                ],
+                "Content-Type": "application/json",
+            }
+            res = requests.get("https://api.github.com/copilot_internal/v2/token", headers=headers)
+            os.environ[openai_api_key] = res.json()["token"]
+
     def send_completion(self, messages, functions, stream, temperature=None):
         if os.environ.get("AIDER_SANITY_CHECK_TURNS"):
             sanity_check_messages(messages)

@@ -808,6 +939,10 @@ class Model(ModelSettings):
         dump(kwargs)
         kwargs["messages"] = messages

+        # Are we using github copilot?
+        if "GITHUB_COPILOT_TOKEN" in os.environ:
+            self.github_copilot_token_to_open_ai_key()
+
         res = litellm.completion(**kwargs)
         return hash_object, res

@@ -819,6 +954,9 @@ class Model(ModelSettings):
         messages = ensure_alternating_roles(messages)
         retry_delay = 0.125

+        if self.verbose:
+            dump(messages)
+
         while True:
             try:
                 kwargs = {
@@ -55,9 +55,9 @@ def try_to_select_default_model():
     # Check if the user is on a free tier
     is_free_tier = check_openrouter_tier(openrouter_key)
     if is_free_tier:
-        return "openrouter/google/gemini-2.5-pro-exp-03-25:free"
+        return "openrouter/deepseek/deepseek-r1:free"
     else:
-        return "openrouter/anthropic/claude-3.7-sonnet"
+        return "openrouter/anthropic/claude-sonnet-4"

     # Select model based on other available API keys
     model_key_pairs = [
128
aider/openrouter.py
Normal file
128
aider/openrouter.py
Normal file
|
@ -0,0 +1,128 @@
|
||||||
|
"""
|
||||||
|
OpenRouter model metadata caching and lookup.
|
||||||
|
|
||||||
|
This module keeps a local cached copy of the OpenRouter model list
|
||||||
|
(downloaded from ``https://openrouter.ai/api/v1/models``) and exposes a
|
||||||
|
helper class that returns metadata for a given model in a format compatible
|
||||||
|
with litellm’s ``get_model_info``.
|
||||||
|
"""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import json
|
||||||
|
import time
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Dict
|
||||||
|
|
||||||
|
import requests
|
||||||
|
|
||||||
|
|
||||||
|
def _cost_per_token(val: str | None) -> float | None:
|
||||||
|
"""Convert a price string (USD per token) to a float."""
|
||||||
|
if val in (None, "", "0"):
|
||||||
|
return 0.0 if val == "0" else None
|
||||||
|
try:
|
||||||
|
return float(val)
|
||||||
|
except Exception: # noqa: BLE001
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
class OpenRouterModelManager:
|
||||||
|
MODELS_URL = "https://openrouter.ai/api/v1/models"
|
||||||
|
CACHE_TTL = 60 * 60 * 24 # 24 h
|
||||||
|
|
||||||
|
def __init__(self) -> None:
|
||||||
|
        self.cache_dir = Path.home() / ".aider" / "caches"
        self.cache_file = self.cache_dir / "openrouter_models.json"
        self.content: Dict | None = None
        self.verify_ssl: bool = True
        self._cache_loaded = False

    # ------------------------------------------------------------------ #
    # Public API                                                         #
    # ------------------------------------------------------------------ #
    def set_verify_ssl(self, verify_ssl: bool) -> None:
        """Enable/disable SSL verification for API requests."""
        self.verify_ssl = verify_ssl

    def get_model_info(self, model: str) -> Dict:
        """
        Return metadata for *model* or an empty ``dict`` when unknown.

        ``model`` should use the aider naming convention, e.g.
        ``openrouter/nousresearch/deephermes-3-mistral-24b-preview:free``.
        """
        self._ensure_content()
        if not self.content or "data" not in self.content:
            return {}

        route = self._strip_prefix(model)

        # Consider both the exact id and the id without any ":suffix".
        candidates = {route}
        if ":" in route:
            candidates.add(route.split(":", 1)[0])

        record = next((item for item in self.content["data"] if item.get("id") in candidates), None)
        if not record:
            return {}

        context_len = (
            record.get("top_provider", {}).get("context_length")
            or record.get("context_length")
            or None
        )

        pricing = record.get("pricing", {})
        return {
            "max_input_tokens": context_len,
            "max_tokens": context_len,
            "max_output_tokens": context_len,
            "input_cost_per_token": _cost_per_token(pricing.get("prompt")),
            "output_cost_per_token": _cost_per_token(pricing.get("completion")),
            "litellm_provider": "openrouter",
        }

    # ------------------------------------------------------------------ #
    # Internal helpers                                                   #
    # ------------------------------------------------------------------ #
    def _strip_prefix(self, model: str) -> str:
        return model[len("openrouter/") :] if model.startswith("openrouter/") else model

    def _ensure_content(self) -> None:
        self._load_cache()
        if not self.content:
            self._update_cache()

    def _load_cache(self) -> None:
        if self._cache_loaded:
            return
        try:
            self.cache_dir.mkdir(parents=True, exist_ok=True)
            if self.cache_file.exists():
                cache_age = time.time() - self.cache_file.stat().st_mtime
                if cache_age < self.CACHE_TTL:
                    try:
                        self.content = json.loads(self.cache_file.read_text())
                    except json.JSONDecodeError:
                        self.content = None
        except OSError:
            # Cache directory might be unwritable; ignore.
            pass

        self._cache_loaded = True

    def _update_cache(self) -> None:
        try:
            response = requests.get(self.MODELS_URL, timeout=10, verify=self.verify_ssl)
            if response.status_code == 200:
                self.content = response.json()
                try:
                    self.cache_file.write_text(json.dumps(self.content, indent=2))
                except OSError:
                    pass  # Non-fatal if we can't write the cache
        except Exception as ex:  # noqa: BLE001
            print(f"Failed to fetch OpenRouter model list: {ex}")
            try:
                self.cache_file.write_text("{}")
            except OSError:
                pass
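The suffix handling in `get_model_info` can be sketched standalone. Here `MODELS` is hypothetical sample data standing in for `self.content["data"]`, and `find_record` mirrors only the candidate-set logic above, not the full method:

```python
# Sketch of the id-matching logic: strip the "openrouter/" prefix, then try
# both the exact id and the id without a ":suffix" (e.g. ":free").
MODELS = [{"id": "deepseek/deepseek-r1", "context_length": 64000}]


def strip_prefix(model: str) -> str:
    return model[len("openrouter/"):] if model.startswith("openrouter/") else model


def find_record(model: str):
    route = strip_prefix(model)
    candidates = {route}
    if ":" in route:
        # ":free" and similar variants fall back to the base model id
        candidates.add(route.split(":", 1)[0])
    return next((m for m in MODELS if m.get("id") in candidates), None)


print(find_record("openrouter/deepseek/deepseek-r1:free"))
```

A `:free` route thus resolves to the same metadata record as the paid base model.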
@@ -13,11 +13,13 @@ Generate a one-line commit message for those changes.
 The commit message should be structured as follows: <type>: <description>
 Use these for <type>: fix, feat, build, chore, ci, docs, style, refactor, perf, test

-Ensure the commit message:
+Ensure the commit message:{language_instruction}
 - Starts with the appropriate prefix.
 - Is in the imperative mood (e.g., \"add feature\" not \"added feature\" or \"adding feature\").
 - Does not exceed 72 characters.

-Reply only with the one-line commit message, without any additional text, explanations, or line breaks.
+Reply only with the one-line commit message, without any additional text, explanations, \
+or line breaks.
 """
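A minimal sketch of how the new `{language_instruction}` placeholder is filled (this mirrors the `get_commit_message` change further below; the template here is abbreviated, not the full prompt):

```python
# The prompt template now carries a {language_instruction} slot that is either
# expanded to a language requirement or collapsed to nothing.
template = "Ensure the commit message:{language_instruction}\n- Starts with the appropriate prefix."

user_language = "French"
language_instruction = f"\n- Is written in {user_language}." if user_language else ""
system_content = template.format(language_instruction=language_instruction)
print(system_content)
```

When no user language is detected, the placeholder disappears and the prompt reads exactly as before.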
115  aider/queries/tree-sitter-language-pack/ocaml-tags.scm  Normal file
@@ -0,0 +1,115 @@
; Modules
;--------

(
  (comment)? @doc .
  (module_definition (module_binding (module_name) @name.definition.module) @definition.module)
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

(module_path (module_name) @name.reference.module) @reference.module

; Module types
;--------------

(
  (comment)? @doc .
  (module_type_definition (module_type_name) @name.definition.interface) @definition.interface
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

(module_type_path (module_type_name) @name.reference.implementation) @reference.implementation

; Functions
;----------

(
  (comment)? @doc .
  (value_definition
    [
      (let_binding
        pattern: (value_name) @name.definition.function
        (parameter))
      (let_binding
        pattern: (value_name) @name.definition.function
        body: [(fun_expression) (function_expression)])
    ] @definition.function
  )
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

(
  (comment)? @doc .
  (external (value_name) @name.definition.function) @definition.function
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

(application_expression
  function: (value_path (value_name) @name.reference.call)) @reference.call

(infix_expression
  left: (value_path (value_name) @name.reference.call)
  operator: (concat_operator) @reference.call
  (#eq? @reference.call "@@"))

(infix_expression
  operator: (rel_operator) @reference.call
  right: (value_path (value_name) @name.reference.call)
  (#eq? @reference.call "|>"))

; Operator
;---------

(
  (comment)? @doc .
  (value_definition
    (let_binding
      pattern: (parenthesized_operator (_) @name.definition.function)) @definition.function)
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

[
  (prefix_operator)
  (sign_operator)
  (pow_operator)
  (mult_operator)
  (add_operator)
  (concat_operator)
  (rel_operator)
  (and_operator)
  (or_operator)
  (assign_operator)
  (hash_operator)
  (indexing_operator)
  (let_operator)
  (let_and_operator)
  (match_operator)
] @name.reference.call @reference.call

; Classes
;--------

(
  (comment)? @doc .
  [
    (class_definition (class_binding (class_name) @name.definition.class) @definition.class)
    (class_type_definition (class_type_binding (class_type_name) @name.definition.class) @definition.class)
  ]
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

[
  (class_path (class_name) @name.reference.class)
  (class_type_path (class_type_name) @name.reference.class)
] @reference.class

; Methods
;--------

(
  (comment)? @doc .
  (method_definition (method_name) @name.definition.method) @definition.method
  (#strip! @doc "^\\(\\*\\*?\\s*|\\s\\*\\)$")
)

(method_invocation (method_name) @name.reference.call) @reference.call
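The `#strip!` predicate in these queries removes OCaml doc-comment delimiters from the captured `@doc` text. The pattern can be checked in isolation (Python's `re` is used here purely for illustration; the actual stripping runs inside aider's tag extraction, not Python's regex engine):

```python
import re

# The (#strip! @doc ...) pattern above, with one level of escaping removed:
# it deletes a leading "(*" or "(**" plus trailing spaces, and a trailing " *)".
DOC_DELIMS = re.compile(r"^\(\*\*?\s*|\s\*\)$")

assert DOC_DELIMS.sub("", "(** Maps [f] over the list. *)") == "Maps [f] over the list."
assert DOC_DELIMS.sub("", "(* plain comment *)") == "plain comment"
```

Both `(* ... *)` comments and `(** ... *)` doc comments are handled by the single optional `\*?`.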
@@ -0,0 +1,98 @@
; Modules
;--------

(
  (comment)? @doc .
  (module_definition
    (module_binding (module_name) @name) @definition.module
  )
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(module_path (module_name) @name) @reference.module
(extended_module_path (module_name) @name) @reference.module

(
  (comment)? @doc .
  (module_type_definition (module_type_name) @name) @definition.interface
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(module_type_path (module_type_name) @name) @reference.implementation


; Classes
;--------

(
  (comment)? @doc .
  [
    (class_definition
      (class_binding (class_name) @name) @definition.class
    )
    (class_type_definition
      (class_type_binding (class_type_name) @name) @definition.class
    )
  ]
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

[
  (class_path (class_name) @name)
  (class_type_path (class_type_name) @name)
] @reference.class

(
  (comment)? @doc .
  (method_definition (method_name) @name) @definition.method
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(method_invocation (method_name) @name) @reference.call


; Types
;------

(
  (comment)? @doc .
  (type_definition
    (type_binding
      name: [
        (type_constructor) @name
        (type_constructor_path (type_constructor) @name)
      ]
    ) @definition.type
  )
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(type_constructor_path (type_constructor) @name) @reference.type

[
  (constructor_declaration (constructor_name) @name)
  (tag_specification (tag) @name)
] @definition.enum_variant

[
  (constructor_path (constructor_name) @name)
  (tag) @name
] @reference.enum_variant

(field_declaration (field_name) @name) @definition.field

(field_path (field_name) @name) @reference.field

(
  (comment)? @doc .
  (external (value_name) @name) @definition.function
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(
  (comment)? @doc .
  (value_specification
    (value_name) @name.definition.function
  ) @definition.function
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)
98  aider/queries/tree-sitter-languages/ocaml_interface-tags.scm  Normal file
@@ -0,0 +1,98 @@
; Modules
;--------

(
  (comment)? @doc .
  (module_definition
    (module_binding (module_name) @name) @definition.module
  )
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(module_path (module_name) @name) @reference.module
(extended_module_path (module_name) @name) @reference.module

(
  (comment)? @doc .
  (module_type_definition (module_type_name) @name) @definition.interface
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(module_type_path (module_type_name) @name) @reference.implementation


; Classes
;--------

(
  (comment)? @doc .
  [
    (class_definition
      (class_binding (class_name) @name) @definition.class
    )
    (class_type_definition
      (class_type_binding (class_type_name) @name) @definition.class
    )
  ]
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

[
  (class_path (class_name) @name)
  (class_type_path (class_type_name) @name)
] @reference.class

(
  (comment)? @doc .
  (method_definition (method_name) @name) @definition.method
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(method_invocation (method_name) @name) @reference.call


; Types
;------

(
  (comment)? @doc .
  (type_definition
    (type_binding
      name: [
        (type_constructor) @name
        (type_constructor_path (type_constructor) @name)
      ]
    ) @definition.type
  )
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(type_constructor_path (type_constructor) @name) @reference.type

[
  (constructor_declaration (constructor_name) @name)
  (tag_specification (tag) @name)
] @definition.enum_variant

[
  (constructor_path (constructor_name) @name)
  (tag) @name
] @reference.enum_variant

(field_declaration (field_name) @name) @definition.field

(field_path (field_name) @name) @reference.field

(
  (comment)? @doc .
  (external (value_name) @name) @definition.function
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)

(
  (comment)? @doc .
  (value_specification
    (value_name) @name.definition.function
  ) @definition.function
  (#strip! @doc "^\\(\\*+\\s*|\\s*\\*+\\)$")
)
225  aider/repo.py
@@ -1,3 +1,4 @@
+import contextlib
 import os
 import time
 from pathlib import Path, PurePosixPath
@@ -20,6 +21,7 @@ import pathspec
 from aider import prompts, utils

 from .dump import dump  # noqa: F401
+from .waiting import WaitingSpinner

 ANY_GIT_ERROR += [
     OSError,
@@ -34,6 +36,19 @@ ANY_GIT_ERROR += [
 ANY_GIT_ERROR = tuple(ANY_GIT_ERROR)


+@contextlib.contextmanager
+def set_git_env(var_name, value, original_value):
+    """Temporarily set a Git environment variable."""
+    os.environ[var_name] = value
+    try:
+        yield
+    finally:
+        if original_value is not None:
+            os.environ[var_name] = original_value
+        elif var_name in os.environ:
+            del os.environ[var_name]
+
+
 class GitRepo:
     repo = None
     aider_ignore_file = None
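The new `set_git_env` helper guarantees the variable is restored (or deleted, if it was unset) on exit, even when the body raises. A quick standalone check of that behavior, using a copy of the helper:

```python
import contextlib
import os


@contextlib.contextmanager
def set_git_env(var_name, value, original_value):
    """Temporarily set a Git environment variable (copy of the helper above)."""
    os.environ[var_name] = value
    try:
        yield
    finally:
        if original_value is not None:
            os.environ[var_name] = original_value
        elif var_name in os.environ:
            del os.environ[var_name]


original = os.environ.get("GIT_COMMITTER_NAME")
with set_git_env("GIT_COMMITTER_NAME", "You (aider)", original):
    assert os.environ["GIT_COMMITTER_NAME"] == "You (aider)"
# After the block, the variable is back in its original state.
assert os.environ.get("GIT_COMMITTER_NAME") == original
```

Passing the captured `original_value` in explicitly (rather than reading it inside the helper) is what lets several of these managers stack safely in one `ExitStack`, as `commit` does below.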
@@ -58,6 +73,7 @@ class GitRepo:
         commit_prompt=None,
         subtree_only=False,
         git_commit_verify=True,
+        attribute_co_authored_by=False,  # Added parameter
     ):
         self.io = io
         self.models = models
@@ -69,6 +85,7 @@ class GitRepo:
         self.attribute_committer = attribute_committer
         self.attribute_commit_message_author = attribute_commit_message_author
         self.attribute_commit_message_committer = attribute_commit_message_committer
+        self.attribute_co_authored_by = attribute_co_authored_by  # Assign from parameter
         self.commit_prompt = commit_prompt
         self.subtree_only = subtree_only
         self.git_commit_verify = git_commit_verify
@@ -111,7 +128,76 @@ class GitRepo:
         if aider_ignore_file:
             self.aider_ignore_file = Path(aider_ignore_file)

-    def commit(self, fnames=None, context=None, message=None, aider_edits=False):
+    def commit(self, fnames=None, context=None, message=None, aider_edits=False, coder=None):
+        """
+        Commit the specified files or all dirty files if none are specified.
+
+        Args:
+            fnames (list, optional): List of filenames to commit. Defaults to None (commit all
+                dirty files).
+            context (str, optional): Context for generating commit message. Defaults to None.
+            message (str, optional): Explicit commit message. Defaults to None (generate message).
+            aider_edits (bool, optional): Whether the changes were made by Aider. Defaults to False.
+                This affects attribution logic.
+            coder (Coder, optional): The Coder instance, used for config and model info.
+                Defaults to None.
+
+        Returns:
+            tuple(str, str) or None: The commit hash and commit message if successful,
+                else None.
+
+        Attribution Logic:
+        ------------------
+        This method handles Git commit attribution based on configuration flags and whether
+        Aider generated the changes (`aider_edits`).
+
+        Key Concepts:
+        - Author: The person who originally wrote the code changes.
+        - Committer: The person who last applied the commit to the repository.
+        - aider_edits=True: Changes were generated by Aider (LLM).
+        - aider_edits=False: Commit is user-driven (e.g., /commit manually staged changes).
+        - Explicit Setting: A flag (--attribute-...) is set to True or False
+          via command line or config file.
+        - Implicit Default: A flag is not explicitly set, defaulting to None in args, which is
+          interpreted as True unless overridden by other logic.
+
+        Flags:
+        - --attribute-author: Modify Author name to "User Name (aider)".
+        - --attribute-committer: Modify Committer name to "User Name (aider)".
+        - --attribute-co-authored-by: Add
+          "Co-authored-by: aider (<model>) <noreply@aider.chat>" trailer to commit message.
+
+        Behavior Summary:
+
+        1. When aider_edits = True (AI Changes):
+           - If --attribute-co-authored-by=True:
+             - Co-authored-by trailer IS ADDED.
+             - Author/Committer names are NOT modified by default (co-authored-by takes precedence).
+             - EXCEPTION: If --attribute-author/--attribute-committer is EXPLICITLY True, the
+               respective name IS modified (explicit overrides precedence).
+           - If --attribute-co-authored-by=False:
+             - Co-authored-by trailer is NOT added.
+             - Author/Committer names ARE modified by default (implicit True).
+             - EXCEPTION: If --attribute-author/--attribute-committer is EXPLICITLY False,
+               the respective name is NOT modified.
+
+        2. When aider_edits = False (User Changes):
+           - --attribute-co-authored-by is IGNORED (trailer never added).
+           - Author name is NEVER modified (--attribute-author ignored).
+           - Committer name IS modified by default (implicit True, as Aider runs `git commit`).
+           - EXCEPTION: If --attribute-committer is EXPLICITLY False, the name is NOT modified.
+
+        Resulting Scenarios:
+        - Standard AI edit (defaults): Co-authored-by=False -> Author=You(aider),
+          Committer=You(aider)
+        - AI edit with Co-authored-by (default): Co-authored-by=True -> Author=You,
+          Committer=You, Trailer added
+        - AI edit with Co-authored-by + Explicit Author: Co-authored-by=True,
+          --attribute-author -> Author=You(aider), Committer=You, Trailer added
+        - User commit (defaults): aider_edits=False -> Author=You, Committer=You(aider)
+        - User commit with explicit no-committer: aider_edits=False,
+          --no-attribute-committer -> Author=You, Committer=You
+        """
         if not fnames and not self.repo.is_dirty():
             return
@@ -122,19 +208,71 @@ class GitRepo:
         if message:
             commit_message = message
         else:
-            commit_message = self.get_commit_message(diffs, context)
+            user_language = None
+            if coder:
+                user_language = coder.get_user_language()
+            commit_message = self.get_commit_message(diffs, context, user_language)

-        if aider_edits and self.attribute_commit_message_author:
-            commit_message = "aider: " + commit_message
-        elif self.attribute_commit_message_committer:
-            commit_message = "aider: " + commit_message
+        # Retrieve attribute settings, prioritizing coder.args if available
+        if coder and hasattr(coder, "args"):
+            attribute_author = coder.args.attribute_author
+            attribute_committer = coder.args.attribute_committer
+            attribute_commit_message_author = coder.args.attribute_commit_message_author
+            attribute_commit_message_committer = coder.args.attribute_commit_message_committer
+            attribute_co_authored_by = coder.args.attribute_co_authored_by
+        else:
+            # Fallback to self attributes (initialized from config/defaults)
+            attribute_author = self.attribute_author
+            attribute_committer = self.attribute_committer
+            attribute_commit_message_author = self.attribute_commit_message_author
+            attribute_commit_message_committer = self.attribute_commit_message_committer
+            attribute_co_authored_by = self.attribute_co_authored_by
+
+        # Determine explicit settings (None means use default behavior)
+        author_explicit = attribute_author is not None
+        committer_explicit = attribute_committer is not None
+
+        # Determine effective settings (apply default True if not explicit)
+        effective_author = True if attribute_author is None else attribute_author
+        effective_committer = True if attribute_committer is None else attribute_committer
+
+        # Determine commit message prefixing
+        prefix_commit_message = aider_edits and (
+            attribute_commit_message_author or attribute_commit_message_committer
+        )
+
+        # Determine Co-authored-by trailer
+        commit_message_trailer = ""
+        if aider_edits and attribute_co_authored_by:
+            model_name = "unknown-model"
+            if coder and hasattr(coder, "main_model") and coder.main_model.name:
+                model_name = coder.main_model.name
+            commit_message_trailer = (
+                f"\n\nCo-authored-by: aider ({model_name}) <noreply@aider.chat>"
+            )
+
+        # Determine if author/committer names should be modified
+        # Author modification applies only to aider edits.
+        # It's used if effective_author is True AND
+        # (co-authored-by is False OR author was explicitly set).
+        use_attribute_author = (
+            aider_edits and effective_author and (not attribute_co_authored_by or author_explicit)
+        )
+
+        # Committer modification applies regardless of aider_edits (based on tests).
+        # It's used if effective_committer is True AND
+        # (it's not an aider edit with co-authored-by OR committer was explicitly set).
+        use_attribute_committer = effective_committer and (
+            not (aider_edits and attribute_co_authored_by) or committer_explicit
+        )

         if not commit_message:
             commit_message = "(no commit message provided)"

-        full_commit_message = commit_message
-        # if context:
-        #     full_commit_message += "\n\n# Aider chat conversation:\n\n" + context
+        if prefix_commit_message:
+            commit_message = "aider: " + commit_message
+
+        full_commit_message = commit_message + commit_message_trailer

         cmd = ["-m", full_commit_message]
         if not self.git_commit_verify:
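The attribution rules in this hunk reduce to two boolean expressions. A minimal sketch (variable names mirror the locals in `commit`; the scenarios checked are the ones listed in the docstring above):

```python
def attribution(aider_edits, attribute_author, attribute_committer, attribute_co_authored_by):
    """Sketch of the use_attribute_author / use_attribute_committer logic.

    None means "not explicitly set", which defaults to True.
    """
    author_explicit = attribute_author is not None
    committer_explicit = attribute_committer is not None
    effective_author = True if attribute_author is None else attribute_author
    effective_committer = True if attribute_committer is None else attribute_committer

    use_author = (
        aider_edits and effective_author and (not attribute_co_authored_by or author_explicit)
    )
    use_committer = effective_committer and (
        not (aider_edits and attribute_co_authored_by) or committer_explicit
    )
    return use_author, use_committer


# Standard AI edit (all defaults): both names get the "(aider)" suffix.
assert attribution(True, None, None, False) == (True, True)
# AI edit with co-authored-by: trailer replaces name modification entirely.
assert attribution(True, None, None, True) == (False, False)
# User commit (defaults): only the committer name is modified.
assert attribution(False, None, None, False) == (False, True)
# Co-authored-by plus an explicitly set author: explicit wins for the author.
assert attribution(True, True, None, True) == (True, False)
```

The tri-state `None` / `True` / `False` encoding is what lets an explicit flag override the co-authored-by precedence without a separate "was this flag set" bookkeeping field.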
@@ -152,36 +290,32 @@ class GitRepo:

         original_user_name = self.repo.git.config("--get", "user.name")
         original_committer_name_env = os.environ.get("GIT_COMMITTER_NAME")
+        original_author_name_env = os.environ.get("GIT_AUTHOR_NAME")
         committer_name = f"{original_user_name} (aider)"

-        if self.attribute_committer:
-            os.environ["GIT_COMMITTER_NAME"] = committer_name
-
-        if aider_edits and self.attribute_author:
-            original_author_name_env = os.environ.get("GIT_AUTHOR_NAME")
-            os.environ["GIT_AUTHOR_NAME"] = committer_name
-
         try:
-            self.repo.git.commit(cmd)
-            commit_hash = self.get_head_commit_sha(short=True)
-            self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True)
-            return commit_hash, commit_message
+            # Use context managers to handle environment variables
+            with contextlib.ExitStack() as stack:
+                if use_attribute_committer:
+                    stack.enter_context(
+                        set_git_env(
+                            "GIT_COMMITTER_NAME", committer_name, original_committer_name_env
+                        )
+                    )
+                if use_attribute_author:
+                    stack.enter_context(
+                        set_git_env("GIT_AUTHOR_NAME", committer_name, original_author_name_env)
+                    )
+
+                # Perform the commit
+                self.repo.git.commit(cmd)
+                commit_hash = self.get_head_commit_sha(short=True)
+                self.io.tool_output(f"Commit {commit_hash} {commit_message}", bold=True)
+                return commit_hash, commit_message
+
         except ANY_GIT_ERROR as err:
             self.io.tool_error(f"Unable to commit: {err}")
-        finally:
-            # Restore the env
-
-            if self.attribute_committer:
-                if original_committer_name_env is not None:
-                    os.environ["GIT_COMMITTER_NAME"] = original_committer_name_env
-                else:
-                    del os.environ["GIT_COMMITTER_NAME"]
-
-            if aider_edits and self.attribute_author:
-                if original_author_name_env is not None:
-                    os.environ["GIT_AUTHOR_NAME"] = original_author_name_env
-                else:
-                    del os.environ["GIT_AUTHOR_NAME"]
+            # No return here, implicitly returns None

     def get_rel_repo_dir(self):
         try:
@@ -189,7 +323,7 @@ class GitRepo:
         except (ValueError, OSError):
             return self.repo.git_dir

-    def get_commit_message(self, diffs, context):
+    def get_commit_message(self, diffs, context, user_language=None):
         diffs = "# Diffs:\n" + diffs

         content = ""
@@ -198,6 +332,11 @@ class GitRepo:
         content += diffs

         system_content = self.commit_prompt or prompts.commit_system
+        language_instruction = ""
+        if user_language:
+            language_instruction = f"\n- Is written in {user_language}."
+        system_content = system_content.format(language_instruction=language_instruction)

         messages = [
             dict(role="system", content=system_content),
             dict(role="user", content=content),
@@ -205,13 +344,15 @@ class GitRepo:

         commit_message = None
         for model in self.models:
-            num_tokens = model.token_count(messages)
-            max_tokens = model.info.get("max_input_tokens") or 0
-            if max_tokens and num_tokens > max_tokens:
-                continue
-            commit_message = model.simple_send_with_retries(messages)
-            if commit_message:
-                break
+            spinner_text = f"Generating commit message with {model.name}"
+            with WaitingSpinner(spinner_text):
+                num_tokens = model.token_count(messages)
+                max_tokens = model.info.get("max_input_tokens") or 0
+                if max_tokens and num_tokens > max_tokens:
+                    continue
+                commit_message = model.simple_send_with_retries(messages)
+                if commit_message:
+                    break  # Found a model that could generate the message

         if not commit_message:
             self.io.tool_error("Failed to generate commit message!")
@@ -19,7 +19,7 @@ from tqdm import tqdm

 from aider.dump import dump
 from aider.special import filter_important_files
-from aider.utils import Spinner
+from aider.waiting import Spinner

 # tree_sitter is throwing a FutureWarning
 warnings.simplefilter("ignore", category=FutureWarning)
@@ -35,6 +35,8 @@ CACHE_VERSION = 3
 if USING_TSL_PACK:
     CACHE_VERSION = 4

+UPDATING_REPO_MAP_MESSAGE = "Updating repo map"
+

 class RepoMap:
     TAGS_CACHE_DIR = f".aider.tags.cache.v{CACHE_VERSION}"
@@ -380,7 +382,7 @@ class RepoMap:
         if self.verbose:
             self.io.tool_output(f"Processing {fname}")
         if progress and not showing_bar:
-            progress()
+            progress(f"{UPDATING_REPO_MAP_MESSAGE}: {fname}")

         try:
             file_ok = Path(fname).is_file()
@@ -459,7 +461,7 @@ class RepoMap:

         for ident in idents:
             if progress:
-                progress()
+                progress(f"{UPDATING_REPO_MAP_MESSAGE}: {ident}")

             definers = defines[ident]
@@ -512,7 +514,7 @@ class RepoMap:
         ranked_definitions = defaultdict(float)
         for src in G.nodes:
             if progress:
-                progress()
+                progress(f"{UPDATING_REPO_MAP_MESSAGE}: {src}")

             src_rank = ranked[src]
             total_weight = sum(data["weight"] for _src, _dst, data in G.out_edges(src, data=True))
@@ -621,7 +623,7 @@ class RepoMap:
         if not mentioned_idents:
             mentioned_idents = set()

-        spin = Spinner("Updating repo map")
+        spin = Spinner(UPDATING_REPO_MAP_MESSAGE)

         ranked_tags = self.get_ranked_tags(
             chat_fnames,
@ -655,7 +657,11 @@ class RepoMap:
|
||||||
while lower_bound <= upper_bound:
|
while lower_bound <= upper_bound:
|
||||||
# dump(lower_bound, middle, upper_bound)
|
# dump(lower_bound, middle, upper_bound)
|
||||||
|
|
||||||
spin.step()
|
if middle > 1500:
|
||||||
|
show_tokens = f"{middle / 1000.0:.1f}K"
|
||||||
|
else:
|
||||||
|
show_tokens = str(middle)
|
||||||
|
spin.step(f"{UPDATING_REPO_MAP_MESSAGE}: {show_tokens} tokens")
|
||||||
|
|
||||||
tree = self.to_tree(ranked_tags[:middle], chat_rel_fnames)
|
tree = self.to_tree(ranked_tags[:middle], chat_rel_fnames)
|
||||||
num_tokens = self.token_count(tree)
|
num_tokens = self.token_count(tree)
|
||||||
|
|
|
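The hunks above replace bare `progress()` calls with labeled messages and let `spin.step()` refine its label mid-run. A minimal sketch of that pattern, assuming `UPDATING_REPO_MAP_MESSAGE` and a `Spinner` shaped like the one in the diff (this stand-in `Spinner` just records the latest text):

```python
# Minimal sketch of the labeled-progress pattern shown in the hunks above.
# UPDATING_REPO_MAP_MESSAGE and Spinner are assumed from the aider codebase;
# this stand-in Spinner only records the most recent message.
UPDATING_REPO_MAP_MESSAGE = "Updating repo map"


class Spinner:
    def __init__(self, text):
        self.text = text

    def step(self, text=None):
        # An optional message lets callers refine the label mid-run,
        # matching the new spin.step(f"...: {show_tokens} tokens") call.
        if text is not None:
            self.text = text


def show_tokens_label(middle):
    # Mirrors the new binary-search hunk: abbreviate large token counts.
    if middle > 1500:
        return f"{middle / 1000.0:.1f}K"
    return str(middle)


spin = Spinner(UPDATING_REPO_MAP_MESSAGE)
spin.step(f"{UPDATING_REPO_MAP_MESSAGE}: {show_tokens_label(8192)} tokens")
print(spin.text)  # Updating repo map: 8.2K tokens
```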
@@ -15,22 +15,6 @@
        //"supports_tool_choice": true,
        "supports_prompt_caching": true
    },
-    "openrouter/deepseek/deepseek-r1": {
-        "max_tokens": 8192,
-        "max_input_tokens": 64000,
-        "max_output_tokens": 8192,
-        "input_cost_per_token": 0.00000055,
-        "input_cost_per_token_cache_hit": 0.00000014,
-        "cache_read_input_token_cost": 0.00000014,
-        "cache_creation_input_token_cost": 0.0,
-        "output_cost_per_token": 0.00000219,
-        "litellm_provider": "openrouter",
-        "mode": "chat",
-        //"supports_function_calling": true,
-        "supports_assistant_prefill": true,
-        //"supports_tool_choice": true,
-        "supports_prompt_caching": true
-    },
    "openrouter/deepseek/deepseek-r1:free": {
        "max_tokens": 8192,
        "max_input_tokens": 64000,
@@ -65,7 +49,7 @@
    },
    "openrouter/deepseek/deepseek-chat-v3-0324": {
        "max_tokens": 8192,
-        "max_input_tokens": 64000,
+        "max_input_tokens": 131072,
        "max_output_tokens": 8192,
        "input_cost_per_token": 0.00000055,
        "input_cost_per_token_cache_hit": 0.00000014,
@@ -99,15 +83,6 @@
        "output_cost_per_token": 0.000008,
        "mode": "chat",
    },
-    "fireworks_ai/accounts/fireworks/models/deepseek-v3": {
-        "max_tokens": 128000,
-        "max_input_tokens": 100000,
-        "max_output_tokens": 8192,
-        "litellm_provider": "fireworks_ai",
-        "input_cost_per_token": 0.0000009,
-        "output_cost_per_token": 0.0000009,
-        "mode": "chat",
-    },
    "fireworks_ai/accounts/fireworks/models/deepseek-v3-0324": {
        "max_tokens": 160000,
        "max_input_tokens": 100000,
@@ -117,54 +92,6 @@
        "output_cost_per_token": 0.0000009,
        "mode": "chat",
    },
-    "o3-mini": {
-        "max_tokens": 100000,
-        "max_input_tokens": 200000,
-        "max_output_tokens": 100000,
-        "input_cost_per_token": 0.0000011,
-        "output_cost_per_token": 0.0000044,
-        "cache_read_input_token_cost": 0.00000055,
-        "litellm_provider": "openai",
-        "mode": "chat",
-        "supports_function_calling": true,
-        "supports_parallel_function_calling": true,
-        "supports_vision": true,
-        "supports_prompt_caching": true,
-        "supports_system_messages": true,
-        "supports_response_schema": true
-    },
-    "openrouter/openai/o3-mini": {
-        "max_tokens": 100000,
-        "max_input_tokens": 200000,
-        "max_output_tokens": 100000,
-        "input_cost_per_token": 0.0000011,
-        "output_cost_per_token": 0.0000044,
-        "cache_read_input_token_cost": 0.00000055,
-        "litellm_provider": "openrouter",
-        "mode": "chat",
-        "supports_function_calling": true,
-        "supports_parallel_function_calling": true,
-        "supports_vision": true,
-        "supports_prompt_caching": true,
-        "supports_system_messages": true,
-        "supports_response_schema": true
-    },
-    "openrouter/openai/o3-mini-high": {
-        "max_tokens": 100000,
-        "max_input_tokens": 200000,
-        "max_output_tokens": 100000,
-        "input_cost_per_token": 0.0000011,
-        "output_cost_per_token": 0.0000044,
-        "cache_read_input_token_cost": 0.00000055,
-        "litellm_provider": "openrouter",
-        "mode": "chat",
-        "supports_function_calling": true,
-        "supports_parallel_function_calling": true,
-        "supports_vision": true,
-        "supports_prompt_caching": true,
-        "supports_system_messages": true,
-        "supports_response_schema": true
-    },
    "openrouter/openrouter/quasar-alpha": {
        "max_input_tokens": 1000000,
        "max_output_tokens": 32000,
@@ -203,26 +130,6 @@
        "supports_prompt_caching": true,
        "supports_system_messages": true
    },
-    "claude-3-7-sonnet-20250219": {
-        "max_tokens": 8192,
-        "max_input_tokens": 200000,
-        "max_output_tokens": 8192,
-        "input_cost_per_token": 0.000003,
-        "output_cost_per_token": 0.000015,
-        "cache_creation_input_token_cost": 0.00000375,
-        "cache_read_input_token_cost": 0.0000003,
-        "litellm_provider": "anthropic",
-        "mode": "chat",
-        "supports_function_calling": true,
-        "supports_vision": true,
-        "tool_use_system_prompt_tokens": 159,
-        "supports_assistant_prefill": true,
-        "supports_pdf_input": true,
-        "supports_prompt_caching": true,
-        "supports_response_schema": true,
-        "deprecation_date": "2025-10-01",
-        "supports_tool_choice": true
-    },
    "anthropic/claude-3-7-sonnet-20250219": {
        "max_tokens": 8192,
        "max_input_tokens": 200000,
@@ -243,43 +150,6 @@
        "deprecation_date": "2025-10-01",
        "supports_tool_choice": true
    },
-    "openrouter/anthropic/claude-3.7-sonnet": {
-        "max_tokens": 8192,
-        "max_input_tokens": 200000,
-        "max_output_tokens": 8192,
-        "input_cost_per_token": 0.000003,
-        "output_cost_per_token": 0.000015,
-        "cache_creation_input_token_cost": 0.00000375,
-        "cache_read_input_token_cost": 0.0000003,
-        "litellm_provider": "openrouter",
-        "mode": "chat",
-        "supports_function_calling": true,
-        "supports_vision": true,
-        "tool_use_system_prompt_tokens": 159,
-        "supports_assistant_prefill": true,
-        "supports_pdf_input": true,
-        "supports_prompt_caching": true,
-        "supports_response_schema": true,
-        "deprecation_date": "2025-10-01",
-        "supports_tool_choice": true
-    },
-    "gpt-4.5-preview": {
-        "max_tokens": 16384,
-        "max_input_tokens": 128000,
-        "max_output_tokens": 16384,
-        "input_cost_per_token": 0.000075,
-        "output_cost_per_token": 0.00015,
-        "cache_read_input_token_cost": 0.0000375,
-        "litellm_provider": "openai",
-        "mode": "chat",
-        "supports_function_calling": true,
-        "supports_parallel_function_calling": true,
-        "supports_response_schema": true,
-        "supports_vision": true,
-        "supports_prompt_caching": true,
-        "supports_system_messages": true,
-        "supports_tool_choice": true
-    },
    "openai/gpt-4.5-preview": {
        "max_tokens": 16384,
        "max_input_tokens": 128000,
@@ -334,42 +204,6 @@
        "supports_tool_choice": true,
        "source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
    },
-    "gemini/gemini-2.5-pro-preview-03-25": {
-        "max_tokens": 8192,
-        "max_input_tokens": 1048576,
-        "max_output_tokens": 64000,
-        "max_images_per_prompt": 3000,
-        "max_videos_per_prompt": 10,
-        "max_video_length": 1,
-        "max_audio_length_hours": 8.4,
-        "max_audio_per_prompt": 1,
-        "max_pdf_size_mb": 30,
-        "input_cost_per_image": 0,
-        "input_cost_per_video_per_second": 0,
-        "input_cost_per_audio_per_second": 0,
-        "input_cost_per_token": 0.00000125,
-        "input_cost_per_character": 0,
-        "input_cost_per_token_above_128k_tokens": 0,
-        "input_cost_per_character_above_128k_tokens": 0,
-        "input_cost_per_image_above_128k_tokens": 0,
-        "input_cost_per_video_per_second_above_128k_tokens": 0,
-        "input_cost_per_audio_per_second_above_128k_tokens": 0,
-        "output_cost_per_token": 0.000010,
-        "output_cost_per_character": 0,
-        "output_cost_per_token_above_128k_tokens": 0,
-        "output_cost_per_character_above_128k_tokens": 0,
-        "litellm_provider": "gemini",
-        "mode": "chat",
-        "supports_system_messages": true,
-        "supports_function_calling": true,
-        "supports_vision": true,
-        "supports_audio_input": true,
-        "supports_video_input": true,
-        "supports_pdf_input": true,
-        "supports_response_schema": true,
-        "supports_tool_choice": true,
-        "source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
-    },
    "vertex_ai/gemini-2.5-pro-exp-03-25": {
        "max_tokens": 8192,
        "max_input_tokens": 1048576,
@@ -478,7 +312,7 @@
        "supports_tool_choice": true,
        "source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
    },
-    "openrouter/google/gemini-2.5-pro-exp-03-25:free": {
+    "openrouter/google/gemini-2.5-pro-exp-03-25": {
        "max_tokens": 8192,
        "max_input_tokens": 1048576,
        "max_output_tokens": 64000,
@@ -523,15 +357,6 @@
        "litellm_provider": "openrouter",
        "mode": "chat"
    },
-    "xai/grok-3-beta": {
-        "max_tokens": 131072,
-        "max_input_tokens": 131072,
-        "max_output_tokens": 131072,
-        "input_cost_per_token": 0.000003,
-        "output_cost_per_token": 0.000015,
-        "litellm_provider": "xai",
-        "mode": "chat"
-    },
    "openrouter/x-ai/grok-3-mini-beta": {
        "max_tokens": 131072,
        "max_input_tokens": 131072,
@@ -541,15 +366,6 @@
        "litellm_provider": "openrouter",
        "mode": "chat"
    },
-    "xai/grok-3-mini-beta": {
-        "max_tokens": 131072,
-        "max_input_tokens": 131072,
-        "max_output_tokens": 131072,
-        "input_cost_per_token": 0.0000003,
-        "output_cost_per_token": 0.0000005,
-        "litellm_provider": "xai",
-        "mode": "chat"
-    },
    "openrouter/x-ai/grok-3-fast-beta": {
        "max_tokens": 131072,
        "max_input_tokens": 131072,
@@ -559,15 +375,6 @@
        "litellm_provider": "openrouter",
        "mode": "chat"
    },
-    "xai/grok-3-fast-beta": {
-        "max_tokens": 131072,
-        "max_input_tokens": 131072,
-        "max_output_tokens": 131072,
-        "input_cost_per_token": 0.000005,
-        "output_cost_per_token": 0.000025,
-        "litellm_provider": "xai",
-        "mode": "chat"
-    },
    "openrouter/x-ai/grok-3-mini-fast-beta": {
        "max_tokens": 131072,
        "max_input_tokens": 131072,
@@ -577,15 +384,6 @@
        "litellm_provider": "openrouter",
        "mode": "chat"
    },
-    "xai/grok-3-mini-fast-beta": {
-        "max_tokens": 131072,
-        "max_input_tokens": 131072,
-        "max_output_tokens": 131072,
-        "input_cost_per_token": 0.0000006,
-        "output_cost_per_token": 0.000004,
-        "litellm_provider": "xai",
-        "mode": "chat"
-    },
    "openrouter/google/gemini-2.0-flash-exp:free": {
        "max_tokens": 8192,
        "max_input_tokens": 1048576,
@@ -605,4 +403,66 @@
        "supports_audio_output": true,
        "supports_tool_choice": true
    },
+    "gemini-2.5-pro-preview-05-06": {
+        "max_tokens": 65536,
+        "max_input_tokens": 1048576,
+        "max_output_tokens": 65536,
+        "max_images_per_prompt": 3000,
+        "max_videos_per_prompt": 10,
+        "max_video_length": 1,
+        "max_audio_length_hours": 8.4,
+        "max_audio_per_prompt": 1,
+        "max_pdf_size_mb": 30,
+        "input_cost_per_audio_token": 0.00000125,
+        "input_cost_per_token": 0.00000125,
+        "input_cost_per_token_above_200k_tokens": 0.0000025,
+        "output_cost_per_token": 0.00001,
+        "output_cost_per_token_above_200k_tokens": 0.000015,
+        "litellm_provider": "vertex_ai-language-models",
+        "mode": "chat",
+        "supports_reasoning": true,
+        "supports_system_messages": true,
+        "supports_function_calling": true,
+        "supports_vision": true,
+        "supports_response_schema": true,
+        "supports_audio_output": false,
+        "supports_tool_choice": true,
+        "supported_endpoints": ["/v1/chat/completions", "/v1/completions", "/v1/batch"],
+        "supported_modalities": ["text", "image", "audio", "video"],
+        "supported_output_modalities": ["text"],
+        "source": "https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview"
+    },
+    "gemini/gemini-2.5-pro-preview-05-06": {
+        "max_tokens": 65536,
+        "max_input_tokens": 1048576,
+        "max_output_tokens": 65536,
+        "max_images_per_prompt": 3000,
+        "max_videos_per_prompt": 10,
+        "max_video_length": 1,
+        "max_audio_length_hours": 8.4,
+        "max_audio_per_prompt": 1,
+        "max_pdf_size_mb": 30,
+        "input_cost_per_audio_token": 0.0000007,
+        "input_cost_per_token": 0.00000125,
+        "input_cost_per_token_above_200k_tokens": 0.0000025,
+        "output_cost_per_token": 0.00001,
+        "output_cost_per_token_above_200k_tokens": 0.000015,
+        "litellm_provider": "gemini",
+        "mode": "chat",
+        "rpm": 10000,
+        "tpm": 10000000,
+        "supports_system_messages": true,
+        "supports_function_calling": true,
+        "supports_vision": true,
+        "supports_response_schema": true,
+        "supports_audio_output": false,
+        "supports_tool_choice": true,
+        "supported_modalities": ["text", "image", "audio", "video"],
+        "supported_output_modalities": ["text"],
+        "source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-pro-preview"
+    },
+    "together_ai/Qwen/Qwen3-235B-A22B-fp8-tput": {
+        "input_cost_per_token": 0.0000002,
+        "output_cost_per_token": 0.0000006,
+    }
 }
@@ -958,6 +958,7 @@
  use_system_prompt: false

- name: gemini/gemini-2.5-pro-preview-03-25
+  overeager: true
  edit_format: diff-fenced
  use_repo_map: true
  weak_model_name: gemini/gemini-2.0-flash
@@ -965,24 +966,28 @@
- name: gemini/gemini-2.5-pro-exp-03-25
  edit_format: diff-fenced
  use_repo_map: true
-  weak_model_name: gemini/gemini-2.0-flash
+  overeager: true
+  weak_model_name: gemini/gemini-2.5-flash-preview-04-17

-- name: openrouter/google/gemini-2.5-pro-exp-03-25:free
+- name: openrouter/google/gemini-2.5-pro-exp-03-25
  edit_format: diff-fenced
+  overeager: true
  use_repo_map: true
  weak_model_name: openrouter/google/gemini-2.0-flash-exp:free

- name: vertex_ai/gemini-2.5-pro-exp-03-25
  edit_format: diff-fenced
  use_repo_map: true
-  # Need metadata for this one...
-  #weak_model_name: vertex_ai/gemini-2.0-flash
+  weak_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+  overeager: true
+  editor_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17

- name: vertex_ai/gemini-2.5-pro-preview-03-25
  edit_format: diff-fenced
  use_repo_map: true
-  # Need metadata for this one...
-  #weak_model_name: vertex_ai/gemini-2.0-flash
+  weak_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+  overeager: true
+  editor_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17

- name: openrouter/openrouter/quasar-alpha
  use_repo_map: true
@@ -1367,3 +1372,396 @@
  # extra_body:
  #   reasoning_effort: high

+- name: gemini/gemini-2.5-flash-preview-04-17
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings: ["reasoning_effort", "thinking_tokens"]
+
+- name: gemini-2.5-flash-preview-04-17
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings: ["reasoning_effort", "thinking_tokens"]
+
+- name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings: ["reasoning_effort", "thinking_tokens"]
+
+- name: openrouter/google/gemini-2.5-pro-preview-03-25
+  overeager: true
+  edit_format: diff-fenced
+  use_repo_map: true
+  weak_model_name: openrouter/google/gemini-2.0-flash-001
+
+- name: gemini/gemini-2.5-pro-preview-05-06
+  overeager: true
+  edit_format: diff-fenced
+  use_repo_map: true
+  weak_model_name: gemini/gemini-2.5-flash-preview-04-17
+
+- name: vertex_ai/gemini-2.5-pro-preview-05-06
+  edit_format: diff-fenced
+  use_repo_map: true
+  weak_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+  overeager: true
+  editor_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+
+- name: openrouter/google/gemini-2.5-pro-preview-05-06
+  overeager: true
+  edit_format: diff-fenced
+  use_repo_map: true
+  weak_model_name: openrouter/google/gemini-2.0-flash-001
+
+#- name: openrouter/qwen/qwen3-235b-a22b
+#  system_prompt_prefix: "/no_think"
+#  use_temperature: 0.7
+#  extra_params:
+#    max_tokens: 24000
+#    top_p: 0.8
+#    top_k: 20
+#    min_p: 0.0
+#    temperature: 0.7
+#  extra_body:
+#    provider:
+#      order: ["Together"]
+
+#- name: together_ai/Qwen/Qwen3-235B-A22B-fp8-tput
+#  system_prompt_prefix: "/no_think"
+#  use_temperature: 0.7
+#  reasoning_tag: think
+#  extra_params:
+#    max_tokens: 24000
+#    top_p: 0.8
+#    top_k: 20
+#    min_p: 0.0
+#    temperature: 0.7
+
+
+- name: claude-sonnet-4-20250514
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: anthropic/claude-sonnet-4-20250514
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: anthropic/claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock/anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock/anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock_converse/anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock_converse/us.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: vertex_ai/claude-sonnet-4@20250514
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    max_tokens: 64000
+  editor_model_name: vertex_ai/claude-sonnet-4@20250514
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: vertex_ai-anthropic_models/vertex_ai/claude-sonnet-4@20250514
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    max_tokens: 64000
+  editor_model_name: vertex_ai-anthropic_models/vertex_ai/claude-sonnet-4@20250514
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: openrouter/anthropic/claude-sonnet-4
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-sonnet-4
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock_converse/eu.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: eu.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: us.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: claude-opus-4-20250514
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: anthropic/claude-opus-4-20250514
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: anthropic/claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock_converse/anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: bedrock_converse/anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock_converse/us.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: bedrock_converse/us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: bedrock_converse/eu.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: bedrock_converse/eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: eu.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings: ["thinking_tokens"]
+
+- name: us.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
|
||||||
|
max_tokens: 32000
|
||||||
|
cache_control: true
|
||||||
|
editor_model_name: us.anthropic.claude-sonnet-4-20250514-v1:0
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
accepts_settings: ["thinking_tokens"]
|
||||||
|
|
||||||
|
- name: vertex_ai/claude-opus-4@20250514
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: vertex_ai/claude-3-5-haiku@20241022
|
||||||
|
use_repo_map: true
|
||||||
|
examples_as_sys_msg: false
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 32000
|
||||||
|
editor_model_name: vertex_ai/claude-sonnet-4@20250514
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
accepts_settings: ["thinking_tokens"]
|
||||||
|
|
||||||
|
- name: vertex_ai-anthropic_models/vertex_ai/claude-opus-4@20250514
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: vertex_ai/claude-3-5-haiku@20241022
|
||||||
|
use_repo_map: true
|
||||||
|
examples_as_sys_msg: false
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 32000
|
||||||
|
editor_model_name: vertex_ai-anthropic_models/vertex_ai/claude-sonnet-4@20250514
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
accepts_settings: ["thinking_tokens"]
|
||||||
|
|
||||||
|
- name: vertex_ai/gemini-2.5-flash-preview-05-20
|
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
accepts_settings: ["reasoning_effort", "thinking_tokens"]
|
||||||
|
- name: openrouter/anthropic/claude-opus-4
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/anthropic/claude-3-5-haiku
|
||||||
|
use_repo_map: true
|
||||||
|
examples_as_sys_msg: false
|
||||||
|
extra_params:
|
||||||
|
extra_headers:
|
||||||
|
anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
|
||||||
|
max_tokens: 32000
|
||||||
|
cache_control: true
|
||||||
|
editor_model_name: openrouter/anthropic/claude-sonnet-4
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
accepts_settings: ["thinking_tokens"]
|
||||||
|
|
||||||
|
|
|
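Each entry above maps YAML keys onto a per-model settings record. As an illustration only, the dataclass below is a hypothetical stand-in (not aider's actual `ModelSettings` class) showing how one such entry can be loaded with defaults for omitted keys:

```python
from dataclasses import dataclass, field


# Hypothetical stand-in for aider's per-model settings record; the field
# names mirror the YAML keys above, but this is not aider's actual class.
@dataclass
class ModelSettings:
    name: str
    edit_format: str = "whole"
    use_repo_map: bool = False
    accepts_settings: list = field(default_factory=list)


entry = {
    "name": "anthropic/claude-opus-4-20250514",
    "edit_format": "diff",
    "use_repo_map": True,
    "accepts_settings": ["thinking_tokens"],
}
ms = ModelSettings(**entry)
print("thinking_tokens" in ms.accepts_settings)  # True
```

Keys absent from an entry (like `edit_format` for some models) fall back to the dataclass defaults, which is why the sparser `gemini` entry above still yields a complete settings object.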
@@ -14,7 +14,7 @@ aider_user_agent = f"Aider/{__version__} +{urls.website}"
 # platforms.


-def install_playwright(io):
+def check_env():
     try:
         from playwright.sync_api import sync_playwright

@@ -29,6 +29,16 @@ def install_playwright(io):
     except Exception:
         has_chromium = False

+    return has_pip, has_chromium
+
+
+def has_playwright():
+    has_pip, has_chromium = check_env()
+    return has_pip and has_chromium
+
+
+def install_playwright(io):
+    has_pip, has_chromium = check_env()
     if has_pip and has_chromium:
         return True

@@ -262,7 +272,7 @@ def slimdown_html(soup):
 def main(url):
-    scraper = Scraper()
+    scraper = Scraper(playwright_available=has_playwright())
     content = scraper.scrape(url)
     print(content)
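The refactor above splits the environment probe out of `install_playwright` so that `has_playwright` can reuse it. The probe pattern can be sketched with the stdlib `importlib` machinery (the helper names here are illustrative, not aider's exact code):

```python
import importlib.util


def module_available(name: str) -> bool:
    # find_spec probes the import system without executing the module,
    # so a broken optional dependency cannot crash the probe.
    return importlib.util.find_spec(name) is not None


# check_env()-style probe: report each optional piece separately so the
# caller can print targeted install hints.
def check_env():
    return module_available("pip"), module_available("playwright")
```

Returning the individual flags (rather than a single boolean) lets `install_playwright` tell the user exactly which piece is missing, while `has_playwright` just ANDs them together.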
@@ -1,14 +1,14 @@
-import itertools
 import os
 import platform
-import shlex
 import subprocess
 import sys
 import tempfile
-import time
 from pathlib import Path

+import oslex
+
 from aider.dump import dump  # noqa: F401
+from aider.waiting import Spinner

 IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".webp", ".pdf"}

@@ -250,55 +250,6 @@ def run_install(cmd):
     return False, output


-class Spinner:
-    unicode_spinner = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
-    ascii_spinner = ["|", "/", "-", "\\"]
-
-    def __init__(self, text):
-        self.text = text
-        self.start_time = time.time()
-        self.last_update = 0
-        self.visible = False
-        self.is_tty = sys.stdout.isatty()
-        self.tested = False
-
-    def test_charset(self):
-        if self.tested:
-            return
-        self.tested = True
-        # Try unicode first, fall back to ascii if needed
-        try:
-            # Test if we can print unicode characters
-            print(self.unicode_spinner[0], end="", flush=True)
-            print("\r", end="", flush=True)
-            self.spinner_chars = itertools.cycle(self.unicode_spinner)
-        except UnicodeEncodeError:
-            self.spinner_chars = itertools.cycle(self.ascii_spinner)
-
-    def step(self):
-        if not self.is_tty:
-            return
-
-        current_time = time.time()
-        if not self.visible and current_time - self.start_time >= 0.5:
-            self.visible = True
-            self._step()
-        elif self.visible and current_time - self.last_update >= 0.1:
-            self._step()
-        self.last_update = current_time
-
-    def _step(self):
-        if not self.visible:
-            return
-
-        self.test_charset()
-        print(f"\r{self.text} {next(self.spinner_chars)}\r{self.text} ", end="", flush=True)
-
-    def end(self):
-        if self.visible and self.is_tty:
-            print("\r" + " " * (len(self.text) + 3))
-
-
 def find_common_root(abs_fnames):
     try:
         if len(abs_fnames) == 1:

@@ -384,19 +335,4 @@ def printable_shell_command(cmd_list):
     Returns:
         str: Shell-escaped command string.
     """
-    if platform.system() == "Windows":
-        return subprocess.list2cmdline(cmd_list)
-    else:
-        return shlex.join(cmd_list)
-
-
-def main():
-    spinner = Spinner("Running spinner...")
-    for _ in range(40):  # 40 steps * 0.25 seconds = 10 seconds
-        time.sleep(0.25)
-        spinner.step()
-    spinner.end()
-
-
-if __name__ == "__main__":
-    main()
+    return oslex.join(cmd_list)
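The last hunk above collapses the platform branch (`subprocess.list2cmdline` on Windows, `shlex.join` elsewhere) into a single `oslex.join` call. On POSIX, oslex delegates to the stdlib `shlex`, so the behavior can be sketched without the extra dependency (assuming oslex's documented shlex/mslex delegation):

```python
import shlex


def printable_shell_command(cmd_list):
    # oslex.join selects the right quoting for the host OS (shlex on
    # POSIX, mslex on Windows); shlex.join shows the POSIX behavior.
    return shlex.join(cmd_list)


print(printable_shell_command(["echo", "hello world"]))  # echo 'hello world'
```

Arguments containing spaces or shell metacharacters come back single-quoted, so the printed command can be pasted into a shell verbatim.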
aider/waiting.py (new file, 221 lines)
@@ -0,0 +1,221 @@
#!/usr/bin/env python

"""
Thread-based, killable spinner utility.

Use it like:

    from aider.waiting import WaitingSpinner

    spinner = WaitingSpinner("Waiting for LLM")
    spinner.start()
    ...  # long task
    spinner.stop()
"""

import sys
import threading
import time

from rich.console import Console


class Spinner:
    """
    Minimal spinner that scans a single marker back and forth across a line.

    The animation is pre-rendered into a list of frames.  If the terminal
    cannot display unicode the frames are converted to plain ASCII.
    """

    last_frame_idx = 0  # Class variable to store the last frame index

    def __init__(self, text: str, width: int = 7):
        self.text = text
        self.start_time = time.time()
        self.last_update = 0.0
        self.visible = False
        self.is_tty = sys.stdout.isatty()
        self.console = Console()

        # Pre-render the animation frames using pure ASCII so they will
        # always display, even on very limited terminals.
        ascii_frames = [
            "#=        ",  # C1 C2 space(8)
            "=#        ",  # C2 C1 space(8)
            " =#       ",  # space(1) C2 C1 space(7)
            "  =#      ",  # space(2) C2 C1 space(6)
            "   =#     ",  # space(3) C2 C1 space(5)
            "    =#    ",  # space(4) C2 C1 space(4)
            "     =#   ",  # space(5) C2 C1 space(3)
            "      =#  ",  # space(6) C2 C1 space(2)
            "       =# ",  # space(7) C2 C1 space(1)
            "        =#",  # space(8) C2 C1
            "        #=",  # space(8) C1 C2
            "       #= ",  # space(7) C1 C2 space(1)
            "      #=  ",  # space(6) C1 C2 space(2)
            "     #=   ",  # space(5) C1 C2 space(3)
            "    #=    ",  # space(4) C1 C2 space(4)
            "   #=     ",  # space(3) C1 C2 space(5)
            "  #=      ",  # space(2) C1 C2 space(6)
            " #=       ",  # space(1) C1 C2 space(7)
        ]

        self.unicode_palette = "░█"
        xlate_from, xlate_to = ("=#", self.unicode_palette)

        # If unicode is supported, swap the ASCII chars for nicer glyphs.
        if self._supports_unicode():
            translation_table = str.maketrans(xlate_from, xlate_to)
            frames = [f.translate(translation_table) for f in ascii_frames]
            self.scan_char = xlate_to[xlate_from.find("#")]
        else:
            frames = ascii_frames
            self.scan_char = "#"

        # Bounce the scanner back and forth.
        self.frames = frames
        self.frame_idx = Spinner.last_frame_idx  # Initialize from class variable
        self.width = len(frames[0]) - 2  # number of chars between the brackets
        self.animation_len = len(frames[0])
        self.last_display_len = 0  # Length of the last spinner line (frame + text)

    def _supports_unicode(self) -> bool:
        if not self.is_tty:
            return False
        try:
            out = self.unicode_palette
            out += "\b" * len(self.unicode_palette)
            out += " " * len(self.unicode_palette)
            out += "\b" * len(self.unicode_palette)
            sys.stdout.write(out)
            sys.stdout.flush()
            return True
        except UnicodeEncodeError:
            return False
        except Exception:
            return False

    def _next_frame(self) -> str:
        frame = self.frames[self.frame_idx]
        self.frame_idx = (self.frame_idx + 1) % len(self.frames)
        Spinner.last_frame_idx = self.frame_idx  # Update class variable
        return frame

    def step(self, text: str = None) -> None:
        if text is not None:
            self.text = text

        if not self.is_tty:
            return

        now = time.time()
        if not self.visible and now - self.start_time >= 0.5:
            self.visible = True
            self.last_update = 0.0
            if self.is_tty:
                self.console.show_cursor(False)

        if not self.visible or now - self.last_update < 0.1:
            return

        self.last_update = now
        frame_str = self._next_frame()

        # Determine the maximum width for the spinner line
        # Subtract 2 as requested, to leave a margin or prevent cursor wrapping issues
        max_spinner_width = self.console.width - 2
        if max_spinner_width < 0:  # Handle extremely narrow terminals
            max_spinner_width = 0

        current_text_payload = f" {self.text}"
        line_to_display = f"{frame_str}{current_text_payload}"

        # Truncate the line if it's too long for the console width
        if len(line_to_display) > max_spinner_width:
            line_to_display = line_to_display[:max_spinner_width]

        len_line_to_display = len(line_to_display)

        # Calculate padding to clear any remnants from a longer previous line
        padding_to_clear = " " * max(0, self.last_display_len - len_line_to_display)

        # Write the spinner frame, text, and any necessary clearing spaces
        sys.stdout.write(f"\r{line_to_display}{padding_to_clear}")
        self.last_display_len = len_line_to_display

        # Calculate number of backspaces to position cursor at the scanner character
        scan_char_abs_pos = frame_str.find(self.scan_char)

        # Total characters written to the line (frame + text + padding)
        total_chars_written_on_line = len_line_to_display + len(padding_to_clear)

        # num_backspaces will be non-positive if scan_char_abs_pos is beyond
        # total_chars_written_on_line (e.g., if the scan char itself was truncated).
        # In such cases, (effectively) 0 backspaces are written,
        # and the cursor stays at the end of the line.
        num_backspaces = total_chars_written_on_line - scan_char_abs_pos
        sys.stdout.write("\b" * num_backspaces)
        sys.stdout.flush()

    def end(self) -> None:
        if self.visible and self.is_tty:
            clear_len = self.last_display_len  # Use the length of the last displayed content
            sys.stdout.write("\r" + " " * clear_len + "\r")
            sys.stdout.flush()
            self.console.show_cursor(True)
        self.visible = False


class WaitingSpinner:
    """Background spinner that can be started/stopped safely."""

    def __init__(self, text: str = "Waiting for LLM", delay: float = 0.15):
        self.spinner = Spinner(text)
        self.delay = delay
        self._stop_event = threading.Event()
        self._thread = threading.Thread(target=self._spin, daemon=True)

    def _spin(self):
        while not self._stop_event.is_set():
            self.spinner.step()
            time.sleep(self.delay)
        self.spinner.end()

    def start(self):
        """Start the spinner in a background thread."""
        if not self._thread.is_alive():
            self._thread.start()

    def stop(self):
        """Request the spinner to stop and wait briefly for the thread to exit."""
        self._stop_event.set()
        if self._thread.is_alive():
            self._thread.join(timeout=self.delay)
        self.spinner.end()

    # Allow use as a context-manager
    def __enter__(self):
        self.start()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.stop()


def main():
    spinner = Spinner("Running spinner...")
    try:
        for _ in range(100):
            time.sleep(0.15)
            spinner.step()
        print("Success!")
    except KeyboardInterrupt:
        print("\nInterrupted by user.")
    finally:
        spinner.end()


if __name__ == "__main__":
    main()
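The unicode upgrade in the spinner above is a single `str.translate` call over the pre-rendered ASCII frames. The trick in isolation:

```python
# Same translate() trick the Spinner uses to upgrade its ASCII frames:
# "=" becomes the trail glyph and "#" becomes the scanner glyph.
table = str.maketrans("=#", "░█")

ascii_frame = " =#       "  # one pre-rendered 10-char frame
print(ascii_frame.translate(table))
```

Because the translation is one-to-one, frame widths and the scanner position are preserved exactly, which is what lets `step()` later locate `scan_char` with a simple `str.find`.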
@@ -34,6 +34,8 @@ def load_gitignores(gitignore_paths: list[Path]) -> Optional[PathSpec]:
     "__pycache__/",  # Python cache dir
     ".DS_Store",  # macOS metadata
     "Thumbs.db",  # Windows thumbnail cache
+    "*.svg",
+    "*.pdf",
     # IDE files
     ".idea/",  # JetBrains IDEs
     ".vscode/",  # VS Code

@@ -64,7 +66,9 @@ class FileWatcher:
     """Watches source files for changes and AI comments"""

     # Compiled regex pattern for AI comments
-    ai_comment_pattern = re.compile(r"(?:#|//|--|;+) *(ai\b.*|ai\b.*|.*\bai[?!]?) *$", re.IGNORECASE)
+    ai_comment_pattern = re.compile(
+        r"(?:#|//|--|;+) *(ai\b.*|ai\b.*|.*\bai[?!]?) *$", re.IGNORECASE
+    )

     def __init__(self, coder, gitignores=None, verbose=False, analytics=None, root=None):
         self.coder = coder

@@ -93,15 +97,19 @@ class FileWatcher:

         rel_path = path_abs.relative_to(self.root)
         if self.verbose:
-            dump(rel_path)
+            print("Changed", rel_path)

         if self.gitignore_spec and self.gitignore_spec.match_file(
             rel_path.as_posix() + ("/" if path_abs.is_dir() else "")
         ):
             return False

+        # Check file size before reading content
+        if path_abs.is_file() and path_abs.stat().st_size > 1 * 1024 * 1024:  # 1MB limit
+            return False
+
         if self.verbose:
-            dump("ok", rel_path)
+            print("Checking", rel_path)

         # Check if file contains AI markers
         try:
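The reflowed `ai_comment_pattern` above can be exercised directly. This snippet compiles the same regex and checks a few sample lines; note that a bare `ai` inside a word (as in "plain") does not trigger it, thanks to the `\b` boundaries:

```python
import re

# The same pattern FileWatcher compiles (reflowed in the hunk above).
ai_comment_pattern = re.compile(
    r"(?:#|//|--|;+) *(ai\b.*|ai\b.*|.*\bai[?!]?) *$", re.IGNORECASE
)

print(bool(ai_comment_pattern.search("x = 1  # ai fix this")))   # True
print(bool(ai_comment_pattern.search("// make it faster AI!")))  # True
print(bool(ai_comment_pattern.search("# plain comment")))        # False
```

The leading alternation `(?:#|//|--|;+)` is what lets the same pattern cover Python, C-family, SQL/Lua, and Lisp-style comments.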
@@ -24,7 +24,90 @@ cog.out(text)
 ]]]-->

+### Aider v0.84.0
+
+- Added support for new Claude models including the Sonnet 4 and Opus 4 series (e.g., `claude-sonnet-4-20250514`, `claude-opus-4-20250514`) across various providers. The default `sonnet` and `opus` aliases were updated to these newer versions.
+- Added support for the `vertex_ai/gemini-2.5-flash-preview-05-20` model.
+- Fixed OpenRouter token cost calculation for improved accuracy.
+- Updated default OpenRouter models during onboarding to `deepseek/deepseek-r1:free` for the free tier and `anthropic/claude-sonnet-4` for paid tiers.
+- Automatically refresh GitHub Copilot tokens when used as OpenAI API keys, by Lih Chen.
+- Aider wrote 79% of the code in this release.
+
+### Aider v0.83.2
+
+- Bumped configargparse to 1.7.1 as 1.7 was pulled.
+- Added shell tab completion for file path arguments (by saviour) and for `--edit-format`/`--editor-edit-format` options.
+- Improved OpenRouter model metadata handling by introducing a local cache, increasing reliability and performance.
+- The `/settings` command now displays detailed metadata for active main, editor, and weak models.
+- Fixed an issue where files explicitly added via the command line were not correctly ignored if listed in `.gitignore`.
+- Improved automatic commit messages by providing more context during their generation, by wangboxue.
+
+### Aider v0.83.1
+
+- Improved user language detection by correctly normalizing hyphenated language codes (e.g., `en-US` to `en`) and enhancing the validation of locale results.
+- Prevented Aider from instructing the LLM to reply in 'C' or 'POSIX' when these are detected as the system locale.
+- Displayed a spinner with the model name when generating commit messages.
+
+### Aider v0.83.0
+
+- Added support for `gemini-2.5-pro-preview-05-06` models.
+- Added support for `qwen3-235b` models.
+- Added repo-map support for OCaml and OCaml interface files, by Andrey Popp.
+- Added a spinner animation while waiting for the LLM to start streaming its response.
+- Updated the spinner animation to a Knight Rider style.
+- Introduced `--attribute-co-authored-by` option to add co-author trailer to commit messages, by Andrew Grigorev.
+- Updated Gemini model aliases (e.g., `gemini`, `gemini-2.5-pro`) to point to the `05-06` preview versions.
+- Marked Gemini 2.5 Pro preview models as `overeager` by default.
+- Commit message prompt specifies the user's language.
+- Updated the default weak model for Gemini 2.5 Pro models to `gemini/gemini-2.5-flash-preview-04-17`.
+- Corrected `gemini-2.5-pro-exp-03-25` model settings to reflect its lack of support for `thinking_budget`.
+- Ensured model-specific system prompt prefixes are placed on a new line before the main system prompt.
+- Added tracking of total tokens sent and received, now included in benchmark statistics.
+- Automatically fetch model parameters (context window, pricing) for OpenRouter models directly from their website, by Stefan Hladnik.
+- Enabled support for `thinking_tokens` and `reasoning_effort` parameters for OpenRouter models.
+- Improved cost calculation using `litellm.completion_cost` where available.
+- Added model settings for `openrouter/google/gemini-2.5-pro-preview-03-25`.
+- Added `--disable-playwright` flag to prevent Playwright installation prompts and usage, by Andrew Grigorev.
+- The `aider scrape` command-line tool will now use Playwright for web scraping if it is available, by Jon Keys.
+- Fixed linter command execution on Windows by adopting `oslex` for argument quoting, by Titusz Pan.
+- Improved cross-platform display of shell commands by using `oslex` for robust argument quoting, by Titusz Pan.
+- Improved `/ask` mode to instruct the LLM to elide unchanging code in its responses.
+- Ensured web scraping in the GUI also respects Playwright availability and the `--disable-playwright` flag.
+- Improved display of filenames in the prompt header using rich Text formatting.
+- Enabled `reasoning_effort` for Gemini 2.5 Flash models.
+- Added a `--shell-completions` argument to generate shell completion scripts (e.g., for bash, zsh).
+- Explicit `--attribute-author` or `--attribute-committer` flags now override the default behavior when `--attribute-co-authored-by` is used, allowing finer control over commit attribution, by Andrew Grigorev.
+- Fixed an issue where read-only status of files might not be preserved correctly by some commands (e.g. `/drop` after adding a read-only file).
+- The `aider-args` utility (or `python -m aider.args`) now defaults to printing a sample YAML configuration if no arguments are provided.
+- Displayed token count progress and the name of the file or identifier being processed during repo map updates.
+- Extended the waiting spinner to also show for non-streaming responses and further enhanced its animation with console width clipping, cursor hiding, and a more continuous appearance.
+- Dropped support for Python 3.9.
+- Aider wrote 55% of the code in this release.
+
+### Aider v0.82.3
+
+- Add support for `gemini-2.5-flash-preview-04-17` models.
+- Improved robustness of edit block parsing when filenames start with backticks or fences.
+- Add new `udiff-simple` edit format, for Gemini 2.5 Pro.
+- Update default weak/editor models for Gemini 2.5 Pro models to use `gemini-2.5-flash-preview-04-17`.
+- Instruct models to reply in the user's detected system language.
+- Fix parsing of diffs for newly created files (`--- /dev/null`).
+- Add markdown syntax highlighting support when editing multi-line commit messages via `/commit`, by Kay Gosho.
+- Set Gemini 2.5 Pro models to use the `overeager` prompt setting by default.
+- Add common file types (`.svg`, `.pdf`) to the default list of ignored files for AI comment scanning (`--watch`).
+- Skip scanning files larger than 1MB for AI comments (`--watch`).
+
+### Aider v0.82.2
+
+- Fix editing shell files with diff-fenced, by zjy1412.
+- Improve robustness of patch application by allowing multiple update/delete actions for the same file within a single response.
+- Update prompts to instruct LLMs to consolidate all edits for a given file into a single block within the patch.
+
 ### Aider v0.82.1

 - Added support for `o3` and `o4-mini` including provider-specific versions for OpenAI, OpenRouter, and Azure.
 - Added support for Azure specific `gpt-4.1` and `gpt-4.1-mini` models.
 - Disabled streaming for `o3` models since you need identity verification to stream.

@@ -372,7 +455,7 @@ cog.out(text)
 - [Aider works with LLM web chat UIs](https://aider.chat/docs/usage/copypaste.html).
   - New `--copy-paste` mode.
   - New `/copy-context` command.
-- [Set API keys and other environment variables for all providers from command line or yaml conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
+- [Set API keys and other environment variables for all providers from command line or YAML conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
   - New `--api-key provider=key` setting.
   - New `--set-env VAR=value` setting.
 - Added bash and zsh support to `--watch-files`.

@@ -540,7 +623,7 @@ cog.out(text)

 ### Aider v0.59.1

-- Check for obsolete `yes: true` in yaml config, show helpful error.
+- Check for obsolete `yes: true` in YAML config, show helpful error.
 - Model settings for openrouter/anthropic/claude-3.5-sonnet:beta

 ### Aider v0.59.0

@@ -550,7 +633,7 @@ cog.out(text)
   - Still auto-completes the full paths of the repo files like `/add`.
   - Now supports globs like `src/**/*.py`
 - Renamed `--yes` to `--yes-always`.
-  - Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
+  - Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` YAML key.
   - Existing YAML and .env files will need to be updated.
   - Can still abbreviate to `--yes` on the command line.
 - Config file now uses standard YAML list syntax with ` - list entries`, one per line.

@@ -757,7 +840,7 @@ cog.out(text)
   - Use `--map-refresh <always|files|manual|auto>` to configure.
 - Improved cost estimate logic for caching.
 - Improved editing performance on Jupyter Notebook `.ipynb` files.
-- Show which config yaml file is loaded with `--verbose`.
+- Show which config YAML file is loaded with `--verbose`.
 - Bumped dependency versions.
 - Bugfix: properly load `.aider.models.metadata.json` data.
 - Bugfix: Using `--msg /ask ...` caused an exception.
@@ -32,7 +32,7 @@ aux_links:
   "GitHub":
     - "https://github.com/Aider-AI/aider"
   "Discord":
-    - "https://discord.gg/Tv2uQnR88V"
+    - "https://discord.gg/Y7X7bhMQFV"
   "Blog":
     - "/blog/"

@@ -40,7 +40,7 @@ nav_external_links:
   - title: "GitHub"
     url: "https://github.com/Aider-AI/aider"
   - title: "Discord"
-    url: "https://discord.gg/Tv2uQnR88V"
+    url: "https://discord.gg/Y7X7bhMQFV"

 repository: Aider-AI/aider
@@ -4500,3 +4500,228 @@
    Paul Gauthier (aider): 1567
  start_tag: v0.81.0
  total_lines: 1706
- aider_percentage: 54.32
  aider_total: 1409
  end_date: '2025-05-09'
  end_tag: v0.83.0
  file_counts:
    .github/workflows/check_pypi_version.yml:
      Paul Gauthier (aider): 1
    .github/workflows/pre-commit.yml:
      MDW: 48
    .github/workflows/ubuntu-tests.yml:
      Paul Gauthier (aider): 1
    .github/workflows/windows-tests.yml:
      Paul Gauthier (aider): 1
    .github/workflows/windows_check_pypi_version.yml:
      Paul Gauthier (aider): 1
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args.py:
      Andrew Grigorev: 21
      Andrew Grigorev (aider): 5
      Paul Gauthier (aider): 38
    aider/coders/__init__.py:
      Paul Gauthier (aider): 2
    aider/coders/base_coder.py:
      Andrew Grigorev (aider): 2
      Paul Gauthier: 60
      Paul Gauthier (aider): 104
    aider/coders/editblock_coder.py:
      Paul Gauthier: 10
      Paul Gauthier (aider): 7
      zjy1412: 2
    aider/coders/editblock_fenced_coder.py:
      MDW: 1
    aider/coders/help_coder.py:
      MDW: 1
    aider/coders/patch_coder.py:
      Paul Gauthier (aider): 38
    aider/coders/shell.py:
      Paul Gauthier: 37
    aider/coders/udiff_coder.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 9
    aider/coders/udiff_simple.py:
      Paul Gauthier (aider): 14
    aider/commands.py:
      Andrew Grigorev: 10
      Paul Gauthier: 7
      Paul Gauthier (aider): 1
    aider/gui.py:
      Jon Keys: 2
    aider/io.py:
      Kay Gosho: 1
      Paul Gauthier (aider): 5
    aider/linter.py:
      Paul Gauthier: 1
      Titusz Pan: 1
    aider/main.py:
      Paul Gauthier (aider): 9
    aider/mdstream.py:
      Paul Gauthier (aider): 11
    aider/models.py:
      Paul Gauthier: 4
      Paul Gauthier (aider): 66
      Stefan Hladnik: 4
      Stefan Hladnik (aider): 41
    aider/queries/tree-sitter-language-pack/ocaml_interface-tags.scm:
      Andrey Popp: 98
    aider/queries/tree-sitter-languages/ocaml_interface-tags.scm:
      Andrey Popp: 98
    aider/repo.py:
      Andrew Grigorev: 115
      Andrew Grigorev (aider): 21
      Paul Gauthier: 6
      Paul Gauthier (aider): 33
    aider/repomap.py:
      Paul Gauthier: 5
      Paul Gauthier (aider): 6
    aider/resources/model-settings.yml:
      Paul Gauthier: 183
      Paul Gauthier (aider): 175
      cantalupo555: 1
    aider/scrape.py:
      Jon Keys: 12
    aider/utils.py:
      Paul Gauthier: 13
      Paul Gauthier (aider): 131
      Titusz Pan: 1
    aider/waiting.py:
      Paul Gauthier: 1
      Paul Gauthier (aider): 54
    aider/watch.py:
      Paul Gauthier: 6
      Paul Gauthier (aider): 7
    aider/website/_includes/leaderboard_table.js:
      Paul Gauthier: 2
      Paul Gauthier (aider): 18
    aider/website/docs/leaderboards/index.md:
      Paul Gauthier: 1
      Paul Gauthier (aider): 2
    aider/website/index.html:
      Paul Gauthier: 13
    benchmark/benchmark.py:
      Paul Gauthier: 3
      Paul Gauthier (aider): 42
    benchmark/docker.sh:
      Paul Gauthier: 2
    benchmark/refactor_tools.py:
      MDW: 1
    scripts/30k-image.py:
      MDW: 1
    scripts/clean_metadata.py:
      Paul Gauthier (aider): 258
    scripts/update-history.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 7
    tests/basic/test_coder.py:
      Paul Gauthier (aider): 3
    tests/basic/test_commands.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 90
    tests/basic/test_editblock.py:
      Paul Gauthier: 10
      zjy1412: 52
    tests/basic/test_io.py:
      Paul Gauthier (aider): 132
    tests/basic/test_linter.py:
      Paul Gauthier: 22
      Titusz Pan: 10
    tests/basic/test_repo.py:
      Andrew Grigorev: 75
      Andrew Grigorev (aider): 65
      Paul Gauthier: 79
      Paul Gauthier (aider): 6
    tests/basic/test_repomap.py:
      Andrey Popp: 7
    tests/basic/test_watch.py:
      MDW: 1
    tests/fixtures/languages/ocaml_interface/test.mli:
      Andrey Popp: 14
    tests/scrape/test_playwright_disable.py:
      Andrew Grigorev: 111
      Paul Gauthier: 25
      Paul Gauthier (aider): 3
  grand_total:
    Andrew Grigorev: 332
    Andrew Grigorev (aider): 93
    Andrey Popp: 217
    Jon Keys: 14
    Kay Gosho: 1
    MDW: 53
    Paul Gauthier: 497
    Paul Gauthier (aider): 1275
    Stefan Hladnik: 4
    Stefan Hladnik (aider): 41
    Titusz Pan: 12
    cantalupo555: 1
    zjy1412: 54
  start_tag: v0.82.0
  total_lines: 2594
- aider_percentage: 78.92
  aider_total: 655
  end_date: '2025-05-30'
  end_tag: v0.84.0
  file_counts:
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args.py:
      Paul Gauthier (aider): 27
      saviour: 2
    aider/args_formatter.py:
      Paul Gauthier: 1
    aider/coders/base_coder.py:
      Paul Gauthier: 4
      Paul Gauthier (aider): 10
    aider/commands.py:
      Paul Gauthier (aider): 23
      wangboxue: 1
    aider/models.py:
      Lih Chen: 15
      Paul Gauthier: 16
      Paul Gauthier (aider): 12
    aider/onboarding.py:
      Paul Gauthier: 2
    aider/openrouter.py:
      Paul Gauthier (aider): 120
    aider/repo.py:
      Paul Gauthier: 1
      Paul Gauthier (aider): 10
    aider/repomap.py:
      Paul Gauthier (aider): 1
    aider/resources/model-settings.yml:
      Paul Gauthier: 71
      Paul Gauthier (aider): 193
      Trung Dinh: 11
    aider/utils.py:
      Paul Gauthier (aider): 1
    aider/waiting.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 6
    aider/website/docs/leaderboards/index.md:
      Paul Gauthier: 1
    aider/website/index.html:
      Paul Gauthier: 43
    scripts/update-history.py:
      Paul Gauthier: 2
    tests/basic/test_coder.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 144
    tests/basic/test_main.py:
      Paul Gauthier (aider): 28
    tests/basic/test_models.py:
      Paul Gauthier (aider): 2
    tests/basic/test_onboarding.py:
      Paul Gauthier (aider): 5
    tests/basic/test_openrouter.py:
      Paul Gauthier (aider): 73
  grand_total:
    Lih Chen: 15
    Paul Gauthier: 146
    Paul Gauthier (aider): 655
    Trung Dinh: 11
    saviour: 2
    wangboxue: 1
  start_tag: v0.83.0
  total_lines: 830

@@ -831,7 +831,7 @@
   date: 2025-04-12
   versions: 0.81.3.dev
   seconds_per_case: 45.3
-  total_cost: 6.3174
+  total_cost: 0 # incorrect: 6.3174

 - dirname: 2025-03-29-05-24-55--chatgpt4o-mar28-diff
   test_cases: 225
@ -1197,4 +1197,284 @@
|
||||||
date: 2025-04-19
|
date: 2025-04-19
|
||||||
versions: 0.82.2.dev
|
versions: 0.82.2.dev
|
||||||
seconds_per_case: 195.6
|
seconds_per_case: 195.6
|
||||||
total_cost: 0.0000
|
total_cost: 0.0000
|
||||||
|
|
||||||
|
- dirname: 2025-04-20-19-54-31--flash25-diff-no-think
|
||||||
|
test_cases: 225
|
||||||
|
model: gemini-2.5-flash-preview-04-17 (default)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: 7fcce5d-dirty
|
||||||
|
pass_rate_1: 21.8
|
||||||
|
pass_rate_2: 47.1
|
||||||
|
pass_num_1: 49
|
||||||
|
pass_num_2: 106
|
||||||
|
percent_cases_well_formed: 85.3
|
||||||
|
error_outputs: 60
|
||||||
|
num_malformed_responses: 55
|
||||||
|
num_with_malformed_responses: 33
|
||||||
|
user_asks: 82
|
||||||
|
lazy_comments: 1
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 5
|
||||||
|
test_timeouts: 4
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model gemini/gemini-2.5-flash-preview-04-17
|
||||||
|
date: 2025-04-20
|
||||||
|
versions: 0.82.3.dev
|
||||||
|
seconds_per_case: 50.1
|
||||||
|
total_cost: 1.8451
|
||||||
|
|
||||||
|
- dirname: 2025-05-07-19-32-40--gemini0506-diff-fenced-completion_cost
|
||||||
|
test_cases: 225
|
||||||
|
model: Gemini 2.5 Pro Preview 05-06
|
||||||
|
edit_format: diff-fenced
|
||||||
|
commit_hash: 3b08327-dirty
|
||||||
|
pass_rate_1: 36.4
|
||||||
|
pass_rate_2: 76.9
|
||||||
|
pass_num_1: 82
|
||||||
|
pass_num_2: 173
|
||||||
|
percent_cases_well_formed: 97.3
|
||||||
|
error_outputs: 15
|
||||||
|
num_malformed_responses: 7
|
||||||
|
num_with_malformed_responses: 6
|
||||||
|
user_asks: 105
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
test_timeouts: 2
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model gemini/gemini-2.5-pro-preview-05-06
|
||||||
|
date: 2025-05-07
|
||||||
|
versions: 0.82.4.dev
|
||||||
|
seconds_per_case: 165.3
|
||||||
|
total_cost: 37.4104
|
||||||
|
|
||||||
|
- dirname: 2025-05-08-03-20-24--qwen3-32b-default
|
||||||
|
test_cases: 225
|
||||||
|
model: Qwen3 32B
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: aaacee5-dirty, aeaf259
|
||||||
|
pass_rate_1: 14.2
|
||||||
|
pass_rate_2: 40.0
|
||||||
|
pass_num_1: 32
|
||||||
|
pass_num_2: 90
|
||||||
|
percent_cases_well_formed: 83.6
|
||||||
|
error_outputs: 119
|
||||||
|
num_malformed_responses: 50
|
||||||
|
num_with_malformed_responses: 37
|
||||||
|
user_asks: 97
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 12
|
||||||
|
prompt_tokens: 317591
|
||||||
|
completion_tokens: 120418
|
||||||
|
test_timeouts: 5
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model openrouter/qwen/qwen3-32b
|
||||||
|
date: 2025-05-08
|
||||||
|
versions: 0.82.4.dev
|
||||||
|
seconds_per_case: 372.2
|
||||||
|
total_cost: 0.7603
|
||||||
|
|
||||||
|
- dirname: 2025-05-09-17-02-02--qwen3-235b-a22b.unthink_16k_diff
|
||||||
|
test_cases: 225
|
||||||
|
model: Qwen3 235B A22B diff, no think, Alibaba API
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: 91d7fbd-dirty
|
||||||
|
pass_rate_1: 28.9
|
||||||
|
pass_rate_2: 59.6
|
||||||
|
pass_num_1: 65
|
||||||
|
pass_num_2: 134
|
||||||
|
percent_cases_well_formed: 92.9
|
||||||
|
error_outputs: 22
|
||||||
|
num_malformed_responses: 22
|
||||||
|
num_with_malformed_responses: 16
|
||||||
|
user_asks: 111
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
prompt_tokens: 2816192
|
||||||
|
completion_tokens: 342062
|
||||||
|
test_timeouts: 1
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model openai/qwen3-235b-a22b
|
||||||
|
date: 2025-05-09
|
||||||
|
versions: 0.82.4.dev
|
||||||
|
seconds_per_case: 45.4
|
||||||
|
total_cost: 0.0000
|
||||||
|
|
||||||
|
- dirname: 2025-05-24-21-17-54--sonnet4-diff-exuser
|
||||||
|
test_cases: 225
|
||||||
|
model: claude-sonnet-4-20250514 (no thinking)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: ef3f8bb-dirty
|
||||||
|
pass_rate_1: 20.4
|
||||||
|
pass_rate_2: 56.4
|
||||||
|
pass_num_1: 46
|
||||||
|
pass_num_2: 127
|
||||||
|
percent_cases_well_formed: 98.2
|
||||||
|
error_outputs: 6
|
||||||
|
num_malformed_responses: 4
|
||||||
|
num_with_malformed_responses: 4
|
||||||
|
user_asks: 129
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 1
|
||||||
|
prompt_tokens: 3460663
|
||||||
|
completion_tokens: 433373
|
||||||
|
test_timeouts: 7
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model claude-sonnet-4-20250514
|
||||||
|
date: 2025-05-24
|
||||||
|
versions: 0.83.3.dev
|
||||||
|
seconds_per_case: 29.8
|
||||||
|
total_cost: 15.8155
|
||||||
|
|
||||||
|
- dirname: 2025-05-24-22-10-36--sonnet4-diff-exuser-think32k
|
||||||
|
test_cases: 225
|
||||||
|
model: claude-sonnet-4-20250514 (32k thinking)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: e3cb907
|
||||||
|
thinking_tokens: 32000
|
||||||
|
pass_rate_1: 25.8
|
||||||
|
pass_rate_2: 61.3
|
||||||
|
pass_num_1: 58
|
||||||
|
pass_num_2: 138
|
||||||
|
percent_cases_well_formed: 97.3
|
||||||
|
error_outputs: 10
|
||||||
|
num_malformed_responses: 10
|
||||||
|
num_with_malformed_responses: 6
|
||||||
|
user_asks: 111
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
prompt_tokens: 2863068
|
||||||
|
completion_tokens: 1271074
|
||||||
|
test_timeouts: 6
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model claude-sonnet-4-20250514
|
||||||
|
date: 2025-05-24
|
||||||
|
versions: 0.83.3.dev
|
||||||
|
seconds_per_case: 79.9
|
||||||
|
total_cost: 26.5755
|
||||||
|
|
||||||
|
- dirname: 2025-05-25-19-57-20--opus4-diff-exuser
|
||||||
|
test_cases: 225
|
||||||
|
model: claude-opus-4-20250514 (no think)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: 9ef3211
|
||||||
|
pass_rate_1: 32.9
|
||||||
|
pass_rate_2: 70.7
|
||||||
|
pass_num_1: 74
|
||||||
|
pass_num_2: 159
|
||||||
|
percent_cases_well_formed: 98.7
|
||||||
|
error_outputs: 3
|
||||||
|
num_malformed_responses: 3
|
||||||
|
num_with_malformed_responses: 3
|
||||||
|
user_asks: 105
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
prompt_tokens: 2671437
|
||||||
|
completion_tokens: 380717
|
||||||
|
test_timeouts: 3
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model claude-opus-4-20250514
|
||||||
|
date: 2025-05-25
|
||||||
|
versions: 0.83.3.dev
|
||||||
|
seconds_per_case: 42.5
|
||||||
|
total_cost: 68.6253
|
||||||
|
|
||||||
|
- dirname: 2025-05-25-20-40-51--opus4-diff-exuser
|
||||||
|
test_cases: 225
|
||||||
|
model: claude-opus-4-20250514 (32k thinking)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: 9ef3211
|
||||||
|
thinking_tokens: 32000
|
||||||
|
pass_rate_1: 37.3
|
||||||
|
pass_rate_2: 72.0
|
||||||
|
pass_num_1: 84
|
||||||
|
pass_num_2: 162
|
||||||
|
percent_cases_well_formed: 97.3
|
||||||
|
error_outputs: 10
|
||||||
|
num_malformed_responses: 6
|
||||||
|
num_with_malformed_responses: 6
|
||||||
|
user_asks: 97
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
prompt_tokens: 2567514
|
||||||
|
completion_tokens: 363142
|
||||||
|
test_timeouts: 4
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model claude-opus-4-20250514
|
||||||
|
date: 2025-05-25
|
||||||
|
versions: 0.83.3.dev
|
||||||
|
seconds_per_case: 44.1
|
||||||
|
total_cost: 65.7484
|
||||||
|
|
||||||
|
- dirname: 2025-05-26-15-56-31--flash25-05-20-24k-think # dirname is misleading
|
||||||
|
test_cases: 225
|
||||||
|
model: gemini-2.5-flash-preview-05-20 (no think)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: 214b811-dirty
|
||||||
|
thinking_tokens: 0 # <-- no thinking
|
||||||
|
pass_rate_1: 20.9
|
||||||
|
pass_rate_2: 44.0
|
||||||
|
pass_num_1: 47
|
||||||
|
pass_num_2: 99
|
||||||
|
percent_cases_well_formed: 93.8
|
||||||
|
error_outputs: 16
|
||||||
|
num_malformed_responses: 16
|
||||||
|
num_with_malformed_responses: 14
|
||||||
|
user_asks: 79
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
prompt_tokens: 5512458
|
||||||
|
completion_tokens: 514145
|
||||||
|
test_timeouts: 4
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model gemini/gemini-2.5-flash-preview-05-20
|
||||||
|
date: 2025-05-26
|
||||||
|
versions: 0.83.3.dev
|
||||||
|
seconds_per_case: 12.2
|
||||||
|
total_cost: 1.1354
|
||||||
|
|
||||||
|
- dirname: 2025-05-25-22-58-44--flash25-05-20-24k-think
|
||||||
|
test_cases: 225
|
||||||
|
model: gemini-2.5-flash-preview-05-20 (24k think)
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: a8568c3-dirty
|
||||||
|
thinking_tokens: 24576
|
||||||
|
pass_rate_1: 26.2
|
||||||
|
pass_rate_2: 55.1
|
||||||
|
pass_num_1: 59
|
||||||
|
pass_num_2: 124
|
||||||
|
percent_cases_well_formed: 95.6
|
||||||
|
error_outputs: 15
|
||||||
|
num_malformed_responses: 15
|
||||||
|
num_with_malformed_responses: 10
|
||||||
|
user_asks: 101
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
prompt_tokens: 3666792
|
||||||
|
completion_tokens: 2703162
|
||||||
|
test_timeouts: 4
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model gemini/gemini-2.5-flash-preview-05-20
|
||||||
|
date: 2025-05-25
|
||||||
|
versions: 0.83.3.dev
|
||||||
|
seconds_per_case: 53.9
|
||||||
|
total_cost: 8.5625
|
aider/website/_data/qwen3_leaderboard.yml (new file, 272 lines)

@@ -0,0 +1,272 @@
- dirname: 2025-05-08-03-20-24--qwen3-32b-default
  test_cases: 225
  model: Qwen3 32B diff on OpenRouter, all providers, default settings (thinking)
  edit_format: diff
  commit_hash: aaacee5-dirty, aeaf259
  pass_rate_1: 14.2
  pass_rate_2: 40.0
  pass_num_1: 32
  pass_num_2: 90
  percent_cases_well_formed: 83.6
  error_outputs: 119
  num_malformed_responses: 50
  num_with_malformed_responses: 37
  user_asks: 97
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 12
  prompt_tokens: 317591
  completion_tokens: 120418
  test_timeouts: 5
  total_tests: 225
  command: aider --model openrouter/qwen/qwen3-32b
  date: 2025-05-08
  versions: 0.82.4.dev
  seconds_per_case: 372.2
  total_cost: 0.7603

- dirname: 2025-05-08-03-22-37--qwen3-235b-defaults
  test_cases: 225
  model: Qwen3 235B A22B diff on OpenRouter, all providers, default settings (thinking)
  edit_format: diff
  commit_hash: aaacee5-dirty
  pass_rate_1: 17.3
  pass_rate_2: 49.8
  pass_num_1: 39
  pass_num_2: 112
  percent_cases_well_formed: 91.6
  error_outputs: 58
  num_malformed_responses: 29
  num_with_malformed_responses: 19
  user_asks: 102
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  prompt_tokens: 0
  completion_tokens: 0
  test_timeouts: 1
  total_tests: 225
  command: aider --model openrouter/qwen/qwen3-235b-a22b
  date: 2025-05-08
  versions: 0.82.4.dev
  seconds_per_case: 428.1
  total_cost: 1.8037

- dirname: 2025-05-08-17-39-14--qwen3-235b-or-together-only
  test_cases: 225
  model: Qwen3 235B A22B diff on OpenRouter only TogetherAI, recommended /no_think settings
  edit_format: diff
  commit_hash: 328584e
  pass_rate_1: 28.0
  pass_rate_2: 54.7
  pass_num_1: 63
  pass_num_2: 123
  percent_cases_well_formed: 90.7
  error_outputs: 39
  num_malformed_responses: 32
  num_with_malformed_responses: 21
  user_asks: 106
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  prompt_tokens: 2816606
  completion_tokens: 362346
  test_timeouts: 2
  total_tests: 225
  command: aider --model openrouter/qwen/qwen3-235b-a22b
  date: 2025-05-08
  versions: 0.82.4.dev
  seconds_per_case: 77.2
  total_cost: 0.6399

- dirname: 2025-04-30-04-49-37--Qwen3-235B-A22B-whole-nothink
  test_cases: 225
  model: Qwen3-235B-A22B whole with VLLM, bfloat16, recommended /no_think settings
  edit_format: whole
  commit_hash: 0c383df-dirty
  pass_rate_1: 28.0
  pass_rate_2: 65.3
  pass_num_1: 63
  pass_num_2: 147
  percent_cases_well_formed: 100.0
  error_outputs: 3
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 166
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 3
  test_timeouts: 0
  total_tests: 225
  command: aider --model openai/Qwen3-235B-A22B
  date: 2025-04-30
  versions: 0.81.4.dev
  seconds_per_case: 166.0
  total_cost: 0.0000

- dirname: 2025-04-30-04-49-50--Qwen3-235B-A22B-diff-nothink
  test_cases: 225
  model: Qwen3-235B-A22B diff with VLLM, bfloat16, recommended /no_think settings
  edit_format: diff
  commit_hash: 0c383df-dirty
  pass_rate_1: 29.8
  pass_rate_2: 61.3
  pass_num_1: 67
  pass_num_2: 138
  percent_cases_well_formed: 94.7
  error_outputs: 25
  num_malformed_responses: 25
  num_with_malformed_responses: 12
  user_asks: 97
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  total_tests: 225
  command: aider --model openai/Qwen3-235B-A22B
  date: 2025-04-30
  versions: 0.81.4.dev
  seconds_per_case: 158.2
  total_cost: 0.0000

- dirname: 2025-04-30-04-08-41--Qwen3-32B-whole-nothink
  test_cases: 225
  model: Qwen3-32B whole with VLLM, bfloat16, recommended /no_think settings
  edit_format: whole
  commit_hash: 0c383df-dirty
  pass_rate_1: 20.4
  pass_rate_2: 45.8
  pass_num_1: 46
  pass_num_2: 103
  percent_cases_well_formed: 100.0
  error_outputs: 3
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 94
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 3
  test_timeouts: 5
  total_tests: 225
  command: aider --model openai/Qwen3-32B
  date: 2025-04-30
  versions: 0.81.4.dev
  seconds_per_case: 48.1
  total_cost: 0.0000

- dirname: 2025-04-30-04-08-51--Qwen3-32B-diff-nothink
  test_cases: 225
  model: Qwen3-32B diff with VLLM, bfloat16, recommended /no_think settings
  edit_format: diff
  commit_hash: 0c383df-dirty
  pass_rate_1: 20.4
  pass_rate_2: 41.3
  pass_num_1: 46
  pass_num_2: 93
  percent_cases_well_formed: 94.2
  error_outputs: 17
  num_malformed_responses: 14
  num_with_malformed_responses: 13
  user_asks: 83
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 3
  test_timeouts: 4
  total_tests: 225
  command: aider --model openai/Qwen3-32B
  date: 2025-04-30
  versions: 0.81.4.dev
  seconds_per_case: 59.4
  total_cost: 0.0000

- dirname: 2025-05-07-03-15-59--Qwen3-235B-A22B-Q5_K_M-whole-nothink
  test_cases: 225
  model: Qwen3-235B-A22B whole with llama.cpp, Q5_K_M (unsloth), recommended /no_think settings
  edit_format: whole
  commit_hash: 8159cbf
  pass_rate_1: 27.1
  pass_rate_2: 59.1
  pass_num_1: 61
  pass_num_2: 133
  percent_cases_well_formed: 100.0
  error_outputs: 1
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 169
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  total_tests: 225
  command: aider --model openai/Qwen3-235B-A22B-Q5_K_M
  date: 2025-05-07
  versions: 0.82.4.dev
  seconds_per_case: 635.2
  total_cost: 0.0000

- dirname: 2025-05-09-17-02-02--qwen3-235b-a22b.unthink_16k_diff
  test_cases: 225
  model: Qwen3 235B A22B diff, no think, via official Alibaba API
  edit_format: diff
  commit_hash: 91d7fbd-dirty
  pass_rate_1: 28.9
  pass_rate_2: 59.6
  pass_num_1: 65
  pass_num_2: 134
  percent_cases_well_formed: 92.9
  error_outputs: 22
  num_malformed_responses: 22
  num_with_malformed_responses: 16
  user_asks: 111
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  prompt_tokens: 2816192
  completion_tokens: 342062
  test_timeouts: 1
  total_tests: 225
  command: aider --model openai/qwen3-235b-a22b
  date: 2025-05-09
  versions: 0.82.4.dev
  seconds_per_case: 45.4
  total_cost: 0.0000

- dirname: 2025-05-09-23-01-22--qwen3-235b-a22b.unthink_16k_whole
  test_cases: 225
  model: Qwen3 235B A22B whole, no think, via official Alibaba API
  edit_format: whole
  commit_hash: 425fb6d
  pass_rate_1: 26.7
  pass_rate_2: 61.8
  pass_num_1: 60
  pass_num_2: 139
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 175
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  prompt_tokens: 2768173
  completion_tokens: 384000
  test_timeouts: 1
  total_tests: 225
  command: aider --model openai/qwen3-235b-a22b
  date: 2025-05-09
  versions: 0.82.4.dev
  seconds_per_case: 50.8
  total_cost: 0.0000

@@ -27,7 +27,7 @@ document.addEventListener('DOMContentLoaded', function () {
       labels: labels,
       datasets: [{
         label: 'Aider\'s percent of new code by release',
-        data: [{% for row in site.data.blame %}{ x: '{{ row.end_tag }}', y: {{ row.aider_percentage }}, lines: {{ row.aider_total }} },{% endfor %}],
+        data: [{% for row in site.data.blame %}{ x: '{{ row.end_tag }}', y: {{ row.aider_percentage }}, lines: {{ row.aider_total }}, end_date: '{{ row.end_date }}' },{% endfor %}],
         backgroundColor: 'rgba(54, 162, 235, 0.8)',
         borderColor: 'rgba(54, 162, 235, 1)',
         borderWidth: 1

@@ -88,6 +88,10 @@ document.addEventListener('DOMContentLoaded', function () {
           var value = context.parsed.y || 0;
           var lines = context.raw.lines || 0;
           return `${label}: ${Math.round(value)}% (${lines} lines)`;
+        },
+        afterLabel: function(context) {
+          let date = context.raw.end_date || 'n/a';
+          return `Date: ` + date;
         }
       }
     },

@@ -1,10 +1,13 @@

-If you already have python 3.8-3.13 installed, you can get started quickly like this:
+If you already have python 3.8-3.13 installed, you can get started quickly like this.
+
+First, install aider:
+
+{% include install.md %}
+
+Start working with aider on your codebase:

 ```bash
-python -m pip install aider-install
-aider-install
-
 # Change directory into your codebase
 cd /to/your/project

@@ -2,7 +2,7 @@ If you need more help, please check our
 [GitHub issues](https://github.com/Aider-AI/aider/issues)
 and file a new issue if your problem isn't discussed.
 Or drop into our
-[Discord](https://discord.gg/Tv2uQnR88V)
+[Discord](https://discord.gg/Y7X7bhMQFV)
 to chat with us.

 When reporting problems, it is very helpful if you can provide:

aider/website/_includes/install.md (new file, 5 lines)

@@ -0,0 +1,5 @@

```bash
python -m pip install aider-install
aider-install
```

@@ -188,10 +188,15 @@ document.addEventListener('DOMContentLoaded', function() {

   // Update the leaderboard title based on mode and selection
   if (leaderboardTitle) {
-    if (currentMode === 'view' && selectedRows.size > 0) {
-      leaderboardTitle.textContent = filteredTitle;
+    // Check if a custom title is provided globally
+    if (typeof LEADERBOARD_CUSTOM_TITLE !== 'undefined' && LEADERBOARD_CUSTOM_TITLE) {
+      leaderboardTitle.textContent = LEADERBOARD_CUSTOM_TITLE;
     } else {
-      leaderboardTitle.textContent = defaultTitle;
+      if (currentMode === 'view' && selectedRows.size > 0) {
+        leaderboardTitle.textContent = filteredTitle;
+      } else {
+        leaderboardTitle.textContent = defaultTitle;
+      }
     }
   }

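The hunk above lets a page override the leaderboard title via a global before the script runs. The selection logic can be sketched as a pure function for clarity (the function name and standalone form are hypothetical; only `LEADERBOARD_CUSTOM_TITLE` and the mode/selection behavior come from the diff):

```javascript
// Sketch of the title-selection precedence added in the hunk above:
// a truthy custom title wins; otherwise fall back to mode-based titles.
function pickTitle(customTitle, mode, selectedCount, filteredTitle, defaultTitle) {
  if (typeof customTitle !== 'undefined' && customTitle) {
    return customTitle;
  }
  return (mode === 'view' && selectedCount > 0) ? filteredTitle : defaultTitle;
}

console.log(pickTitle('Qwen3 results', 'view', 2, 'Filtered', 'Default')); // prints "Qwen3 results"
console.log(pickTitle(undefined, 'view', 2, 'Filtered', 'Default'));       // prints "Filtered"
console.log(pickTitle(undefined, 'select', 0, 'Filtered', 'Default'));     // prints "Default"
```

A page would set `window.LEADERBOARD_CUSTOM_TITLE` before loading `leaderboard_table.js` to take the first branch.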
@@ -3,5 +3,5 @@
 Aider is on
 <a href="https://github.com/Aider-AI/aider">GitHub</a>
 and
-<a href="https://discord.gg/Tv2uQnR88V">Discord</a>.
+<a href="https://discord.gg/Y7X7bhMQFV">Discord</a>.
 </footer>

@@ -15,12 +15,12 @@ nav_exclude: true
 I recently wanted to draw a graph showing how LLM code editing skill has been
 changing over time as new models have been released by OpenAI, Anthropic and others.
 I have all the
-[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
+[data in a YAML file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
 [aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).

 Below is the aider chat transcript, which shows:

-- I launch aider with the yaml file, a file with other plots I've done recently (so GPT can crib the style) and an empty file called `over_time.py`.
+- I launch aider with the YAML file, a file with other plots I've done recently (so GPT can crib the style) and an empty file called `over_time.py`.
 - Then I ask GPT to draw the scatterplot I want.
 - I run the resulting script and share the error output with GPT so it can fix a small bug.
 - I ask it to color the points for GPT-4 and GPT-3.5 family models differently, to better see trends within those model families.

@@ -28,7 +28,7 @@ Below is the aider chat transcript, which shows:
 - I work through a series of other small style changes, like changing fonts and the graph border.

 In the end I have the graph, but I also have the python code in my repo.
-So I can update this graph easily whenever I add new entries to the yaml data file.
+So I can update this graph easily whenever I add new entries to the YAML data file.

 ## Aider chat transcript

114  aider/website/_posts/2025-05-07-gemini-cost.md  Normal file

@ -0,0 +1,114 @@
---
title: Gemini 2.5 Pro Preview 03-25 benchmark cost
excerpt: The $6.32 benchmark cost reported for Gemini 2.5 Pro Preview 03-25 was incorrect.
draft: false
nav_exclude: true
---

{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}

# Gemini 2.5 Pro Preview 03-25 benchmark cost

## Summary

The $6.32 cost reported to run the aider polyglot benchmark on
Gemini 2.5 Pro Preview 03-25 was incorrect.
The true cost was higher, possibly significantly so.
The incorrect cost has been removed from the leaderboard.

An investigation determined the primary cause was that the litellm
package (used by aider for LLM API connections) was not properly including reasoning tokens in
the token counts it reported.
While an incorrect price-per-token entry for the model also existed in litellm's cost
database at that time, this was found not to be a contributing factor.
Aider's own internal, correct pricing data was used during the benchmark.

## Resolution

Litellm began correctly including reasoning tokens in the reported counts
on April 21, 2025 in
commit [a7db0df](https://github.com/BerriAI/litellm/commit/a7db0df0434bfbac2b68ebe1c343b77955becb4b).
This change was released in litellm v1.67.1.
Aider picked up this change on April 28, 2025 when it upgraded its litellm dependency
from v1.65.7 to v1.67.4.post1
in commit [9351f37](https://github.com/Aider-AI/aider/commit/9351f37).
That dependency change shipped on May 5, 2025 in aider v0.82.3.

Unfortunately the 03-25 version of Gemini 2.5 Pro Preview is no longer available,
so it is not possible to re-run the benchmark to obtain an accurate cost.
As a possibly relevant comparison, the newer 05-06 version of Gemini 2.5 Pro Preview
completed the benchmark at a cost of about $37.

## Investigation detail

The version of litellm available at the time of the benchmark appears to have been
excluding reasoning tokens from the token counts it reported.
So even though aider had correct per-token pricing, it did not have the correct token counts
used during the benchmark.
This resulted in an underestimate of the benchmark costs.
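The arithmetic behind the underestimate is simple. Here is a minimal sketch; the token counts are invented for illustration, and only the $0.000010 per-output-token price comes from aider's `model-metadata.json`:

```bash
# Hypothetical token counts; only the per-token price is from the post.
price=0.000010                 # output cost per token, from model-metadata.json
visible_tokens=200000          # completion tokens litellm reported
reasoning_tokens=1000000       # reasoning tokens omitted from the report

# Cost computed from the reported (incomplete) count vs. the true count.
reported=$(awk -v t="$visible_tokens" -v p="$price" 'BEGIN { printf "%.2f", t * p }')
actual=$(awk -v t="$visible_tokens" -v r="$reasoning_tokens" -v p="$price" 'BEGIN { printf "%.2f", (t + r) * p }')
echo "reported: \$$reported, actual: \$$actual"
```

Any accounting that drops reasoning tokens scales the reported cost down by the ratio of visible to total output tokens, which is how a multi-dollar run can be reported as costing only a few dollars.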

The incorrect litellm database entry does not appear to have affected the aider benchmark costs.
Aider maintains and uses its own database of costs for some models, and it contained
the correct pricing at the time of the benchmark.
Aider appears to have
loaded the correct cost data from its database and made use of it during the benchmark.

Every aider benchmark report contains the git commit hash of the aider repository state used to
run the benchmark.
The
[benchmark run in question](https://github.com/Aider-AI/aider/blob/edbfec0ce4e1fe86735c915cb425b0d8636edc32/aider/website/_data/polyglot_leaderboard.yml#L814)
was built from
commit [0282574](https://github.com/Aider-AI/aider/commit/0282574).

Additional runs of the benchmark from that build verified that the error in litellm's
model cost database was not a factor:

- Aider's internal model database correctly overrides the litellm database, which contained an incorrect token cost at the time.
- The correct pricing is loaded from aider's internal model database and produces similar (incorrect) costs to the original run.
- Updating aider's internal model database with an absurdly high token cost resulted in an appropriately high benchmark cost report, demonstrating that the internal database costs were in effect.

This specific build of aider was then updated with various versions of litellm, using `git bisect`
to identify the first litellm commit where reasoning token counts were correctly reported.
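The bisect mechanism can be illustrated on a toy repository. This is not the actual litellm bisect, just a sketch of the same `git bisect run` workflow applied to five synthetic commits where a check starts failing at the fourth:

```bash
# Toy repro of the bisect workflow: find the first commit where a check fails.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
  echo "$i" > tokens.txt
  git add tokens.txt
  git commit -qm "count=$i"
done
first=$(git rev-list --max-parents=0 HEAD)
# Mark HEAD bad and the root commit good, then let git drive the search.
git bisect start HEAD "$first" >/dev/null
# Exit 0 = good, nonzero = bad: here "bad" means the count reached 4.
git bisect run sh -c '[ "$(cat tokens.txt)" -lt 4 ]' >/dev/null
bad_msg=$(git log -1 --format=%s refs/bisect/bad)
git bisect reset >/dev/null
echo "first bad commit: $bad_msg"
```

`git bisect run` performs the same binary search the investigation describes, with each litellm version's token report playing the role of the toy check above.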

## Timeline

Below is the full timeline of git commits related to this issue in the aider and litellm repositories.
Each entry has a UTC timestamp, followed by the original literal timestamp obtained from the
relevant source.

- 2025-04-04 19:54:45 UTC (Sat Apr 5 08:54:45 2025 +1300)
  - Correct value `"output_cost_per_token": 0.000010` for `gemini/gemini-2.5-pro-preview-03-25` added to `aider/resources/model-metadata.json`
  - Commit [eda796d](https://github.com/Aider-AI/aider/commit/eda796d) in aider.

- 2025-04-05 16:20:01 UTC (Sun Apr 6 00:20:01 2025 +0800)
  - First litellm commit of `gemini/gemini-2.5-pro-preview-03-25` metadata, with incorrect price `"output_cost_per_token": 0.0000010`
  - Commit [cd0a1e6](https://github.com/BerriAI/litellm/commit/cd0a1e6) in litellm.

- 2025-04-10 01:48:43 UTC (Wed Apr 9 18:48:43 2025 -0700)
  - litellm commit updates `gemini/gemini-2.5-pro-preview-03-25` metadata, but not price
  - Commit [ac4f32f](https://github.com/BerriAI/litellm/commit/ac4f32f) in litellm.

- 2025-04-12 04:55:50 UTC (2025-04-12-04-55-50 UTC)
  - Benchmark performed.
  - Aider repo hash [0282574 recorded in benchmark results](https://github.com/Aider-AI/aider/blob/7fbeafa1cfd4ad83f7499417837cdfa6b16fe7a1/aider/website/_data/polyglot_leaderboard.yml#L814), without a "dirty" annotation, indicating that the benchmark was run on a clean checkout of the aider repo at commit [0282574](https://github.com/Aider-AI/aider/commit/0282574).
  - Correct value `"output_cost_per_token": 0.000010` is in `aider/resources/model-metadata.json` at commit [0282574](https://github.com/Aider-AI/aider/blob/0282574/aider/resources/model-metadata.json#L357).

- 2025-04-12 15:06:39 UTC (Apr 12 08:06:39 2025 -0700)
  - Benchmark results added to aider repo.
  - Commit [7fbeafa](https://github.com/Aider-AI/aider/commit/7fbeafa) in aider.

- 2025-04-12 15:20:04 UTC (Sat Apr 12 19:20:04 2025 +0400)
  - litellm commit fixes `gemini/gemini-2.5-pro-preview-03-25` price metadata to `"output_cost_per_token": 0.00001`
  - Commit [93037ea](https://github.com/BerriAI/litellm/commit/93037ea) in litellm.

- 2025-04-22 05:48:00 UTC (Mon Apr 21 22:48:00 2025 -0700)
  - Litellm started including reasoning tokens in its token count reporting.
  - Commit [a7db0df](https://github.com/BerriAI/litellm/commit/a7db0df0434bfbac2b68ebe1c343b77955becb4b) in litellm.
  - This fix was released in litellm v1.67.1.

- 2025-04-28 14:53:20 UTC (Mon Apr 28 07:53:20 2025 -0700)
  - Aider upgraded its litellm dependency from v1.65.7 to v1.67.4.post1, which included the reasoning token count fix.
  - Commit [9351f37](https://github.com/Aider-AI/aider/commit/9351f37) in aider.
  - This dependency change shipped on May 5, 2025 in aider v0.82.3.

365  aider/website/_posts/2025-05-08-qwen3.md  Normal file

@ -0,0 +1,365 @@
---
layout: post
title: Qwen3 benchmark results
excerpt: "Benchmark results for Qwen3 models using the Aider polyglot coding benchmark."
highlight_image: /assets/2025-05-08-qwen3.jpg
date: 2025-05-08
---

# Qwen3 results on the aider polyglot benchmark

As [previously discussed when Qwen2.5 was released](/2024/11/21/quantization.html),
details matter when working with open source models for AI coding.
Proprietary models are served by their creators or trusted providers with stable inference settings.
Open source models are wonderful because anyone can serve them,
but API providers can use very different inference settings, quantizations, etc.

Below is a collection of aider polyglot benchmark results for the new Qwen3 models.
Results are presented using both "diff" and "whole"
[edit formats](https://aider.chat/docs/more/edit-formats.html),
with various model settings, against various API providers.

See details on the
[model settings](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
used after the results table.

{: .note }
This article is being updated as new results become available.
Also, some results were submitted by aider users and have not been verified.

<h2 id="leaderboard-title">Qwen3 results on the aider polyglot benchmark</h2>

<div id="controls-container" style="display: flex; align-items: center; width: 100%; max-width: 800px; margin: 10px auto; gap: 10px; box-sizing: border-box; padding: 0 5px; position: relative;">
  <input type="text" id="editSearchInput" placeholder="Search..." style="flex-grow: 1; padding: 8px; border: 1px solid #ddd; border-radius: 4px;">
  <div id="view-mode-toggle" style="display: inline-flex; border: 1px solid #ccc; border-radius: 4px;">
    <button id="mode-view-btn" class="mode-button active" data-mode="view" style="padding: 8px 8px; border: none; border-radius: 3px 0 0 3px; cursor: pointer; font-size: 14px; line-height: 1.5; min-width: 50px;">View</button>
    <button id="mode-select-btn" class="mode-button" data-mode="select" style="padding: 8px 8px; border: none; background-color: #f8f9fa; border-radius: 0; cursor: pointer; border-left: 1px solid #ccc; font-size: 14px; line-height: 1.5; min-width: 50px;">Select</button>
    <button id="mode-detail-btn" class="mode-button" data-mode="detail" style="padding: 8px 8px; border: none; background-color: #f8f9fa; border-radius: 0 3px 3px 0; cursor: pointer; border-left: 1px solid #ccc; font-size: 14px; line-height: 1.5; min-width: 50px;">Detail</button>
  </div>
  <button id="close-controls-btn" style="width: 18px; height: 18px; padding: 0; border: 1px solid #ddd; border-radius: 50%; background-color: transparent; cursor: pointer; display: flex; align-items: center; justify-content: center; font-size: 12px; margin-left: 4px; color: #999;">×</button>
</div>

<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
  <thead style="background-color: #f2f2f2;">
    <tr>
      <th style="padding: 8px; width: 40px; text-align: center; vertical-align: middle;">
        <input type="checkbox" id="select-all-checkbox" style="display: none; cursor: pointer; vertical-align: middle;">
      </th> <!-- Header checkbox added here -->
      <th style="padding: 8px; text-align: left;">Model</th>
      <th style="padding: 8px; text-align: center; width: 25%">Percent correct</th>
      <th style="padding: 8px; text-align: center; width: 25%">Cost</th>
      <th style="padding: 8px; text-align: left;" class="col-command">Command</th>
      <th style="padding: 8px; text-align: center; width: 10%" class="col-conform">Correct edit format</th>
      <th style="padding: 8px; text-align: left; width: 10%" class="col-edit-format">Edit Format</th>
    </tr>
  </thead>
  <tbody>
    {% assign max_cost = 0 %}
    {% for row in site.data.qwen3_leaderboard %}
      {% if row.total_cost > max_cost %}
        {% assign max_cost = row.total_cost %}
      {% endif %}
    {% endfor %}
    {% if max_cost == 0 %}{% assign max_cost = 1 %}{% endif %}
    {% assign edit_sorted = site.data.qwen3_leaderboard | sort: 'pass_rate_2' | reverse %}
    {% for row in edit_sorted %} {% comment %} Add loop index for unique IDs {% endcomment %}
    {% assign row_index = forloop.index0 %}
    <tr id="main-row-{{ row_index }}">
      <td style="padding: 8px; text-align: center; vertical-align: middle;">
        <button class="toggle-details" data-target="details-{{ row_index }}" style="background: none; border: none; cursor: pointer; font-size: 16px; padding: 0; vertical-align: middle;">▶</button>
        <input type="checkbox" class="row-selector" data-row-index="{{ row_index }}" style="display: none; cursor: pointer; vertical-align: middle;">
      </td>
      <td style="padding: 8px;"><span>{{ row.model }}</span></td>
      <td class="bar-cell">
        <div class="bar-viz" style="width: {{ row.pass_rate_2 }}%; background-color: rgba(40, 167, 69, 0.3); border-right: 1px solid rgba(40, 167, 69, 0.5);"></div>
        <span>{{ row.pass_rate_2 }}%</span>
      </td>
      <td class="bar-cell cost-bar-cell">
        {% if row.total_cost > 0 %}
        <div class="bar-viz cost-bar" data-cost="{{ row.total_cost }}" data-max-cost="{{ max_cost }}" style="width: 0%; background-color: rgba(13, 110, 253, 0.3); border-right: 1px solid rgba(13, 110, 253, 0.5);"></div>
        {% endif %}
        {% assign rounded_cost = row.total_cost | times: 1.0 | round: 2 %}
        <span>{% if row.total_cost == 0 or rounded_cost == 0.00 %}{% else %}${{ rounded_cost }}{% endif %}</span>
      </td>
      <td style="padding: 8px;" class="col-command"><span><code>{{ row.command }}</code></span></td>
      <td style="padding: 8px; text-align: center;" class="col-conform"><span>{{ row.percent_cases_well_formed }}%</span></td>
      <td style="padding: 8px;" class="col-edit-format"><span>{{ row.edit_format }}</span></td>
    </tr>
    <tr class="details-row" id="details-{{ row_index }}" style="display: none; background-color: #f9f9f9;">
      <td colspan="7" style="padding: 15px; border-bottom: 1px solid #ddd;">
        <ul style="margin: 0; padding-left: 20px; list-style: none; border-bottom: 1px solid #ddd;">
          {% for pair in row %}
            {% if pair[1] != "" and pair[1] != nil %}
              <li><strong>
                {% if pair[0] == 'percent_cases_well_formed' %}
                  Percent cases well formed
                {% else %}
                  {{ pair[0] | replace: '_', ' ' | capitalize }}
                {% endif %}
                :</strong>
                {% if pair[0] == 'command' %}<code>{{ pair[1] }}</code>{% else %}{{ pair[1] }}{% endif %}
              </li>
            {% endif %}
          {% endfor %}
        </ul>
      </td>
    </tr>
    {% endfor %}
  </tbody>
</table>

<style>
#leaderboard-title {
  margin-bottom: 20px; /* Add space below the title */
}
tr.selected {
  color: #0056b3;
}
table {
  table-layout: fixed;
}
thead {
  border-top: 1px solid #ddd; /* Add top border to header */
}
td, th {
  border: none; /* Remove internal cell borders */
  word-wrap: break-word;
  overflow-wrap: break-word;
  vertical-align: middle; /* Ensure consistent vertical alignment */
}
tbody tr {
  height: 50px; /* Set a minimum height for all data rows */
}
td.col-command { /* Command column */
  font-size: 12px; /* Keep font size adjustment for command column if desired, or remove */
}

/* Hide new columns first on smaller screens */
@media screen and (max-width: 991px) {
  th.col-conform, td.col-conform,
  th.col-edit-format, td.col-edit-format {
    display: none;
  }
  /* Increase width of Percent correct and Cost columns when others are hidden */
  th:nth-child(3), td:nth-child(3), /* Percent correct */
  th:nth-child(4), td:nth-child(4) { /* Cost */
    width: 33% !important; /* Override inline style */
  }
}

/* Hide command column on even smaller screens */
@media screen and (max-width: 767px) {
  th.col-command, td.col-command { /* Command column */
    display: none;
  }
}

/* --- Control Styles --- */
#controls-container {
  margin-bottom: 20px; /* Add some space below controls */
}

#editSearchInput, #view-mode-select {
  padding: 8px 12px; /* Consistent padding */
  border: 1px solid #ccc; /* Slightly softer border */
  border-radius: 4px;
  font-size: 14px; /* Match table font size */
  height: 38px; /* Match height */
  box-sizing: border-box; /* Include padding/border in height */
}

.bar-cell {
  position: relative; /* Positioning context for the bar */
  padding: 8px;
  /* text-align: center; Removed */
  overflow: hidden; /* Prevent bar from overflowing cell boundaries if needed */
}
.cost-bar-cell {
  background-image: none; /* Remove default gradient for cost cells */
}
.percent-tick, .cost-tick {
  position: absolute;
  top: 50%;
  transform: translateY(10px);
  height: 8px; /* Short tick */
  width: 1px;
  background-color: rgba(170, 170, 170, 0.5);
  z-index: 2; /* Above the bar but below the text */
}
.bar-viz {
  position: absolute;
  left: 0;
  top: 50%; /* Position at the middle of the cell */
  transform: translateY(-50%); /* Center the bar vertically */
  z-index: 1; /* Above background, below ticks and text */
  height: 36px;
  border-radius: 0 2px 2px 0; /* Slightly rounded end corners */
  /* Width and colors are set inline via style attribute */
}
/* Add a tooltip class for showing cost information on hover */
.cost-bar-cell:hover .bar-viz[style*="background-image"] {
  animation: stripe-animation 2s linear infinite;
}
@keyframes stripe-animation {
  0% { background-position: 0 0; }
  100% { background-position: 20px 0; }
}
.bar-cell span {
  position: absolute; /* Position relative to the cell */
  left: 5px; /* Position slightly inside the left edge */
  top: 50%; /* Center vertically */
  transform: translateY(-50%); /* Adjust vertical centering */
  z-index: 3; /* Ensure text is above everything else */
  background-color: rgba(255, 255, 255, 0.7); /* Semi-transparent white background */
  padding: 0 4px; /* Add padding around the text */
  border-radius: 3px; /* Rounded corners for the text background */
  font-size: 14px; /* Adjust font size for the numbers */
}
.toggle-details {
  color: #888; /* Make toggle symbol more subtle */
  transition: color 0.2s; /* Smooth transition on hover */
}

/* Style for selected rows */
tr.row-selected > td {
  background-color: #e7f3ff; /* Example light blue highlight */
}

/* Ensure checkbox is vertically aligned if needed */
.row-selector {
  vertical-align: middle;
}

/* Hide rows not matching the filter */
tr.hidden-by-mode {
  display: none !important; /* Use important to override other display styles if necessary */
}
tr.hidden-by-search {
  display: none !important;
}

/* --- Mode Toggle Button Styles --- */
#view-mode-toggle {
  height: 38px; /* Match input height */
  box-sizing: border-box;
  flex-shrink: 0; /* Prevent toggle from shrinking on small screens */
}
.mode-button {
  transition: background-color 0.2s ease-in-out, color 0.2s ease-in-out;
  white-space: nowrap; /* Prevent text wrapping */
}
.mode-button:not(.active) {
  background-color: #f8f9fa; /* Light grey background */
  color: #495057; /* Dark grey text */
}
.mode-button:not(.active):hover {
  background-color: #e2e6ea; /* Slightly darker grey on hover */
}

/* Style for highlighted rows in view mode */
tr.view-highlighted > td {
  background-color: #fffef5; /* Very light yellow/cream */
  /* Border moved to specific cell below */
}
/* Apply border and adjust padding ONLY for the first *visible* cell (Model name) in view mode */
tr.view-highlighted > td:nth-child(2) {
  border-left: 4px solid #ffc107; /* Warning yellow border */
  /* Original padding is 8px. Subtract border width. */
  padding-left: 4px;
}
</style>

<script>
const LEADERBOARD_CUSTOM_TITLE = "Qwen3 results on the aider polyglot benchmark";
{% include leaderboard_table.js %}
</script>

## No think, via official Alibaba API

These results were obtained by running against `https://dashscope.aliyuncs.com/compatible-mode/v1`
with thinking disabled.

```bash
export OPENAI_API_BASE=https://dashscope.aliyuncs.com/compatible-mode/v1
export OPENAI_API_KEY=<key>
```

```yaml
- name: openai/qwen3-235b-a22b
  use_temperature: 0.7
  streaming: false
  extra_params:
    stream: false
    max_tokens: 16384
    top_p: 0.8
    top_k: 20
    temperature: 0.7
    enable_thinking: false
    extra_body:
      enable_thinking: false
```
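For completeness, here is a sketch of the matching aider invocation; the model name simply mirrors the `name:` entry in the yaml above:

```bash
aider --model openai/qwen3-235b-a22b
```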

## OpenRouter, TogetherAI only, recommended /no_think settings

These results were obtained with the
[recommended](https://huggingface.co/Qwen/Qwen3-235B-A22B#best-practices)
non-thinking model settings in `.aider.model.settings.yml`:

```yaml
- name: openrouter/qwen/qwen3-235b-a22b
  system_prompt_prefix: "/no_think"
  use_temperature: 0.7
  extra_params:
    max_tokens: 24000
    top_p: 0.8
    top_k: 20
    min_p: 0.0
    temperature: 0.7
    extra_body:
      provider:
        order: ["Together"]
```

And then running aider:

```bash
aider --model openrouter/qwen/qwen3-235b-a22b
```

## OpenRouter, all providers, default settings (thinking)

These results were obtained by simply running aider as shown below, without any model-specific settings.
This should have enabled thinking, assuming upstream API providers honor that convention for Qwen3.

```bash
aider --model openrouter/qwen/qwen3-xxx
```

## vLLM, bfloat16, recommended /no_think

These [benchmark results were obtained by GitHub user AlongWY](https://github.com/Aider-AI/aider/pull/3908)
with the
[recommended](https://huggingface.co/Qwen/Qwen3-235B-A22B#best-practices)
non-thinking model settings in `.aider.model.settings.yml`:

```yaml
- name: openai/<model-name>
  system_prompt_prefix: "/no_think"
  use_temperature: 0.7
  extra_params:
    max_tokens: 24000
    top_p: 0.8
    top_k: 20
    min_p: 0.0
    temperature: 0.7
```

And then running aider:

```bash
aider --model openai/<model-name> --openai-api-base <url>
```

BIN  aider/website/assets/2025-05-08-qwen3.jpg  Normal file
Binary file not shown. (Size: 221 KiB)

@ -446,7 +446,7 @@ code, pre, .code-block {
}

.testimonial-text::before {
  content: "\201C\00A0"; /* Opening fancy quote */
  color: var(--primary);
  margin-right: 4px;
  vertical-align: -0.3em;

@ -4,7 +4,7 @@
# Place in your home dir, or at the root of your git repo.
##########################################################

# Note: You can only put OpenAI and Anthropic API keys in the YAML
# config file. Keys for all APIs can be stored in a .env file
# https://aider.chat/docs/config/dotenv.html

@ -224,11 +224,11 @@
## Enable/disable commits when repo is found dirty (default: True)
#dirty-commits: true

## Attribute aider code changes in the git author name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence.
#attribute-author: xxx

## Attribute aider commits in the git committer name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence for aider edits.
#attribute-committer: xxx

## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
#attribute-commit-message-author: false

@ -236,6 +236,9 @@
## Prefix all commit messages with 'aider: ' (default: False)
#attribute-commit-message-committer: false

## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#attribute-co-authored-by: false

## Enable/disable git pre-commit hooks with --no-verify (default: False)
#git-commit-verify: false

@ -358,6 +361,9 @@
#################
# Other settings:

## Never prompt for or attempt to install Playwright for web scraping (default: False).
#disable-playwright: false

## specify a file to edit (can be used multiple times)
#file: xxx
## Specify multiple values like this:

@ -422,6 +428,9 @@
## Specify which editor to use for the /editor command
#editor: xxx

## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
#shell-completions: xxx

############################
# Deprecated model settings:

@@ -213,11 +213,11 @@
 ## Enable/disable commits when repo is found dirty (default: True)
 #AIDER_DIRTY_COMMITS=true

-## Attribute aider code changes in the git author name (default: True)
-#AIDER_ATTRIBUTE_AUTHOR=true
+## Attribute aider code changes in the git author name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence.
+#AIDER_ATTRIBUTE_AUTHOR=

-## Attribute aider commits in the git committer name (default: True)
-#AIDER_ATTRIBUTE_COMMITTER=true
+## Attribute aider commits in the git committer name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence for aider edits.
+#AIDER_ATTRIBUTE_COMMITTER=

 ## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
 #AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR=false
@@ -225,6 +225,9 @@
 ## Prefix all commit messages with 'aider: ' (default: False)
 #AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false

+## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
+#AIDER_ATTRIBUTE_CO_AUTHORED_BY=false
+
 ## Enable/disable git pre-commit hooks with --no-verify (default: False)
 #AIDER_GIT_COMMIT_VERIFY=false

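For example, enabling the new variable switches aider's attribution from rewriting the git author/committer name to appending a commit-message trailer. A sketch of the .env setting and the resulting commit (the exact trailer text aider writes is illustrative, not confirmed by this diff):

```
## .env sketch: prefer the Co-authored-by trailer over name rewriting
AIDER_ATTRIBUTE_CO_AUTHORED_BY=true

## Commits made by aider would then end with a standard trailer of the form:
##   Co-authored-by: aider (<model name>) <noreply@...>
```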
@@ -339,6 +342,9 @@
 #################
 # Other settings:

+## Never prompt for or attempt to install Playwright for web scraping (default: False).
+#AIDER_DISABLE_PLAYWRIGHT=false
+
 ## specify a file to edit (can be used multiple times)
 #AIDER_FILE=

@@ -390,6 +396,9 @@
 ## Specify which editor to use for the /editor command
 #AIDER_EDITOR=

+## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
+#AIDER_SHELL_COMPLETIONS=
+
 ############################
 # Deprecated model settings:

@@ -81,7 +81,7 @@ You can override or add settings for any model by creating a `.aider.model.setti
 If the files above exist, they will be loaded in that order.
 Files loaded last will take priority.

-The yaml file should be a list of dictionary objects for each model.
+The YAML file should be a list of dictionary objects for each model.

 ### Passing extra params to litellm.completion

@@ -117,40 +117,6 @@ For example:
 These settings will be merged with any model-specific settings, with the
 `aider/extra_params` settings taking precedence for any direct conflicts.

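That merge-with-precedence rule can be sketched as a small recursive dictionary merge. This is illustrative only, not aider's actual implementation; `deep_merge` and the sample values below are hypothetical:

```python
# Sketch: global settings under `aider/extra_params` are merged into a
# model's own extra_params, and the global values win direct conflicts.

def deep_merge(model_params: dict, global_params: dict) -> dict:
    """Recursively merge global_params over model_params; globals win."""
    merged = dict(model_params)
    for key, value in global_params.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

model_params = {
    "max_tokens": 64000,
    "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"},
}
global_params = {
    "max_tokens": 32000,  # direct conflict: the global value wins
    "extra_body": {"reasoning_effort": "high"},
}

merged = deep_merge(model_params, global_params)
print(merged["max_tokens"])  # -> 32000
```

Non-conflicting keys from both sides survive the merge, so the model keeps its `extra_headers` while gaining the global `extra_body`.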
-### Controlling o1 reasoning effort
-
-You need this chunk of yaml:
-
-```
-extra_params:
-  extra_body:
-    reasoning_effort: high
-```
-
-This is a full entry for o1 with that setting, obtained by finding the default
-entry in the list below and adding the above `extra_params` entry:
-
-```
-- name: o1
-  edit_format: diff
-  weak_model_name: gpt-4o-mini
-  use_repo_map: true
-  send_undo_reply: false
-  lazy: false
-  reminder: user
-  examples_as_sys_msg: false
-  cache_control: false
-  caches_by_default: false
-  use_system_prompt: true
-  use_temperature: false
-  streaming: false
-  editor_model_name: gpt-4o
-  editor_edit_format: editor-diff
-  extra_params:
-    extra_body:
-      reasoning_effort: high
-```
-
 ### Default model settings

 Below are all the pre-configured model settings to give a sense for the settings which are supported.
@@ -192,6 +158,34 @@ cog.out("```\n")
   system_prompt_prefix: null
   accepts_settings: null

+- name: anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: anthropic/claude-3-5-haiku-20241022
   edit_format: diff
   weak_model_name: anthropic/claude-3-5-haiku-20241022
@@ -280,6 +274,34 @@ cog.out("```\n")
       anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
   cache_control: true

+- name: anthropic/claude-opus-4-20250514
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: anthropic/claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: anthropic/claude-sonnet-4-20250514
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: anthropic/claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: azure/gpt-4.1
   edit_format: diff
   weak_model_name: azure/gpt-4.1-mini
@@ -441,6 +463,20 @@ cog.out("```\n")
   accepts_settings:
   - thinking_tokens

+- name: bedrock/anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock/anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
   edit_format: diff
   weak_model_name: bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0
@@ -457,6 +493,20 @@ cog.out("```\n")
   accepts_settings:
   - thinking_tokens

+- name: bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: bedrock_converse/anthropic.claude-3-7-sonnet-20250219-v1:0
   edit_format: diff
   weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
@@ -473,6 +523,62 @@ cog.out("```\n")
   accepts_settings:
   - thinking_tokens

+- name: bedrock_converse/anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: bedrock_converse/anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: bedrock_converse/anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: bedrock_converse/eu.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: bedrock_converse/eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: bedrock_converse/eu.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: bedrock_converse/us.anthropic.claude-3-7-sonnet-20250219-v1:0
   edit_format: diff
   weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
@@ -489,6 +595,34 @@ cog.out("```\n")
   accepts_settings:
   - thinking_tokens

+- name: bedrock_converse/us.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: bedrock_converse/us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: bedrock_converse/us.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: claude-3-5-haiku-20241022
   edit_format: diff
   weak_model_name: claude-3-5-haiku-20241022
@@ -572,6 +706,34 @@ cog.out("```\n")
 - name: claude-3-sonnet-20240229
   weak_model_name: claude-3-5-haiku-20241022

+- name: claude-opus-4-20250514
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: claude-sonnet-4-20250514
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: claude-sonnet-4-20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: cohere_chat/command-a-03-2025
   examples_as_sys_msg: true
@@ -634,6 +796,34 @@ cog.out("```\n")
   editor_model_name: deepseek/deepseek-chat
   editor_edit_format: editor-diff

+- name: eu.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: eu.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: eu.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: eu.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: fireworks_ai/accounts/fireworks/models/deepseek-r1
   edit_format: diff
   weak_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
@@ -674,6 +864,13 @@ cog.out("```\n")
   editor_edit_format: editor-diff
   reasoning_tag: think

+- name: gemini-2.5-flash-preview-04-17
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings:
+  - reasoning_effort
+  - thinking_tokens
+
 - name: gemini/gemini-1.5-flash-002

 - name: gemini/gemini-1.5-flash-exp-0827
@@ -702,15 +899,30 @@ cog.out("```\n")
   edit_format: diff
   use_repo_map: true

+- name: gemini/gemini-2.5-flash-preview-04-17
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings:
+  - reasoning_effort
+  - thinking_tokens
+
 - name: gemini/gemini-2.5-pro-exp-03-25
   edit_format: diff-fenced
-  weak_model_name: gemini/gemini-2.0-flash
+  weak_model_name: gemini/gemini-2.5-flash-preview-04-17
   use_repo_map: true
+  overeager: true

 - name: gemini/gemini-2.5-pro-preview-03-25
   edit_format: diff-fenced
   weak_model_name: gemini/gemini-2.0-flash
   use_repo_map: true
+  overeager: true
+
+- name: gemini/gemini-2.5-pro-preview-05-06
+  edit_format: diff-fenced
+  weak_model_name: gemini/gemini-2.5-flash-preview-04-17
+  use_repo_map: true
+  overeager: true

 - name: gemini/gemini-exp-1114
   edit_format: diff
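Entries like these show which tuning knobs a model accepts via `accepts_settings` (here `reasoning_effort` and `thinking_tokens`). Those knobs can be overridden through the same `.aider.model.settings.yml` mechanism covered earlier. A hypothetical override entry, following the `extra_params`/`extra_body` pattern shown elsewhere in these docs (whether a given backend honors the field is up to the provider):

```yaml
- name: gemini/gemini-2.5-flash-preview-04-17
  edit_format: diff
  use_repo_map: true
  extra_params:
    extra_body:
      reasoning_effort: high
```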
@@ -1157,6 +1369,34 @@ cog.out("```\n")
   accepts_settings:
   - thinking_tokens

+- name: openrouter/anthropic/claude-opus-4
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-sonnet-4
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: openrouter/anthropic/claude-sonnet-4
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-sonnet-4
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: openrouter/cohere/command-a-03-2025
   examples_as_sys_msg: true
|
@ -1236,10 +1476,23 @@ cog.out("```\n")
|
||||||
max_tokens: 8192
|
max_tokens: 8192
|
||||||
caches_by_default: true
|
caches_by_default: true
|
||||||
|
|
||||||
- name: openrouter/google/gemini-2.5-pro-exp-03-25:free
|
- name: openrouter/google/gemini-2.5-pro-exp-03-25
|
||||||
edit_format: diff-fenced
|
edit_format: diff-fenced
|
||||||
weak_model_name: openrouter/google/gemini-2.0-flash-exp:free
|
weak_model_name: openrouter/google/gemini-2.0-flash-exp:free
|
||||||
use_repo_map: true
|
use_repo_map: true
|
||||||
|
overeager: true
|
||||||
|
|
||||||
|
- name: openrouter/google/gemini-2.5-pro-preview-03-25
|
||||||
|
edit_format: diff-fenced
|
||||||
|
weak_model_name: openrouter/google/gemini-2.0-flash-001
|
||||||
|
use_repo_map: true
|
||||||
|
overeager: true
|
||||||
|
|
||||||
|
- name: openrouter/google/gemini-2.5-pro-preview-05-06
|
||||||
|
edit_format: diff-fenced
|
||||||
|
weak_model_name: openrouter/google/gemini-2.0-flash-001
|
||||||
|
use_repo_map: true
|
||||||
|
overeager: true
|
||||||
|
|
||||||
- name: openrouter/google/gemma-3-27b-it
|
- name: openrouter/google/gemma-3-27b-it
|
||||||
use_system_prompt: false
|
use_system_prompt: false
|
||||||
|
@@ -1433,6 +1686,34 @@ cog.out("```\n")
   accepts_settings:
   - reasoning_effort

+- name: us.anthropic.claude-opus-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 32000
+  cache_control: true
+  editor_model_name: us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: us.anthropic.claude-sonnet-4-20250514-v1:0
+  edit_format: diff
+  weak_model_name: us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: us.anthropic.claude-sonnet-4-20250514-v1:0
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
 - name: vertex_ai-anthropic_models/vertex_ai/claude-3-7-sonnet@20250219
   edit_format: diff
   weak_model_name: vertex_ai/claude-3-5-haiku@20241022
@@ -1446,6 +1727,35 @@ cog.out("```\n")
   accepts_settings:
   - thinking_tokens

+- name: vertex_ai-anthropic_models/vertex_ai/claude-opus-4@20250514
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  extra_params:
+    max_tokens: 32000
+  editor_model_name: vertex_ai-anthropic_models/vertex_ai/claude-sonnet-4@20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: vertex_ai-anthropic_models/vertex_ai/claude-sonnet-4@20250514
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  extra_params:
+    max_tokens: 64000
+  editor_model_name: vertex_ai-anthropic_models/vertex_ai/claude-sonnet-4@20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings:
+  - reasoning_effort
+  - thinking_tokens
+
 - name: vertex_ai/claude-3-5-haiku@20241022
   edit_format: diff
   weak_model_name: vertex_ai/claude-3-5-haiku@20241022
@@ -1494,13 +1804,55 @@ cog.out("```\n")
 - name: vertex_ai/claude-3-sonnet@20240229
   weak_model_name: vertex_ai/claude-3-5-haiku@20241022

+- name: vertex_ai/claude-opus-4@20250514
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  extra_params:
+    max_tokens: 32000
+  editor_model_name: vertex_ai/claude-sonnet-4@20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: vertex_ai/claude-sonnet-4@20250514
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  extra_params:
+    max_tokens: 64000
+  editor_model_name: vertex_ai/claude-sonnet-4@20250514
+  editor_edit_format: editor-diff
+  accepts_settings:
+  - thinking_tokens
+
+- name: vertex_ai/gemini-2.5-flash-preview-05-20
+  edit_format: diff
+  use_repo_map: true
+  accepts_settings:
+  - reasoning_effort
+  - thinking_tokens
+
 - name: vertex_ai/gemini-2.5-pro-exp-03-25
   edit_format: diff-fenced
+  weak_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
   use_repo_map: true
+  overeager: true
+  editor_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17

 - name: vertex_ai/gemini-2.5-pro-preview-03-25
   edit_format: diff-fenced
+  weak_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
   use_repo_map: true
+  overeager: true
+  editor_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+
+- name: vertex_ai/gemini-2.5-pro-preview-05-06
+  edit_format: diff-fenced
+  weak_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17
+  use_repo_map: true
+  overeager: true
+  editor_model_name: vertex_ai-language-models/gemini-2.5-flash-preview-04-17

 - name: vertex_ai/gemini-pro-experimental
   edit_format: diff-fenced
@@ -1,7 +1,7 @@
 ---
 parent: Configuration
 nav_order: 15
-description: How to configure aider with a yaml config file.
+description: How to configure aider with a YAML config file.
 ---

 # YAML config file

@@ -58,7 +58,7 @@ cog.outl("```")
 # Place in your home dir, or at the root of your git repo.
 ##########################################################

-# Note: You can only put OpenAI and Anthropic API keys in the yaml
+# Note: You can only put OpenAI and Anthropic API keys in the YAML
 # config file. Keys for all APIs can be stored in a .env file
 # https://aider.chat/docs/config/dotenv.html

@@ -278,11 +278,11 @@ cog.outl("```")
 ## Enable/disable commits when repo is found dirty (default: True)
 #dirty-commits: true

-## Attribute aider code changes in the git author name (default: True)
-#attribute-author: true
+## Attribute aider code changes in the git author name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence.
+#attribute-author: xxx

-## Attribute aider commits in the git committer name (default: True)
-#attribute-committer: true
+## Attribute aider commits in the git committer name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence for aider edits.
+#attribute-committer: xxx

 ## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
 #attribute-commit-message-author: false

@@ -290,6 +290,9 @@ cog.outl("```")
 ## Prefix all commit messages with 'aider: ' (default: False)
 #attribute-commit-message-committer: false

+## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
+#attribute-co-authored-by: false
+
 ## Enable/disable git pre-commit hooks with --no-verify (default: False)
 #git-commit-verify: false

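Taken together, the attribution options above interact through the precedence rule stated in their comments. A sketch of a `.aider.conf.yml` fragment for a user who wants only the commit-message trailer, built from the options documented above:

```yaml
# .aider.conf.yml sketch: prefer the Co-authored-by trailer
attribute-co-authored-by: true
# Leave attribute-author and attribute-committer unset, so the trailer
# takes precedence over the default name-rewriting behavior.
```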
@@ -412,6 +415,9 @@ cog.outl("```")
 #################
 # Other settings:

+## Never prompt for or attempt to install Playwright for web scraping (default: False).
+#disable-playwright: false
+
 ## specify a file to edit (can be used multiple times)
 #file: xxx

 ## Specify multiple values like this:

@@ -476,6 +482,9 @@ cog.outl("```")
 ## Specify which editor to use for the /editor command
 #editor: xxx

+## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
+#shell-completions: xxx
+
 ############################
 # Deprecated model settings:

@ -40,9 +40,9 @@ OPENAI_API_KEY=<key>
|
||||||
ANTHROPIC_API_KEY=<key>
|
ANTHROPIC_API_KEY=<key>
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Yaml config file
|
#### YAML config file
|
||||||
You can also set those API keys via special entries in the
|
You can also set those API keys via special entries in the
|
||||||
[yaml config file](/docs/config/aider_conf.html), like this:
|
[YAML config file](/docs/config/aider_conf.html), like this:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
openai-api-key: <key>
|
openai-api-key: <key>
|
||||||
|
@ -74,7 +74,7 @@ OPENROUTER_API_KEY=bar
|
||||||
DEEPSEEK_API_KEY=baz
|
DEEPSEEK_API_KEY=baz
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Yaml config file
|
#### YAML config file
|
||||||
|
|
||||||
|
|
||||||
You can also set API keys in the
|
You can also set API keys in the
|
||||||
|
|
|
@ -253,11 +253,11 @@ cog.outl("```")
|
||||||
## Enable/disable commits when repo is found dirty (default: True)
|
## Enable/disable commits when repo is found dirty (default: True)
|
||||||
#AIDER_DIRTY_COMMITS=true
|
#AIDER_DIRTY_COMMITS=true
|
||||||
|
|
||||||
## Attribute aider code changes in the git author name (default: True)
|
## Attribute aider code changes in the git author name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence.
|
||||||
#AIDER_ATTRIBUTE_AUTHOR=true
|
#AIDER_ATTRIBUTE_AUTHOR=
|
||||||
|
|
||||||
## Attribute aider commits in the git committer name (default: True)
|
## Attribute aider commits in the git committer name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence for aider edits.
|
||||||
#AIDER_ATTRIBUTE_COMMITTER=true
|
#AIDER_ATTRIBUTE_COMMITTER=
|
||||||
|
|
||||||
## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
|
## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
|
||||||
#AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR=false
|
#AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR=false
|
||||||
|
@ -265,6 +265,9 @@ cog.outl("```")
|
||||||
## Prefix all commit messages with 'aider: ' (default: False)
|
## Prefix all commit messages with 'aider: ' (default: False)
|
||||||
#AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false
|
#AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false
|
||||||
|
|
||||||
|
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
|
||||||
|
#AIDER_ATTRIBUTE_CO_AUTHORED_BY=false
|
||||||
|
|
||||||
## Enable/disable git pre-commit hooks with --no-verify (default: False)
|
## Enable/disable git pre-commit hooks with --no-verify (default: False)
|
||||||
#AIDER_GIT_COMMIT_VERIFY=false
|
#AIDER_GIT_COMMIT_VERIFY=false
|
||||||
|
|
||||||
|
@ -379,6 +382,9 @@ cog.outl("```")
|
||||||
#################
|
#################
|
||||||
# Other settings:
|
# Other settings:
|
||||||
|
|
||||||
|
## Never prompt for or attempt to install Playwright for web scraping (default: False).
|
||||||
|
#AIDER_DISABLE_PLAYWRIGHT=false
|
||||||
|
|
||||||
## specify a file to edit (can be used multiple times)
|
## specify a file to edit (can be used multiple times)
|
||||||
#AIDER_FILE=
|
#AIDER_FILE=
|
||||||
|
|
||||||
|
@ -430,6 +436,9 @@ cog.outl("```")
|
||||||
## Specify which editor to use for the /editor command
|
## Specify which editor to use for the /editor command
|
||||||
#AIDER_EDITOR=
|
#AIDER_EDITOR=
|
||||||
|
|
||||||
|
## Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
|
||||||
|
#AIDER_SHELL_COMPLETIONS=
|
||||||
|
|
||||||
############################
|
############################
|
||||||
# Deprecated model settings:
|
# Deprecated model settings:
|
||||||
|
|
||||||
|
|
|
@ -12,7 +12,7 @@ Aider allows you to configure your preferred text editor for use with the `/edit
|
||||||
|
|
||||||
You can specify the text editor with the `--editor` switch or using
|
You can specify the text editor with the `--editor` switch or using
|
||||||
`editor:` in aider's
|
`editor:` in aider's
|
||||||
[yaml config file](https://aider.chat/docs/config/aider_conf.html).
|
[YAML config file](https://aider.chat/docs/config/aider_conf.html).
|
||||||
|
|
||||||
## Environment variables
|
## Environment variables
|
||||||
|
|
||||||
|
|
|
@ -79,17 +79,17 @@ for alias, model in sorted(MODEL_ALIASES.items()):
|
||||||
- `4-turbo`: gpt-4-1106-preview
|
- `4-turbo`: gpt-4-1106-preview
|
||||||
- `4o`: gpt-4o
|
- `4o`: gpt-4o
|
||||||
- `deepseek`: deepseek/deepseek-chat
|
- `deepseek`: deepseek/deepseek-chat
|
||||||
- `flash`: gemini/gemini-2.0-flash-exp
|
- `flash`: gemini/gemini-2.5-flash-preview-04-17
|
||||||
- `gemini`: gemini/gemini-2.5-pro-preview-03-25
|
- `gemini`: gemini/gemini-2.5-pro-preview-05-06
|
||||||
- `gemini-2.5-pro`: gemini/gemini-2.5-pro-exp-03-25
|
- `gemini-2.5-pro`: gemini/gemini-2.5-pro-preview-05-06
|
||||||
- `gemini-exp`: gemini/gemini-2.5-pro-exp-03-25
|
- `gemini-exp`: gemini/gemini-2.5-pro-exp-03-25
|
||||||
- `grok3`: xai/grok-3-beta
|
- `grok3`: xai/grok-3-beta
|
||||||
- `haiku`: claude-3-5-haiku-20241022
|
- `haiku`: claude-3-5-haiku-20241022
|
||||||
- `optimus`: openrouter/openrouter/optimus-alpha
|
- `optimus`: openrouter/openrouter/optimus-alpha
|
||||||
- `opus`: claude-3-opus-20240229
|
- `opus`: claude-opus-4-20250514
|
||||||
- `quasar`: openrouter/openrouter/quasar-alpha
|
- `quasar`: openrouter/openrouter/quasar-alpha
|
||||||
- `r1`: deepseek/deepseek-reasoner
|
- `r1`: deepseek/deepseek-reasoner
|
||||||
- `sonnet`: anthropic/claude-3-7-sonnet-20250219
|
- `sonnet`: anthropic/claude-sonnet-4-20250514
|
||||||
<!--[[[end]]]-->
|
<!--[[[end]]]-->
|
||||||
|
|
||||||
## Priority
|
## Priority
|
||||||
|
|
|
@ -56,6 +56,7 @@ usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
|
||||||
[--attribute-committer | --no-attribute-committer]
|
[--attribute-committer | --no-attribute-committer]
|
||||||
[--attribute-commit-message-author | --no-attribute-commit-message-author]
|
[--attribute-commit-message-author | --no-attribute-commit-message-author]
|
||||||
[--attribute-commit-message-committer | --no-attribute-commit-message-committer]
|
[--attribute-commit-message-committer | --no-attribute-commit-message-committer]
|
||||||
|
[--attribute-co-authored-by | --no-attribute-co-authored-by]
|
||||||
[--git-commit-verify | --no-git-commit-verify]
|
[--git-commit-verify | --no-git-commit-verify]
|
||||||
[--commit] [--commit-prompt] [--dry-run | --no-dry-run]
|
[--commit] [--commit-prompt] [--dry-run | --no-dry-run]
|
||||||
[--skip-sanity-check-repo]
|
[--skip-sanity-check-repo]
|
||||||
|
@ -72,17 +73,19 @@ usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
|
||||||
[--copy-paste | --no-copy-paste] [--apply]
|
[--copy-paste | --no-copy-paste] [--apply]
|
||||||
[--apply-clipboard-edits] [--exit] [--show-repo-map]
|
[--apply-clipboard-edits] [--exit] [--show-repo-map]
|
||||||
[--show-prompts] [--voice-format] [--voice-language]
|
[--show-prompts] [--voice-format] [--voice-language]
|
||||||
[--voice-input-device] [--file] [--read] [--vim]
|
[--voice-input-device] [--disable-playwright] [--file]
|
||||||
[--chat-language] [--yes-always] [-v] [--load]
|
[--read] [--vim] [--chat-language] [--yes-always] [-v]
|
||||||
[--encoding] [--line-endings] [-c] [--env-file]
|
[--load] [--encoding] [--line-endings] [-c]
|
||||||
|
[--env-file]
|
||||||
[--suggest-shell-commands | --no-suggest-shell-commands]
|
[--suggest-shell-commands | --no-suggest-shell-commands]
|
||||||
[--fancy-input | --no-fancy-input]
|
[--fancy-input | --no-fancy-input]
|
||||||
[--multiline | --no-multiline]
|
[--multiline | --no-multiline]
|
||||||
[--notifications | --no-notifications]
|
[--notifications | --no-notifications]
|
||||||
[--notifications-command]
|
[--notifications-command]
|
||||||
[--detect-urls | --no-detect-urls] [--editor] [--opus]
|
[--detect-urls | --no-detect-urls] [--editor]
|
||||||
[--sonnet] [--haiku] [--4] [--4o] [--mini] [--4-turbo]
|
[--shell-completions] [--opus] [--sonnet] [--haiku]
|
||||||
[--35turbo] [--deepseek] [--o1-mini] [--o1-preview]
|
[--4] [--4o] [--mini] [--4-turbo] [--35turbo]
|
||||||
|
[--deepseek] [--o1-mini] [--o1-preview]
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -412,16 +415,14 @@ Aliases:
|
||||||
- `--no-dirty-commits`
|
- `--no-dirty-commits`
|
||||||
|
|
||||||
### `--attribute-author`
|
### `--attribute-author`
|
||||||
Attribute aider code changes in the git author name (default: True)
|
Attribute aider code changes in the git author name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence.
|
||||||
Default: True
|
|
||||||
Environment variable: `AIDER_ATTRIBUTE_AUTHOR`
|
Environment variable: `AIDER_ATTRIBUTE_AUTHOR`
|
||||||
Aliases:
|
Aliases:
|
||||||
- `--attribute-author`
|
- `--attribute-author`
|
||||||
- `--no-attribute-author`
|
- `--no-attribute-author`
|
||||||
|
|
||||||
### `--attribute-committer`
|
### `--attribute-committer`
|
||||||
Attribute aider commits in the git committer name (default: True)
|
Attribute aider commits in the git committer name (default: True). If explicitly set to True, overrides --attribute-co-authored-by precedence for aider edits.
|
||||||
Default: True
|
|
||||||
Environment variable: `AIDER_ATTRIBUTE_COMMITTER`
|
Environment variable: `AIDER_ATTRIBUTE_COMMITTER`
|
||||||
Aliases:
|
Aliases:
|
||||||
- `--attribute-committer`
|
- `--attribute-committer`
|
||||||
|
@ -443,6 +444,14 @@ Aliases:
|
||||||
- `--attribute-commit-message-committer`
|
- `--attribute-commit-message-committer`
|
||||||
- `--no-attribute-commit-message-committer`
|
- `--no-attribute-commit-message-committer`
|
||||||
|
|
||||||
|
### `--attribute-co-authored-by`
|
||||||
|
Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
|
||||||
|
Default: False
|
||||||
|
Environment variable: `AIDER_ATTRIBUTE_CO_AUTHORED_BY`
|
||||||
|
Aliases:
|
||||||
|
- `--attribute-co-authored-by`
|
||||||
|
- `--no-attribute-co-authored-by`
|
||||||
|
|
||||||
### `--git-commit-verify`
|
### `--git-commit-verify`
|
||||||
Enable/disable git pre-commit hooks with --no-verify (default: False)
|
Enable/disable git pre-commit hooks with --no-verify (default: False)
|
||||||
Default: False
|
Default: False
|
||||||
|
@ -652,6 +661,11 @@ Environment variable: `AIDER_VOICE_INPUT_DEVICE`
|
||||||
|
|
||||||
## Other settings:
|
## Other settings:
|
||||||
|
|
||||||
|
### `--disable-playwright`
|
||||||
|
Never prompt for or attempt to install Playwright for web scraping (default: False).
|
||||||
|
Default: False
|
||||||
|
Environment variable: `AIDER_DISABLE_PLAYWRIGHT`
|
||||||
|
|
||||||
### `--file FILE`
|
### `--file FILE`
|
||||||
specify a file to edit (can be used multiple times)
|
specify a file to edit (can be used multiple times)
|
||||||
Environment variable: `AIDER_FILE`
|
Environment variable: `AIDER_FILE`
|
||||||
|
@ -754,6 +768,10 @@ Aliases:
|
||||||
Specify which editor to use for the /editor command
|
Specify which editor to use for the /editor command
|
||||||
Environment variable: `AIDER_EDITOR`
|
Environment variable: `AIDER_EDITOR`
|
||||||
|
|
||||||
|
### `--shell-completions SHELL`
|
||||||
|
Print shell completion script for the specified SHELL and exit. Supported shells: bash, tcsh, zsh. Example: aider --shell-completions bash
|
||||||
|
Environment variable: `AIDER_SHELL_COMPLETIONS`
|
||||||
|
|
||||||
## Deprecated model settings:
|
## Deprecated model settings:
|
||||||
|
|
||||||
### `--opus`
|
### `--opus`
|
||||||
|
|
|
@ -264,15 +264,12 @@ tr:hover { background-color: #f5f5f5; }
|
||||||
</style>
|
</style>
|
||||||
<table>
|
<table>
|
||||||
<tr><th>Model Name</th><th class='right'>Total Tokens</th><th class='right'>Percent</th></tr>
|
<tr><th>Model Name</th><th class='right'>Total Tokens</th><th class='right'>Percent</th></tr>
|
||||||
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>2,499,338</td><td class='right'>83.9%</td></tr>
|
<tr><td>o3</td><td class='right'>542,669</td><td class='right'>45.1%</td></tr>
|
||||||
<tr><td>openrouter/anthropic/claude-3.7-sonnet</td><td class='right'>313,377</td><td class='right'>10.5%</td></tr>
|
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>479,518</td><td class='right'>39.9%</td></tr>
|
||||||
<tr><td>o3</td><td class='right'>100,777</td><td class='right'>3.4%</td></tr>
|
<tr><td>anthropic/claude-sonnet-4-20250514</td><td class='right'>131,972</td><td class='right'>11.0%</td></tr>
|
||||||
<tr><td>gemini/gemini-2.5-pro-preview-03-25</td><td class='right'>16,524</td><td class='right'>0.6%</td></tr>
|
<tr><td>gemini/gemini-2.5-pro-preview-05-06</td><td class='right'>40,256</td><td class='right'>3.3%</td></tr>
|
||||||
<tr><td>o4-mini</td><td class='right'>16,499</td><td class='right'>0.6%</td></tr>
|
<tr><td>gemini/gemini-2.5-flash-preview-05-20</td><td class='right'>7,638</td><td class='right'>0.6%</td></tr>
|
||||||
<tr><td>gpt-4.1-mini</td><td class='right'>11,775</td><td class='right'>0.4%</td></tr>
|
<tr><td>gemini/REDACTED</td><td class='right'>643</td><td class='right'>0.1%</td></tr>
|
||||||
<tr><td>gpt-4.1</td><td class='right'>10,687</td><td class='right'>0.4%</td></tr>
|
|
||||||
<tr><td>None</td><td class='right'>8,001</td><td class='right'>0.3%</td></tr>
|
|
||||||
<tr><td>gemini/REDACTED</td><td class='right'>606</td><td class='right'>0.0%</td></tr>
|
|
||||||
</table>
|
</table>
|
||||||
|
|
||||||
{: .note :}
|
{: .note :}
|
||||||
|
|
|
@ -71,4 +71,6 @@ Additionally, you can use the following options to prefix commit messages:
|
||||||
- `--attribute-commit-message-author`: Prefix commit messages with 'aider: ' if aider authored the changes.
|
- `--attribute-commit-message-author`: Prefix commit messages with 'aider: ' if aider authored the changes.
|
||||||
- `--attribute-commit-message-committer`: Prefix all commit messages with 'aider: ', regardless of whether aider authored the changes or not.
|
- `--attribute-commit-message-committer`: Prefix all commit messages with 'aider: ', regardless of whether aider authored the changes or not.
|
||||||
|
|
||||||
Both of these options are disabled by default, but can be useful for easily identifying changes made by aider.
|
Finally, you can use `--attribute-co-authored-by` to have aider append a Co-authored-by trailer to the end of the commit message.
|
||||||
|
This will disable appending `(aider)` to the git author and git committer unless you have explicitly enabled those settings.
|
||||||
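For illustration, this is roughly what a Co-authored-by trailer looks like at the end of a commit message. The author name and email below are hypothetical placeholders, not aider's exact trailer text:

```shell
# Hypothetical example: a commit message ending in a Co-authored-by trailer.
msg="Fix null check in parser

Co-authored-by: aider <noreply@example.com>"

# The trailer is a plain 'Key: value' line at the end of the message:
printf '%s\n' "$msg" | grep 'Co-authored-by'
```

Tools like GitHub recognize this trailer and credit the co-author on the commit.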
|
|
||||||
|
|
|
@ -28,12 +28,6 @@ These one-liners will install aider, along with python 3.12 if needed.
|
||||||
They are based on the
|
They are based on the
|
||||||
[uv installers](https://docs.astral.sh/uv/getting-started/installation/).
|
[uv installers](https://docs.astral.sh/uv/getting-started/installation/).
|
||||||
|
|
||||||
#### Windows
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
powershell -ExecutionPolicy ByPass -c "irm https://aider.chat/install.ps1 | iex"
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Mac & Linux
|
#### Mac & Linux
|
||||||
|
|
||||||
Use curl to download the script and execute it with sh:
|
Use curl to download the script and execute it with sh:
|
||||||
|
@ -48,6 +42,12 @@ If your system doesn't have curl, you can use wget:
|
||||||
wget -qO- https://aider.chat/install.sh | sh
|
wget -qO- https://aider.chat/install.sh | sh
|
||||||
```
|
```
|
||||||
|
|
||||||
|
#### Windows
|
||||||
|
|
||||||
|
```powershell
|
||||||
|
powershell -ExecutionPolicy ByPass -c "irm https://aider.chat/install.ps1 | iex"
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
## Install with uv
|
## Install with uv
|
||||||
|
|
||||||
|
@ -55,7 +55,7 @@ You can install aider with uv:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
python -m pip install uv # If you need to install uv
|
python -m pip install uv # If you need to install uv
|
||||||
uv tool install --force --python python3.12 aider-chat@latest
|
uv tool install --force --python python3.12 --with pip aider-chat@latest
|
||||||
```
|
```
|
||||||
|
|
||||||
This will install uv using your existing python version 3.8-3.13,
|
This will install uv using your existing python version 3.8-3.13,
|
||||||
|
|
|
@ -180,6 +180,8 @@ cog.out(get_supported_languages_md())
|
||||||
| nix | .nix | | ✓ |
|
| nix | .nix | | ✓ |
|
||||||
| nqc | .nqc | | ✓ |
|
| nqc | .nqc | | ✓ |
|
||||||
| objc | .mm | | ✓ |
|
| objc | .mm | | ✓ |
|
||||||
|
| ocaml | .ml | ✓ | ✓ |
|
||||||
|
| ocaml_interface | .mli | ✓ | ✓ |
|
||||||
| odin | .odin | | ✓ |
|
| odin | .odin | | ✓ |
|
||||||
| org | .org | | ✓ |
|
| org | .org | | ✓ |
|
||||||
| pascal | .pas | | ✓ |
|
| pascal | .pas | | ✓ |
|
||||||
|
|
|
@ -285,6 +285,6 @@ mod_dates = [get_last_modified_date(file) for file in files]
|
||||||
latest_mod_date = max(mod_dates)
|
latest_mod_date = max(mod_dates)
|
||||||
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
|
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
|
||||||
]]]-->
|
]]]-->
|
||||||
April 20, 2025.
|
May 26, 2025.
|
||||||
<!--[[[end]]]-->
|
<!--[[[end]]]-->
|
||||||
</p>
|
</p>
|
||||||
|
|
|
@ -9,8 +9,7 @@ nav_order: 800
|
||||||
|
|
||||||
All pricing information is the cost to run the benchmark at the time it was
|
All pricing information is the cost to run the benchmark at the time it was
|
||||||
run.
|
run.
|
||||||
Providers change their pricing, and every benchmark run ends up with a slightly
|
Providers change their pricing and sometimes introduce entirely novel pricing structures.
|
||||||
different cost.
|
|
||||||
Pricing is provided on a *best efforts* basis, and may not always be current
|
Pricing is provided on a *best efforts* basis, and may not always be current
|
||||||
or fully accurate.
|
or fully accurate.
|
||||||
|
|
||||||
|
|
|
@ -16,9 +16,10 @@ description: Aider can connect to most LLMs for AI pair programming.
|
||||||
|
|
||||||
Aider works best with these models, which are skilled at editing code:
|
Aider works best with these models, which are skilled at editing code:
|
||||||
|
|
||||||
|
- [Gemini 2.5 Pro](/docs/llms/gemini.html)
|
||||||
- [DeepSeek R1 and V3](/docs/llms/deepseek.html)
|
- [DeepSeek R1 and V3](/docs/llms/deepseek.html)
|
||||||
- [Claude 3.7 Sonnet](/docs/llms/anthropic.html)
|
- [Claude 3.7 Sonnet](/docs/llms/anthropic.html)
|
||||||
- [OpenAI o1, o3-mini and GPT-4o](/docs/llms/openai.html)
|
- [OpenAI o3, o4-mini and GPT-4.1](/docs/llms/openai.html)
|
||||||
|
|
||||||
|
|
||||||
## Free models
|
## Free models
|
||||||
|
@ -26,10 +27,8 @@ Aider works best with these models, which are skilled at editing code:
|
||||||
|
|
||||||
Aider works with a number of **free** API providers:
|
Aider works with a number of **free** API providers:
|
||||||
|
|
||||||
- Google's [Gemini 1.5 Pro](/docs/llms/gemini.html) works with aider, with
|
- [OpenRouter offers free access to many models](https://openrouter.ai/models/?q=free), with limitations on daily usage.
|
||||||
code editing capabilities similar to GPT-3.5.
|
- Google's [Gemini 2.5 Pro Exp](/docs/llms/gemini.html) works very well with aider.
|
||||||
- You can use [Llama 3 70B on Groq](/docs/llms/groq.html) which is comparable to GPT-3.5 in code editing performance.
|
|
||||||
- Cohere also offers free API access to their [Command-R+ model](/docs/llms/cohere.html), which works with aider as a *very basic* coding assistant.
|
|
||||||
|
|
||||||
## Local models
|
## Local models
|
||||||
{: .no_toc }
|
{: .no_toc }
|
||||||
|
|
|
@ -10,21 +10,26 @@ To work with Anthropic's models, you need to provide your
|
||||||
either in the `ANTHROPIC_API_KEY` environment variable or
|
either in the `ANTHROPIC_API_KEY` environment variable or
|
||||||
via the `--anthropic-api-key` command line switch.
|
via the `--anthropic-api-key` command line switch.
|
||||||
|
|
||||||
Aider has some built in shortcuts for the most popular Anthropic models and
|
First, install aider:
|
||||||
has been tested and benchmarked to work well with them:
|
|
||||||
|
{% include install.md %}
|
||||||
|
|
||||||
|
Then configure your API keys:
|
||||||
|
|
||||||
```
|
```
|
||||||
python -m pip install -U aider-chat
|
|
||||||
|
|
||||||
export ANTHROPIC_API_KEY=<key> # Mac/Linux
|
export ANTHROPIC_API_KEY=<key> # Mac/Linux
|
||||||
setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx
|
setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx
|
||||||
|
```
|
||||||
|
|
||||||
|
Start working with aider and Anthropic on your codebase:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Change directory into your codebase
|
||||||
|
cd /to/your/project
|
||||||
|
|
||||||
# Aider uses Claude 3.7 Sonnet by default
|
# Aider uses Claude 3.7 Sonnet by default
|
||||||
aider
|
aider
|
||||||
|
|
||||||
# Claude 3 Opus
|
|
||||||
aider --model claude-3-opus-20240229
|
|
||||||
|
|
||||||
# List models available from Anthropic
|
# List models available from Anthropic
|
||||||
aider --list-models anthropic/
|
aider --list-models anthropic/
|
||||||
```
|
```
|
||||||
|
|
|
@ -7,9 +7,13 @@ nav_order: 500
|
||||||
|
|
||||||
Aider can connect to the OpenAI models on Azure.
|
Aider can connect to the OpenAI models on Azure.
|
||||||
|
|
||||||
```
|
First, install aider:
|
||||||
python -m pip install -U aider-chat
|
|
||||||
|
|
||||||
|
{% include install.md %}
|
||||||
|
|
||||||
|
Then configure your API keys and endpoint:
|
||||||
|
|
||||||
|
```
|
||||||
# Mac/Linux:
|
# Mac/Linux:
|
||||||
export AZURE_API_KEY=<key>
|
export AZURE_API_KEY=<key>
|
||||||
export AZURE_API_VERSION=2024-12-01-preview
|
export AZURE_API_VERSION=2024-12-01-preview
|
||||||
|
@ -20,6 +24,13 @@ setx AZURE_API_KEY <key>
|
||||||
setx AZURE_API_VERSION 2024-12-01-preview
|
setx AZURE_API_VERSION 2024-12-01-preview
|
||||||
setx AZURE_API_BASE https://myendpt.openai.azure.com
|
setx AZURE_API_BASE https://myendpt.openai.azure.com
|
||||||
# ... restart your shell after setx commands
|
# ... restart your shell after setx commands
|
||||||
|
```
|
||||||
|
|
||||||
|
Start working with aider and Azure on your codebase:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Change directory into your codebase
|
||||||
|
cd /to/your/project
|
||||||
|
|
||||||
aider --model azure/<your_model_deployment_name>
|
aider --model azure/<your_model_deployment_name>
|
||||||
|
|
||||||
|
|
|
@ -6,8 +6,6 @@ nav_order: 560
|
||||||
# Amazon Bedrock
|
# Amazon Bedrock
|
||||||
|
|
||||||
Aider can connect to models provided by Amazon Bedrock.
|
Aider can connect to models provided by Amazon Bedrock.
|
||||||
You will need to have an AWS account with access to the Bedrock service.
|
|
||||||
|
|
||||||
To configure Aider to use the Amazon Bedrock API, you need to set up your AWS credentials.
|
To configure Aider to use the Amazon Bedrock API, you need to set up your AWS credentials.
|
||||||
This can be done using the AWS CLI or by setting environment variables.
|
This can be done using the AWS CLI or by setting environment variables.
|
||||||
|
|
||||||
|
@ -37,6 +35,14 @@ feature, you will receive an error message like the following:
|
||||||
anthropic.claude-3-7-sonnet-20250219-v1:0 with on-demand throughput isn\xe2\x80\x99t supported. Retry your
|
anthropic.claude-3-7-sonnet-20250219-v1:0 with on-demand throughput isn\xe2\x80\x99t supported. Retry your
|
||||||
request with the ID or ARN of an inference profile that contains this model."}'
|
request with the ID or ARN of an inference profile that contains this model."}'
|
||||||
|
|
||||||
|
## Installation and Configuration
|
||||||
|
|
||||||
|
First, install aider:
|
||||||
|
|
||||||
|
{% include install.md %}
|
||||||
|
|
||||||
|
Next, configure your AWS credentials. This can be done using the AWS CLI or by setting environment variables.
|
||||||
|
|
||||||
## AWS CLI Configuration
|
## AWS CLI Configuration
|
||||||
|
|
||||||
If you haven't already, install the [AWS CLI](https://aws.amazon.com/cli/) and configure it with your credentials:
|
If you haven't already, install the [AWS CLI](https://aws.amazon.com/cli/) and configure it with your credentials:
|
||||||
|
@ -49,7 +55,7 @@ This will prompt you to enter your AWS Access Key ID, Secret Access Key, and def
|
||||||
|
|
||||||
## Environment Variables
|
## Environment Variables
|
||||||
|
|
||||||
Alternatively, you can set the following environment variables:
|
You can set the following environment variables:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
export AWS_REGION=your_preferred_region
|
export AWS_REGION=your_preferred_region
|
||||||
|
@ -75,32 +81,15 @@ $env:AWS_SECRET_ACCESS_KEY = 'your_secret_key'
|
||||||
$env:AWS_REGION = 'us-west-2' # Put whichever AWS region that you'd like, that the Bedrock service supports.
|
$env:AWS_REGION = 'us-west-2' # Put whichever AWS region that you'd like, that the Bedrock service supports.
|
||||||
```
|
```
|
||||||
|
|
||||||
## Install boto3
|
|
||||||
|
|
||||||
The AWS Bedrock provider requires the `boto3` package in order to function correctly:
|
## Get Started
|
||||||
|
|
||||||
```bash
|
|
||||||
pip install boto3
|
|
||||||
```
|
|
||||||
|
|
||||||
To use aider installed via `pipx` with AWS Bedrock, you must add the `boto3` dependency to aider's virtual environment by running
|
|
||||||
|
|
||||||
```bash
|
|
||||||
pipx inject aider-chat boto3
|
|
||||||
```
|
|
||||||
|
|
||||||
You must install `boto3` dependency to aider's virtual environment installed via one-liner or uv by running
|
|
||||||
|
|
||||||
```bash
|
|
||||||
uv tool run --from aider-chat pip install boto3
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
## Running Aider with Bedrock
|
|
||||||
|
|
||||||
Once your AWS credentials are set up, you can run Aider with the `--model` command line switch, specifying the Bedrock model you want to use:
|
Once your AWS credentials are set up, you can run Aider with the `--model` command line switch, specifying the Bedrock model you want to use:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
# Change directory into your codebase
|
||||||
|
cd /to/your/project
|
||||||
|
|
||||||
aider --model bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0
|
aider --model bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -121,6 +110,20 @@ aider --list-models bedrock/
|
||||||
|
|
||||||
Make sure you have access to these models in your AWS account before attempting to use them with Aider.
|
Make sure you have access to these models in your AWS account before attempting to use them with Aider.
|
||||||
|
|
||||||
|
## Install boto3
|
||||||
|
You may need to install the `boto3` package.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# If you installed with aider-install or `uv tool`
|
||||||
|
uv tool run --from aider-chat pip install boto3
|
||||||
|
|
||||||
|
# Or with pipx...
|
||||||
|
pipx inject aider-chat boto3
|
||||||
|
|
||||||
|
# Or with pip
|
||||||
|
pip install -U boto3
|
||||||
|
```
|
||||||
|
|
||||||
# More info
|
# More info
|
||||||
|
|
||||||
For more information on Amazon Bedrock and its models, refer to the [official AWS documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).
|
For more information on Amazon Bedrock and its models, refer to the [official AWS documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).
|
||||||
|
|
|
@ -10,13 +10,22 @@ Their Command-R+ model works well with aider
|
||||||
as a *very basic* coding assistant.
|
as a *very basic* coding assistant.
|
||||||
You'll need a [Cohere API key](https://dashboard.cohere.com/welcome/login).
|
You'll need a [Cohere API key](https://dashboard.cohere.com/welcome/login).
|
||||||
|
|
||||||
To use **Command-R+**:
|
First, install aider:
|
||||||
|
|
||||||
|
{% include install.md %}
|
||||||
|
|
||||||
|
Then configure your API keys:
|
||||||
|
|
||||||
```
|
```
|
||||||
python -m pip install -U aider-chat
|
|
||||||
|
|
||||||
export COHERE_API_KEY=<key> # Mac/Linux
|
export COHERE_API_KEY=<key> # Mac/Linux
|
||||||
setx COHERE_API_KEY <key> # Windows, restart shell after setx
|
setx COHERE_API_KEY <key> # Windows, restart shell after setx
|
||||||
|
```
|
||||||
|
|
||||||
|
Start working with aider and Cohere on your codebase:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Change directory into your codebase
|
||||||
|
cd /to/your/project
|
||||||
|
|
||||||
aider --model command-r-plus-08-2024
|
aider --model command-r-plus-08-2024
|
||||||
|
|
||||||
|
|
|
@ -9,11 +9,22 @@ Aider can connect to the DeepSeek.com API.
|
||||||
To work with DeepSeek's models, you need to set the `DEEPSEEK_API_KEY` environment variable with your [DeepSeek API key](https://platform.deepseek.com/api_keys).
|
To work with DeepSeek's models, you need to set the `DEEPSEEK_API_KEY` environment variable with your [DeepSeek API key](https://platform.deepseek.com/api_keys).
|
||||||
The DeepSeek Chat V3 model has a top score on aider's code editing benchmark.
|
The DeepSeek Chat V3 model has a top score on aider's code editing benchmark.
|
||||||
|
|
||||||
```
|
First, install aider:
|
||||||
python -m pip install -U aider-chat
|
|
||||||
|
|
||||||
|
{% include install.md %}
|
||||||
|
|
||||||
|
Then configure your API keys:
|
||||||
|
|
||||||
|
```
|
||||||
export DEEPSEEK_API_KEY=<key> # Mac/Linux
|
export DEEPSEEK_API_KEY=<key> # Mac/Linux
|
||||||
setx DEEPSEEK_API_KEY <key> # Windows, restart shell after setx
|
setx DEEPSEEK_API_KEY <key> # Windows, restart shell after setx
|
||||||
|
```
|
||||||
|
|
||||||
|
Start working with aider and DeepSeek on your codebase:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Change directory into your codebase
|
||||||
|
cd /to/your/project
|
||||||
|
|
||||||
# Use DeepSeek Chat v3
|
# Use DeepSeek Chat v3
|
||||||
aider --model deepseek/deepseek-chat
|
aider --model deepseek/deepseek-chat
|
||||||
|
|
|
@@ -7,22 +7,43 @@ nav_order: 300

You'll need a [Gemini API key](https://aistudio.google.com/app/u/2/apikey).

First, install aider:

{% include install.md %}

Then configure your API keys:

```bash
export GEMINI_API_KEY=<key> # Mac/Linux
setx   GEMINI_API_KEY <key> # Windows, restart shell after setx
```

Start working with aider and Gemini on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

# You can run the Gemini 2.5 Pro model with this shortcut:
aider --model gemini

# You can run the Gemini 2.5 Pro Exp for free, with usage limits:
aider --model gemini-exp

# List models available from Gemini
aider --list-models gemini/
```

You may need to install the `google-generativeai` package.

```bash
# If you installed with aider-install or `uv tool`
uv tool run --from aider-chat pip install google-generativeai

# Or with pipx...
pipx inject aider-chat google-generativeai

# Or with pip
pip install -U google-generativeai
```

aider/website/docs/llms/github.md (new file, 105 lines)

@@ -0,0 +1,105 @@
---
parent: Connecting to LLMs
nav_order: 510
---

# GitHub Copilot

Aider can connect to GitHub Copilot’s LLMs because Copilot exposes a standard **OpenAI-style**
endpoint at:

```
https://api.githubcopilot.com
```

First, install aider:

{% include install.md %}

---

## Configure your environment

```bash
# macOS/Linux
export OPENAI_API_BASE=https://api.githubcopilot.com
export OPENAI_API_KEY=<oauth_token>

# Windows (PowerShell)
setx OPENAI_API_BASE https://api.githubcopilot.com
setx OPENAI_API_KEY <oauth_token>
# …restart the shell after setx commands
```

---

### Where do I get the token?

The easiest path is to sign in to Copilot from any JetBrains IDE (PyCharm, GoLand, etc.).
After you authenticate, a file appears:

```
~/.config/github-copilot/apps.json
```

Copy the `oauth_token` value – that string is your `OPENAI_API_KEY`.

*Note:* tokens created by the Neovim **copilot.lua** plugin (old `hosts.json`) sometimes lack the
needed scopes. If you see “access to this endpoint is forbidden”, regenerate the token with a
JetBrains IDE or the VS Code Copilot extension.
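
The token can be fished out of `apps.json` with standard shell tools. A minimal sketch follows; the sample file contents and token below are made up for illustration, and the exact key layout of `apps.json` varies between Copilot versions, so inspect your own file first:

```shell
# Write a sample apps.json so this snippet is self-contained;
# in practice you would read ~/.config/github-copilot/apps.json instead.
cat > /tmp/apps.json <<'EOF'
{ "github.com:Iv1.example": { "user": "octocat", "oauth_token": "ghu_exampletoken123" } }
EOF

# Grab the first oauth_token value with grep/cut (no jq required)
token=$(grep -o '"oauth_token": *"[^"]*"' /tmp/apps.json | head -n 1 | cut -d'"' -f4)
echo "$token"    # -> ghu_exampletoken123

# Use it as your key for aider
export OPENAI_API_KEY="$token"
```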
---

## Discover available models

Copilot hosts many models (OpenAI, Anthropic, Google, etc).
List the models your subscription allows with:

```bash
curl -s https://api.githubcopilot.com/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Copilot-Integration-Id: vscode-chat" | jq -r '.data[].id'
```

Each returned ID can be used with aider by **prefixing it with `openai/`**:

```bash
aider --model openai/gpt-4o
# or
aider --model openai/claude-3.7-sonnet-thought
```

---

## Quick start

```bash
# change into your project
cd /to/your/project

# talk to Copilot
aider --model openai/gpt-4o
```

---

## Optional config file (`~/.aider.conf.yml`)

```yaml
openai-api-base: https://api.githubcopilot.com
openai-api-key: "<oauth_token>"
model: openai/gpt-4o
weak-model: openai/gpt-4o-mini
show-model-warnings: false
```

---

## FAQ

* Calls made through aider are billed through your Copilot subscription
  (aider will still print *estimated* costs).
* The Copilot docs explicitly allow third-party “agents” that hit this API – aider is playing by
  the rules.
* Aider talks directly to the REST endpoint; no web-UI scraping or browser automation.

@@ -10,13 +10,22 @@ The Llama 3 70B model works
well with aider and is comparable to GPT-3.5 in code editing performance.
You'll need a [Groq API key](https://console.groq.com/keys).

First, install aider:

{% include install.md %}

Then configure your API keys:

```
export GROQ_API_KEY=<key> # Mac/Linux
setx   GROQ_API_KEY <key> # Windows, restart shell after setx
```

Start working with aider and Groq on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

aider --model groq/llama3-70b-8192
```

@@ -5,11 +5,15 @@ nav_order: 400

# LM Studio

Aider can connect to models served by LM Studio.

First, install aider:

{% include install.md %}

Then configure your API key and endpoint:

```
# Must set a value here, even if it's a dummy value
export LM_STUDIO_API_KEY=dummy-api-key # Mac/Linux
setx   LM_STUDIO_API_KEY dummy-api-key # Windows, restart shell after setx

@@ -17,12 +21,19 @@ setx LM_STUDIO_API_KEY dummy-api-key # Windows, restart shell after setx
# LM Studio default server URL is http://localhost:1234/v1
export LM_STUDIO_API_BASE=http://localhost:1234/v1 # Mac/Linux
setx   LM_STUDIO_API_BASE http://localhost:1234/v1 # Windows, restart shell after setx
```

**Note:** Even though LM Studio doesn't require an API key out of the box, `LM_STUDIO_API_KEY` must be set to a dummy value like `dummy-api-key`, or the client request will fail when it tries to send an empty `Bearer` token.

Start working with aider and LM Studio on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

aider --model lm_studio/<your-model-name>
```

See the [model warnings](warnings.html)
section for information on warnings which will occur
when working with models that aider is not familiar with.

@@ -7,6 +7,19 @@ nav_order: 500

Aider can connect to local Ollama models.

First, install aider:

{% include install.md %}

Then configure your Ollama API endpoint (usually the default):

```bash
export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx   OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx
```

Start working with aider and Ollama on your codebase:

```
# Pull the model
ollama pull <model>

@@ -14,11 +27,8 @@ ollama pull <model>
# Start your ollama server, increasing the context window to 8k tokens
OLLAMA_CONTEXT_LENGTH=8192 ollama serve

# In another terminal window, change directory into your codebase
cd /to/your/project

aider --model ollama_chat/<model>
```

@@ -7,10 +7,13 @@ nav_order: 500

Aider can connect to any LLM which is accessible via an OpenAI compatible API endpoint.

First, install aider:

{% include install.md %}

Then configure your API key and endpoint:

```
# Mac/Linux:
export OPENAI_API_BASE=<endpoint>
export OPENAI_API_KEY=<key>

@@ -19,6 +22,13 @@ export OPENAI_API_KEY=<key>
setx OPENAI_API_BASE <endpoint>
setx OPENAI_API_KEY <key>
# ... restart shell after setx commands
```

Start working with aider and your OpenAI compatible API on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

# Prefix the model name with openai/
aider --model openai/<model-name>
```

@@ -10,27 +10,34 @@ To work with OpenAI's models, you need to provide your
either in the `OPENAI_API_KEY` environment variable or
via the `--api-key openai=<key>` command line switch.

First, install aider:

{% include install.md %}

Then configure your API keys:

```
export OPENAI_API_KEY=<key> # Mac/Linux
setx   OPENAI_API_KEY <key> # Windows, restart shell after setx
```

Start working with aider and OpenAI on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

# o3-mini
aider --model o3-mini

# o1-mini
aider --model o1-mini

# GPT-4o
aider --model gpt-4o

# List models available from OpenAI
aider --list-models openai/
```

You can use `aider --model <model-name>` to use any other OpenAI model.

@@ -8,11 +8,22 @@ nav_order: 500
Aider can connect to [models provided by OpenRouter](https://openrouter.ai/models?o=top-weekly).
You'll need an [OpenRouter API key](https://openrouter.ai/keys).

First, install aider:

{% include install.md %}

Then configure your API keys:

```
export OPENROUTER_API_KEY=<key> # Mac/Linux
setx   OPENROUTER_API_KEY <key> # Windows, restart shell after setx
```

Start working with aider and OpenRouter on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

# Or any other open router model
aider --model openrouter/<provider>/<model>
```

@@ -23,16 +34,6 @@ aider --list-models openrouter/

In particular, many aider users access Sonnet via OpenRouter.

{: .tip }
If you get errors, check your
[OpenRouter privacy settings](https://openrouter.ai/settings/privacy).

@@ -55,8 +55,8 @@ lines = run(
lines = ['- ' + line for line in lines.splitlines(keepends=True)]
cog.out(''.join(lines))
]]]-->
- ALEPH_ALPHA_API_KEY
- ALEPHALPHA_API_KEY
- ANTHROPIC_API_KEY
- ANYSCALE_API_KEY
- AZURE_AI_API_KEY

@@ -66,18 +66,19 @@ cog.out(''.join(lines))
- CEREBRAS_API_KEY
- CLARIFAI_API_KEY
- CLOUDFLARE_API_KEY
- CO_API_KEY
- CODESTRAL_API_KEY
- COHERE_API_KEY
- DATABRICKS_API_KEY
- DEEPINFRA_API_KEY
- DEEPSEEK_API_KEY
- FIREWORKS_AI_API_KEY
- FIREWORKS_API_KEY
- FIREWORKSAI_API_KEY
- GEMINI_API_KEY
- GROQ_API_KEY
- HUGGINGFACE_API_KEY
- INFINITY_API_KEY
- MARITALK_API_KEY
- MISTRAL_API_KEY
- NLP_CLOUD_API_KEY

@@ -13,6 +13,10 @@ or service account with permission to use the Vertex AI API.
With your chosen login method, the gcloud CLI should automatically set the
`GOOGLE_APPLICATION_CREDENTIALS` environment variable which points to the credentials file.

First, install aider:

{% include install.md %}

To configure Aider to use the Vertex AI API, you need to set `VERTEXAI_PROJECT` (the GCP project ID)
and `VERTEXAI_LOCATION` (the GCP region) [environment variables for Aider](/docs/config/dotenv.html).

@@ -27,13 +31,16 @@ VERTEXAI_PROJECT=my-project
VERTEXAI_LOCATION=us-east5
```

Start working with aider and Vertex AI on your codebase:

```
# Change directory into your codebase
cd /to/your/project

aider --model vertex_ai/claude-3-5-sonnet@20240620
```

Or you can use the [YAML config](/docs/config/aider_conf.html) to set the model to any of the
models supported by Vertex AI.

Example `.aider.conf.yml` file:
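
The example file itself was cut off by the hunk boundary; a minimal sketch, reusing the Vertex AI model ID shown above (substitute any model ID supported by Vertex AI):

```yaml
# Hypothetical ~/.aider.conf.yml entry for Vertex AI
model: vertex_ai/claude-3-5-sonnet@20240620
```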
@@ -7,14 +7,22 @@ nav_order: 400

You'll need an [xAI API key](https://console.x.ai).

First, install aider:

{% include install.md %}

Then configure your API keys:

```bash
export XAI_API_KEY=<key> # Mac/Linux
setx   XAI_API_KEY <key> # Windows, restart shell after setx
```

Start working with aider and xAI on your codebase:

```bash
# Change directory into your codebase
cd /to/your/project

# Grok 3
aider --model xai/grok-3-beta
```

@@ -58,6 +58,9 @@ cog.out(model_list)
- anthropic.claude-3-5-haiku-20241022-v1:0
- anthropic.claude-3-5-sonnet-20241022-v2:0
- anthropic.claude-3-7-sonnet-20250219-v1:0
- anthropic.claude-opus-4-20250514-v1:0
- anthropic.claude-sonnet-4-20250514-v1:0
- azure_ai/mistral-medium-2505
- claude-3-5-haiku-20241022
- claude-3-5-haiku-latest
- claude-3-5-sonnet-20240620

@@ -69,6 +72,8 @@ cog.out(model_list)
- claude-3-opus-20240229
- claude-3-opus-latest
- claude-3-sonnet-20240229
- claude-opus-4-20250514
- claude-sonnet-4-20250514
- codestral/codestral-2405
- codestral/codestral-latest
- databricks/databricks-claude-3-7-sonnet

@@ -77,15 +82,20 @@ cog.out(model_list)
- deepseek/deepseek-reasoner
- eu.anthropic.claude-3-5-haiku-20241022-v1:0
- eu.anthropic.claude-3-5-sonnet-20241022-v2:0
- eu.anthropic.claude-3-7-sonnet-20250219-v1:0
- eu.anthropic.claude-opus-4-20250514-v1:0
- eu.anthropic.claude-sonnet-4-20250514-v1:0
- mistral/codestral-2405
- mistral/codestral-latest
- mistral/codestral-mamba-latest
- mistral/devstral-small-2505
- mistral/mistral-large-2402
- mistral/mistral-large-2407
- mistral/mistral-large-2411
- mistral/mistral-large-latest
- mistral/mistral-medium
- mistral/mistral-medium-2312
- mistral/mistral-medium-2505
- mistral/mistral-medium-latest
- mistral/mistral-small
- mistral/mistral-small-latest

@@ -105,6 +115,8 @@ cog.out(model_list)
- us.anthropic.claude-3-5-haiku-20241022-v1:0
- us.anthropic.claude-3-5-sonnet-20241022-v2:0
- us.anthropic.claude-3-7-sonnet-20250219-v1:0
- us.anthropic.claude-opus-4-20250514-v1:0
- us.anthropic.claude-sonnet-4-20250514-v1:0
- vertex_ai/claude-3-5-haiku
- vertex_ai/claude-3-5-haiku@20241022
- vertex_ai/claude-3-5-sonnet

@@ -118,6 +130,8 @@ cog.out(model_list)
- vertex_ai/claude-3-opus@20240229
- vertex_ai/claude-3-sonnet
- vertex_ai/claude-3-sonnet@20240229
- vertex_ai/claude-opus-4@20250514
- vertex_ai/claude-sonnet-4@20250514
<!--[[[end]]]-->

@@ -17,6 +17,8 @@ First, aider will check which
[keys you have provided via the environment, config files, or command line arguments](https://aider.chat/docs/config/api-keys.html).
Based on the available keys, aider will select the best model to use.

## OpenRouter

If you have not provided any keys, aider will offer to help you connect to
[OpenRouter](http://openrouter.ai)
which provides both free and paid access to most popular LLMs.

Some files were not shown because too many files have changed in this diff.