Compare commits

...

40 Commits

Author SHA1 Message Date
b5bf5d78ba 🤖 Repository layout updated to latest version
Some checks failed
Run Python tests (through Pytest) / Test (push) Failing after 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-06-13 23:59:24 +02:00
0497fd3e26 No default evaluator models
Some checks failed
Run Python tests (through Pytest) / Test (push) Failing after 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
2025-06-09 13:07:00 +02:00
ea9b55e4c3 Misc
Some checks failed
Run Python tests (through Pytest) / Test (push) Failing after 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
2025-06-09 13:00:44 +02:00
d0d22a8ac4 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-06-09 02:02:23 +02:00
5d28388bc3 Removed unneeded code 2025-06-09 01:55:36 +02:00
15fa6cef49 Update documentation for dual AI assistant support
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 24s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
Extended usage section in module docstring to cover both Aider and Claude Code integration:
- Clear explanation of automatic model routing based on model names
- Comprehensive command line examples for both assistants
- Updated Python API examples with new function signatures
- Environment configuration organized by assistant type
- Model examples categorized by routing destination

Users now have complete guidance on using either Aider or Claude Code with appropriate model selection.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-09 01:23:05 +02:00
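For orientation, the documented selection flow boils down to one module-level setting. A minimal sketch of the Python API usage this commit documents, mirroring the README/docstring examples further down in this diff (nothing here is additional API surface):

```python
# Sketch of the documented flow: the module-level CODE_MODEL setting decides
# which assistant create_code_solver() returns (names taken from the diff below).
import aider_gitea
from aider_gitea import create_code_solver

aider_gitea.CODE_MODEL = 'claude-3-sonnet'   # Anthropic-style name -> Claude Code
# aider_gitea.CODE_MODEL = 'gpt-4'           # anything else -> Aider

code_solver = create_code_solver()           # picks the solver from CODE_MODEL
```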
325c0767f1 Add Claude Code integration with automatic model routing
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 26s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
Implemented complete Claude Code support alongside existing Aider integration:
- ClaudeCodeSolver strategy with programmatic Claude Code CLI usage
- Intelligent model detection to route Anthropic models to Claude Code
- Shared post-solver cleanup function to eliminate code duplication
- CLAUDE_CODE_MESSAGE_FORMAT constant for maintainable prompts
- Comprehensive test suite with 18 passing tests
- Automatic ANTHROPIC_API_KEY environment setup

Users can now specify any Anthropic model (claude, sonnet, haiku, opus) to use Claude Code, while other models continue using Aider.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-09 01:18:11 +02:00
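The model detection this commit describes is a plain substring check. The sketch below restates it with the same indicator list that appears in the `is_anthropic_model` helper later in this diff:

```python
# Substring-based routing check, as described in the commit message above and
# implemented by is_anthropic_model() in the diff below.
ANTHROPIC_INDICATORS = ('claude', 'anthropic', 'sonnet', 'haiku', 'opus')

def is_anthropic_model(model: str | None) -> bool:
    """True when the configured model name should be routed to Claude Code."""
    if not model:
        return False
    model_lower = model.lower()
    return any(indicator in model_lower for indicator in ANTHROPIC_INDICATORS)

assert is_anthropic_model('anthropic/claude-3-opus')
assert not is_anthropic_model('ollama/llama3')
```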
0b9902c428 Refactor Aider integration using Strategy Pattern
Extracted Aider-specific functionality into AiderCodeSolver strategy class to enable future integration with Claude Code. This architectural change maintains backward compatibility while preparing for multi-AI-assistant support.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-09 01:02:45 +02:00
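The Strategy Pattern mentioned here is a small interface with one method per solver. A condensed outline, simplified from the `CodeSolverStrategy` classes in the diff below (the real classes build full CLI invocations, which are elided here):

```python
# Condensed outline of the strategy interface from the diff below; the real
# classes build full aider/claude CLI commands and run the shared cleanup step.
import dataclasses
from pathlib import Path


@dataclasses.dataclass(frozen=True)
class CodeSolverStrategy:
    """Base interface for code solving strategies."""

    def solve_issue_round(self, repository_path: Path, issue_content: str) -> bool:
        raise NotImplementedError


@dataclasses.dataclass(frozen=True)
class AiderCodeSolver(CodeSolverStrategy):
    def solve_issue_round(self, repository_path: Path, issue_content: str) -> bool:
        ...  # invoke the aider CLI, then run post-solver cleanup


@dataclasses.dataclass(frozen=True)
class ClaudeCodeSolver(CodeSolverStrategy):
    def solve_issue_round(self, repository_path: Path, issue_content: str) -> bool:
        ...  # invoke the claude CLI, then run post-solver cleanup
```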
c476b1a37a 🤖 Bumped version to 0.1.10
Some checks failed
Build Python Container / release-image (push) Failing after 17s
Package Python / Package-Python-And-Publish (push) Successful in 25s
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-06-04 21:30:57 +02:00
43100e4708 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-06-04 21:30:19 +02:00
39a60fcc1b 🤖 Bumped version to 0.1.9
Some checks failed
Build Python Container / release-image (push) Failing after 18s
Package Python / Package-Python-And-Publish (push) Successful in 25s
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-27 00:35:06 +02:00
17b6dd8026 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-27 00:34:42 +02:00
12fbb3f5e0 🤖 Bumped version to 0.1.8
Some checks failed
Build Python Container / release-image (push) Failing after 18s
Package Python / Package-Python-And-Publish (push) Successful in 25s
Run Python tests (through Pytest) / Test (push) Successful in 24s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-21 12:29:29 +02:00
5308a5339b 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-21 12:29:13 +02:00
c83b0a4bc7 🤖 Bumped version to 0.1.7
Some checks failed
Build Python Container / Package-Container (push) Failing after 52s
Package Python / Package (push) Successful in 24s
Run Python tests (through Pytest) / Test (push) Successful in 24s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-21 00:57:18 +02:00
1a08677ea2 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-21 00:48:53 +02:00
7fcb9acb94 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-21 00:47:36 +02:00
42937ece1b Fix weird loop
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
2025-05-13 23:55:40 +02:00
224e195726 More flexible yes/no
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 24s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-05-11 20:37:51 +02:00
3e5b88a736 Avoid emitting thinking tokens
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-05-11 16:37:51 +02:00
95b38b506e Update 2025-05-11 15:36:30 +02:00
7da687ab3f Config edit formats
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-05-11 11:55:46 +02:00
ef8903c3d7 unsloth use diff also 2025-05-11 11:28:06 +02:00
835235f41b Comment on status 2025-05-11 10:49:38 +02:00
3b99ebdea9 Silent ruff runs initially
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-05-11 10:45:14 +02:00
71c3a85e05 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-11 10:43:45 +02:00
56c70f6322 Ruff 2025-05-11 10:43:31 +02:00
32cdcde883 Messing around 2025-05-11 10:43:23 +02:00
4141eeb30c Always require model
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
2025-05-10 19:53:47 +02:00
f72206365d 🤖 Repository layout updated to latest version
This commit was automatically generated by [a script](https://gitfub.space/Jmaa/repo-manager)
2025-05-01 00:02:08 +02:00
bd5788ecae Merge branch 'main' into issue-93-handle-failing-pipelines
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 24s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-04-28 23:15:46 +02:00
a8ce6102d2 fix: return true early in verify_solution if no evaluator model is set
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-04-24 12:07:08 +02:00
236d1c0a10 Ruff after aider
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 24s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-04-24 11:55:25 +02:00
7a35029a18 feat: add CLI options to override aider and evaluator models 2025-04-24 11:55:21 +02:00
7a73a1e3fc Initial ruff pass 2025-04-24 11:54:41 +02:00
54cddfde0b Removed useless functionality
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 22s
2025-04-23 23:11:05 +02:00
b9013b7b2a Fixing types 2025-04-23 22:24:41 +02:00
6db1cccaf8 Fix the status_code 2025-04-23 22:20:02 +02:00
bff022b806 Ruff after aider
All checks were successful
Run Python tests (through Pytest) / Test (push) Successful in 25s
Verify Python project can be installed, loaded and have version checked / Test (push) Successful in 23s
2025-04-23 08:59:27 +02:00
c524891168 feat: add automatic handling and resolution of failing pipelines in PRs 2025-04-23 08:59:23 +02:00
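The pipeline handling added by this commit keeps retrying until no failed runs remain. A condensed sketch of that control flow, with a simplified signature and a generic `solve` callable standing in for the code solver; the full `handle_failing_pipelines` implementation appears in the diff below:

```python
# Condensed control flow of handle_failing_pipelines() from the diff below:
# fetch failed runs for the PR, feed the last 100 log lines to the solver,
# and repeat until no runs are failing.
def resolve_failing_pipelines(client, owner: str, repo: str, pr_number: str, solve) -> None:
    while True:
        failed_runs = client.get_failed_pipelines(owner, repo, pr_number)
        if not failed_runs:
            break
        for run_id in failed_runs:
            log = client.get_pipeline_log(owner, repo, run_id)
            context = '\n'.join(log.strip().split('\n')[-100:])
            solve(f'Resolve the following failing pipeline run {run_id}:\n\n{context}')
```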
13 changed files with 869 additions and 193 deletions

View File

@@ -1,3 +1,7 @@
+# WARNING!
+# THIS IS AN AUTOGENERATED FILE!
+# MANUAL CHANGES CAN AND WILL BE OVERWRITTEN!
 name: Build Python Container
 on:
   push:
@@ -6,13 +10,72 @@ on:
     paths-ignore: ['README.md', '.gitignore', 'LICENSE', 'CONVENTIONS.md', 'ruff.toml']
 jobs:
-  Package-Container:
-    uses: jmaa/workflows/.gitea/workflows/container.yaml@v6.21
-    with:
-      REGISTRY_DOMAIN: gitfub.space
-      REGISTRY_ORGANIZATION: jmaa
-    secrets:
-      DOCKER_USERNAME: ${{ secrets.PIPY_REPO_USER }}
-      DOCKER_PASSWORD: ${{ secrets.PIPY_REPO_PASS }}
-      PIPELINE_WORKER_SSH_KEY: ${{ secrets.PIPELINE_WORKER_SSH_KEY }}
-      PIPELINE_WORKER_KNOWN_HOSTS: ${{ secrets.PIPELINE_WORKER_KNOWN_HOSTS }}
+  release-image:
+    runs-on: ubuntu-latest
+    container:
+      image: catthehacker/ubuntu:act-latest
+      env:
+        RUNNER_TOOL_CACHE: /toolcache
+    steps:
+      - run: apt-get update
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Setting up SSH
+        if: ${{ hashFiles('requirements_private.txt') != '' }}
+        uses: https://github.com/shimataro/ssh-key-action@v2.5.1
+        with:
+          key: ${{ secrets.PIPELINE_WORKER_SSH_KEY }}
+          name: id_rsa
+          known_hosts: ${{ secrets.PIPELINE_WORKER_KNOWN_HOSTS }}
+          config: |
+            Host gitfub
+              HostName gitfub.space
+              User ${{ secrets.PIPY_REPO_USER }}
+      - name: Download private dependencies
+        if: ${{ hashFiles('requirements_private.txt') != '' }}
+        shell: bash
+        run: |
+          set -e
+          mkdir -p private_deps
+          cd private_deps
+          while IFS=$" " read -r -a dependency_spec
+          do
+            if test -n "${dependency_spec[1]}"
+            then
+              git clone -v --single-branch --no-tags "${dependency_spec[0]}" --branch "${dependency_spec[1]}"
+            else
+              git clone -v --single-branch --no-tags "${dependency_spec[0]}"
+            fi
+          done < ../requirements_private.txt
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v2
+      - name: Set up Docker BuildX
+        uses: docker/setup-buildx-action@v2
+      - name: Login to Docker Registry
+        uses: docker/login-action@v2
+        with:
+          registry: gitfub.space
+          username: ${{ secrets.PIPY_REPO_USER }}
+          password: ${{ secrets.PIPY_REPO_PASS }}
+      - name: Get Meta
+        id: meta
+        run: |
+          echo REPO_NAME=$(echo ${GITHUB_REPOSITORY} | awk -F"/" '{print $2}') >> $GITHUB_OUTPUT
+          echo REPO_VERSION=$(git describe --tags --always | sed 's/^v//') >> $GITHUB_OUTPUT
+      - name: Build and push
+        uses: docker/build-push-action@v4
+        with:
+          context: .
+          file: ./Dockerfile
+          platforms: |
+            linux/amd64
+          push: true
+          tags: |
+            gitfub.space/jmaa/${{ steps.meta.outputs.REPO_NAME }}:${{ steps.meta.outputs.REPO_VERSION }}
+            gitfub.space/jmaa/${{ steps.meta.outputs.REPO_NAME }}:latest

View File

@@ -1,3 +1,7 @@
+# WARNING!
+# THIS IS AN AUTOGENERATED FILE!
+# MANUAL CHANGES CAN AND WILL BE OVERWRITTEN!
 name: Package Python
 on:
   push:
@@ -6,11 +10,24 @@ on:
     paths-ignore: ['README.md', '.gitignore', 'LICENSE', 'CONVENTIONS.md', 'ruff.toml']
 jobs:
-  Package:
-    uses: jmaa/workflows/.gitea/workflows/python-package.yaml@v6.21
-    with:
-      REGISTRY_DOMAIN: gitfub.space
-      REGISTRY_ORGANIZATION: jmaa
-    secrets:
-      PIPY_REPO_USER: ${{ secrets.PIPY_REPO_USER }}
-      PIPY_REPO_PASS: ${{ secrets.PIPY_REPO_PASS }}
+  Package-Python-And-Publish:
+    runs-on: ubuntu-latest
+    container:
+      image: node:21-bookworm
+    steps:
+      - name: Setting up Python ${{ env.PYTHON_VERSION }} for ${{runner.arch}} ${{runner.os}}
+        run: |
+          apt-get update
+          apt-get install -y python3 python3-pip
+      - name: Check out repository code
+        if: success()
+        uses: actions/checkout@v3
+      - name: Installing Python Dependencies
+        if: success()
+        run: python3 -m pip install --upgrade pip setuptools wheel build twine pytest --break-system-packages
+      - name: Build
+        if: success()
+        run: python3 -m build
+      - name: Publish
+        if: success()
+        run: python3 -m twine upload --repository-url "https://gitfub.space/api/packages/jmaa/pypi" -u ${{ secrets.PIPY_REPO_USER }} -p ${{ secrets.PIPY_REPO_PASS }} dist/*

View File

@@ -1,3 +1,7 @@
+# WARNING!
+# THIS IS AN AUTOGENERATED FILE!
+# MANUAL CHANGES CAN AND WILL BE OVERWRITTEN!
 name: Run Python tests (through Pytest)
 on:

View File

@@ -1,3 +1,7 @@
+# WARNING!
+# THIS IS AN AUTOGENERATED FILE!
+# MANUAL CHANGES CAN AND WILL BE OVERWRITTEN!
 name: Verify Python project can be installed, loaded and have version checked
 on:

View File

@@ -1,3 +1,7 @@
+<!-- WARNING! -->
+<!-- THIS IS AN AUTOGENERATED FILE! -->
+<!-- MANUAL CHANGES CAN AND WILL BE OVERWRITTEN! -->
 # Conventions
 When contributing code to this project, you MUST follow the requirements

README.md
View File

@@ -1,23 +1,29 @@
-<!--- WARNING --->
-<!--- THIS IS AN AUTO-GENERATED FILE --->
-<!--- MANUAL CHANGES CAN AND WILL BE OVERWRITTEN --->
+<!-- WARNING! -->
+<!-- THIS IS AN AUTOGENERATED FILE! -->
+<!-- MANUAL CHANGES CAN AND WILL BE OVERWRITTEN! -->
 # Aider Gitea
 ![Test program/library](https://gitfub.space/Jmaa/aider-gitea/actions/workflows/python-test.yml/badge.svg)
-A code automation tool that integrates Gitea with Aider to automatically solve issues.
+A code automation tool that integrates Gitea with AI assistants to automatically solve issues.
 This program monitors your [Gitea](https://about.gitea.com/) repository for issues with the 'aider' label.
 When such an issue is found, it:
 1. Creates a new branch.
-2. Invokes [Aider](https://aider.chat/) to solve the issue using a Large-Language Model.
+2. Invokes an AI assistant (Aider or Claude Code) to solve the issue using a Large-Language Model.
 3. Runs tests and code quality checks.
 4. Creates a pull request with the solution.
+The tool automatically selects the appropriate AI assistant based on the specified model:
+- **Aider**: Used for non-Anthropic models (e.g., GPT, Ollama, Gemini)
+- **Claude Code**: Used for Anthropic models (e.g., Claude, Sonnet, Haiku, Opus)
+Inspired by [the AI workflows](https://github.com/oscoreio/ai-workflows/)
+project.
 ## Usage
 An application token must be supplied for the `gitea_token` secret. This must
@@ -30,48 +36,109 @@ have the following permissions:
 ### Command Line
 ```bash
-# Run with default settings
-python -m aider_gitea
+# Run with default settings (uses Aider)
+python -m aider_gitea --aider-model gpt-4
+# Use Claude Code with Anthropic models
+python -m aider_gitea --aider-model claude-3-sonnet
+python -m aider_gitea --aider-model claude-3-haiku
+python -m aider_gitea --aider-model anthropic/claude-3-opus
+# Use Aider with various models
+python -m aider_gitea --aider-model gpt-4
+python -m aider_gitea --aider-model ollama/llama3
+python -m aider_gitea --aider-model gemini-pro
 # Specify custom repository and owner
-python -m aider_gitea --owner myorg --repo myproject
+python -m aider_gitea --owner myorg --repo myproject --aider-model claude-3-sonnet
 # Use a custom Gitea URL
-python -m aider_gitea --gitea-url https://gitea.example.com
+python -m aider_gitea --gitea-url https://gitea.example.com --aider-model gpt-4
 # Specify a different base branch
-python -m aider_gitea --base-branch develop
+python -m aider_gitea --base-branch develop --aider-model claude-3-haiku
 ```
+### AI Assistant Selection
+The tool automatically routes to the appropriate AI assistant based on the model name:
+**Claude Code Integration (Anthropic Models):**
+- Model names containing: `claude`, `anthropic`, `sonnet`, `haiku`, `opus`
+- Examples: `claude-3-sonnet`, `claude-3-haiku`, `anthropic/claude-3-opus`
+- Requires: `ANTHROPIC_API_KEY` environment variable
+**Aider Integration (All Other Models):**
+- Any model not matching Anthropic patterns
+- Examples: `gpt-4`, `ollama/llama3`, `gemini-pro`, `mistral-7b`
+- Requires: `LLM_API_KEY` environment variable
 ### Python API
 ```python
-from aider_gitea import solve_issue_in_repository
+from aider_gitea import solve_issue_in_repository, create_code_solver
 from pathlib import Path
-import argparse
-# Solve an issue programmatically
-args = argparse.Namespace(
+# Solve an issue programmatically with automatic AI assistant selection
+repository_config = RepositoryConfig(
     gitea_url="https://gitea.example.com",
     owner="myorg",
     repo="myproject",
     base_branch="main"
 )
+# Set the model to control which AI assistant is used
+import aider_gitea
+aider_gitea.CODE_MODEL = "claude-3-sonnet"  # Will use Claude Code
+# aider_gitea.CODE_MODEL = "gpt-4"  # Will use Aider
+code_solver = create_code_solver()  # Automatically selects based on model
 solve_issue_in_repository(
-    args,
+    repository_config,
     Path("/path/to/repo"),
     "issue-123-fix-bug",
     "Fix critical bug",
     "The application crashes when processing large files",
-    "123"
+    "123",
+    gitea_client,
+    code_solver
 )
 ```
 ### Environment Configuration
 The tool uses environment variables for sensitive information:
+**Required for all setups:**
 - `GITEA_TOKEN`: Your Gitea API token
-- `LLM_API_KEY`: API key for the language model used by Aider
+**For Aider (non-Anthropic models):**
+- `LLM_API_KEY`: API key for the language model (OpenAI, Ollama, etc.)
+**For Claude Code (Anthropic models):**
+- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude models
+### Model Examples
+**Anthropic Models (→ Claude Code):**
+```bash
+--aider-model claude-3-sonnet
+--aider-model claude-3-haiku
+--aider-model claude-3-opus
+--aider-model anthropic/claude-3-sonnet
+```
+**Non-Anthropic Models (→ Aider):**
+```bash
+--aider-model gpt-4
+--aider-model gpt-3.5-turbo
+--aider-model ollama/llama3
+--aider-model ollama/codellama
+--aider-model gemini-pro
+--aider-model mistral-7b
+```
 ```
 ## Dependencies

View File

@@ -1,15 +1,19 @@
 """Aider Gitea.
-A code automation tool that integrates Gitea with Aider to automatically solve issues.
+A code automation tool that integrates Gitea with AI assistants to automatically solve issues.
 This program monitors your [Gitea](https://about.gitea.com/) repository for issues with the 'aider' label.
 When such an issue is found, it:
 1. Creates a new branch.
-2. Invokes [Aider](https://aider.chat/) to solve the issue using a Large-Language Model.
+2. Invokes an AI assistant (Aider or Claude Code) to solve the issue using a Large-Language Model.
 3. Runs tests and code quality checks.
 4. Creates a pull request with the solution.
+The tool automatically selects the appropriate AI assistant based on the specified model:
+- **Aider**: Used for non-Anthropic models (e.g., GPT, Ollama, Gemini)
+- **Claude Code**: Used for Anthropic models (e.g., Claude, Sonnet, Haiku, Opus)
 Inspired by [the AI workflows](https://github.com/oscoreio/ai-workflows/)
 project.
@@ -25,48 +29,109 @@ have the following permissions:
 ### Command Line
 ```bash
-# Run with default settings
-python -m aider_gitea
+# Run with default settings (uses Aider)
+python -m aider_gitea --aider-model gpt-4
+# Use Claude Code with Anthropic models
+python -m aider_gitea --aider-model claude-3-sonnet
+python -m aider_gitea --aider-model claude-3-haiku
+python -m aider_gitea --aider-model anthropic/claude-3-opus
+# Use Aider with various models
+python -m aider_gitea --aider-model gpt-4
+python -m aider_gitea --aider-model ollama/llama3
+python -m aider_gitea --aider-model gemini-pro
 # Specify custom repository and owner
-python -m aider_gitea --owner myorg --repo myproject
+python -m aider_gitea --owner myorg --repo myproject --aider-model claude-3-sonnet
 # Use a custom Gitea URL
-python -m aider_gitea --gitea-url https://gitea.example.com
+python -m aider_gitea --gitea-url https://gitea.example.com --aider-model gpt-4
 # Specify a different base branch
-python -m aider_gitea --base-branch develop
+python -m aider_gitea --base-branch develop --aider-model claude-3-haiku
 ```
+### AI Assistant Selection
+The tool automatically routes to the appropriate AI assistant based on the model name:
+**Claude Code Integration (Anthropic Models):**
+- Model names containing: `claude`, `anthropic`, `sonnet`, `haiku`, `opus`
+- Examples: `claude-3-sonnet`, `claude-3-haiku`, `anthropic/claude-3-opus`
+- Requires: `ANTHROPIC_API_KEY` environment variable
+**Aider Integration (All Other Models):**
+- Any model not matching Anthropic patterns
+- Examples: `gpt-4`, `ollama/llama3`, `gemini-pro`, `mistral-7b`
+- Requires: `LLM_API_KEY` environment variable
 ### Python API
 ```python
-from aider_gitea import solve_issue_in_repository
+from aider_gitea import solve_issue_in_repository, create_code_solver
 from pathlib import Path
-import argparse
-# Solve an issue programmatically
-args = argparse.Namespace(
+# Solve an issue programmatically with automatic AI assistant selection
+repository_config = RepositoryConfig(
     gitea_url="https://gitea.example.com",
     owner="myorg",
     repo="myproject",
     base_branch="main"
 )
+# Set the model to control which AI assistant is used
+import aider_gitea
+aider_gitea.CODE_MODEL = "claude-3-sonnet"  # Will use Claude Code
+# aider_gitea.CODE_MODEL = "gpt-4"  # Will use Aider
+code_solver = create_code_solver()  # Automatically selects based on model
 solve_issue_in_repository(
-    args,
+    repository_config,
     Path("/path/to/repo"),
     "issue-123-fix-bug",
     "Fix critical bug",
     "The application crashes when processing large files",
-    "123"
+    "123",
+    gitea_client,
+    code_solver
 )
 ```
 ### Environment Configuration
 The tool uses environment variables for sensitive information:
+**Required for all setups:**
 - `GITEA_TOKEN`: Your Gitea API token
-- `LLM_API_KEY`: API key for the language model used by Aider
+**For Aider (non-Anthropic models):**
+- `LLM_API_KEY`: API key for the language model (OpenAI, Ollama, etc.)
+**For Claude Code (Anthropic models):**
+- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude models
+### Model Examples
+**Anthropic Models (→ Claude Code):**
+```bash
+--aider-model claude-3-sonnet
+--aider-model claude-3-haiku
+--aider-model claude-3-opus
+--aider-model anthropic/claude-3-sonnet
+```
+**Non-Anthropic Models (→ Aider):**
+```bash
+--aider-model gpt-4
+--aider-model gpt-3.5-turbo
+--aider-model ollama/llama3
+--aider-model ollama/codellama
+--aider-model gemini-pro
+--aider-model mistral-7b
+```
 ```
 """
@@ -103,7 +168,11 @@ class RepositoryConfig:
 class IssueResolution:
     success: bool
     pull_request_url: str | None = None
-    pull_request_id: str | None = None
+    pull_request_id: int | None = None
+
+    def __post_init__(self):
+        assert self.pull_request_id is None or isinstance(self.pull_request_id, int)
+        assert self.pull_request_url is None or isinstance(self.pull_request_url, str)
 def generate_branch_name(issue_number: str, issue_title: str) -> str:
@@ -130,17 +199,21 @@ def bash_cmd(*commands: str) -> str:
 AIDER_TEST = bash_cmd(
+    'echo "Setting up virtual environment"',
     'virtualenv venv',
+    'echo "Activating virtual environment"',
     'source venv/bin/activate',
+    'echo "Installing package"',
     'pip install -e .',
+    'echo "Testing package"',
     'pytest test',
 )
 RUFF_FORMAT_AND_AUTO_FIX = bash_cmd(
-    'ruff format',
-    'ruff check --fix --ignore RUF022 --ignore PGH004',
-    'ruff format',
-    'ruff check --fix --ignore RUF022 --ignore PGH004',
+    'ruff format --silent',
+    'ruff check --fix --ignore RUF022 --ignore PGH004 --silent',
+    'ruff format --silent',
+    'ruff check --fix --ignore RUF022 --ignore PGH004 --silent',
 )
 AIDER_LINT = bash_cmd(
@@ -150,57 +223,212 @@ AIDER_LINT = bash_cmd(
 )
-LLM_MESSAGE_FORMAT = (
-    """{issue}\nDo not wait for explicit approval before working on code changes."""
-)
-# CODE_MODEL = 'ollama/gemma3:4b'
-CODE_MODEL = 'o4-mini'
-EVALUATOR_MODEL = 'ollama/gemma3:27b'
+LLM_MESSAGE_FORMAT = """{issue}
+Go ahead with the changes you deem appropriate without waiting for explicit approval.
+Do not draft changes beforehand; produce changes only once prompted for a specific file.
+"""
+CLAUDE_CODE_MESSAGE_FORMAT = """{issue}
+Please fix this issue by making the necessary code changes. Follow these guidelines:
+1. Run tests after making changes to ensure they pass
+2. Follow existing code style and conventions
+3. Make minimal, focused changes to address the issue
+4. Commit your changes with a descriptive message
+The test command for this project is: {test_command}
+The lint command for this project is: {lint_command}
+"""
+CODE_MODEL = None
+EVALUATOR_MODEL = None
+MODEL_EDIT_MODES = {
+    'ollama/qwen3:32b': 'diff',
+    'ollama/hf.co/unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M': 'diff',
+}
-def create_aider_command(issue: str) -> list[str]:
-    l = [
-        'aider',
-        '--chat-language',
-        'english',
-        '--no-stream',
-        '--no-analytics',
-        #'--no-check-update',
-        '--test-cmd',
-        AIDER_TEST,
-        '--lint-cmd',
-        AIDER_LINT,
-        '--auto-test',
-        '--no-auto-lint',
-        '--yes',
-    ]
-    for key in secrets.llm_api_keys():
-        l += ['--api-key', key]
-    if False:
-        l.append('--read')
-        l.append('CONVENTIONS.md')
-    if True:
-        l.append('--cache-prompts')
-    if False:
-        l.append('--architect')
-    if CODE_MODEL:
-        l.append('--model')
-        l.append(CODE_MODEL)
-        if CODE_MODEL.startswith('ollama/'):
-            l.append('--auto-lint')
-    if True:
-        l.append('--message')
-        l.append(LLM_MESSAGE_FORMAT.format(issue=issue))
-    return l
+def run_post_solver_cleanup(repository_path: Path, solver_name: str) -> None:
+    """Run standard code quality fixes and commit changes after a code solver.
+
+    Args:
+        repository_path: Path to the repository
+        solver_name: Name of the solver (for commit message)
+    """
+    # Auto-fix standard code quality stuff
+    run_cmd(['bash', '-c', RUFF_FORMAT_AND_AUTO_FIX], repository_path, check=False)
+    run_cmd(['git', 'add', '.'], repository_path)
+    run_cmd(
+        ['git', 'commit', '-m', f'Ruff after {solver_name}'],
+        repository_path,
+        check=False,
+    )
+
+
+@dataclasses.dataclass(frozen=True)
+class CodeSolverStrategy:
+    """Base interface for code solving strategies."""
+
+    def solve_issue_round(self, repository_path: Path, issue_content: str) -> bool:
+        """Attempt to solve an issue in a single round.
+
+        Args:
+            repository_path: Path to the repository
+            issue_content: The issue description to solve
+
+        Returns:
+            True if the solution round completed without crashing, False otherwise
+        """
+        raise NotImplementedError
+
+
+@dataclasses.dataclass(frozen=True)
+class AiderCodeSolver(CodeSolverStrategy):
+    """Code solver that uses Aider for issue resolution."""
+
+    def _create_aider_command(self, issue: str) -> list[str]:
+        """Create the Aider command with all necessary flags."""
+        l = [
+            'aider',
+            '--chat-language',
+            'english',
+            '--no-stream',
+            '--no-analytics',
+            '--test-cmd',
+            AIDER_TEST,
+            '--lint-cmd',
+            AIDER_LINT,
+            '--auto-test',
+            '--no-auto-lint',
+            '--yes',
+            '--disable-playwright',
+            '--timeout',
+            str(10_000),
+        ]
+        if edit_format := MODEL_EDIT_MODES.get(CODE_MODEL):
+            l.append('--edit-format')
+            l.append(edit_format)
+        del edit_format
+        for key in secrets.llm_api_keys():
+            l += ['--api-key', key]
+        if False:
+            l.append('--read')
+            l.append('CONVENTIONS.md')
+        if True:
+            l.append('--cache-prompts')
+        if False:
+            l.append('--architect')
+        if CODE_MODEL:
+            l.append('--model')
+            l.append(CODE_MODEL)
+            if CODE_MODEL.startswith('ollama/') and False:
+                l.append('--auto-lint')
+        if True:
+            l.append('--message')
+            l.append(LLM_MESSAGE_FORMAT.format(issue=issue))
+        return l
+
+    def solve_issue_round(self, repository_path: Path, issue_content: str) -> bool:
+        """Solve an issue using Aider."""
+        # Primary Aider command
+        aider_command = self._create_aider_command(issue_content)
+        aider_did_not_crash = run_cmd(
+            aider_command,
+            cwd=repository_path,
+            check=False,
+        )
+        if not aider_did_not_crash:
+            return aider_did_not_crash
+
+        # Run post-solver cleanup
+        run_post_solver_cleanup(repository_path, 'aider')
+        return True
+
+
+@dataclasses.dataclass(frozen=True)
+class ClaudeCodeSolver(CodeSolverStrategy):
+    """Code solver that uses Claude Code for issue resolution."""
+
+    def _create_claude_command(self, issue: str) -> list[str]:
+        """Create the Claude Code command for programmatic use."""
+        cmd = [
+            'claude',
+            '-p',
+            '--output-format',
+            'stream-json',
+            #'--max-turns', '100',
+            '--debug',
+            '--verbose',
+            '--dangerously-skip-permissions',
+        ]
+        if CODE_MODEL:
+            cmd.extend(['--model', CODE_MODEL])
+        cmd.append(issue)
+        return cmd
+
+    def solve_issue_round(self, repository_path: Path, issue_content: str) -> bool:
+        """Solve an issue using Claude Code."""
+        # Prepare the issue prompt for Claude Code
+        enhanced_issue = CLAUDE_CODE_MESSAGE_FORMAT.format(
+            issue=issue_content,
+            test_command=AIDER_TEST,
+            lint_command=AIDER_LINT,
+        )
+
+        # Create Claude Code command
+        claude_command = self._create_claude_command(enhanced_issue)
+
+        # Run Claude Code
+        run_cmd(
+            claude_command,
+            cwd=repository_path,
+            check=False,
+        )
+
+        # Run post-solver cleanup
+        run_post_solver_cleanup(repository_path, 'Claude Code')
+        return True
+
+
+def is_anthropic_model(model: str) -> bool:
+    """Check if the model string indicates an Anthropic/Claude model."""
+    if not model:
+        return False
+
+    anthropic_indicators = [
+        'claude',
+        'anthropic',
+        'sonnet',
+        'haiku',
+        'opus',
+    ]
+    model_lower = model.lower()
+    return any(indicator in model_lower for indicator in anthropic_indicators)
+
+
+def create_code_solver() -> CodeSolverStrategy:
+    """Create the appropriate code solver based on the configured model."""
+    if is_anthropic_model(CODE_MODEL):
+        return ClaudeCodeSolver()
+    else:
+        return AiderCodeSolver()
 def get_commit_messages(cwd: Path, base_branch: str, current_branch: str) -> list[str]:
@@ -283,8 +511,8 @@ def push_changes(
     # Extract PR number and URL if available
     return IssueResolution(
         True,
-        str(pr_response.get('number')),
         pr_response.get('html_url'),
+        int(pr_response.get('number')),
     )
@@ -322,29 +550,19 @@ def run_cmd(cmd: list[str], cwd: Path | None = None, check=True) -> bool:
     return result.returncode == 0
-def issue_solution_round(repository_path, issue_content):
-    # Primary Aider command
-    aider_command = create_aider_command(issue_content)
-    print(aider_command)
-    aider_did_not_crash = run_cmd(
-        aider_command,
-        repository_path,
-        check=False,
-    )
-    if not aider_did_not_crash:
-        return aider_did_not_crash
-    # Auto-fix standard code quality stuff after aider
-    run_cmd(['bash', '-c', RUFF_FORMAT_AND_AUTO_FIX], repository_path, check=False)
-    run_cmd(['git', 'add', '.'], repository_path)
-    run_cmd(['git', 'commit', '-m', 'Ruff after aider'], repository_path, check=False)
-    return True
+def remove_thinking_tokens(text: str) -> str:
+    text = re.sub(r'^\s*<think>.*?</think>', '', text, flags=re.MULTILINE | re.DOTALL)
+    text = text.strip()
+    return text
+
+
+assert remove_thinking_tokens('<think>Hello</think>\nWorld\n') == 'World'
+assert remove_thinking_tokens('<think>\nHello\n</think>\nWorld\n') == 'World'
+assert remove_thinking_tokens('\n<think>\nHello\n</think>\nWorld\n') == 'World'
 def run_ollama(cwd: Path, texts: list[str]) -> str:
     cmd = ['ollama', 'run', EVALUATOR_MODEL.removeprefix('ollama/')]
-    print(cmd)
     process = subprocess.Popen(
         cmd,
         cwd=cwd,
@@ -354,14 +572,14 @@ def run_ollama(cwd: Path, texts: list[str]) -> str:
         text=True,
     )
     stdout, stderr = process.communicate('\n'.join(texts))
-    print(stdout)
+    stdout = remove_thinking_tokens(stdout)
     return stdout
 def parse_yes_no_answer(text: str) -> bool | None:
-    text = text.lower().strip()
-    words = text.split('\n \t.,?-')
-    print(words)
+    interword = '\n \t.,?-'
+    text = text.lower().strip(interword)
+    words = text.split(interword)
     if words[-1] in {'yes', 'agree'}:
         return True
     if words[-1] in {'no', 'disagree'}:
@@ -369,6 +587,10 @@ def parse_yes_no_answer(text: str) -> bool | None:
     return None
+
+
+assert parse_yes_no_answer('Yes.') == True
+assert parse_yes_no_answer('no') == False
 def run_ollama_and_get_yes_or_no(cwd, initial_texts: list[str]) -> bool:
     texts = list(initial_texts)
     texts.append('Think through your answer.')
@@ -383,6 +605,9 @@ def run_ollama_and_get_yes_or_no(cwd, initial_texts: list[str]) -> bool:
 def verify_solution(repository_path: Path, issue_content: str) -> bool:
+    if not EVALUATOR_MODEL:
+        return True
+
     summary = run_ollama(
         repository_path,
         [
@@ -421,6 +646,7 @@ def solve_issue_in_repository(
     issue_description: str,
     issue_number: str,
     gitea_client,
+    code_solver: CodeSolverStrategy,
 ) -> IssueResolution:
     logger.info('### %s #####', issue_title)
@@ -430,28 +656,31 @@ def solve_issue_in_repository(
     run_cmd(['git', 'checkout', repository_config.base_branch], repository_path)
     run_cmd(['git', 'checkout', '-b', branch_name], repository_path)
-    # Run initial ruff pass before aider
+    # Run initial ruff pass before code solver
     run_cmd(['bash', '-c', RUFF_FORMAT_AND_AUTO_FIX], repository_path, check=False)
     run_cmd(['git', 'add', '.'], repository_path)
     run_cmd(['git', 'commit', '-m', 'Initial ruff pass'], repository_path, check=False)
-    # Run aider
+    # Run code solver
    issue_content = f'# {issue_title}\n{issue_description}'
     while True:
-        # Save the commit hash after ruff but before aider
+        # Save the commit hash after ruff but before code solver
         pre_aider_commit = get_head_commit_hash(repository_path)
-        # Run aider
-        aider_did_not_crash = issue_solution_round(repository_path, issue_content)
-        if not aider_did_not_crash:
-            logger.error('Aider invocation failed for issue #%s', issue_number)
+        # Run code solver
+        solver_did_not_crash = code_solver.solve_issue_round(
+            repository_path,
+            issue_content,
+        )
+        if not solver_did_not_crash:
+            logger.error('Code solver invocation failed for issue #%s', issue_number)
             return IssueResolution(False)
-        # Check if aider made any changes beyond the initial ruff pass
+        # Check if solver made any changes beyond the initial ruff pass
         if not has_commits_on_branch(repository_path, pre_aider_commit, 'HEAD'):
             logger.error(
-                'Aider did not make any changes beyond the initial ruff pass for issue #%s',
+                'Code solver did not make any changes beyond the initial ruff pass for issue #%s',
                 issue_number,
             )
             return IssueResolution(False)
@@ -502,31 +731,20 @@ def solve_issues_in_repository(
         title = issue.get('title', f'Issue {issue_number}')
         if seen_issues_db.has_seen(issue_url):
             logger.info('Skipping already processed issue #%s: %s', issue_number, title)
-            continue
-
-        branch_name = generate_branch_name(issue_number, title)
-        with tempfile.TemporaryDirectory() as repository_path:
-            issue_resolution = solve_issue_in_repository(
-                repository_config,
-                Path(repository_path),
-                branch_name,
-                title,
-                issue_description,
-                issue_number,
-                client,
-            )
-
-            if issue_resolution.success:
-                # Handle unresolved pull request comments
-                handle_pr_comments(
-                    repository_config,
-                    issue_resolution.pull_request_id,
-                    branch_name,
-                    Path(repository_path),
-                    client,
-                    seen_issues_db,
-                    issue_url,
-                )
-
+        else:
+            branch_name = generate_branch_name(issue_number, title)
+            code_solver = create_code_solver()
+            with tempfile.TemporaryDirectory() as repository_path:
+                issue_resolution = solve_issue_in_repository(
+                    repository_config,
+                    Path(repository_path),
+                    branch_name,
+                    title,
+                    issue_description,
+                    issue_number,
+                    client,
+                    code_solver,
+                )
                 seen_issues_db.mark_as_seen(issue_url, str(issue_number))
                 seen_issues_db.update_pr_info(
                     issue_url,
@@ -539,17 +757,42 @@
                     issue_number,
                 )
+                # TODO: PR comment handling disabled for now due to missing functionality
+                if False:
+                    # Handle unresolved pull request comments
+                    handle_pr_comments(
+                        repository_config,
+                        issue_resolution.pull_request_id,
+                        branch_name,
+                        Path(repository_path),
+                        client,
+                        seen_issues_db,
+                        issue_url,
+                        code_solver,
+                    )
+
+                # Handle failing pipelines
+                handle_failing_pipelines(
+                    repository_config,
+                    issue_resolution.pull_request_id,
+                    branch_name,
+                    Path(repository_path),
+                    client,
+                    code_solver,
+                )
 def handle_pr_comments(
     repository_config,
-    pr_number,
+    pr_number: int,
     branch_name,
     repository_path,
     client,
     seen_issues_db,
     issue_url,
+    code_solver: CodeSolverStrategy,
 ):
-    """Fetch unresolved PR comments and resolve them via aider."""
+    """Fetch unresolved PR comments and resolve them via code solver."""
     comments = client.get_pull_request_comments(
         repository_config.owner,
         repository_config.repo,
@@ -571,8 +814,8 @@ def handle_pr_comments(
             f'Resolve the following reviewer comment:\n{body}\n\n'
             f'File: {path}\n\nContext:\n{context}'
         )
-        # invoke aider on the comment context
-        issue_solution_round(repository_path, issue)
+        # invoke code solver on the comment context
+        code_solver.solve_issue_round(repository_path, issue)
        # commit and push changes for this comment
         run_cmd(['git', 'add', path], repository_path, check=False)
         run_cmd(
@@ -581,3 +824,43 @@ def handle_pr_comments(
             check=False,
         )
         run_cmd(['git', 'push', 'origin', branch_name], repository_path, check=False)
+def handle_failing_pipelines(
+    repository_config: RepositoryConfig,
+    pr_number: str,
+    branch_name: str,
+    repository_path: Path,
+    client,
+    code_solver: CodeSolverStrategy,
+) -> None:
+    """Fetch failing pipelines for the given PR and resolve them via code solver."""
+    while True:
+        failed_runs = client.get_failed_pipelines(
+            repository_config.owner,
+            repository_config.repo,
+            pr_number,
+        )
+        if not failed_runs:
+            break
+
+        for run_id in failed_runs:
+            log = client.get_pipeline_log(
+                repository_config.owner,
+                repository_config.repo,
+                run_id,
+            )
+            lines = log.strip().split('\n')
+            context = '\n'.join(lines[-100:])
+            issue = f'Resolve the following failing pipeline run {run_id}:\n\n{context}'
+            code_solver.solve_issue_round(repository_path, issue)
+            run_cmd(['git', 'add', '.'], repository_path, check=False)
+            run_cmd(
+                ['git', 'commit', '-m', f'Resolve pipeline {run_id}'],
+                repository_path,
+                check=False,
+            )
+            run_cmd(
+                ['git', 'push', 'origin', branch_name],
+                repository_path,
+                check=False,
+            )

View File

@@ -42,9 +42,19 @@ def parse_args():
     parser.add_argument(
         '--interval',
         type=int,
-        default=300,
+        default=30,
         help='Interval in seconds between checks in daemon mode (default: 300)',
     )
+    parser.add_argument(
+        '--aider-model',
+        help='Model to use for generating code (overrides default)',
+        required=True,
+    )
+    parser.add_argument(
+        '--evaluator-model',
+        help='Model to use for evaluating code (overrides default)',
+        default=None,
+    )
     return parser.parse_args()
@@ -52,6 +62,12 @@ def main():
     logging.basicConfig(level='INFO')
     args = parse_args()
+
+    # Override default models if provided
+    import aider_gitea as core
+
+    core.CODE_MODEL = args.aider_model
+    core.EVALUATOR_MODEL = args.evaluator_model
     seen_issues_db = SeenIssuesDB()
     client = GiteaClient(args.gitea_url, secrets.gitea_token())

View File

@@ -1 +1 @@
-__version__ = '0.1.6'
+__version__ = '0.1.10'

View File

@@ -167,9 +167,11 @@ class GiteaClient:
         response = self.session.post(url, json=json_data)
         # If a pull request for this head/base already exists, return it instead of crashing
-        if response.status_code == 422:
+        if response.status_code == 409:
             logger.warning(
-                'Pull request already exists for head %s and base %s', head, base,
+                'Pull request already exists for head %s and base %s',
+                head,
+                base,
             )
             prs = self.get_pull_requests(owner, repo)
             for pr in prs:
@@ -183,19 +185,28 @@ class GiteaClient:
         response.raise_for_status()
         return response.json()
-    def get_pull_request_comments(
-        self,
-        owner: str,
-        repo: str,
-        pr_number: str,
-    ) -> list[dict]:
-        """
-        Fetch comments for a pull request.
-        """
-        url = f'{self.gitea_url}/repos/{owner}/{repo}/pulls/{pr_number}/comments'
+    def get_failed_pipelines(self, owner: str, repo: str, pr_number: str) -> list[int]:
+        """Fetch pipeline runs for a PR and return IDs of failed runs."""
+        url = f'{self.gitea_url}/repos/{owner}/{repo}/actions/runs'
         response = self.session.get(url)
         response.raise_for_status()
-        return response.json()
+        runs = response.json().get('workflow_runs', [])
+        failed = []
+        for run in runs:
+            if any(
+                pr.get('number') == int(pr_number)
+                for pr in run.get('pull_requests', [])
+            ):
+                if run.get('conclusion') not in ('success',):
+                    failed.append(run.get('id'))
+        return failed
+
+    def get_pipeline_log(self, owner: str, repo: str, run_id: int) -> str:
+        """Download the logs for a pipeline run."""
+        url = f'{self.gitea_url}/repos/{owner}/{repo}/actions/runs/{run_id}/logs'
+        response = self.session.get(url)
+        response.raise_for_status()
+        return response.text
     def get_pull_requests(
         self,

View File

@@ -9,3 +9,7 @@ def llm_api_keys() -> list[str]:
 def gitea_token() -> str:
     return SECRETS.load_or_fail('GITEA_TOKEN')
+
+
+def anthropic_api_key() -> str:
+    return SECRETS.load_or_fail('ANTHROPIC_API_KEY')

setup.py
View File

@@ -1,10 +1,9 @@
-# WARNING
-#
-# THIS IS AN AUTOGENERATED FILE.
-#
-# MANUAL CHANGES CAN AND WILL BE OVERWRITTEN.
+# WARNING!
+# THIS IS AN AUTOGENERATED FILE!
+# MANUAL CHANGES CAN AND WILL BE OVERWRITTEN!
 import re
+from pathlib import Path
 from setuptools import setup
@@ -13,16 +12,23 @@ PACKAGE_NAME = 'aider_gitea'
 PACKAGE_DESCRIPTION = """
 Aider Gitea.
-A code automation tool that integrates Gitea with Aider to automatically solve issues.
+A code automation tool that integrates Gitea with AI assistants to automatically solve issues.
 This program monitors your [Gitea](https://about.gitea.com/) repository for issues with the 'aider' label.
 When such an issue is found, it:
 1. Creates a new branch.
-2. Invokes [Aider](https://aider.chat/) to solve the issue using a Large-Language Model.
+2. Invokes an AI assistant (Aider or Claude Code) to solve the issue using a Large-Language Model.
 3. Runs tests and code quality checks.
 4. Creates a pull request with the solution.
+The tool automatically selects the appropriate AI assistant based on the specified model:
+- **Aider**: Used for non-Anthropic models (e.g., GPT, Ollama, Gemini)
+- **Claude Code**: Used for Anthropic models (e.g., Claude, Sonnet, Haiku, Opus)
+Inspired by [the AI workflows](https://github.com/oscoreio/ai-workflows/)
+project.
 ## Usage
 An application token must be supplied for the `gitea_token` secret. This must
@@ -35,63 +41,138 @@ have the following permissions:
 ### Command Line
 ```bash
-# Run with default settings
-python -m aider_gitea
+# Run with default settings (uses Aider)
+python -m aider_gitea --aider-model gpt-4
+# Use Claude Code with Anthropic models
+python -m aider_gitea --aider-model claude-3-sonnet
+python -m aider_gitea --aider-model claude-3-haiku
+python -m aider_gitea --aider-model anthropic/claude-3-opus
+# Use Aider with various models
+python -m aider_gitea --aider-model gpt-4
+python -m aider_gitea --aider-model ollama/llama3
+python -m aider_gitea --aider-model gemini-pro
 # Specify custom repository and owner
-python -m aider_gitea --owner myorg --repo myproject
+python -m aider_gitea --owner myorg --repo myproject --aider-model claude-3-sonnet
 # Use a custom Gitea URL
-python -m aider_gitea --gitea-url https://gitea.example.com
+python -m aider_gitea --gitea-url https://gitea.example.com --aider-model gpt-4
 # Specify a different base branch
-python -m aider_gitea --base-branch develop
+python -m aider_gitea --base-branch develop --aider-model claude-3-haiku
 ```
+### AI Assistant Selection
+The tool automatically routes to the appropriate AI assistant based on the model name:
+**Claude Code Integration (Anthropic Models):**
+- Model names containing: `claude`, `anthropic`, `sonnet`, `haiku`, `opus`
+- Examples: `claude-3-sonnet`, `claude-3-haiku`, `anthropic/claude-3-opus`
+- Requires: `ANTHROPIC_API_KEY` environment variable
+**Aider Integration (All Other Models):**
+- Any model not matching Anthropic patterns
+- Examples: `gpt-4`, `ollama/llama3`, `gemini-pro`, `mistral-7b`
+- Requires: `LLM_API_KEY` environment variable
 ### Python API
 ```python
-from aider_gitea import solve_issue_in_repository
+from aider_gitea import solve_issue_in_repository, create_code_solver
 from pathlib import Path
-import argparse
-# Solve an issue programmatically
-args = argparse.Namespace(
+# Solve an issue programmatically with automatic AI assistant selection
+repository_config = RepositoryConfig(
     gitea_url="https://gitea.example.com",
     owner="myorg",
     repo="myproject",
     base_branch="main"
 )
+# Set the model to control which AI assistant is used
+import aider_gitea
+aider_gitea.CODE_MODEL = "claude-3-sonnet"  # Will use Claude Code
+# aider_gitea.CODE_MODEL = "gpt-4"  # Will use Aider
+code_solver = create_code_solver()  # Automatically selects based on model
 solve_issue_in_repository(
-    args,
+    repository_config,
     Path("/path/to/repo"),
     "issue-123-fix-bug",
     "Fix critical bug",
     "The application crashes when processing large files",
-    "123"
+    "123",
+    gitea_client,
+    code_solver
 )
 ```
 ### Environment Configuration
 The tool uses environment variables for sensitive information:
+**Required for all setups:**
 - `GITEA_TOKEN`: Your Gitea API token
-- `LLM_API_KEY`: API key for the language model used by Aider
+**For Aider (non-Anthropic models):**
+- `LLM_API_KEY`: API key for the language model (OpenAI, Ollama, etc.)
+**For Claude Code (Anthropic models):**
+- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude models
+### Model Examples
+**Anthropic Models (→ Claude Code):**
+```bash
+--aider-model claude-3-sonnet
+--aider-model claude-3-haiku
+--aider-model claude-3-opus
+--aider-model anthropic/claude-3-sonnet
+```
+**Non-Anthropic Models (→ Aider):**
+```bash
+--aider-model gpt-4
+--aider-model gpt-3.5-turbo
+--aider-model ollama/llama3
+--aider-model ollama/codellama
+--aider-model gemini-pro
+--aider-model mistral-7b
+```
 ```
 """.strip()
 PACKAGE_DESCRIPTION_SHORT = """
-A code automation tool that integrates Gitea with Aider to automatically solve issues.""".strip()
+A code automation tool that integrates Gitea with AI assistants to automatically solve issues.""".strip()
 def parse_version_file(text: str) -> str:
-    match = re.match(r'^__version__\s*=\s*(["\'])([\d\.]+)\1$', text)
+    text = re.sub('^#.*', '', text, flags=re.MULTILINE)
+    match = re.match(r'^\s*__version__\s*=\s*(["\'])([\d\.]+)\1$', text)
     if match is None:
         msg = 'Malformed _version.py file!'
         raise Exception(msg)
     return match.group(2)
+
+
+def find_python_packages() -> list[str]:
+    """
+    Find all python packages. (Directories containing __init__.py files.)
+    """
+    root_path = Path(PACKAGE_NAME)
+    packages: set[str] = set([PACKAGE_NAME])
+
+    # Search recursively
+    for init_file in root_path.rglob('__init__.py'):
+        packages.add(str(init_file.parent).replace('/', '.'))
+    return sorted(packages)
 with open(PACKAGE_NAME + '/_version.py') as f:
     version = parse_version_file(f.read())
@@ -111,7 +192,7 @@ setup(
     author='Jon Michael Aanes',
     author_email='jonjmaa@gmail.com',
     url='https://gitfub.space/Jmaa/' + PACKAGE_NAME,
-    packages=[PACKAGE_NAME],
+    packages=find_python_packages(),
     install_requires=REQUIREMENTS_MAIN,
     extras_require={
         'test': REQUIREMENTS_TEST,

View File

@@ -0,0 +1,122 @@
import pytest
from aider_gitea import (
AIDER_LINT,
AIDER_TEST,
CLAUDE_CODE_MESSAGE_FORMAT,
AiderCodeSolver,
ClaudeCodeSolver,
create_code_solver,
is_anthropic_model,
)
class TestClaudeCodeIntegration:
"""Test Claude Code integration and model routing logic."""
def test_is_anthropic_model_detection(self):
"""Test that Anthropic models are correctly detected."""
# Anthropic models should return True
assert is_anthropic_model('claude-3-sonnet')
assert is_anthropic_model('claude-3-haiku')
assert is_anthropic_model('claude-3-opus')
assert is_anthropic_model('anthropic/claude-3-sonnet')
assert is_anthropic_model('Claude-3-Sonnet') # Case insensitive
assert is_anthropic_model('ANTHROPIC/CLAUDE')
assert is_anthropic_model('some-sonnet-model')
assert is_anthropic_model('haiku-variant')
# Non-Anthropic models should return False
assert not is_anthropic_model('gpt-4')
assert not is_anthropic_model('gpt-3.5-turbo')
assert not is_anthropic_model('ollama/llama')
assert not is_anthropic_model('gemini-pro')
assert not is_anthropic_model('mistral-7b')
assert not is_anthropic_model('')
assert not is_anthropic_model(None)
def test_create_code_solver_routing(self, monkeypatch):
"""Test that the correct solver is created based on model."""
import aider_gitea
# Test Anthropic model routing
monkeypatch.setattr(aider_gitea, 'CODE_MODEL', 'claude-3-sonnet')
solver = create_code_solver()
assert isinstance(solver, ClaudeCodeSolver)
# Test non-Anthropic model routing
monkeypatch.setattr(aider_gitea, 'CODE_MODEL', 'gpt-4')
solver = create_code_solver()
assert isinstance(solver, AiderCodeSolver)
# Test None model routing (should default to Aider)
monkeypatch.setattr(aider_gitea, 'CODE_MODEL', None)
solver = create_code_solver()
assert isinstance(solver, AiderCodeSolver)
def test_claude_code_solver_command_creation(self):
"""Test that Claude Code commands are created correctly."""
import aider_gitea
solver = ClaudeCodeSolver()
issue = 'Fix the bug in the code'
# Test without model
with pytest.MonkeyPatch().context() as m:
m.setattr(aider_gitea, 'CODE_MODEL', None)
cmd = solver._create_claude_command(issue)
expected = [
'claude',
'-p',
'--output-format',
'json',
'--max-turns',
'10',
issue,
]
assert cmd == expected
# Test with model
with pytest.MonkeyPatch().context() as m:
m.setattr(aider_gitea, 'CODE_MODEL', 'claude-3-sonnet')
cmd = solver._create_claude_command(issue)
expected = [
'claude',
'-p',
'--output-format',
'json',
'--max-turns',
'10',
'--model',
'claude-3-sonnet',
issue,
]
assert cmd == expected
def test_claude_code_message_format(self):
"""Test that Claude Code message format works correctly."""
issue_content = 'Fix the authentication bug'
formatted_message = CLAUDE_CODE_MESSAGE_FORMAT.format(
issue=issue_content,
test_command=AIDER_TEST,
lint_command=AIDER_LINT,
)
# Verify the issue content is included
assert issue_content in formatted_message
# Verify the test and lint commands are included
assert AIDER_TEST in formatted_message
assert AIDER_LINT in formatted_message
# Verify the guidelines are present
assert 'Run tests after making changes' in formatted_message
assert 'Follow existing code style' in formatted_message
assert 'Make minimal, focused changes' in formatted_message
assert 'Commit your changes' in formatted_message
# Verify the structure contains placeholders that got replaced
assert '{issue}' not in formatted_message
assert '{test_command}' not in formatted_message
assert '{lint_command}' not in formatted_message