Refactoroscope includes AI-powered code quality suggestions that provide intelligent insights on code readability, performance, potential bugs, and security issues.
The AI analysis feature sends your code to one of the configured AI providers and returns suggestions for improvement. Analysis runs on a file-by-file basis, with each file analyzed separately by the model.
Refactoroscope supports multiple AI providers:

- OpenAI
- Anthropic
- Google
- Ollama (local, no API key required)
- Qwen (via a local Ollama instance)
To analyze your project with AI:

```bash
uv run refactoroscope ai /path/to/your/project
```

To analyze with a specific AI provider:

```bash
uv run refactoroscope ai /path/to/your/project --provider openai
```

To enable AI suggestions during regular analysis:

```bash
uv run refactoroscope analyze . --ai
```

To enable AI suggestions during watching:

```bash
uv run refactoroscope watch . --ai
```
AI features are configured through the `.refactoroscope.yml` configuration file:
```yaml
# AI configuration
ai:
  # Enable AI-powered code suggestions
  enable_ai_suggestions: true

  # Maximum file size to analyze with AI (in bytes)
  max_file_size: 50000

  # Whether to cache AI analysis results
  cache_results: true

  # Cache time-to-live in seconds
  cache_ttl: 3600

  # Preference order for AI providers
  provider_preferences:
    - "openai"
    - "anthropic"
    - "google"
    - "ollama"
    - "qwen"

  # Provider configurations
  providers:
    openai:
      # API key (can also be set via the OPENAI_API_KEY environment variable)
      # api_key: "your-openai-api-key"
      # Model to use
      model: "gpt-3.5-turbo"
      # Whether this provider is enabled
      enabled: true

    anthropic:
      # API key (can also be set via the ANTHROPIC_API_KEY environment variable)
      # api_key: "your-anthropic-api-key"
      # Model to use
      model: "claude-3-haiku-20240307"
      # Whether this provider is enabled
      enabled: true

    google:
      # API key (can also be set via the GOOGLE_API_KEY environment variable)
      # api_key: "your-google-api-key"
      # Model to use
      model: "gemini-pro"
      # Whether this provider is enabled
      enabled: true

    ollama:
      # Ollama doesn't require API keys
      # Model to use
      model: "llama2"
      # Base URL for Ollama (default is localhost)
      base_url: "http://localhost:11434"
      # Whether this provider is enabled
      enabled: true

    qwen:
      # Qwen doesn't require API keys when using local Ollama
      # Model to use
      model: "qwen2"
      # Base URL for Qwen (default is localhost)
      base_url: "http://localhost:11434"
      # Whether this provider is enabled
      enabled: true
```
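If you only want local analysis, a minimal configuration might look like the following sketch. It reuses only keys shown above and assumes a local Ollama instance is serving the `llama2` model:

```yaml
# Minimal .refactoroscope.yml: local-only AI analysis via Ollama
ai:
  enable_ai_suggestions: true
  provider_preferences:
    - "ollama"
  providers:
    ollama:
      model: "llama2"
      base_url: "http://localhost:11434"
      enabled: true
```

With no cloud providers enabled, no API keys are needed and no code leaves your machine.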
For cloud-based providers, you can set API keys via environment variables:

- `OPENAI_API_KEY` for OpenAI
- `ANTHROPIC_API_KEY` for Anthropic
- `GOOGLE_API_KEY` for Google
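For example, in a Unix-like shell you might export a key before running the analysis (the key value below is a placeholder):

```bash
# Placeholder value -- substitute your real OpenAI API key
export OPENAI_API_KEY="sk-..."

# The key is read from the environment, so it doesn't need to be
# stored in .refactoroscope.yml
uv run refactoroscope ai /path/to/your/project --provider openai
```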
AI analysis results are integrated into the regular analysis output and include suggestions covering code readability, performance, potential bugs, and security issues.
Each AI suggestion includes a confidence level and a detailed explanation.