# Integration Guide
## LLM providers

This station supports three LLM providers with automatic fallback:
| Provider | Type | Cost | When used |
|---|---|---|---|
| Studio Ollama | Local | Free | Primary — always preferred |
| DeepSeek Chat | Cloud API | $0.14/M input | Fallback when Studio offline |
| Claude Sonnet 4.5 | Cloud API | $3/M input | Last resort |
### Adding a new provider

Edit the `LLM_CONFIG` dict and the `fallback_llm()` function in `pipeline.py`, then add the provider's API key to `.env`.
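A minimal sketch of what registering a provider might look like. The `LLM_CONFIG` and `fallback_llm()` names come from `pipeline.py`, but the dict's fields, the provider keys, and the selection logic shown here are illustrative assumptions, not the repo's actual shapes:

```python
import os

# Hypothetical shape of LLM_CONFIG -- real field names may differ.
# Order matters: earlier entries are preferred.
LLM_CONFIG = {
    "ollama":   {"base_url": "http://studio:11434", "model": "llama3"},
    "deepseek": {"api_key_env": "DEEPSEEK_API_KEY", "model": "deepseek-chat"},
    "claude":   {"api_key_env": "ANTHROPIC_API_KEY", "model": "claude-sonnet-4-5"},
    # New provider: add an entry here and put its key in .env
    "mistral":  {"api_key_env": "MISTRAL_API_KEY", "model": "mistral-large"},
}

def fallback_llm():
    """Return the first usable provider in preference order.

    Simplified: only checks that the API key is set. The real function
    presumably also probes the Studio Ollama endpoint for reachability.
    """
    for name, cfg in LLM_CONFIG.items():
        key_env = cfg.get("api_key_env")
        if key_env is None or os.environ.get(key_env):
            return name, cfg
    raise RuntimeError("no LLM provider available")
```

With no cloud keys set, this sketch still selects `ollama`, since the local provider has no key requirement.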
## Extending the pipeline
The pipeline lives in a single file, `pipeline.py`, with clear sections:

- **Git Operations** — clone, branch, commit, push, PR
- **LLM** — provider selection + fallback
- **Telegram** — notification sending
- **Pipeline Flow** — the main CrewAI flow

To add a new stage, insert it into `run_pipeline()`.
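As a sketch of the insertion point, assuming stages are plain functions called in sequence (the real `run_pipeline()` is a CrewAI flow, so its actual structure will differ; all stage names here are hypothetical):

```python
# Existing stages (stubs for illustration).
def clone_stage(ctx):
    ctx["repo"] = "cloned"
    return ctx

def commit_stage(ctx):
    ctx["committed"] = True
    return ctx

# New stage: run linters before committing (illustrative only).
def lint_stage(ctx):
    ctx["lint_ok"] = True
    return ctx

def run_pipeline():
    ctx = {}
    # Insert the new stage at the right point in the sequence.
    for stage in (clone_stage, lint_stage, commit_stage):
        ctx = stage(ctx)
    return ctx
```

Each stage receives and returns a shared context dict, so a new stage slots in without touching its neighbors.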
## Adding a new node

- Install Tailscale on the new machine
- Add its Tailscale IP to all `.env` files
- If it's a worker: install Docker + Ollama, add a compose profile
- If it's a client: install OpenCode, point to Studio's Ollama URL
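For the worker case, a compose profile might look like the following. The service name, profile name, and port are illustrative assumptions, not values from the repo:

```yaml
# Hypothetical profile for a new worker node.
services:
  ollama-worker2:
    image: ollama/ollama
    profiles: ["worker2"]
    environment:
      - OLLAMA_HOST=0.0.0.0
    ports:
      - "11434:11434"
```

Gating the service behind a profile means `docker compose up` on existing nodes is unaffected; the new worker starts only with `--profile worker2`.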
## Reusable agents

See Agent Rules for agent definitions, prompt templates, and skill configurations that work across runtimes and models (CrewAI, OpenCode, Claude, DeepSeek, etc.).