Integration Guide

LLM providers

This station supports 3 LLM providers with automatic fallback:

| Provider | Type | Cost | When used |
|----------|------|------|-----------|
| Studio Ollama | Local | Free | Primary — always preferred |
| DeepSeek Chat | Cloud API | $0.14/M input tokens | Fallback when Studio is offline |
| Claude Sonnet 4.5 | Cloud API | $3/M input tokens | Last resort |

Adding a new provider

Edit the LLM_CONFIG dict and the fallback_llm() function in pipeline.py, then add the new provider's API key to .env.
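A minimal sketch of what that edit could look like. The names LLM_CONFIG and fallback_llm() come from pipeline.py, but the entry shape, the provider names, and the selection logic shown here are assumptions, not the real implementation:

```python
import os

# Hypothetical config shape: each entry is a base URL, model name, and API key.
# Studio Ollama is local, so it needs no key.
LLM_CONFIG = {
    "studio_ollama": {"base_url": "http://studio:11434/v1",
                      "model": "llama3", "api_key": None},
    "deepseek": {"base_url": "https://api.deepseek.com/v1",
                 "model": "deepseek-chat",
                 "api_key": os.getenv("DEEPSEEK_API_KEY")},
    # New provider: add an entry here and read its key from .env
    "my_provider": {"base_url": "https://api.example.com/v1",
                    "model": "my-model",
                    "api_key": os.getenv("MY_PROVIDER_API_KEY")},
}

def fallback_llm(order=("studio_ollama", "deepseek", "my_provider")):
    """Return the first usable provider in priority order (simplified)."""
    for name in order:
        cfg = LLM_CONFIG[name]
        # Local provider needs no key; cloud providers are skipped if unset.
        if name == "studio_ollama" or cfg["api_key"] is not None:
            return name, cfg
    raise RuntimeError("No LLM provider available")
```

The key point is that a new provider only needs a config entry plus a slot in the fallback order; the selection loop itself is unchanged.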

Extending the pipeline

The pipeline at pipeline.py is a single file with clear sections:

  • Git Operations — clone, branch, commit, push, PR
  • LLM — provider selection + fallback
  • Telegram — notification sending
  • Pipeline Flow — the main CrewAI flow

To add a new stage, insert it in run_pipeline().
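The insertion pattern can be sketched as follows. These stage functions and the body of run_pipeline() are stand-ins (the real flow uses CrewAI); only the idea of slotting a new stage between existing ones is taken from the guide:

```python
# Hypothetical stand-in stages; the real ones live in pipeline.py.
def clone_and_branch():
    return "branched"

def generate_changes():
    return "changed"

def lint_changes():          # <- the new stage, inserted between
    return "linted"          #    generation and commit

def commit_and_push():
    return "pushed"

def run_pipeline():
    """Run each stage in order and collect its result (simplified)."""
    results = []
    for stage in (clone_and_branch, generate_changes,
                  lint_changes, commit_and_push):
        results.append(stage())
    return results
```

Because the flow is a single ordered sequence, adding a stage is just a matter of defining the function and inserting it at the right point in run_pipeline().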

Adding a new node

  1. Install Tailscale on the new machine
  2. Add its Tailscale IP to all .env files
  3. If it's a worker: install Docker + Ollama, add a compose profile
  4. If it's a client: install OpenCode, point to Studio's Ollama URL

Reusable agents

See Agent Rules for agent definitions, prompt templates, and skill configurations that work across backends (CrewAI, OpenCode, Claude, DeepSeek, etc.).

MIT Licensed | Built with AI Dev Station