Architecture & Network Flow

- Stack: Linear → n8n → Pipeline (CrewAI) → Ollama/DeepSeek/Claude → GitHub PR
- Hosts: Mac Mini 2015 (orchestrator), Mac Studio M1 Max (worker), MacBook Air M1 (client)
- Network: Tailscale mesh VPN — zero public ports

1. Node topology

| Node | Role | Hardware | Always on | Services |
|---|---|---|---|---|
| Mac Mini | Orchestrator | Intel i7, 16 GB | Yes | Docker: postgres, redis, n8n, telegram-bridge, light-router |
| Mac Studio | Worker | M1 Max, 64 GB | Yes | Ollama (brew), Docker: pipeline, opencode |
| MacBook Air | Client | M1, 8 GB | On-demand | OpenCode CLI, VS Code, git |
| iPhone | Remote control | | Yes | Telegram app |

2. Trust zones

| Zone | Members | Reachability |
|---|---|---|
| Mesh | All 3 Macs via Tailscale | 100.x.x.x (Tailscale IPs only) |
| Loopback | Docker containers on Mini/Studio | Localhost only |
| Cloud | DeepSeek, Claude, GitHub APIs | Outbound HTTPS only |

Zero public ports: no reverse proxy, no Cloudflare Tunnel, no open firewall rules. All inter-node communication travels over the Tailscale mesh.

3. Flow diagram

4. LLM fallback chain

Studio Ollama (local, free) → DeepSeek ($0.14/M) → Claude Sonnet 4.5 ($3/M)

The pipeline tries each provider in order. If one fails or is unavailable, it proceeds to the next.
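A minimal sketch of that fallback loop (the function name, the `(name, call)` provider interface, and the error handling are illustrative assumptions, not the project's actual code):

```python
from typing import Callable

def complete_with_fallback(prompt: str,
                           providers: list[tuple[str, Callable[[str], str]]]) -> str:
    """Try each provider in order; return the first successful completion.

    `providers` is an ordered chain, e.g. local Ollama first, then DeepSeek,
    then Claude. Each `call` is expected to raise on failure.
    """
    errors: list[str] = []
    for name, call in providers:
        try:
            return call(prompt)              # first success wins
        except Exception as exc:             # provider down, rate-limited, etc.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

Because the local Ollama provider sits first in the chain, cloud spend is incurred only when the free local model is unavailable or errors out.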

5. Self-healing properties

| Layer | Mechanism | Recovery time |
|---|---|---|
| Docker containers | `restart: unless-stopped` | < 5 s |
| Ollama | `brew services start ollama` | < 10 s |
| Tailscale mesh | macOS managed extension | < 10 s |
| Power failure | `sudo pmset -a autorestart 1` | < 60 s |
| Git push failure | Pipeline retries up to 3 times | Per attempt |
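The push-retry row could be implemented along these lines (a sketch: the `push_fn` callable, attempt count default, and fixed delay are assumptions, not the pipeline's actual code):

```python
import time
from typing import Callable

def push_with_retry(push_fn: Callable[[], None],
                    attempts: int = 3,
                    delay_s: float = 2.0) -> bool:
    """Retry a git push up to `attempts` times, sleeping between failures.

    Returns True on the first successful push, False once all attempts fail.
    """
    for attempt in range(1, attempts + 1):
        try:
            push_fn()
            return True
        except RuntimeError:                 # e.g. remote briefly unreachable
            if attempt == attempts:
                return False
            time.sleep(delay_s)              # back off before the next attempt
    return False
```

A fixed delay keeps the sketch simple; exponential backoff would be a natural refinement if transient failures cluster.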

MIT Licensed | Built with AI Dev Station