Mac Mini Machine Context

Auto-generated setup snapshot. Update after any configuration changes.

Identity

| Field | Value |
| --- | --- |
| Hostname | Nhan's Mac Mini |
| macOS | 12 (Monterey) |
| Node type | mini (orchestrator) |
| Tailscale IP | 100.87.151.9 |
| Tailscale hostname | mini2015 |
| Tailnet | zhinnyshop |

Docker Services (compose/mini.yml)

All services bind to Tailscale IP only (no public ports).

| Service | Image | Port | Status |
| --- | --- | --- | --- |
| postgres | pgvector/pg16:16 | 5432 | healthy |
| redis | redis:7-alpine | 6379 | healthy |
| n8n | n8nio/n8n:latest | 5678 | ready |
| telegram-bridge | custom build | 7700 | polling mode |
| light-router | custom (python:3.11-slim) | 4000 | active |
| grafana | grafana/grafana:latest | 3000 | up |
| prometheus | prom/prometheus:latest | 9090 | up |
| iphone-pwa | custom build | 8080 | up |

Pipeline

| Field | Value |
| --- | --- |
| Process | /usr/local/bin/devstation-pipeline |
| PID file | .run/pipeline.pid |
| Port | 8001 |
| LLM mode | cloud-only (via light-router) |
| Providers | DeepSeek V4 Flash (primary), NVIDIA Nemotron 70B (fallback) |

Telegram

| Field | Value |
| --- | --- |
| Bot polling | active (no public URL needed) |
| Chat ID | configured |
| Notify endpoint | POST :7700/notify {message: string} |
| Commands | /status, /pause, /resume, /approve, /reject, /llm, /help |
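The notify endpoint above takes a single `message` field. A minimal stdlib client sketch (the Tailscale URL is taken from the table above; adjust it for your tailnet):

```python
# Minimal client for the telegram-bridge notify endpoint.
import json
import urllib.request

BRIDGE_URL = "http://100.87.151.9:7700/notify"

def build_notify_request(message: str, url: str = BRIDGE_URL) -> urllib.request.Request:
    """Build a POST request matching the bridge's {message: string} schema."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def notify(message: str, url: str = BRIDGE_URL, timeout: float = 5.0) -> int:
    """Send the notification; returns the HTTP status code."""
    with urllib.request.urlopen(build_notify_request(message, url), timeout=timeout) as resp:
        return resp.status
```

Usage: `notify("pipeline finished")` from any node on the tailnet.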

Network

| Service | Internal URL | External URL |
| --- | --- | --- |
| n8n | http://127.0.0.1:5678 | http://100.87.151.9:5678 |
| Pipeline | http://127.0.0.1:8001 | http://100.87.151.9:8001 |
| Telegram bridge | http://127.0.0.1:7700 | http://100.87.151.9:7700 |
| Light Router | http://127.0.0.1:4000 | http://100.87.151.9:4000 |
| Grafana | http://127.0.0.1:3000 | http://100.87.151.9:3000 |
| Prometheus | http://127.0.0.1:9090 | http://100.87.151.9:9090 |
| PWA | http://127.0.0.1:8080 | http://100.87.151.9:8080 |

Known Config

  • Tailscale CLI: Wrapper at /usr/local/bin/tailscale pointing to /Applications/Tailscale.app/Contents/MacOS/Tailscale (brew formula NOT used — builds from source, very slow)
  • Docker Desktop: Headless mode (openUIOnStartupDisabled: true), 2GB RAM, 2 CPUs, no K8s
  • Python venv for pipeline: /tmp/pipeline-venv (created by setup script)
  • n8n encryption key: Generated at setup, stored in .env (do not lose)
  • Postgres password: Generated at setup, stored in .env (do not lose)
  • DEEPSEEK_MODEL: deepseek-chat (set in .env)
  • Pipeline workspace: /tmp/devstation-workspace (set via WORKSPACE_DIR in .env)

Workflow Integration

Linear issue (tag: "plan" / "implement")
  → n8n Trigger (webhook)
  → POST :8001/webhook
  → Pipeline (6 agents, cloud LLMs)
  → GitHub PR
  → Telegram notification
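The first step of the flow above only forwards Linear issues tagged "plan" or "implement". The real check lives in an n8n Code node (JavaScript, see Issues Resolved below); this is just the routing logic, and the payload shape with a `labels` list of `{name: ...}` objects is an assumption:

```python
# Sketch of the tag check before forwarding a Linear issue to the pipeline
# webhook. Label names are from the workflow above; the payload shape is assumed.
PIPELINE_TAGS = {"plan", "implement"}

def should_forward(issue: dict) -> bool:
    """True if the issue carries a tag the pipeline acts on (case-insensitive)."""
    labels = {label.get("name", "").lower() for label in issue.get("labels", [])}
    return bool(labels & PIPELINE_TAGS)
```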

CLI Commands

```bash
./bin/devstation.sh setup       # One-command setup
./bin/devstation.sh status      # Check services
./bin/devstation.sh doctor      # Diagnostics
./bin/devstation.sh down        # Stop all
./bin/devstation.sh restart     # Restart all
```

Troubleshooting

| Symptom | Fix |
| --- | --- |
| n8n "Mismatching encryption keys" | Delete n8n volume and recreate: docker volume rm devstation-mini_n8n_data && docker compose up -d n8n |
| Postgres auth failures | Delete db volume: docker volume rm devstation-mini_pg_data && docker compose up -d postgres |
| Port binding errors | Check Tailscale is connected: tailscale status |
| Pipeline not starting | Check .run/pipeline.log, run: /usr/local/bin/devstation-pipeline |
| light-router not starting | Rebuild and restart: docker compose -f compose/mini.yml up -d --build light-router |
| Pipeline "All LLM providers failed" | Check .env has DEEPSEEK_API_KEY set and DEEPSEEK_MODEL=deepseek-chat |
| Pipeline "Invalid format specifier" | Check for unescaped {} in f-strings inside pipeline.py agent prompt templates |
| LiteLLM (ai-router) crash with merged_lifespan | Remove LiteLLM and use light-router instead (see compose/mini.yml and light-router/) |
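The "Invalid format specifier" row is worth a concrete illustration: a literal `{` or `}` inside an f-string is parsed as a replacement field, so JSON-shaped prompt templates blow up unless the braces are doubled (the `task` variable here is hypothetical):

```python
# Literal braces in an f-string must be doubled ({{ }}) to stop Python from
# treating them as format replacement fields.
task = "add endpoint"
# bad  = f'Return JSON like {"task": ...}'    # ValueError: Invalid format specifier
good = f'Return JSON like {{"task": "{task}"}}'
# good == 'Return JSON like {"task": "add endpoint"}'
```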

Setup Session Log (May 2, 2026)

Bugs Fixed

  1. devstation.sh parse_args(): Last line [ -z "$ACTION" ] && ... returned non-zero with set -e, causing immediate exit. Fix: appended ; true.
  2. devstation.sh run() function: Used eval "$*" which broke with spaces/special chars. Fix: replaced with "$@".
  3. Pipeline f-string format bug: Unescaped { in the agent_pm prompt caused Invalid format specifier. Fix: escaped inner braces as HTML entities.
  4. Pipeline workspace: Default /Volumes/work didn't exist. Fix: set WORKSPACE_DIR=/tmp/devstation-workspace.

Issues Resolved

  1. Telegram CHAT_ID: Was wrong (87743713481121576537), causing 403 Forbidden.
  2. n8n Linear Trigger: IF node array contains operator incompatible with n8n version. Replaced with Code node.
  3. LiteLLM crash: ghcr.io/berriai/litellm:main-stable had upstream breaking changes (routing strategy renamed, Prisma P1012 on Wolfi Linux). Replaced with custom light-router (60-line Python proxy, 3.71 GB freed).
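The light-router that replaced LiteLLM is a small Python proxy; its core job is provider fallback (DeepSeek primary, Nemotron fallback, per the Pipeline table). A minimal sketch of that fallback logic under assumed provider callables — the real proxy lives in light-router/proxy.py and also speaks HTTP:

```python
# Hypothetical sketch of the light-router's fallback routing: try providers in
# order, return the first success, raise once every provider has failed.
from typing import Callable

def route(prompt: str, providers: list[tuple[str, Callable[[str], str]]]) -> tuple[str, str]:
    """Return (provider_name, completion) from the first provider that succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # any provider error triggers fallback
            errors.append(f"{name}: {exc}")
    # mirrors the "All LLM providers failed" error in Troubleshooting
    raise RuntimeError("All LLM providers failed: " + "; ".join(errors))
```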

Pipeline Test

Files Changed

  • bin/devstation.sh: Fixed run() and parse_args() bugs
  • compose/mini.yml: Replaced ai-router (litellm) with light-router; added n8n env vars
  • ai-router/config.yaml: Fixed routing_strategy: usage-based → usage-based-routing (then replaced entirely)
  • pipeline.py: Fixed f-string format bug, added _safe_json_parse(), pre-computed files_md/code_files_md
  • projects.yaml: Hardcoded github_repo instead of env var with default
  • .env: Added DEEPSEEK_MODEL=deepseek-chat, WORKSPACE_DIR, fixed TELEGRAM_CHAT_ID
  • light-router/proxy.py: Created (new)
  • light-router/Dockerfile: Created (new)
  • telegram-bridge/bot.py: Added polling loop (was webhook-only)
  • bin/pipeline-wrapper.sh: Fixed env loading (export $(grep ...) → set -a; source .env; set +a)
  • linear-pipeline/linear_to_opencode.json: Replaced IF node with Code node, hardcoded URLs
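The _safe_json_parse() added to pipeline.py is not shown in this snapshot; a plausible sketch of what such a helper does — tolerate code fences and surrounding prose around an LLM's JSON reply (the function body here is an assumption, not the actual implementation):

```python
# Hypothetical sketch of a _safe_json_parse-style helper: LLMs often wrap JSON
# in ```json fences or prose, so strip that before parsing.
import json
import re

def safe_json_parse(text: str):
    """Parse JSON even when wrapped in code fences or surrounding text."""
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        text = text[start:end + 1]  # keep only the outermost object
    return json.loads(text)
```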

MIT Licensed | Built with AI Dev Station