# Mac Mini Machine Context

Auto-generated setup snapshot. Update after any configuration changes.

## Identity
| Field | Value |
|---|---|
| Hostname | Nhan's Mac Mini |
| macOS | 12 (Monterey) |
| Node type | mini (orchestrator) |
| Tailscale IP | 100.87.151.9 |
| Tailscale hostname | mini2015 |
| Tailnet | zhinnyshop |
## Docker Services (`compose/mini.yml`)
All services bind to Tailscale IP only (no public ports).
| Service | Image | Port | Status |
|---|---|---|---|
| postgres | pgvector/pg16:16 | 5432 | healthy |
| redis | redis:7-alpine | 6379 | healthy |
| n8n | n8nio/n8n:latest | 5678 | ready |
| telegram-bridge | custom build | 7700 | polling mode |
| light-router | custom (python:3.11-slim) | 4000 | active |
| grafana | grafana/grafana:latest | 3000 | up |
| prometheus | prom/prometheus:latest | 9090 | up |
| iphone-pwa | custom build | 8080 | up |
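The "Tailscale IP only" rule above is enforced per service via the host-IP form of the Compose `ports` mapping. A minimal fragment as a sketch (only `redis` shown; the actual `compose/mini.yml` may differ):

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      # HOST_IP:HOST_PORT:CONTAINER_PORT: binding to the Tailscale IP
      # keeps the port off all public interfaces.
      - "100.87.151.9:6379:6379"
```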
## Pipeline
| Field | Value |
|---|---|
| Process | /usr/local/bin/devstation-pipeline |
| PID file | .run/pipeline.pid |
| Port | 8001 |
| LLM mode | cloud-only (via light-router) |
| Providers | DeepSeek V4 Flash (primary), NVIDIA Nemotron 70B (fallback) |
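The primary/fallback order can be sketched as a try-in-order loop. This is a hedged illustration only; the function names and provider call signatures here are hypothetical, not the actual light-router or pipeline code:

```python
def call_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return (name, reply) on first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real router would catch narrower errors
            errors.append(f"{name}: {exc}")
    # Mirrors the pipeline's "All LLM providers failed" error message.
    raise RuntimeError("All LLM providers failed: " + "; ".join(errors))

def deepseek_down(prompt):
    """Stand-in for the primary provider; simulates an outage."""
    raise TimeoutError("primary down")

def nemotron_ok(prompt):
    """Stand-in for the fallback provider."""
    return f"answer to: {prompt}"

# Order from the table: DeepSeek primary, Nemotron fallback.
PROVIDERS = [("deepseek-v4-flash", deepseek_down), ("nemotron-70b", nemotron_ok)]
```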
## Telegram
| Field | Value |
|---|---|
| Bot polling | active (no public URL needed) |
| Chat ID | configured |
| Notify endpoint | `POST :7700/notify` with body `{message: string}` |
| Commands | `/status`, `/pause`, `/resume`, `/approve`, `/reject`, `/llm`, `/help` |
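A minimal dispatch table for the commands above, as a sketch only; the real handlers in `telegram-bridge/bot.py` are not reproduced in this document, and only a subset of commands is stubbed:

```python
def handle_command(text: str, state: dict) -> str:
    """Route a Telegram message like '/pause' to a handler. Sketch only."""
    handlers = {
        "/status": lambda: f"pipeline paused={state['paused']}",
        "/pause": lambda: state.update(paused=True) or "paused",
        "/resume": lambda: state.update(paused=False) or "resumed",
        "/help": lambda: "/status /pause /resume /approve /reject /llm /help",
    }
    parts = text.split()
    cmd = parts[0] if parts else ""
    handler = handlers.get(cmd)
    return handler() if handler else f"unknown command: {cmd}"
```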
## Network
| Service | Internal URL | External URL |
|---|---|---|
| n8n | http://127.0.0.1:5678 | http://100.87.151.9:5678 |
| Pipeline | http://127.0.0.1:8001 | http://100.87.151.9:8001 |
| Telegram bridge | http://127.0.0.1:7700 | http://100.87.151.9:7700 |
| Light Router | http://127.0.0.1:4000 | http://100.87.151.9:4000 |
| Grafana | http://127.0.0.1:3000 | http://100.87.151.9:3000 |
| Prometheus | http://127.0.0.1:9090 | http://100.87.151.9:9090 |
| PWA | http://127.0.0.1:8080 | http://100.87.151.9:8080 |
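The internal and external URLs differ only in host, so both columns can be derived from the port list alone. A sketch (service names and ports copied from the table; the `urls` helper is hypothetical):

```python
PORTS = {
    "n8n": 5678, "pipeline": 8001, "telegram-bridge": 7700,
    "light-router": 4000, "grafana": 3000, "prometheus": 9090, "pwa": 8080,
}
TAILSCALE_IP = "100.87.151.9"

def urls(service: str):
    """Return (internal, external) base URLs for a service."""
    port = PORTS[service]
    return (f"http://127.0.0.1:{port}", f"http://{TAILSCALE_IP}:{port}")
```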
## Known Config

- Tailscale CLI: wrapper at `/usr/local/bin/tailscale` → `/Applications/Tailscale.app/Contents/MacOS/Tailscale` (brew formula NOT used; it builds from source, very slowly)
- Docker Desktop: headless mode (`openUIOnStartupDisabled: true`), 2 GB RAM, 2 CPUs, no Kubernetes
- Python venv for pipeline: `/tmp/pipeline-venv` (created by the setup script)
- n8n encryption key: generated at setup, stored in `.env` (do not lose)
- Postgres password: generated at setup, stored in `.env` (do not lose)
- `DEEPSEEK_MODEL`: `deepseek-chat` (set in `.env`)
- Pipeline workspace: `/tmp/devstation-workspace` (set via `WORKSPACE_DIR` in `.env`)
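Since several secrets live only in `.env`, a fail-fast check at startup avoids half-configured runs. A sketch (the parser is deliberately simplified and ignores quoting edge cases; key names for the n8n key and Postgres password are assumptions, while `DEEPSEEK_API_KEY`, `DEEPSEEK_MODEL`, `WORKSPACE_DIR`, and `TELEGRAM_CHAT_ID` appear elsewhere in this document):

```python
def parse_env(text: str) -> dict:
    """Tiny .env parser: KEY=VALUE lines, '#' comments, no quoting rules."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

REQUIRED = ["N8N_ENCRYPTION_KEY", "POSTGRES_PASSWORD", "DEEPSEEK_API_KEY",
            "DEEPSEEK_MODEL", "WORKSPACE_DIR", "TELEGRAM_CHAT_ID"]

def missing_keys(env: dict) -> list:
    """Names of required keys that are absent or empty."""
    return [k for k in REQUIRED if not env.get(k)]
```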
## Workflow Integration
Linear issue (tag: "plan" / "implement")
→ n8n Trigger (webhook)
→ POST :8001/webhook
→ Pipeline (6 agents, cloud LLMs)
→ GitHub PR
→ Telegram notification

## CLI Commands
```bash
./bin/devstation.sh setup      # One-command setup
./bin/devstation.sh status     # Check services
./bin/devstation.sh doctor     # Diagnostics
./bin/devstation.sh down       # Stop all
./bin/devstation.sh restart    # Restart all
```

## Troubleshooting
| Symptom | Fix |
|---|---|
| n8n "Mismatching encryption keys" | Delete the n8n volume and recreate: `docker volume rm devstation-mini_n8n_data && docker compose up -d n8n` |
| Postgres auth failures | Delete the db volume: `docker volume rm devstation-mini_pg_data && docker compose up -d postgres` |
| Port binding errors | Check that Tailscale is connected: `tailscale status` |
| Pipeline not starting | Check `.run/pipeline.log`, then run `/usr/local/bin/devstation-pipeline` manually |
| light-router not starting | Rebuild and restart: `docker compose -f compose/mini.yml up -d --build light-router` |
| Pipeline "All LLM providers failed" | Check that `.env` has `DEEPSEEK_API_KEY` set and `DEEPSEEK_MODEL=deepseek-chat` |
| Pipeline "Invalid format specifier" | Check for unescaped `{}` in f-strings inside `pipeline.py` agent prompt templates |
| LiteLLM (ai-router) crash with `merged_lifespan` | Remove LiteLLM and use light-router instead (see `compose/mini.yml` and `light-router/`) |
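The "Invalid format specifier" row is the classic brace-escaping trap: any literal `{` in a template passed through Python string formatting must be doubled. The session log notes the braces were escaped as HTML entities; the standard Python fix is shown here as a sketch (template strings are illustrative, not the actual agent prompts):

```python
# A prompt template with literal JSON braces plus a real placeholder.
# Formatting `bad` raises, because `{"files": []}` is read as a field.
bad = 'Respond with JSON like {"files": []}. Task: {task}'
# Doubling the literal braces makes the template safe to format.
good = 'Respond with JSON like {{"files": []}}. Task: {task}'

def render(template: str, **kwargs) -> str:
    """Apply str.format; raises on unescaped literal braces."""
    return template.format(**kwargs)
```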
## Setup Session Log (May 2, 2026)
### Bugs Fixed

- `devstation.sh` `parse_args()`: the last line, `[ -z "$ACTION" ] && ...`, returned non-zero under `set -e`, causing an immediate exit. Fix: appended `; true`.
- `devstation.sh` `run()` function: used `eval "$*"`, which broke with spaces and special characters. Fix: replaced with `"$@"`.
- Pipeline f-string format bug: an unescaped `{` in the `agent_pm` prompt caused `Invalid format specifier`. Fix: escaped the inner braces as HTML entities.
- Pipeline workspace: the default `/Volumes/work` didn't exist. Fix: set `WORKSPACE_DIR=/tmp/devstation-workspace`.
### Issues Resolved

- Telegram CHAT_ID: was wrong (`8774371348` → `1121576537`), causing 403 Forbidden.
- n8n Linear Trigger: the IF node's `array contains` operator was incompatible with this n8n version. Replaced with a Code node.
- LiteLLM crash: `ghcr.io/berriai/litellm:main-stable` had upstream breaking changes (routing strategy renamed, Prisma P1012 on Wolfi Linux). Replaced with the custom `light-router` (60-line Python proxy, 3.71 GB freed).
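The replacement `light-router` is described as a small Python proxy. Its core decision, mapping a requested model name to an upstream endpoint, might look like the sketch below. This is purely illustrative: the real `light-router/proxy.py` is not reproduced in this document, and the upstream URLs and env-var names here are assumptions:

```python
UPSTREAMS = {
    # model-name prefix -> (base URL, env var holding the API key)
    "deepseek": ("https://api.deepseek.com/v1", "DEEPSEEK_API_KEY"),
    "nvidia": ("https://integrate.api.nvidia.com/v1", "NVIDIA_API_KEY"),
}

def pick_upstream(model: str):
    """Choose an upstream by model-name prefix; default to the primary."""
    for prefix, upstream in UPSTREAMS.items():
        if model.startswith(prefix):
            return upstream
    return UPSTREAMS["deepseek"]
```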
### Pipeline Test
- PR #20 created successfully: https://github.com/thanhnhan2tn/mini-dev-station/pull/20
- Full flow: webhook → DeepSeek LLM → spec generation → git clone → branch → PR ✓
### Files Changed

- `bin/devstation.sh`: fixed `run()` and `parse_args()` bugs
- `compose/mini.yml`: replaced `ai-router` (litellm) with `light-router`; added n8n env vars
- `ai-router/config.yaml`: fixed `routing_strategy: usage-based` → `usage-based-routing` (then replaced entirely)
- `pipeline.py`: fixed the f-string format bug, added `_safe_json_parse()`, pre-computed `files_md`/`code_files_md`
- `projects.yaml`: hardcoded `github_repo` instead of an env var with a default
- `.env`: added `DEEPSEEK_MODEL=deepseek-chat` and `WORKSPACE_DIR`, fixed `TELEGRAM_CHAT_ID`
- `light-router/proxy.py`: created (new)
- `light-router/Dockerfile`: created (new)
- `telegram-bridge/bot.py`: added a polling loop (was webhook-only)
- `bin/pipeline-wrapper.sh`: fixed env loading (`export $(grep ...)` → `set -a; source .env; set +a`)
- `linear-pipeline/linear_to_opencode.json`: replaced the IF node with a Code node, hardcoded URLs
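The `_safe_json_parse()` added to `pipeline.py` is not shown in this log; a common shape for such a helper (stripping markdown fences before `json.loads`) is sketched here as an assumption, not the actual implementation:

```python
import json
import re

def safe_json_parse(text: str):
    """Parse JSON from LLM output that may be wrapped in ``` fences.

    Hypothetical sketch: returns None instead of raising on bad input.
    """
    text = text.strip()
    # Drop a leading ```json (or bare ```) fence and the trailing ```.
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None
```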