Documentation Index
Fetch the complete documentation index at: https://docs.yourhq.ai/llms.txt
Use this file to discover all available pages before exploring further.
HQ — Architecture
HQ (the codebase at yourhq/yourhq, hosted at yourhq.ai) is a self-hostable operations platform for running a fleet of personal AI agents. One Next.js UI manages your work — CRM, tasks, docs, automations — and one or more gateway hosts run the agents themselves inside Docker containers you control. The target user is a single operator (founder, solo team, power user) who wants their agents to do real work on their behalf, in their own environment, against their own Supabase project, without handing that work to someone else’s multi-tenant cloud. This document is the top-to-bottom tour of how the pieces fit together. It assumes you’ve skimmed the README and want to know where things actually live.
For deeper dives, see:
- Features — product tour of implemented user-facing features.
- Networking — Tailscale, public HTTPS, and the bind-mode matrix.
- Agents — agents, templates, the OpenClaw integration, and the template library.
- Configuration — full environment variable reference.
- Database schema — table groups, migrations, queues, triggers, and RLS trust model.
- db/migrations/ — schema of record, applied in filename order.
1. Intro
An agent, in HQ, is a long-lived workspace consisting of a git branch, a Chrome profile, an OpenClaw session, and a messaging channel (Telegram, Discord, Slack, or none). The UI never speaks to agents directly. Every instruction the UI gives — “create this agent”, “edit this file”, “restart this gateway” — travels through Supabase, where it becomes a row in a queue. Python daemons on each gateway host subscribe to that queue over Supabase Realtime and execute the work locally. The result: the UI can run on your laptop, on a VPS, or in a browser tab from a hotel Wi-Fi, and the gateways can run anywhere you have Docker, with no inbound connections needed between them.

This is a single-user design. RLS on Supabase is “authenticated full access”. Multi-tenant isolation is out of scope — each operator runs their own Supabase project. The hosted offering will layer account management on top of this same topology rather than redesigning it.

2. System diagram

(Diagram omitted.) Nothing is peer-to-peer; there is no central control plane outside Supabase.
3. The four services
All four services live in the same monorepo (docker-compose.yml). The UI ships as one container and the three gateway services ship as three more, all bundled in a single Compose project by default. Larger installs can run the UI and gateway services on separate hosts against the same Supabase project.
| Service | Role | Key files | Ports | Reads from Supabase | Writes to Supabase |
|---|---|---|---|---|---|
| ui | The Next.js admin dashboard. Renders the CRM, tasks, agents, docs, settings, automations. Issues server actions that enqueue commands and proxy file edits. | apps/ui/ — App Router under src/app/; server actions in src/app/dashboard/*/actions.ts; gateway proxy in src/lib/agent-repo/gateway-backend.ts; auth middleware in src/middleware.ts | 3000 (host) | everything (uses authenticated user’s session + service role for privileged inserts) | contacts, tasks, agents, agent_commands, agent_inbox_items, audit_log, automation_rules, everything |
| gateway | The container that hosts the agents. Runs Xtigervnc + XFCE + Chrome + OpenClaw gateway + the files-API. Exposes a remote desktop (noVNC) and a file browser (files-api). | gateway/entrypoint.sh, gateway/files_api.py, gateway/Dockerfile | 6901 (noVNC), 18790 (files-API) | workspace.slug at boot to prefix branches | gateways row upsert at boot with reachable URLs |
| dispatcher | Python daemon. Subscribes to agent_inbox_items INSERT events and wakes the owning agent via openclaw agent --agent …. Only wakes agents bound to this gateway. | gateway/daemons/inbox_dispatcher.py, gateway/dispatcher/Dockerfile | none | agent_inbox_items, agents, gateways, workspace | agent_inbox_items.last_wake_* |
| runner | Python daemon. Subscribes to agent_commands INSERT events, leases via lease_command(gateway_slug=…), executes add-agent.sh / update-agent.sh / docker compose restart …, reports back stdout/stderr/exit code. | gateway/daemons/command_runner.py, gateway/runner/Dockerfile, gateway/scripts/add-agent.sh | none | agent_commands, workspace, gateways | agent_commands.status/stdout/stderr, gateways.last_seen_at (30 s heartbeat) |
gateway, dispatcher, and runner containers all share the Docker volume gateway-state (mounted at /home/openclaw/.openclaw). That volume is where OpenClaw’s config, the bare git repo, per-agent worktrees, browser profiles, and the VNC state live. The runner additionally mounts /var/run/docker.sock so it can restart sibling containers.
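The runner's 30-second heartbeat on gateways.last_seen_at can be sketched as follows (a minimal sketch using supabase-py's query-builder chain; the function name and wiring are ours, not taken from command_runner.py):

```python
from datetime import datetime, timezone


def heartbeat_once(supabase, gateway_slug: str) -> dict:
    """Touch this gateway's last_seen_at so the UI can show liveness.

    `supabase` is a supabase-py client (or anything exposing the same
    table().update().eq().execute() chain); creating it is out of scope here.
    """
    payload = {"last_seen_at": datetime.now(timezone.utc).isoformat()}
    supabase.table("gateways").update(payload).eq("slug", gateway_slug).execute()
    return payload
```

In the real daemon a loop would call this every 30 seconds between queue polls.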
4. Data flow: creating an agent end-to-end
This is the most “I see how it all fits” path through the system. Everything else is a variation of it.

1. User picks “New agent” in the UI (Dashboard → Agents → New agent). The agent-create wizard (apps/ui/src/components/agents/agent-create-wizard.tsx, referenced but owned in apps/ui) has four steps: template, identity, channel (Telegram/Discord/Slack/None), and provisioning.
2. Templates are fetched from GET /api/agents/templates, which returns the list baked into the UI image at build time from templates/. Each template carries its agent.json and a branch field like template/cofounder.
3. User submits. The UI runs the createAgentWithBranch server action in apps/ui/src/app/dashboard/agents/actions.ts. That action:
   - Validates the slug (2–40 chars, [a-z0-9-], no reserved names).
   - Reads the workspace singleton for the owner profile + workspace.slug.
   - Confirms the slug is free in agents.
   - Computes the branch name — "${workspace.slug}/${slug}" (e.g. my-workspace/ricardo).
   - Inserts an agents row with meta.team, meta.template_branch, meta.emoji, and meta.channel all derived from the wizard inputs.
   - Writes an audit_log entry.
4. The UI enqueues a provision command via enqueueAgentCommand (same file). That’s an INSERT into agent_commands with action='provision', agent_slug=<slug>, and payload containing the channel type, channel-specific credentials (e.g. telegram_token, discord_token, or slack_app_token/slack_bot_token), source_template, name, description, emoji, and the owner profile fields.
5. The runner wakes up. In command_runner.py, the Realtime listener on agent_commands sees the new row and calls process_pending(), which calls the lease_command(p_gateway_slug=GATEWAY_ID) RPC to atomically claim it. lease_command (014_agent_commands.sql) does FOR UPDATE SKIP LOCKED so parallel runners on different gateways never steal each other’s work.
6. The runner builds the shell command via build_command('provision', …): it resolves the branch name from workspace.slug + agent_slug, then invokes gateway/scripts/add-agent.sh with --channel, channel-specific credential flags (e.g. --telegram-token, --discord-token, --slack-app-token/--slack-bot-token), --source-branch, --slug, and the owner-profile flags.
7. add-agent.sh does the real work in the gateway container’s state volume:
   - Creates the agent’s branch off the template (or default) inside the bare repo at $HOME/.openclaw/repo.git.
   - Checks out the branch as a git worktree at $HOME/.openclaw/workspace-<branch>.
   - Patches agent.json with the wizard inputs via jq.
   - Fills USER.md placeholder tokens (USER_NAME_HERE, PREFERRED_NAME_HERE, TIMEZONE_HERE) from the owner profile.
   - Rewrites the IDENTITY.md ## Name and ## Emoji sections.
   - Swaps BROWSER_PROFILE_HERE in TOOLS.md for the agent’s slug.
   - Commits the init patches.
   - Allocates a CDP port (18801+) for this agent’s Chrome.
   - Patches openclaw.json to register the agent, its channel binding (Telegram account, Discord bot, Slack socket, or none), and the new browser profile.
   - Creates an XFCE desktop shortcut for the agent’s Chrome.
   - Links the shared Codex auth profile.
   - Runs openclaw gateway restart so the new agent is picked up.
8. The runner reports back. complete_command writes status='done', exit_code=0, stdout, and stderr to the agent_commands row; the UI’s subscription on that row renders the command as green in the command history view at /dashboard/settings/system. On success the runner also PATCHes the payload to scrub all channel credential tokens (any key ending in _token).
9. The agent is online. openclaw gateway run now has the agent in its session list; the agent’s messaging channel is bound; its Chrome profile is ready; its git worktree is writable both from inside the container (by openclaw itself) and from outside via the files-API.
agents row only. Cleanup of orphaned branches is a remove command on the same queue.
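The slug and branch rules in step 3 can be sketched in Python (illustrative only; the real validation lives in the TypeScript server action, and the reserved-name set here is a made-up placeholder):

```python
import re

# 2-40 chars of lowercase letters, digits, and hyphens (per step 3).
SLUG_RE = re.compile(r"^[a-z0-9-]{2,40}$")
# Hypothetical reserved names; the real list lives in the server action.
RESERVED = {"default", "gateway", "ui"}


def validate_slug(slug: str) -> bool:
    """True if the slug matches the charset/length rule and is not reserved."""
    return bool(SLUG_RE.fullmatch(slug)) and slug not in RESERVED


def branch_name(workspace_slug: str, agent_slug: str) -> str:
    """Branch = "${workspace.slug}/${slug}", e.g. my-workspace/ricardo."""
    return f"{workspace_slug}/{agent_slug}"
```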
5. Data flow: incoming channel message
This path is mostly owned by OpenClaw (the agent runtime; see openclaw). HQ’s integration surface is intentionally thin:
- The user sends a message through the agent’s configured channel (Telegram DM, Discord DM, Slack message).
- The channel provider delivers the message to openclaw inside the gateway container (Telegram via long-poll, Discord via gateway websocket, Slack via socket mode).
- openclaw matches the inbound message to an agentId via the bindings[] array in openclaw.json (written by add-agent.sh at provision time).
- openclaw wakes that agent’s session, loads its workspace (the git worktree from add-agent.sh), and runs the message through the agent’s prompt assembly (IDENTITY.md, SOUL.md, TOOLS.md, USER.md, and skills under skills/).
- The agent’s response is streamed back through the same channel. If the agent writes to Supabase during the turn — interactions, tasks, contacts, audit_log — it does so via the skills/hq/* Python scripts baked into every template, which talk to Supabase with the service role key.
- HQ’s dispatcher is not involved in channel-originated messages. It only fires on the “background inbox” path (§4 above): when HQ itself (a trigger on tasks or contacts, or an @-mention in a task comment) enqueues an agent_inbox_items row.
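The binding lookup in the steps above can be sketched as follows (the key names in this bindings[] shape are assumptions; OpenClaw's actual schema may differ):

```python
def match_agent(bindings: list[dict], channel: str, peer_id: str):
    """Return the agentId bound to (channel, peer_id), or None.

    `bindings` mirrors the bindings[] array that add-agent.sh writes into
    openclaw.json at provision time; the field names here are illustrative.
    """
    for binding in bindings:
        if binding.get("channel") == channel and binding.get("peer_id") == peer_id:
            return binding.get("agentId")
    return None
```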
6. Data flow: UI edits an agent file
The file browser at /dashboard/agents/[id] is the one place the UI talks directly to a gateway. That direct link is scoped to a single HTTP call:

- User edits a file in the UI’s Monaco editor and saves.
- The UI’s server action calls saveFile(branch, path, content, sha) from apps/ui/src/lib/agent-repo/gateway-backend.ts.
- gateway-backend.ts makes an authenticated PUT to ${GATEWAY_URL}/branches/<branch>/files/<path> with Authorization: Bearer ${GATEWAY_AUTH_TOKEN}. On the default Compose stack GATEWAY_URL=http://gateway:18790, so the request stays inside Docker’s bridge network.
- files_api.py on the gateway validates the token (constant-time), resolves the worktree path ($HOME/.openclaw/workspace-<branch>), does a safe_join() to refuse any .. escape, and writes the file.
- The files-API immediately runs git add <path> && git commit -m "edit via UI: <path>" in the worktree. The file edit is now a git commit on the agent’s branch, locally.
- After a successful write, the UI server action then enqueues an update command via enqueueAgentCommand({ action: 'update', agentId }).
- The runner leases that command and runs gateway/scripts/update-agent.sh <branch>, which tells openclaw to reload the agent’s session so the changed file takes effect.
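The containment check described above (safe_join() refusing any .. escape) can be sketched as follows (a minimal sketch; the real files_api.py may differ in details):

```python
from pathlib import Path


def safe_join(worktree: str, rel_path: str) -> Path:
    """Resolve rel_path under worktree, refusing any escape.

    Resolving both paths and checking containment also catches absolute
    paths and `..` components hidden mid-path, not just a leading `..`.
    """
    root = Path(worktree).resolve()
    candidate = (root / rel_path).resolve()
    if candidate != root and root not in candidate.parents:
        raise PermissionError(f"path escapes worktree: {rel_path}")
    return candidate
```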
The result: one shared secret (GATEWAY_AUTH_TOKEN) and one URL (GATEWAY_URL), not a GitHub PAT per operator plus a remote URL per gateway. Optional GitHub mirroring exists — set GIT_REMOTE_URL and the gateway will fetch on boot — but that’s for off-site backup, not the live edit path.
7. Gateway internals
The gateway container is deliberately fat in Phase 1 — it’s one image that runs every user-facing process. The layout, from gateway/entrypoint.sh:
- Xtigervnc on :1 — a combined X server + VNC server (replaces the older Xvfb + x0vncserver pair because the scraping-server perl wrapper on Ubuntu 24.04 is broken). Listens on localhost:5901.
- XFCE — a full desktop (panel, Whisker menu, Thunar, xfce4-terminal, xfce4-goodies). We ship the real desktop so the remote-desktop experience matches what the operator sees locally — no minimal WM, no surprises.
- Session D-Bus started explicitly at $XDG_RUNTIME_DIR/bus with a deterministic socket path; we do not rely on dbus-launch --exit-with-session because it’s unreliable across glib/xfconf versions.
- XDG dirs (XDG_CONFIG_HOME, XDG_DATA_HOME, XDG_CACHE_HOME, XDG_RUNTIME_DIR) exported explicitly; some glib builds don’t apply $HOME/.config fallbacks in containers.
- autocutsel × 2 — keeps CLIPBOARD and PRIMARY selections in sync so noVNC’s clipboard panel reaches Chrome and the terminal.
- Chrome (amd64) or Chromium (arm64) — started per-agent via desktop shortcuts ($HOME/.openclaw/Desktop/Chrome-<slug>.desktop), one shortcut per agent, each with its own --user-data-dir and --remote-debugging-port (CDP port, 18801+).
- websockify → noVNC on 0.0.0.0:6901 in-container. The host port mapping (NOVNC_HOST_PORT in .env) decides whether 6901 is reachable on localhost only or on the host’s tailnet/public interface.
- files_api.py on 0.0.0.0:18790 in-container, gated by GATEWAY_AUTH_TOKEN.
- openclaw gateway run as PID 1 under tini. Invoked with exec at the end of entrypoint.sh so signals propagate cleanly.
openclaw.json, all over Docker networking. Not impossible, but a lot of moving parts for little benefit when the whole stack is owned by one operator.
Volumes. Two named volumes per Compose project:
- gateway-state (mounted at /home/openclaw/.openclaw in gateway, dispatcher read-only, runner read-write): holds openclaw.json, repo.git (bare), workspace-<branch>/ (worktrees), browser/<profile>/user-data/, Desktop/, plugins/, shared-auth/, .vnc-password.
- gateway-chrome-profile (mounted at /home/openclaw/.config/google-chrome): Chrome’s own profile dir. Kept separate so you can blow it away without destroying agent workspaces.
8. The agent workspace model
HQ does not push to GitHub. Each gateway has a local bare git repository at $HOME/.openclaw/repo.git, seeded on first boot from templates/ (bundled into the gateway image at /opt/templates, or cloned from $TEMPLATES_SOURCE if set). Each template becomes a branch called template/<dirname>. A default template becomes branch default, which is the bare repo’s HEAD.
When an agent is provisioned, add-agent.sh:
- Creates a new branch ${workspace.slug}/<agent-slug> off the selected template branch.
- Adds a git worktree for it at $HOME/.openclaw/workspace-${workspace.slug}/<agent-slug>.
- Commits the agent’s personality patches (see §4 step 7).
If GIT_REMOTE_URL is set (and GIT_DEPLOY_KEY for SSH), the gateway adds it as origin and fetches on boot. Push on write is not automatic; the design intent is that the remote is a backup, not a source of truth. A nightly push is a reasonable operator cron; the platform does not assume one.
9. Networking model
Networking in HQ is intentionally boring: the containers publish ports to the host, and the host’s network configuration decides who can reach them. Tailscale, TLS, reverse proxies — all of it lives on the host, not in any container. Three modes are shipped:

| Mode | HOST_REACHABLE_URL | Host port binds | Who can reach it |
|---|---|---|---|
| local | http://localhost | 127.0.0.1:* | Only this machine. |
| tailscale | http://<host-ts-ip> | 0.0.0.0:* | Anyone on your tailnet; loopback still works. Tailscale is installed on the host, not in the container. |
| public | https://<your-domain> | 0.0.0.0:* (fronted by host Caddy/nginx) | The internet, via your host’s reverse proxy. |
The installer (installer/install.sh) asks once, sets NETWORKING_MODE and HOST_REACHABLE_URL in .env, and the Compose port mappings (UI_HOST_PORT, NOVNC_HOST_PORT, FILES_API_HOST_PORT) do the rest.
Why do gateways register their own URLs? In step 9 of the entrypoint, the gateway upserts its row in the gateways table with meta.reachable_urls.{base,files_api,novnc} set from HOST_REACHABLE_URL. That’s how the UI — which may live on a completely different machine — knows what hostname to hit for this specific gateway’s file browser and desktop. The UI never hard-codes a gateway URL; it reads gateways.meta.reachable_urls.
Full details in Networking.
10. Multi-machine topologies
Because the only shared state is Supabase and gateways publish their own reachable URLs, the same code handles three deployment shapes with no conditionals:

Single-host, everything local. UI and gateway on the same machine. GATEWAY_URL=http://gateway:18790 (Docker DNS). NOVNC_HOST_PORT=127.0.0.1:6901. docker compose up -d. This is what the installer sets up by default.
Split UI / gateway. UI on your laptop, gateway on an always-on host (Mac mini, VPS, Raspberry Pi). Both hosts have Tailscale. On the gateway host: docker compose up -d gateway dispatcher runner. On the laptop: docker compose up -d ui with GATEWAY_URL=http://100.x.y.z:18790 (the gateway host’s tailnet IP). Same GATEWAY_AUTH_TOKEN on both sides.
Multi-gateway. Multiple gateway hosts against the same Supabase. Each gets its own GATEWAY_ID (laptop, mac-mini, vps-eu) and registers its own row in the gateways table at boot. Each agent has a gateway_id FK; the runner filters lease_command by its own gateway slug (lease_command(p_gateway_slug=$GATEWAY_ID)), and the dispatcher filters inbox items by caching the set of local agent IDs (refresh_local_agents in inbox_dispatcher.py). No gateway ever picks up another gateway’s work.
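The gateway-scoping rule can be sketched as follows (illustrative helpers; the real refresh_local_agents in inbox_dispatcher.py queries Supabase and caches the result):

```python
def local_agent_ids(agent_rows: list[dict], gateway_id: str) -> set:
    """Cache the IDs of agents bound to this gateway (agents.gateway_id FK)."""
    return {row["id"] for row in agent_rows if row.get("gateway_id") == gateway_id}


def should_wake(inbox_item: dict, local_agents: set) -> bool:
    """Dispatch only inbox items addressed to a locally-bound agent."""
    return inbox_item.get("agent_id") in local_agents
```

This is why no gateway ever picks up another gateway's work: the runner filters at the lease RPC, and the dispatcher filters against this cached set.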
Adding another gateway is normally UI-driven:
- Settings → Gateways → Add Gateway.
- The UI mints a single-use registration token and renders an installer command.
- The operator runs that command on the gateway host.
- The gateway writes its row to Supabase and starts heartbeating.
The manual alternative: set a unique GATEWAY_ID, point the host at the same Supabase project, and run docker compose up -d gateway dispatcher runner.
11. Trust model / security boundaries
HQ is a single-operator platform. The security model reflects that — there is no per-user RLS, no multi-tenant isolation, no request-signing between services. The boundaries that do exist are:

- Supabase service role key is fully trusted. It’s in the UI container, the gateway container, the dispatcher container, and the runner container. Anyone with it can read or write any row in your Supabase. Treat it as a database admin password.
- Supabase anon key + user session gate the UI itself. Auth is Supabase email/password; middleware.ts redirects unauthenticated requests to /login. RLS on every table is "Authenticated full access" — there’s only one tenant, and they get everything.
- GATEWAY_AUTH_TOKEN is the one pre-shared secret between UI and gateway. It gates the files-API exclusively. Constant-time compare in files_api.py. Generate with openssl rand -hex 32; rotate by updating .env on both sides and restarting.
- noVNC password (VNC_PASSWORD, auto-generated if unset) gates the remote desktop. Read it out of /home/openclaw/.openclaw/.vnc-password in the gateway-state volume.
- Docker socket mount in the runner container. The runner binds /var/run/docker.sock from the host so it can run docker compose restart gateway for the restart_gateway / restart_dispatcher / update_all command actions. This is a full root-equivalent on the host. Treat it as such: anyone with command-queue access (i.e. the service role key) can execute arbitrary Docker operations on the gateway host. If the runner process is ever RCE’d, the attacker owns the host. Document, don’t hide.
- Template content is trusted. Templates are seeded into the bare repo and their files are read by agents; the wizard only substitutes a few placeholders. A malicious template is equivalent to a malicious script running on the gateway with the service role key. Only ship templates from sources you trust.
- Per-agent Chrome profile isolation. Each agent has its own --user-data-dir, so cookies and extensions don’t cross agents. There is no stronger sandbox; all agents share the container user (openclaw), the file system, and the D-Bus session.
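The constant-time token compare for GATEWAY_AUTH_TOKEN boils down to Python's hmac.compare_digest (the function name here is ours; files_api.py may structure it differently):

```python
import hmac


def token_ok(presented: str, expected: str) -> bool:
    """Compare the bearer token without leaking timing information.

    hmac.compare_digest takes time proportional to the length of its
    inputs, not to the position of the first mismatch, which defeats
    byte-at-a-time timing attacks against a naive == comparison.
    """
    return hmac.compare_digest(presented.encode(), expected.encode())
```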
12. Supabase as the backbone
Supabase is the backbone. Every piece of coordination — commands, inbox items, heartbeats, audit trails, realtime subscriptions, cross-agent observability — is a row in a Postgres table. Why not a custom backend? Three reasons. First, Realtime + Postgres triggers + RPCs cover the entire messaging surface HQ needs (enqueue, lease, complete, fail, subscribe). Second, hosting Postgres + auth + storage + realtime in one managed service that the operator already owns removes a huge ops burden versus running Redis, Postgres, a WebSocket gateway, and an auth service ourselves. Third, the operator-owned Supabase project is the natural tenant boundary — the multi-project UI is a registry of Supabase URLs, not a multi-tenant schema redesign.

The structures to know:

- agent_commands — command queue consumed by the runner. Schema in 014_agent_commands.sql. The action enum covers provision, update, remove, approve_pairing, restart_gateway, restart_dispatcher, update_all, update_gateway, and the auth_* connection actions. lease_command(p_lease_seconds, p_gateway_slug) is the atomic claim; start_command, complete_command, fail_command report back. Rows persist forever (with stdout/stderr) for the command-history UI.
- agent_inbox_items — background-work queue consumed by agents, not the runner. Inserted by Postgres triggers on tasks (task assignment / reassignment), comments (@-mentions), and contacts (via automation_rules). The dispatcher wakes agents; the agent’s own session claims work via lease_inbox_item(p_agent_id, p_lease_seconds) (see 013_agent_inbox.sql) and reports via complete_inbox_item / fail_inbox_item. dedup_key + a unique constraint prevent duplicates. attempt_count < max_attempts bounds retries before dead-lettering.
- gateways — registry of known gateway hosts. Each row has slug, label, status, last_seen_at, and meta.reachable_urls. Seeded with a default row so single-gateway installs work without setup.
- gateway_registration_tokens — single-use token records minted by the UI when adding a gateway. The plaintext token is shown once in the installer command; the database stores only the hash and expiry metadata.
- agent_usage / agent_budgets — usage source-of-truth and per-agent current-period rollup from 015_usage_budgets.sql. Runtime usage is logged by the HQ bootstrap OpenClaw plugin; hard budget cutoffs are enforced both before replies and before dispatcher wakes.
- agents.reports_to_id — lightweight org chart from 007_agents.sql. The agent_reports_chain RPC lets the UI prevent cycles before saving manager changes.
- Realtime. Both daemons open a WebSocket to /realtime/v1/websocket and subscribe to postgres_changes on agent_commands / agent_inbox_items INSERT. A fallback poll every POLL_INTERVAL (30 s, runner) / RECONCILE_INTERVAL (120 s, dispatcher) catches anything Realtime missed during a reconnect.
- Triggers worth knowing: enqueue_task_assignment enqueues inbox items when a task is assigned; enqueue_comment_mentions does the same for @-mentions; process_contact_automation runs automation_rules on contact inserts/updates. All three are defined in 013_agent_inbox.sql.
13. Extensibility
The UI is designed to be configured, not coded, for most everyday changes:

- Agent templates. Add a directory under templates/ with agent.json, IDENTITY.md, USER.md, TOOLS.md, and a skills/ dir. On next gateway boot it’s seeded as template/<dirname>. No UI deploy needed. Full guidance is in Templates and Agents.
- Custom fields. /dashboard/settings/fields adds/edits rows in field_definitions, keyed per entity type. DynamicFieldGroups renders them in contact/organization forms without code changes.
- Pipeline stages. /dashboard/settings/pipeline writes to pipeline_stages. Status dropdowns, kanban columns, and color swatches all read from this table via usePipelineStages(entityType).
- Task streams. Streams are runtime-created — functional, project, or custom — and hold their own colors. Tasks reference them by FK.
- Automation rules. /dashboard/automations creates automation_rules that the process_contact_automation trigger evaluates on every contact change, enqueuing inbox items when conditions match.
- Provider connections. Settings → Connections enqueues auth commands that the runner executes through OpenClaw. API-key, OAuth paste, device-code, CLI-reuse, and local URL flows share the same command queue.
- Agent hierarchy. Manager/direct-report structure is a regular column on agents. Runtime prompt context is assembled by the HQ bootstrap plugin, not hard-coded into templates.
- Usage budgets. Budget config is stored in agent_budgets; raw usage is append-only in agent_usage. New provider pricing support belongs in the bootstrap plugin’s pricing map and should degrade to unmetered calls when unknown.
- New command actions. Add a case in command_runner.py’s build_command(), extend the command_action enum in the schema migration, and expose it as a server action call in apps/ui/src/app/dashboard/agents/actions.ts. This is the extension point for anything a runner needs to do on the gateway host.
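The build_command() extension point can be sketched as follows (the dispatch shape and argv details here are illustrative, not copied from command_runner.py):

```python
def build_command(action: str, payload: dict) -> list[str]:
    """Map a leased agent_commands row to a shell command (argv list).

    Only two actions are shown; a new action gets a new case here, a new
    command_action enum value in the schema migration, and a server
    action in the UI that enqueues it.
    """
    if action == "update":
        return ["gateway/scripts/update-agent.sh", payload["branch"]]
    if action == "restart_gateway":
        return ["docker", "compose", "restart", "gateway"]
    raise ValueError(f"unknown action: {action}")
```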
14. Where things are going
- Self-hosted hardening. Better migration tooling, stronger validation around project setup, and clearer gateway health diagnostics.
- Hosted offering. Account management, automated Supabase/gateway provisioning, billing, and managed operations in front of the same core runtime.
- Integrations. More MCP-first integrations, richer provider auth UX, optional email/calendar/Slack/Notion flows, and deeper automation primitives.
- Docs site. The markdown docs in this repository should remain the source of truth and can later be rendered at docs.yourhq.ai.
Next reads: Features for the product tour, Networking for the bind-mode / Tailscale / reverse-proxy details, Agents for the agent runtime and template authoring, Configuration for every environment variable, and db/migrations/ for the schema of record.