NemoClaw

Status: 🟡 Documented (inherits the OpenClaw schema inside the NVIDIA sandbox)

NemoClaw is NVIDIA's sandbox around OpenClaw; it reuses OpenClaw's MCP schema and CLI verbatim inside the sandbox boundary. Promote to ✅ once a maintainer has walked the Phase 5.5 loop inside a NemoClaw sandbox specifically (the OpenClaw walkthrough does not transfer automatically: sandbox networking and credential isolation differ).

Topology

NemoClaw is Topology A with a sandbox boundary: the agent runs in NVIDIA's NeMo sandbox, which wraps OpenClaw and adds its own credential and network isolation layer. From helmdeck's perspective, the wiring is the same as the OpenClaw sidecar pattern, but the sandbox imposes two extra constraints:

  1. Network: the sandbox may not give the OpenClaw process inside it access to the host's docker bridge by default. You may need a NemoClaw-specific network passthrough flag (consult NVIDIA's NemoClaw docs; this section will gain a concrete recipe once a maintainer has run it).
  2. Credentials: the helmdeck JWT and any LLM provider key must be passed in via NemoClaw's secret-injection mechanism, not via plain env vars in the OpenClaw container.

NemoClaw is alpha as of helmdeck v0.6.0, and its configuration surface may shift. Treat this page as a pointer; the authoritative schema for the inner OpenClaw config is still openclaw.md.

What NemoClaw inherits from OpenClaw

  • ~/.openclaw/openclaw.json schema, including the agents.list[].mcp.servers[] section
  • openclaw mcp CLI commands
  • The two MCP transport options: stdio (command) and URL-based (url)
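
As an illustration only, here is a minimal sketch of how the two transports might sit side by side in the inner ~/.openclaw/openclaw.json. The nesting follows the agents.list[].mcp.servers[] path above; the command value, server names, host, and JWT are placeholder assumptions, and openclaw.md remains the authoritative schema:

```json
{
  "agents": {
    "list": [
      {
        "mcp": {
          "servers": [
            { "name": "local-tool", "command": "./my-mcp-server" },
            {
              "name": "helmdeck",
              "url": "http://<helmdeck-host>:3000/api/v1/mcp/sse",
              "headers": { "Authorization": "Bearer <jwt>" }
            }
          ]
        }
      }
    ]
  }
}
```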

What NemoClaw does NOT inherit

  • The ./scripts/docker/setup.sh flow (NemoClaw has its own bootstrap)
  • The host networking model (the sandbox may require explicit egress rules)
  • The credential storage path (secrets live in NVIDIA's sandbox vault, not on the host filesystem)

Prerequisites

  • An NVIDIA GPU host with the NeMo sandbox installed
  • NVIDIA NemoClaw / NeMo Agent CLI access
  • A running helmdeck stack on the same host (or reachable from the sandbox via configured egress)

Walkthrough

Until a maintainer has run NemoClaw end-to-end, follow these steps as scaffolding:

  1. Install helmdeck on the host: git clone … && ./scripts/install.sh
  2. Install NemoClaw per NVIDIA's instructions (URL TBD; see https://github.com/NVIDIA or NVIDIA developer docs for the current path).
  3. Inside the NemoClaw sandbox, register the helmdeck MCP server using the OpenClaw CLI: openclaw mcp set helmdeck '{"url":"http://<helmdeck-host>:3000/api/v1/mcp/sse","headers":{"Authorization":"Bearer <jwt>"}}'. Alternatively, hand-edit the inner openclaw.json per the schema in openclaw.md §4b.
  4. Pass the helmdeck JWT in via NemoClaw's secret-injection mechanism (NOT a plain env var).
  5. From inside the sandbox, verify the helmdeck control plane is reachable: curl http://<host-or-bridge>:3000/healthz.
  6. Walk the Phase 5.5 loop as documented in openclaw.md §6.

When the walkthrough lands, replace this section with the concrete NemoClaw-specific recipe and flip the status banner to ✅.

Why NemoClaw is intentionally not a separate connect.go target

/api/v1/connect/openclaw returns the OpenClaw config shape. NemoClaw consumes that exact same shape inside its sandbox; there is no NemoClaw-specific JSON to generate, only sandbox-specific network and credential plumbing that lives outside helmdeck's connect endpoint. This is a deliberate non-decision: adding a separate target would imply a schema divergence that does not exist.

References