# NemoClaw
Status: 🟡 Documented (inherits the OpenClaw schema inside the NVIDIA sandbox)

NemoClaw is NVIDIA's sandbox around OpenClaw: it reuses OpenClaw's MCP schema and CLI verbatim inside the sandbox boundary. Promote to ✅ once a maintainer has walked the Phase 5.5 loop inside a NemoClaw sandbox specifically; the OpenClaw walkthrough does not transfer automatically, because sandbox networking and credential isolation differ.
## Topology
NemoClaw is Topology A with a sandbox boundary: the agent runs in NVIDIA's NeMo sandbox, which wraps OpenClaw and adds its own credential and network isolation layer. From helmdeck's perspective the wiring is the same as the OpenClaw sidecar pattern, but the sandbox imposes two extra constraints:

- Network: the sandbox may not give the OpenClaw process inside it access to the host's docker bridge by default. You may need a NemoClaw-specific network passthrough flag (consult NVIDIA's NemoClaw docs; this section will gain a concrete recipe once a maintainer has run it).
- Credentials: the helmdeck JWT and any LLM provider key must be passed in via NemoClaw's secret-injection mechanism, not via plain env vars in the OpenClaw container.
NemoClaw is alpha as of helmdeck v0.6.0, and its configuration surface may shift. Treat this page as a pointer; the authoritative schema for the inner OpenClaw config is still openclaw.md.
## What inherits from OpenClaw
- The `~/.openclaw/openclaw.json` schema, including the `agents.list[].mcp.servers[]` section
- The `openclaw mcp` CLI commands
- The two MCP transport options: stdio (`command`) and URL-based (`url`)
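The inherited config surface is easiest to see as a fragment. Below is a hedged sketch of a single URL-based server entry; the `url`/`headers` shape matches the `openclaw mcp set` payload used in the walkthrough below, but the exact nesting of the surrounding `agents.list[].mcp.servers[]` keys is defined in openclaw.md and may differ:

```json
{
  "mcp": {
    "servers": [
      {
        "url": "http://<helmdeck-host>:3000/api/v1/mcp/sse",
        "headers": { "Authorization": "Bearer <jwt>" }
      }
    ]
  }
}
```

NemoClaw consumes this fragment unchanged; only the way it reaches the sandbox (secret injection, egress rules) is NemoClaw-specific.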
## What does NOT inherit
- The `./scripts/docker/setup.sh` flow: NemoClaw has its own bootstrap
- The host networking model: the sandbox may require explicit egress rules
- The credential storage path: secrets live in NVIDIA's sandbox vault, not on the host filesystem
## Prerequisites
- An NVIDIA GPU host with the NeMo sandbox installed
- NVIDIA NemoClaw / NeMo Agent CLI access
- A running helmdeck stack on the same host (or reachable from the sandbox via configured egress)
## Walkthrough
Until a maintainer has run NemoClaw end-to-end, follow these steps as scaffolding:
1. Install helmdeck on the host: `git clone … && ./scripts/install.sh`
2. Install NemoClaw per NVIDIA's instructions (URL TBD; see https://github.com/NVIDIA or NVIDIA developer docs for the current path).
3. Inside the NemoClaw sandbox, register the helmdeck MCP server using the OpenClaw CLI: `openclaw mcp set helmdeck '{"url":"http://<helmdeck-host>:3000/api/v1/mcp/sse","headers":{"Authorization":"Bearer <jwt>"}}'`, or hand-edit the inner `openclaw.json` per the schema in openclaw.md §4b.
4. Pass the helmdeck JWT in via NemoClaw's secret-injection mechanism (NOT a plain env var).
5. From inside the sandbox, verify the helmdeck control plane is reachable: `curl http://<host-or-bridge>:3000/healthz`.
6. Walk the Phase 5.5 loop as documented in openclaw.md §6.
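Steps 3 and 5 above can be sketched as a small shell helper. The helper itself and the `HELMDECK_HOST` / `HELMDECK_JWT` variables are hypothetical conveniences, not part of helmdeck; only the JSON payload shape, the `openclaw mcp set` invocation, and the `/healthz` path come from the steps above:

```shell
#!/bin/sh
# Build the MCP server entry for helmdeck.
# HELMDECK_HOST / HELMDECK_JWT are placeholders -- substitute real values.
HELMDECK_HOST="${HELMDECK_HOST:-helmdeck.internal}"
HELMDECK_JWT="${HELMDECK_JWT:-REPLACE_ME}"

# Same JSON shape as the hand-written `openclaw mcp set` payload in step 3.
CONFIG=$(printf '{"url":"http://%s:3000/api/v1/mcp/sse","headers":{"Authorization":"Bearer %s"}}' \
  "$HELMDECK_HOST" "$HELMDECK_JWT")

echo "$CONFIG"

# Inside the sandbox, you would then run:
#   openclaw mcp set helmdeck "$CONFIG"
#   curl -fsS "http://$HELMDECK_HOST:3000/healthz"
```

Keeping the payload in a variable makes it easy to feed the same string to either the CLI path or a hand-edited `openclaw.json`.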
When the walkthrough lands, replace this section with the concrete NemoClaw-specific recipe and flip the status banner to ✅.
## Why NemoClaw is intentionally not a separate connect.go target
`/api/v1/connect/openclaw` returns the OpenClaw config shape. NemoClaw consumes that exact same shape inside its sandbox: there is no NemoClaw-specific JSON to generate, only sandbox-specific network and credential plumbing that lives outside helmdeck's connect endpoint. This is a deliberate non-decision; keeping a separate target would imply a schema divergence that doesn't exist.
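In practice this means a NemoClaw setup fetches from the OpenClaw endpoint; there is no `/connect/nemoclaw` URL to construct. A hedged sketch (the `BASE` variable is an assumption, and the injection step is deliberately left as a comment because the mechanism is NemoClaw-specific):

```shell
#!/bin/sh
# There is no NemoClaw-specific connect target: reuse the OpenClaw one.
BASE="${BASE:-http://helmdeck.internal:3000}"
CONNECT_URL="$BASE/api/v1/connect/openclaw"
echo "$CONNECT_URL"

# Fetch the OpenClaw-shaped config, then inject the resulting JSON into the
# sandbox via NemoClaw's secret/file mechanism -- unchanged:
#   curl -fsS -H "Authorization: Bearer <jwt>" "$CONNECT_URL" > openclaw-config.json
```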
## References
- OpenClaw MCP schema (canonical)
- NVIDIA NeMo / NemoClaw docs: search https://github.com/NVIDIA and https://docs.nvidia.com