Early Preview — March 2026

Run Autonomous AI Agents
More Safely.

NVIDIA NemoClaw is an open-source reference stack that adds privacy and security controls to OpenClaw. Deploy always-on, self-evolving agents with policy-enforced guardrails — one command.

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

Powered by

NVIDIA · OpenClaw · OpenShell · Nemotron · NVIDIA NIM

Built for Secure Agent Deployment

NemoClaw wraps OpenClaw in NVIDIA's security stack — so agents run in hardened sandboxes with policy-enforced behavior, not just hope.

🛡️

Policy-Enforced Security

Every network request, file access, and inference call is governed by declarative policy. Landlock, seccomp, and network namespaces by default.
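As an illustration only — the file name, keys, and values below are hypothetical, not a documented NemoClaw schema — a declarative policy of this kind might look like:

```yaml
# policy.yaml — hypothetical sketch; key names are illustrative, not official
network:
  egress:
    allow:
      - api.openai.com:443        # explicit allow-list; everything else denied
filesystem:
  read:  [/workspace]             # Landlock restricts reads to the workspace
  write: [/workspace/output]      # and writes to a single output directory
syscalls:
  profile: default-deny           # seccomp profile applied to the sandbox
```

The point is the shape, not the syntax: one versioned artifact declares what the agent may reach, and the sandbox enforces it.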

🔒

Privacy Router

Route sensitive queries to local open models (Nemotron) and frontier tasks to cloud providers — all within defined security boundaries.

One-Command Deploy

Install, configure, and launch a sandboxed agent in one step. The installer handles Node.js, OpenShell, sandbox creation, and inference setup.

🧩

Any Model, Any Provider

NVIDIA Endpoints, OpenAI, Anthropic, or any OpenAI-compatible API. Switch providers without changing agent code.

📦

Sandboxed Execution

Agents run inside isolated OpenShell containers with controlled egress. No data leaks, no unexpected API calls, no filesystem escape.

🔄

Self-Evolving Agents

Agents can learn and adapt within guardrails. NemoClaw ensures evolution stays within policy boundaries — smart but contained.

How It Works

Four layers between your agent and the outside world.

🧩
LAYER 01

Plugin

TypeScript CLI commands for launch, connect, status, and logs.

📋
LAYER 02

Blueprint

Versioned Python artifact orchestrating sandbox, policy, and inference.

📦
LAYER 03

Sandbox

Isolated OpenShell container with policy-enforced egress & filesystem.

🧠
LAYER 04

Inference

Provider-routed model calls intercepted and governed by OpenShell gateway.

Supported Inference Providers

Connect agents to frontier models or run open models locally for maximum privacy.

💚

NVIDIA Endpoints

Nemotron, curated hosted models via NIM

🤖

OpenAI

GPT-4o, GPT-4.1, custom model entry

🧠

Anthropic

Claude Opus, Sonnet, custom endpoints

🔌

OpenAI-Compatible

Ollama, vLLM, LM Studio, any proxy
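Any OpenAI-compatible server can be targeted by overriding the standard base-URL and API-key environment variables. A minimal sketch, assuming a local Ollama server (Ollama exposes an OpenAI-compatible API on port 11434 under `/v1` by default); these are OpenAI-SDK conventions, not NemoClaw-specific flags:

```shell
# Point any OpenAI-compatible client at a local Ollama server.
# Ollama listens on 11434 and serves the OpenAI-style API under /v1.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # Ollama ignores the key, but clients require one

echo "Routing inference to: $OPENAI_BASE_URL"
# Routing inference to: http://localhost:11434/v1
```

Swapping providers is then just a change of environment, which is what lets agent code stay untouched.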

Get Running in 3 Steps

From zero to sandboxed agent in under 5 minutes.

1

Install NemoClaw

Downloads OpenShell, sets up Node.js if needed, and launches the guided onboarding wizard.

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
2

Connect to Sandbox

Drop into a secure sandbox shell where your agent lives — Landlock + seccomp + network namespaces active.

nemoclaw my-assistant connect
3

Chat or Send a Task

Open the TUI for interactive chat, or fire off a single agent turn via CLI.

openclaw agent --agent main --local -m "hello" --session-id test

OpenClaw vs NemoClaw

Same agent power — NemoClaw adds the security layer that production deployments need.

OpenClaw

Open-source AI agent platform

  • Multi-channel (Telegram, Discord, Slack…)
  • Plugin-based agent skills
  • Multi-model support
  • Cron, memory, sub-agents
  • No sandbox isolation
  • No policy enforcement
  • No built-in privacy controls
Enhanced

NemoClaw

OpenClaw + NVIDIA security stack

  • Everything in OpenClaw
  • OpenShell sandbox isolation
  • Policy-enforced network/filesystem
  • Privacy routing (local + cloud)
  • Landlock + seccomp + netns
  • Local Nemotron model support
  • One-command deployment

System Requirements

Minimum and recommended specs for running NemoClaw.

Hardware

CPU: 4 vCPU (min)
RAM: 8 GB (16 GB rec.)
Disk: 20 GB free (40 GB rec.)
GPU: Optional (for local models)

Software

OS: Ubuntu 22.04 LTS+
Node.js: 22.16+
Container: Docker (or Colima)
OpenShell: Installed via setup
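A quick preflight sketch against the Node.js minimum above. The version string here is hardcoded for illustration; in practice you would substitute `"$(node --version)"`:

```shell
# Hypothetical preflight check: compare a Node.js version string against
# the documented minimum major version (22). Replace the hardcoded value
# with "$(node --version)" on a real machine.
ver="v22.16.0"
major="${ver#v}"          # strip the leading "v"
major="${major%%.*}"      # keep only the major version number
if [ "$major" -ge 22 ]; then
  echo "Node.js OK ($ver)"
else
  echo "Node.js too old ($ver) — need 22.16+"
fi
# Node.js OK (v22.16.0)
```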

Join the Ecosystem

NemoClaw is open source and community-driven. Alpha software — feedback shapes the roadmap.