NVIDIA NemoClaw - Secure OpenShell Sandbox
Detailed Description of NVIDIA NemoClaw: OpenClaw Plugin for OpenShell
Introduction
NVIDIA NemoClaw is an innovative open-source framework designed to simplify the deployment and management of always-on, sandboxed AI assistants built on top of OpenClaw. It integrates seamlessly with NVIDIA OpenShell, a secure runtime environment for autonomous agents, ensuring that inference requests are routed through NVIDIA cloud services while maintaining strict security controls. This architecture is particularly useful for developers and enterprises looking to deploy advanced AI models in controlled, isolated environments without exposing them directly to untrusted networks.
Given its early-stage development status (marked as an alpha release), NemoClaw is still evolving, with interfaces, APIs, and behavior subject to change. However, it provides a robust foundation for securely hosting OpenClaw agents within a sandboxed infrastructure, leveraging NVIDIA’s proprietary cloud models like the nemotron-3-super-120b-a12b for high-performance inference.
Core Features and Architecture
1. Overview of NemoClaw
NemoClaw acts as an intermediary layer between OpenClaw agents and external inference providers, primarily hosted on NVIDIA’s cloud infrastructure. Its primary goals include:
- Sandboxing: Isolating AI agents from the host system to prevent unauthorized access or malicious behavior.
- Secure Inference Routing: Ensuring that all model requests are processed through a controlled network pipeline, reducing attack surfaces.
- Policy Enforcement: Implementing strict network and filesystem restrictions via OpenShell’s security mechanisms.
The project is part of the broader NVIDIA Agent Toolkit, which provides tools for developing autonomous AI agents with minimal risk. Unlike traditional cloud-based AI services, NemoClaw enforces a sandboxed execution model, where agents operate within restricted environments while still benefiting from high-performance inference capabilities.
2. Key Components and Workflow
NemoClaw’s architecture consists of several interconnected components:
A. Plugin (TypeScript CLI)
The plugin is the command-line interface layer that users interact with via the nemoclaw CLI. It provides commands for:
- Onboarding: Setting up a new sandbox environment.
- Deployment: Managing agent instances and their configurations.
- Connecting: Establishing interactive sessions with agents.
Example usage includes running:
curl -fsSL https://nvidia.com/nemoclaw.sh | bash # Install script
nemoclaw onboard # Interactive setup wizard
B. Blueprint (Python Artifact)
The blueprint is a Python-based configuration file that defines the sandbox’s lifecycle, including:
- Policy enforcement: Restricting network egress and filesystem access.
- Inference routing: Configuring how model requests are forwarded to NVIDIA cloud services.
This artifact is versioned and must be verified before deployment. Errors in the blueprint can lead to failures at either the NemoClaw or OpenShell level, requiring checks via:
nemoclaw status # Check NemoClaw health
openshell sandbox list # Inspect underlying sandbox state
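The document does not show the blueprint's actual schema, but its two stated responsibilities (policy enforcement and inference routing) can be sketched as a small Python artifact. Every class and field name below is invented for illustration; consult the blueprint reference for the real format.

```python
# Hypothetical sketch of a NemoClaw-style blueprint artifact.
# All names here are assumptions, not the real schema.
from dataclasses import dataclass, field

@dataclass
class NetworkPolicy:
    # Hosts the sandbox may reach; everything else is blocked.
    allowed_hosts: list[str] = field(
        default_factory=lambda: ["integrate.api.nvidia.com"]  # assumed endpoint host
    )

@dataclass
class FilesystemPolicy:
    # Paths the agent may read and write, per the sandbox policy.
    writable_paths: list[str] = field(default_factory=lambda: ["/sandbox", "/tmp"])

@dataclass
class Blueprint:
    name: str
    model: str
    network: NetworkPolicy = field(default_factory=NetworkPolicy)
    filesystem: FilesystemPolicy = field(default_factory=FilesystemPolicy)

bp = Blueprint(name="my-assistant", model="nvidia/nemotron-3-super-120b-a12b")
print(bp.filesystem.writable_paths)
```

A structure like this makes the artifact easy to version and to hash for the verification step described later in the lifecycle.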
C. Sandbox (Isolated OpenShell Container)
The sandbox is an isolated container running OpenClaw with enforced security policies. It restricts:
- Network access: Preventing unauthorized outbound connections.
- Filesystem operations: Limiting reads/writes to /sandbox and /tmp.
- Process privileges: Blocking privilege escalation via seccomp and Landlock.
When an agent attempts to reach an unlisted host, OpenShell blocks the request and prompts for manual approval in the TUI (Terminal User Interface).
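The allow-or-hold decision described above can be sketched as a pure function. This is not OpenShell's actual implementation; the policy shape and hostnames are assumptions for illustration only.

```python
# Minimal sketch of the egress decision an OpenShell-style gateway makes:
# listed hosts pass through, anything else is held for manual approval.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"integrate.api.nvidia.com"}  # hosts listed in the policy (assumed)

def egress_decision(url: str) -> str:
    """Return 'allow' for listed hosts, 'pending-approval' otherwise."""
    host = urlparse(url).hostname
    return "allow" if host in ALLOWED_HOSTS else "pending-approval"

print(egress_decision("https://integrate.api.nvidia.com/v1/chat/completions"))  # allow
print(egress_decision("https://example.com/data"))  # pending-approval
```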
D. Inference Provider (NVIDIA Cloud)
All inference requests from agents are routed through NVIDIA’s cloud infrastructure via the OpenShell gateway. The primary model used is:
nvidia/nemotron-3-super-120b-a12b (production-grade, requires an API key).
To obtain an API key, users must register on build.nvidia.com and provide it during the nemoclaw onboard setup.
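In a NemoClaw deployment the gateway makes this call for you, but for orientation, here is a hedged sketch of how a key from build.nvidia.com is typically used. It assumes an OpenAI-style chat-completions endpoint at `integrate.api.nvidia.com`; the exact URL and payload shape for this model may differ.

```python
# Hedged sketch: constructing a direct cloud inference request.
# The endpoint URL and payload shape are assumptions; normally the
# OpenShell gateway routes these requests on the agent's behalf.
import json
import os
import urllib.request

def build_request(prompt: str,
                  model: str = "nvidia/nemotron-3-super-120b-a12b") -> urllib.request.Request:
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Hello from the sandbox")
print(req.full_url)  # request is built but not sent here
```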
Installation and Quick Start
Prerequisites
Before installing NemoClaw, ensure the following are met:
- Operating System: Ubuntu 22.04 LTS or later.
- Software:
- Node.js 20+ (recommended: Node.js 22) with npm 10+.
- Docker installed and running.
- NVIDIA OpenShell installed.
Installation Process
The installation is streamlined via a script that:
- Installs missing dependencies (e.g., Node.js).
- Runs an interactive wizard to configure the sandbox, inference policies, and security settings.
Command:
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
After installation, the system generates a summary of the configured environment:
──────────────────────────────────────────────────
Sandbox my-assistant (Landlock + seccomp + netns)
Model nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run: nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs: nemoclaw my-assistant logs --follow
[INFO] === Installation complete ===
Interacting with the Agent
1. Connecting to the Sandbox
Users can interact with their agent via two primary methods:
A. OpenClaw TUI (Terminal User Interface)
The TUI provides an interactive chat interface where users type messages and receive responses:
sandbox@my-assistant:~$ openclaw tui
Example interaction:
User: Hello, how are you?
Agent: I'm functioning well in my sandbox! How can I assist you today?
B. OpenClaw CLI (Command-Line Interface)
For programmatic interactions, users can send single messages and capture responses:
sandbox@my-assistant:~$ openclaw agent --agent main --local -m "hello" --session-id test
Output:
{"response": "Hello! This is a test message from the sandboxed agent."}
How NemoClaw Works: The Lifecycle
The blueprint lifecycle follows four stages:
- Resolve: Load and verify the artifact.
- Verify: Check its integrity (digest).
- Plan: Define required resources.
- Apply: Deploy via OpenShell CLI.
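The four stages can be sketched as plain functions. Since the real blueprint format and OpenShell invocation are not shown in this document, the artifact below is just bytes plus an expected SHA-256 digest, and the apply stage is stubbed.

```python
# Sketch of the resolve/verify/plan/apply lifecycle under stated assumptions.
import hashlib

def resolve(contents: bytes) -> bytes:
    return contents  # stage 1: load the artifact

def verify(artifact: bytes, expected_digest: str) -> None:
    # stage 2: integrity check against a pinned digest
    digest = hashlib.sha256(artifact).hexdigest()
    if digest != expected_digest:
        raise ValueError(f"digest mismatch: {digest}")

def plan(artifact: bytes) -> dict:
    # stage 3: enumerate resources the deployment needs (illustrative names)
    return {"resources": ["sandbox", "netns", "inference-route"]}

def apply(planned: dict) -> str:
    # stage 4 would shell out to the OpenShell CLI; stubbed here
    return f"applied {len(planned['resources'])} resources"

artifact = resolve(b"blueprint-v1")
verify(artifact, hashlib.sha256(artifact).hexdigest())
print(apply(plan(artifact)))  # applied 3 resources
```

Failing the verify stage before anything is deployed is what keeps a corrupted or tampered blueprint from reaching the sandbox.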
If an error occurs, it may stem from either NemoClaw or OpenShell. Users can debug by running:
nemoclaw status # Check NemoClaw-level health
openshell sandbox list # Inspect sandbox state
Inference and Security
1. Inference Routing
All model requests from agents are intercepted by OpenShell and routed to NVIDIA’s cloud provider transparently. This ensures:
- No direct exposure: Agents cannot access external APIs without approval.
- Controlled inference: Requests are validated before processing.
The primary model (nemotron-3-super-120b-a12b) is optimized for production use and requires an API key obtained from NVIDIA’s cloud platform.
2. Protection Layers
NemoClaw enforces multiple security layers:
| Layer | Protection | Application Time |
|----------------|-------------------------------------|------------------------|
| Network | Blocks unauthorized outbound traffic | Hot-reloadable at runtime |
| Filesystem | Restricts access to /sandbox and /tmp | Locked at sandbox creation |
| Process | Prevents privilege escalation | Locked at sandbox creation |
| Inference | Routes model API calls to controlled backends | Hot-reloadable at runtime |
When an agent attempts to reach an unlisted host, OpenShell blocks the request and displays it in the TUI for manual approval.
Key Commands
1. Host Commands (nemoclaw)
These commands manage sandbox environments from the host machine:
| Command | Description |
|-----------------------------|-----------------------------------------------------------------------------|
| nemoclaw onboard | Interactive setup wizard (gateway, providers, sandbox). |
| nemoclaw deploy (experimental) | Deploy to a remote GPU instance via Brev. |
| nemoclaw connect | Open an interactive shell inside the sandbox. |
| openshell term | Launch OpenShell TUI for monitoring and approvals. |
| nemoclaw start/stop/status | Manage auxiliary services (Telegram bridge, tunnel). |
2. Plugin Commands (openclaw nemoclaw)
These commands are under active development but provide additional functionality:
| Command | Description |
|----------------------------------------------|-----------------------------------------------------------------------------|
| openclaw nemoclaw launch [--profile ...] | Bootstrap OpenClaw inside an OpenShell sandbox. |
| openclaw nemoclaw status | Show sandbox health, blueprint state, and inference. |
| openclaw nemoclaw logs [-f] | Stream execution and sandbox logs. |
Note: The openclaw nemoclaw plugin is still evolving; users should primarily rely on the nemoclaw host CLI for stability.
Limitations and Known Issues
Despite its promise, NemoClaw currently faces some limitations:
- Plugin Commands: Some openclaw nemoclaw commands are not yet fully functional.
- Setup Complexity: Manual workarounds may be required on certain platforms.
- Alpha Status: Interfaces, APIs, and behavior may change without notice.
Users are encouraged to report issues via GitHub discussions and to contribute feedback as the project evolves.
Learning More
For deeper insights into NemoClaw’s architecture and capabilities, users can explore the official documentation:
- Overview: Overview of NemoClaw’s purpose.
- How It Works: Plugin, blueprint, and sandbox lifecycle.
- Architecture: Detailed plugin structure and sandbox environment.
- Inference Profiles: NVIDIA cloud inference configuration.
- Network Policies: Egress control and policy customization.
- CLI Commands: Full command reference.
Conclusion
NVIDIA NemoClaw represents a significant advancement in secure AI agent deployment, offering a sandboxed environment for OpenClaw assistants while leveraging NVIDIA’s cloud infrastructure for high-performance inference. By enforcing strict security policies and isolating agents from the host system, it minimizes risks associated with untrusted environments.
While still in its alpha phase, NemoClaw provides a robust foundation for developers and enterprises looking to experiment with autonomous AI agents in controlled settings. Its modular architecture allows for future extensions, including support for additional inference providers or custom policy enforcement rules.
For users seeking to deploy OpenClaw agents securely, NemoClaw offers a streamlined installation process and interactive tools for managing sandboxed environments. However, due to its early-stage development, users should approach it with caution and consult the documentation for updates on evolving features and limitations.
Visual Representation of Key Components
A conceptual breakdown of how NemoClaw integrates its components:
- User Interface (TUI/CLI):
- A terminal-based dashboard for interacting with sandboxed agents.
- Sandbox Architecture:
- Depicts the layered security model (Network, Filesystem, Process, Inference).
- Inference Pipeline:
- Shows how agent requests are routed through OpenShell to NVIDIA cloud models.
- Installation Flow:
- A flowchart of the installation process, from dependency checks to sandbox creation.
Repository: https://github.com/NVIDIA/NemoClaw