NanoClaw

Your personal AI agent. Secure. Lightweight. Yours.

The official website for NanoClaw, a secure, lightweight personal AI agent. NanoClaw runs in isolated containers and is built to be understood and customized for your own needs.

28.4k+ stars on GitHub

Why NanoClaw?

NanoClaw delivers the same core functionality in a codebase you can actually understand.

                       NanoClaw                               OpenClaw
Source files           15                                     3,680
Lines of code          ~3,900                                 434,453
Dependencies           <10                                    70
Config files           0                                      53
Time to understand     8 minutes                              1–2 weeks
Security model         OS container isolation                 Application-level checks
Architecture           Single process + isolated containers   Single process, shared memory

What It Supports

Everything NanoClaw supports out of the box, and nothing you don't need.

💬

Multi-channel messaging

WhatsApp, Telegram, Discord, Slack, Microsoft Teams, iMessage, Matrix, Google Chat, Webex, Linear, GitHub, WeChat, and email. Installed on demand with /add-<channel> skills. Run one or many at the same time.

🔒

Container isolation

Agents are sandboxed in Docker (macOS, Linux, WSL2), with optional Docker Sandboxes micro-VM isolation or Apple Container as a macOS-native opt-in.

🔀

Flexible isolation V2

Connect each channel to its own agent for full privacy, share one agent across many channels for unified memory, or fold multiple channels into a single shared session. Pick per channel.
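The three wiring modes above can be sketched as a per-channel mapping. This is an illustrative model only — the type and field names (`Wiring`, `agentGroup`, `sharedSession`) are assumptions for the sketch, not NanoClaw's actual schema:

```typescript
// Illustrative sketch: each channel points at an agent group. Channels
// that share an agent group share that agent's memory; an optional
// sharedSession folds channels into one session.
type Wiring = { channel: string; agentGroup: string; sharedSession?: string };

const wiring: Wiring[] = [
  { channel: "whatsapp", agentGroup: "personal" },  // private agent
  { channel: "telegram", agentGroup: "personal" },  // same agent: unified memory
  { channel: "slack", agentGroup: "work", sharedSession: "standup" }, // folded session
];

// Resolve which agent group handles a message from a given channel.
function agentFor(channel: string): string | undefined {
  return wiring.find((w) => w.channel === channel)?.agentGroup;
}
```

The "pick per channel" point falls out of the shape: isolation versus shared memory is decided one row at a time.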

🧠

Per-agent workspace

Each agent group has its own CLAUDE.md, its own memory, its own container, and only the mounts you allow. Nothing crosses the boundary unless you explicitly wire it in.

Scheduled tasks

Recurring jobs that run Claude and message you back. Morning briefings, weekly reviews, and more.
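A recurring job reduces to "last run plus interval, fire when due." A minimal sketch, assuming a simple interval-based schema (`Job`, `intervalMs`, `lastRun` are illustrative names, not NanoClaw's actual fields):

```typescript
// Minimal sketch of a recurring-job record and its due check.
type Job = { name: string; intervalMs: number; lastRun: number };

const DAY_MS = 24 * 60 * 60 * 1000;

function nextRun(job: Job): number {
  return job.lastRun + job.intervalMs;
}

// Jobs whose next run time has arrived, e.g. a morning briefing.
function dueJobs(jobs: Job[], now: number): Job[] {
  return jobs.filter((j) => nextRun(j) <= now);
}

const jobs: Job[] = [{ name: "morning-briefing", intervalMs: DAY_MS, lastRun: 0 }];
```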

🧩

Skills over features

Trunk ships the registry and infrastructure. Install channels and providers with /add-<name>; the skill copies exactly the module(s) you need into your fork.

AI-native, hybrid by design

A fast scripted install for the happy path; Claude Code takes over when a step needs judgment. No dashboards or debug UI beyond setup — describe the problem in chat.

🔑

Credential security

Agents never hold raw API keys. Outbound requests route through OneCLI's Agent Vault, which injects credentials at request time and enforces per-agent policies and rate limits.

Architecture

A single Node host orchestrates per-session agent containers. Two SQLite files per session, each with exactly one writer — no cross-mount contention, no IPC, no stdin piping.

Messaging Apps (WhatsApp, Slack, …) → Router → inbound.db → Container (Bun + Agent SDK) → outbound.db → Delivery → back to the channel as a streamed reply

Single host process

One Node host routes inbound via the entity model (user → messaging group → agent group → session), writes to inbound.db, and wakes the container. No microservices, no message brokers.
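The entity-model chain can be sketched as three lookups. The maps below stand in for the central DB purely for illustration — real NanoClaw resolves this through SQLite, and the key shapes here are assumptions:

```typescript
// Sketch of the entity-model resolution described above:
// user → messaging group → agent group → session.
const messagingGroupOf = new Map([["user:alice|whatsapp", "mg-family"]]);
const agentGroupOf = new Map([["mg-family", "ag-home"]]);
const sessionOf = new Map([["ag-home", "session-42"]]);

// Walk the chain; any missing link means the message is unroutable.
function resolveSession(userId: string, channel: string): string | undefined {
  const mg = messagingGroupOf.get(`${userId}|${channel}`);
  const ag = mg !== undefined ? agentGroupOf.get(mg) : undefined;
  return ag !== undefined ? sessionOf.get(ag) : undefined;
}
```

Once the session is known, the host appends the message to that session's inbound.db and wakes the container — no broker in between.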

Per-agent-group containers

Each agent group runs in its own container with its own CLAUDE.md, memory, skills, and only the mounts you allow. Nothing crosses the boundary unless you explicitly wire it in.

Credential isolation

Outbound HTTPS routes through OneCLI's Agent Vault, which injects credentials at request time and enforces per-agent policies and rate limits. Agents never hold raw API keys.
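Request-time injection can be sketched in-process. Note the hedge: the real Agent Vault operates at the HTTPS proxy layer, outside the agent's container; the vault schema and function names below are illustrative assumptions:

```typescript
// Sketch of request-time credential injection with a per-agent policy.
type Policy = { key: string; maxPerMinute: number; used: number };

const vault = new Map<string, Policy>([
  ["ag-home", { key: "sk-live-placeholder", maxPerMinute: 60, used: 0 }],
]);

// Called at the proxy boundary, never inside the agent container.
function authorize(agentGroup: string, headers: Record<string, string>) {
  const policy = vault.get(agentGroup);
  if (!policy) throw new Error(`no policy for ${agentGroup}`);
  if (policy.used >= policy.maxPerMinute) throw new Error("rate limit exceeded");
  policy.used++;
  // The agent never saw policy.key; it is attached only here.
  return { ...headers, Authorization: `Bearer ${policy.key}` };
}
```

The key property is that a compromised agent has nothing to exfiltrate: the credential exists only on the far side of the proxy.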

Self-registering extensions

Channels and alternative providers self-register at startup. Trunk ships the registry and the Chat SDK bridge; adapters themselves are skill-installed per fork.
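Self-registration is a plain registry pattern: each skill-installed module calls a register function as a side effect of being loaded. A minimal sketch — the adapter interface and names are assumptions for illustration, not NanoClaw's actual API:

```typescript
// Sketch of a startup-time adapter registry.
type ChannelAdapter = {
  name: string;
  send: (to: string, text: string) => void;
};

const registry = new Map<string, ChannelAdapter>();

function register(adapter: ChannelAdapter): void {
  registry.set(adapter.name, adapter);
}

// A skill-installed adapter registers itself when its module loads;
// the trunk never hard-codes a channel list.
register({
  name: "telegram",
  send: (to, text) => console.log(`[telegram -> ${to}] ${text}`),
});
```

Because the trunk only ships the registry, `/add-<channel>` just drops in a module whose import triggers `register` — no central switch statement to edit.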

Key Files

src/index.ts — Entry point — DB init, channel adapters, delivery polls, sweep
src/container-runner.ts — Spawns per-agent-group containers, OneCLI credential injection
src/router.ts — Inbound routing: messaging group → agent group → session → inbound.db
src/delivery.ts — Polls outbound.db, delivers via adapter, handles system actions
src/db/ — Central DB — users, roles, agent groups, messaging groups, wiring, migrations
src/host-sweep.ts — 60s sweep — stale detection, due-message wake, recurrence
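The sweep in src/host-sweep.ts reduces to two passes over sessions: detect stale ones and wake containers with due messages. A sketch under assumed names and thresholds (the `Session` shape and the 30-minute idle cutoff are illustrative, not NanoClaw's actual values):

```typescript
// Sketch of one sweep pass: stale detection plus due-message wake.
type Session = { id: string; lastActivity: number; dueAt?: number };

const STALE_MS = 30 * 60 * 1000; // assumed: 30 min idle counts as stale

function sweep(sessions: Session[], now: number) {
  const stale = sessions
    .filter((s) => now - s.lastActivity > STALE_MS)
    .map((s) => s.id);
  const wake = sessions
    .filter((s) => s.dueAt !== undefined && s.dueAt <= now)
    .map((s) => s.id);
  return { stale, wake };
}
```

Running this every 60 seconds from a single host process is what lets scheduled tasks and recurrence work without a separate job daemon.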

Philosophy

The principles that shape every NanoClaw decision.

🔍

Small enough to understand

One process, a few source files, no microservices. If you want to understand the full codebase, ask Claude Code to walk you through it.

🛡️

Secure by isolation

Agents run in Linux containers and can only see what's explicitly mounted. Bash access is safe because commands run inside the container, not on your host.

👤

Built for the individual user

NanoClaw isn't a monolithic framework; it's software that fits each user's exact needs. Instead of becoming bloatware, it's designed to be bespoke. Fork it and have Claude Code shape it to you.

🤖

AI-native, hybrid by design

The install and onboarding flow is scripted, fast, and deterministic. When a step needs judgment — a failed install, a guided decision, a customization — control hands off to Claude Code seamlessly.

🧩

Skills over features

Trunk ships the registry and infrastructure, not channel adapters or alternative providers. Channels live on a channels branch, providers on a providers branch. Run /add-telegram, /add-opencode, etc., and the skill copies exactly what you need into your fork.

Best harness, best model

Natively runs Claude Code via the official Claude Agent SDK. Other providers are drop-in options: /add-codex (OpenAI), /add-opencode (OpenRouter, Google, DeepSeek, and more), /add-ollama-provider (local open-weight models). Provider is configurable per agent group.

Get Started with NanoClaw in 3 Lines

Clone NanoClaw, enter the directory, and run the install script.

Terminal
$ git clone https://github.com/qwibitai/nanoclaw.git nanoclaw-v2
$ cd nanoclaw-v2
$ bash nanoclaw.sh

nanoclaw.sh walks you from a fresh machine to a named agent you can message. It installs Node, pnpm, and Docker if missing, registers your Anthropic credential with OneCLI, builds the agent container, and pairs your first channel. If a step fails, Claude Code is invoked automatically to diagnose and resume.

Requirements

macOS, Linux, or Windows (WSL2)
Node.js 20+ & pnpm 10+
Claude Code
Docker (Desktop or Engine)


FAQ

What is NanoClaw?

NanoClaw is a lightweight, open-source personal AI agent that runs on your own machine. It connects to messaging apps like WhatsApp, Telegram, Slack, Discord, Microsoft Teams, and more, runs every agent group inside an isolated Docker container, and routes credentials through OneCLI's Agent Vault so agents never hold raw API keys. Built on the Claude Agent SDK and designed for people who want to own and fully control their AI assistant.

How is NanoClaw different from OpenClaw?

OpenClaw is a monolithic framework with thousands of source files and dozens of dependencies. NanoClaw takes the opposite approach: ~15 source files, a single Node.js process, and real OS-level container isolation instead of application-level permission checks. If you want something you can fully audit, understand, and customize, NanoClaw is built for that.

Is NanoClaw secure?

Security is a core design principle. Each agent group runs inside an isolated Linux container with its own filesystem and process space, accessing only directories you explicitly mount. Credentials never enter the container — outbound API requests route through OneCLI's Agent Vault, which injects authentication at the proxy level and supports rate limits and per-agent policies. The codebase is small enough that you can realistically audit everything it does.

Do I need to know how to code to use NanoClaw?

No. Run bash nanoclaw.sh and the installer handles dependencies, credentials, containers, and pairing your first channel. When something needs judgment, Claude Code takes over to diagnose and resume. If you want to customize behavior later, describe what you want in plain language and Claude Code modifies the codebase for you. Comfort with a terminal and git clone is expected.

Is NanoClaw free?

NanoClaw itself is completely free and open source under the MIT license. However, it runs on the Claude Agent SDK, which requires a Claude API key or a Claude Code subscription. The cost depends on how much you use it. NanoClaw is designed to be lightweight in token usage, but the underlying AI usage is billed by Anthropic.

What messaging apps does NanoClaw support?

WhatsApp, Telegram, Discord, Slack, Microsoft Teams, iMessage, Matrix, Google Chat, Webex, Linear, GitHub, WeChat, and email via Resend. Channels are installed on demand with /add-<channel> skills, so you only carry the adapters you actually use. Run one or many at the same time, each wired to its own agent or sharing one — your choice per channel.

What container runtimes are supported?

Docker is the default on macOS, Linux, and Windows (via WSL2). For additional isolation, Docker Sandboxes run each container inside a micro VM. On macOS you can optionally switch to Apple Container via /convert-to-apple-container for a lighter-weight native runtime.

Can I run NanoClaw on Linux or Windows?

Yes. Docker works on macOS, Linux, and Windows (via WSL2). The requirements are Node.js 20+, pnpm 10+, Docker, and Claude Code — the installer (bash nanoclaw.sh) will install Node, pnpm, and Docker for you if they're missing.

How do I set up and configure NanoClaw?

Clone the repo and run bash nanoclaw.sh. The script walks you from a fresh machine to a named agent you can message — installing dependencies, registering your Anthropic credential with OneCLI, building the agent container, and pairing your first channel. If a step fails, Claude Code is invoked automatically to diagnose and resume from where it broke. No manual configuration files. For ongoing changes, describe what you want and Claude Code modifies the codebase directly.

How does NanoClaw compare to other AI agent frameworks?

Most AI agent frameworks are designed for teams building products. They're large, complex, and require significant investment to understand. NanoClaw is designed for individuals who want a personal AI assistant that they fully own and control. It runs as a single Node.js process, uses real container isolation rather than application-level sandboxing, and is small enough to understand completely. It runs on the Claude Agent SDK, giving you direct access to Claude's capabilities without abstraction layers.

Is NanoClaw open source?

Yes. NanoClaw is fully open source under the MIT license. The entire codebase is available on GitHub, and contributions are welcome. The project encourages forking and customization. The philosophy is that your personal AI agent should be working software tailored to your exact needs, not a generic framework you configure.