Deploying OpenClaw on NVIDIA Jetson Orin Nano

Overview

This guide walks through deploying OpenClaw, an open-source self-hosted AI assistant, on an NVIDIA Jetson Orin Nano. The Jetson Orin Nano delivers 67 TOPS of AI performance at just 15 watts, making it ideal for an always-on personal AI assistant.

OpenClaw connects to messaging platforms like Telegram, WhatsApp, Discord, and Slack, and can run local AI models via Ollama with full GPU acceleration. This guide covers both cloud API and local model configurations, based on real deployment experience with OpenClaw 2026.2.16.

Hardware Requirements

Component · Specification
Board · NVIDIA Jetson Orin Nano (8 GB recommended)
AI Performance · 67 TOPS, 1024 CUDA cores
Storage · 1 TB NVMe SSD (recommended) or MicroSD
Power Draw · ~15 W under full AI load
Operating System · JetPack 6.x (Ubuntu-based)
Network · Ethernet or WiFi

Step 1 — Flash JetPack OS

Download the latest JetPack image from developer.nvidia.com/jetpack and flash it to your storage medium.

Option A: MicroSD Card

  • Download the JetPack SD card image
  • Flash using Balena Etcher or Raspberry Pi Imager
  • Insert the card and power on the Jetson

Option B: NVMe SSD (Recommended)

  • Use NVIDIA SDK Manager on a Linux host machine
  • Connect Jetson via USB-C and flash directly to NVMe

Complete the initial Ubuntu setup (username, password, network configuration) on first boot.

Step 2 — System Preparation

Update the system and install core dependencies:

sudo apt update && sudo apt upgrade -y
sudo apt install -y curl git build-essential

Set the Jetson to maximum performance mode:

sudo nvpmodel -m 0       # MAXN power mode
sudo jetson_clocks        # lock clocks at max frequency

Step 3 — Install Node.js 22+

OpenClaw requires Node.js 22.12.0 or later. Install via nvm:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
nvm install 22
nvm use 22
node --version           # confirm v22.x.x
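If you want to check the version requirement programmatically rather than by eye, a minimal sketch using sort -V (the 22.14.0 value is a placeholder; on the device, derive it from node --version as shown in the comment):

```shell
# Check the installed Node version against the 22.12.0 minimum (sketch; uses sort -V):
required="22.12.0"
current="22.14.0"    # on the device, use: current="$(node --version | tr -d v)"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
[ "$lowest" = "$required" ] && echo "Node $current meets the minimum" || echo "Node $current is too old"
```

sort -V compares dotted versions numerically, so 22.9.x correctly sorts below 22.12.0 where a plain string compare would not.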

Install pnpm (required to build the Control UI):

npm install -g pnpm

Step 4 — Install OpenClaw

git clone https://github.com/OpenClaw/openclaw.git
cd openclaw
npm install
pnpm ui:build             # builds the Control UI frontend

Step 5 — Configure OpenClaw

Run the onboard wizard for guided setup:

npm start -- onboard

Or configure manually:

npm start -- config set gateway.mode local

Authentication — Choose One

Option A: Claude Pro/Max Subscription (Setup Token)

Recommended if you have an existing Claude subscription. No separate API costs.

npm install -g @anthropic-ai/claude-code
claude setup-token
npm start -- models auth paste-token --provider anthropic

Option B: Anthropic API Key (Pay-as-you-go)

  1. Create an account at console.anthropic.com
  2. Navigate to Settings → API Keys → Create Key
  3. Copy the key immediately (shown only once)
  4. Add billing credits under Billing (minimum $5)
  5. Enter the key via the onboard wizard

Important: The API key is stored in the auth store at ~/.openclaw/agents/main/agent/, not in the main config. Do not use config set anthropic.apiKey — that path is not recognized.

Option C: Local Models via Ollama

For fully offline, cloud-free operation, see Step 7 below.

Gateway Auth Token

Protect access to the Control UI with a secret token:

npm start -- config set gateway.auth.token <YOUR_SECRET_TOKEN>

Never share your token or API keys. If exposed, revoke them immediately at the provider’s console.
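If you need a value for the token, one way to generate a strong random secret (a sketch; openssl ships with Ubuntu/JetPack):

```shell
# Generate a 64-character hex secret suitable for gateway.auth.token (sketch):
TOKEN="$(openssl rand -hex 32)"
echo "$TOKEN"    # 64 hex characters
```

Paste the printed value into the config set gateway.auth.token command above.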

Step 6 — Create systemd Service

First, identify your exact Node.js binary path:

which node
ls ~/.nvm/versions/node/    # verify exact version directory

Create the service file (replace placeholders with your actual values):

sudo tee /etc/systemd/system/openclaw.service > /dev/null <<'EOF'
[Unit]
Description=OpenClaw AI Assistant
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=<YOUR_USERNAME>
WorkingDirectory=/home/<YOUR_USERNAME>/openclaw
ExecStart=/home/<YOUR_USERNAME>/.nvm/versions/node/v22.x.x/bin/node \
    /home/<YOUR_USERNAME>/openclaw/scripts/run-node.mjs \
    gateway --bind lan --port 18789
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

Critical notes:

  • User= must match the owner of the openclaw directory (wrong value → status=217).
  • ExecStart= must use the exact, literal Node.js path — no globs, no ~, no nvm aliases (wrong path → status=203).
  • The gateway subcommand is required.
  • --bind lan listens on 0.0.0.0; without it, only localhost is reachable.
  • There is no --host flag — use --bind.
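Since the two common failure modes are typos in exactly these placeholders, the error-prone lines can be generated instead of hand-edited. A minimal sketch (the username and Node version here are hypothetical; substitute your own values from which node):

```shell
# Print the two error-prone service-file lines with real values filled in (sketch):
USERNAME="jetson"                                                # hypothetical username
NODEBIN="/home/$USERNAME/.nvm/versions/node/v22.12.0/bin/node"   # hypothetical exact version
printf 'User=%s\nExecStart=%s /home/%s/openclaw/scripts/run-node.mjs gateway --bind lan --port 18789\n' \
  "$USERNAME" "$NODEBIN" "$USERNAME"
```

Copy the printed lines verbatim into the service file; this guarantees the path is literal, with no ~ or globs for systemd to choke on.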

Enable and start the service:

sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw

Verify:

sudo systemctl status openclaw
ss -tlnp | grep 18789
journalctl -u openclaw -n 20 --no-pager

Step 7 — Local Models with Ollama (Optional)

The Jetson Orin Nano can run local LLMs with GPU acceleration via Ollama, eliminating cloud dependency entirely.

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Pull a Model

ollama pull llama3         # or: mistral, phi3, qwen3
ollama list                # verify installed models

Recommended Models for Jetson 8 GB

Model · Size · Best For
llama3 (8B Q4) · ~4.5 GB · General assistant, reasoning
mistral (7B Q4) · ~4.1 GB · Fast general purpose
phi3 (3.8B) · ~2.2 GB · Lightweight, fast responses
qwen3-vl (2B) · ~1.3 GB · Vision + language, very light
codellama (7B Q4) · ~3.8 GB · Code generation
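A rule of thumb for why 7–8B Q4 models are the practical ceiling here: on Jetson the GPU shares system memory, so model weights plus KV cache plus OS overhead must stay under the 8 GB total. A back-of-envelope sketch (the KV-cache and OS overhead figures are rough assumptions, not measurements):

```shell
# Rough memory-fit check for an 8 GB Jetson (all figures approximate):
MODEL_GB=4.5   # llama3 8B Q4 weights
KV_GB=1.0      # assumed KV cache at a modest context length
SYS_GB=1.5     # assumed OS + desktop overhead
awk -v m="$MODEL_GB" -v k="$KV_GB" -v s="$SYS_GB" \
  'BEGIN { t = m + k + s; printf "total %.1f GB: %s\n", t, (t < 8 ? "fits in 8 GB" : "too large") }'
```

Swap in the size of any model from the table above; anything that pushes the total past 8 GB will spill to swap and slow generation dramatically.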

Configure OpenClaw for Ollama

Edit ~/.openclaw/openclaw.json to add the Ollama provider:

{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-completions"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3"
      }
    }
  }
}
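A stray comma or quote in this file will keep the gateway from starting, so it is worth validating the JSON before restarting the service. A sketch using python3's built-in json.tool (the /tmp path is just for illustration; on the device, point it at ~/.openclaw/openclaw.json):

```shell
# Write the provider fragment to a temp file and validate it (sketch):
cat > /tmp/openclaw-ollama.json <<'EOF'
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-completions"
      }
    }
  }
}
EOF
python3 -m json.tool /tmp/openclaw-ollama.json > /dev/null && echo "JSON OK"
```

json.tool exits non-zero on malformed JSON, so a typo is caught before it takes the service down.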

Restart OpenClaw after configuration:

sudo systemctl restart openclaw

Step 8 — Access the Control UI

The Control UI requires HTTPS or localhost (secure context). Accessing via a plain HTTP LAN IP will fail.

SSH Tunnel (Recommended for LAN)

From your client machine:

ssh -N -L 18789:127.0.0.1:18789 <user>@<jetson-ip>

Then open in your browser:

http://localhost:18789/#token=<YOUR_SECRET_TOKEN>

Generate Dashboard URL

On the Jetson host:

cd ~/openclaw
npm start -- dashboard --no-open

This prints the full authenticated URL. Use it through the SSH tunnel.

Firewall

If using UFW:

sudo ufw allow 18789

Step 9 — Connect WhatsApp

OpenClaw connects to WhatsApp via the Baileys library (WhatsApp Web multi-device protocol). Your phone stays the primary device; the Gateway acts as a linked companion.

Configure the Channel

npm start -- config set channels.whatsapp.dmPolicy allowlist
npm start -- config set channels.whatsapp.allowFrom '["+15551234567"]'
sudo systemctl restart openclaw

Replace the phone number with your actual number including country code (E.164 format).
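E.164 means a leading + followed by the country code and subscriber number, up to 15 digits total, with no spaces or dashes. A quick sanity check before putting a number into allowFrom (a sketch; the check_e164 helper is illustrative, not part of OpenClaw):

```shell
# Validate E.164 format: "+", then 2-15 digits, first digit non-zero (sketch):
check_e164() { echo "$1" | grep -Eq '^\+[1-9][0-9]{1,14}$' && echo "valid" || echo "invalid"; }
check_e164 "+15551234567"    # valid
check_e164 "555-1234"        # invalid
```

If a number fails the check, strip formatting characters and prepend the country code before adding it to the allowlist.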

Dedicated Number Recommended: OpenClaw recommends using a separate phone number rather than your personal one. This isolates risk — if anything goes wrong, your personal account is unaffected.

Link via QR Code

npm start -- channels login

A QR code appears in the terminal. On your phone, open WhatsApp → Settings → Linked Devices → Link a Device and scan it. The code expires after ~60 seconds—scan promptly.

Verify the connection:

npm start -- status

Send Your First Message

OpenClaw does not initiate conversations—you text it first from WhatsApp.

  1. Open WhatsApp on a phone whose number is in your allowFrom list
  2. Start a chat with the number linked to OpenClaw
  3. Send any message, e.g. “Hi” or “What can you do?”
  4. OpenClaw replies in the same chat

If you’re using your personal number (same number as the linked device), enable self-chat mode to message yourself:

npm start -- config set channels.whatsapp.selfChatMode true
sudo systemctl restart openclaw

DM Access Policies

Policy · Behavior
allowlist · Only numbers in allowFrom can message the bot
pairing · Unknown senders get a pairing code; you approve via CLI
open · Anyone can message (not recommended)
disabled · Ignore all inbound DMs

To switch to pairing mode and approve a new sender:

npm start -- config set channels.whatsapp.dmPolicy pairing
npm start -- pairing approve whatsapp <code>

Groups

In group chats, OpenClaw only responds when @mentioned by default. It will not jump into every conversation. Group sessions are isolated per group.

Useful Chat Commands

Command · Action
/new or /reset · Start a fresh conversation session
/model sonnet · Switch AI model mid-conversation

You can also send images, PDFs, and voice notes—OpenClaw processes them if the model supports multimodal input.

Keep your phone online. WhatsApp Linked Devices requires periodic phone connectivity. If your phone is offline for ~14 days, WhatsApp will unlink the session. Credentials are stored at ~/.openclaw/credentials/whatsapp/.


Useful Commands

All commands run from ~/openclaw using npm start --:

Task · Command
Interactive setup · npm start -- onboard
Set config value · npm start -- config set <key> <value>
Set model auth · npm start -- models auth paste-token --provider anthropic
List devices · npm start -- devices list
Dashboard URL · npm start -- dashboard --no-open
WhatsApp login · npm start -- channels login
Channel status · npm start -- status
View live logs · journalctl -u openclaw -f
Restart service · sudo systemctl restart openclaw
GPU status · jtop (install: sudo pip3 install jetson-stats)

Troubleshooting

Symptom · Cause · Fix
status=217/USER · Invalid User= in service file · Set User= to your actual username
status=203/EXEC · Node.js path doesn’t exist · Use exact path from ls ~/.nvm/versions/node/
Help text then exit · No gateway subcommand · Add gateway to ExecStart
unknown option '--host' · Flag doesn’t exist · Use --bind lan instead
Missing config error · gateway.mode not set · config set gateway.mode local
Connection refused on LAN · Bound to localhost only · Add --bind lan to ExecStart
HTTPS or localhost error · UI needs secure context · Use SSH tunnel to access via localhost
No API key for anthropic · Model auth not configured · Run npm start -- onboard
ENOENT spawn pnpm · pnpm not installed · npm install -g pnpm
Control UI assets not found · UI not built · Run pnpm ui:build
Ollama no GPU detected · CUDA paths misconfigured · Verify JetPack install; check jtop
gateway.token ignored · Legacy config key · Use gateway.auth.token instead
WhatsApp linked: false · Session expired or not linked · Run npm start -- channels login and re-scan QR
WhatsApp disconnected · Phone offline or session conflict · Keep phone online; remove unused linked devices; run npm start -- doctor

 
