Bits Beyond

Securely Running OpenClaw with Ollama via Tailscale


Introduction

OpenClaw has rapidly become one of the most popular open-source AI frameworks by transforming standard LLMs into highly capable, autonomous personal assistants. By integrating with messaging platforms, OpenClaw can actively execute commands, manage files, and automate your workflow.

However, running an autonomous agent directly on your primary workstation—or even on an unrestricted local network—presents unique challenges. This guide will walk you through a professional, secure architecture: running local models via Ollama on your main host, while isolating OpenClaw in a separate environment using Tailscale to strictly limit access to a single port.

The Security Warning: Look Beyond Tailscale

When giving an AI agent access to execute terminal commands, security must be your top priority.

While Tailscale is fantastic for securing the connection between the sandbox and your Ollama host, it does not restrict the sandbox's default local network interface. If you put your OpenClaw VM on your standard home network, a compromised or hallucinating agent could scan your local IP range (e.g., 192.168.1.x), potentially reaching your router, NAS, or smart home devices.

To truly secure this setup, OpenClaw must not be able to reach devices on your home network. You can achieve this in two main ways:

  1. Cloud Hosting (Recommended): Host the OpenClaw client on a cheap external VPS (like Hetzner). Because it's physically off-site, it has zero access to your home network, and it bridges back to your local Ollama host safely via Tailscale.
  2. Strict VLAN Segregation: If hosting locally (e.g., on a Raspberry Pi or Proxmox), place the OpenClaw machine on an isolated VLAN or Guest Network that explicitly drops all traffic to other local subnets.

Architecture Overview

To achieve maximum security without sacrificing performance, we separate the execution layer from the intelligence layer:

  1. The Host (Ollama Server): Runs Ollama locally, leveraging your hardware for fast inference.
  2. The Sandbox (OpenClaw Agent): An isolated Hetzner VPS or strict-VLAN local machine where OpenClaw lives.
  3. The Bridge (Tailscale): Securely connects the two. We use Tailscale ACLs to guarantee OpenClaw can only see Ollama's API port (11434) and absolutely nothing else on your host.

Step 1: Prepare the Host (Ollama)

First, ensure your host machine is running Ollama and is connected to your Tailscale network. You will need a model capable of agentic reasoning, such as qwen3:8b or gpt-oss:20b.

  1. Install and start Ollama on your host.
  2. Pull your preferred model: ollama pull qwen3:8b
  3. By default, Ollama only listens on localhost. To allow Tailscale traffic, configure Ollama to bind to your Tailscale IP or all interfaces (0.0.0.0) by setting the OLLAMA_HOST environment variable before starting the service.
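On a systemd-based Linux install, one way to do this (a sketch assuming the default ollama.service unit name) is a drop-in override:

```shell
# Create a systemd drop-in so the Ollama service binds beyond localhost.
# Binding to your host's Tailscale IP (100.x.y.z) instead of 0.0.0.0 is
# stricter if the host also sits on a shared LAN.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

If you run Ollama manually instead, `OLLAMA_HOST=0.0.0.0:11434 ollama serve` achieves the same thing for that session.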

Step 2: Configure Tailscale Tags and ACLs

The magic happens in your Tailscale admin console. We need to create a zero-trust policy. Instead of referencing IP addresses, we will use Tags.

Setting Up Tags

Before modifying the ACLs, assign tags to your devices:

  1. Open the Tailscale Admin Console.
  2. Go to the Machines tab.
  3. Find your Ollama host, click the three dots (...), select Edit ACL tags, and apply tag:ollama-server.
  4. Do the same for your OpenClaw sandbox (the Pi or Hetzner VPS) and apply tag:openclaw-agent. (Alternatively, you can authenticate the machines via CLI using sudo tailscale up --advertise-tags=tag:openclaw-agent).

Applying the ACL Configuration

Navigate to the Access Controls tab in the Tailscale admin panel and update your configuration. Tailscale policy files use HuJSON, so the comments below are valid. Here is the structure to define tag owners, keep full admin access for yourself, and strictly limit the agent:

{
    // Define who can assign tags
    "tagOwners": {
        "tag:ollama-server":  ["your_email@gmail.com"],
        "tag:openclaw-agent": ["your_email@gmail.com"]
    },
 
    "acls": [
        // 1. Allow you (the admin) to still access all your machines
        {
            "action": "accept",
            "src":    ["your_email@gmail.com"],
            "dst":    ["*:*"]
        },
 
        // 2. The Restricted Rule:
        // Allow ONLY the agent to talk to the Ollama server ONLY on port 11434
        {
            "action": "accept",
            "src":    ["tag:openclaw-agent"],
            "dst":    ["tag:ollama-server:11434"]
        }
    ]
}

Note: Replace your_email@gmail.com with your actual Tailscale account email.

This single rule is the core of our security perimeter over the VPN. The sandbox is now cryptographically restricted from mapping your host's SSH, file shares, or any other ports.
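Before installing anything in the sandbox, it's worth verifying the ACL actually behaves as intended. A quick check from the sandbox (placeholder IP; substitute your host's Tailscale address):

```shell
# From the OpenClaw sandbox: the Ollama API port should respond...
curl -s --max-time 5 http://<YOUR_TAILSCALE_HOST_IP>:11434/api/version

# ...while every other port on the host should be unreachable.
# This SSH probe should time out rather than connect:
nc -zv -w 3 <YOUR_TAILSCALE_HOST_IP> 22
```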

Step 3: Set Up OpenClaw in the Sandbox

Boot up your isolated VPS (like a Hetzner Cloud instance) or your VLAN-restricted VM. Ensure Tailscale is running and the tag:openclaw-agent is actively applied to the machine.
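You'll need the host's Tailscale IP for the configuration below; the standard Tailscale CLI reports it when run on the Ollama host:

```shell
# Run on the Ollama host: prints its Tailscale IPv4 address (100.x.y.z)
tailscale ip -4
```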

  1. Install OpenClaw using the official script:

     curl -fsSL https://openclaw.ai/install.sh | bash

  2. Configure OpenClaw to point to your host's Tailscale IP rather than a local instance. This directs all LLM requests across the secure Tailscale tunnel:

     # Set a placeholder key since Ollama doesn't require one by default,
     # but OpenClaw might expect the variable
     export OLLAMA_API_KEY="ollama-local"

     # Point the baseUrl to your Ollama server's Tailscale IP
     openclaw config set models.providers.ollama.baseUrl "http://<YOUR_TAILSCALE_HOST_IP>:11434"

  3. Launch the gateway daemon:

     openclaw onboard --install-daemon
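Before handing the agent real work, a quick smoke test from the sandbox confirms the tunnel end to end. This hits the standard Ollama generate endpoint directly (substitute your host's Tailscale IP and whichever model you pulled):

```shell
# Ask the model for a short completion over the Tailscale tunnel
curl -s http://<YOUR_TAILSCALE_HOST_IP>:11434/api/generate \
  -d '{"model": "qwen3:8b", "prompt": "Reply with the word ready.", "stream": false}'
```

If this returns a JSON response, OpenClaw's requests will traverse the same path.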

You now have a highly resilient, enterprise-grade AI architecture. When you chat with your assistant, the workflow reasoning and code execution happen safely in a locked-down, network-isolated sandbox (preventing any lateral movement into your home network), while the heavy lifting of token generation is securely routed to your main host's hardware.
