Most teams that try Openclaw start with a single VPS and a Telegram bot. That works fine for one person. The moment you need five people using it across departments with audit trails and API key isolation, the default setup falls apart. The gap between “Openclaw running” and “Openclaw running safely for a team” is where most enterprise pilots stall.
This guide covers the architecture decisions, security configuration, and operational setup required to deploy Openclaw on-premise for enterprise use. By the end, you will have a clear deployment plan based on your team size, compliance requirements, and infrastructure preferences.
Why On-Premise Matters for Enterprise Openclaw
Openclaw runs as a Node.js process on infrastructure you control. Unlike cloud-hosted AI assistants, every prompt, response, file access, and tool execution stays within your network boundary. No data leaves your perimeter unless you explicitly route it to a cloud LLM provider.
For regulated industries, this is the deciding factor. A financial services firm running Openclaw with Ollama and a local Qwen or Llama model achieves zero third-party data disclosure. Prompts containing customer data, internal strategy documents, or proprietary code never touch an external API. GDPR compliance teams in the EU have deployed Openclaw on Hetzner’s German data centers specifically for this reason.
The tradeoff is operational responsibility. You own uptime, security patching, backup, and access control. That responsibility is manageable if you plan for it upfront.
Choosing Your Deployment Architecture
The right architecture depends on team size and isolation requirements. Here is how to decide.
Single Instance with systemd (Teams of 1-5)
Run Openclaw as a systemd service on a dedicated VPS. This is the simplest production setup: one process, one config directory, one set of credentials. Systemd handles process restarts, and you manage updates with standard package tooling.
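A unit file for this setup might look like the following sketch. The service user, install path, and start command are assumptions to adapt to your environment; only the restart-on-failure behavior is essential.

```shell
# Write an example unit file locally, then install it as root.
# User, paths, and the ExecStart command are assumptions -- match your install.
cat > openclaw.service <<'EOF'
[Unit]
Description=Openclaw gateway
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
ExecStart=/usr/bin/node /opt/openclaw/gateway.js
Restart=always
RestartSec=5
UMask=0077

[Install]
WantedBy=multi-user.target
EOF

# Then, as root:
#   cp openclaw.service /etc/systemd/system/ && systemctl enable --now openclaw
```

The restrictive UMask keeps the config directory and credentials unreadable to other accounts on the host.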
This works well for a small technical team where everyone shares one Openclaw instance and trusts each other with the same access level. The moment you need per-user permissions or isolated workspaces, move to multi-instance.
Multi-Instance with Docker Compose (Teams of 5-50)
Each department or team gets its own Openclaw container with isolated configuration, credentials, and session storage. Docker Compose orchestrates the instances, and an Nginx reverse proxy handles SSL termination and routing.
This is the sweet spot for most enterprise deployments. You get strong isolation without Kubernetes complexity. Each instance runs independently, so a misconfigured agent in marketing cannot access engineering’s codebase.
Minimum requirements per instance: 2 GB RAM (the build process gets OOM-killed on 1 GB hosts), dedicated bind mounts for the ~/.openclaw config and workspace directories, and uid 1000 ownership on mounted volumes.
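A compose file matching those requirements might look like this sketch. The image name and container-internal paths are assumptions; the memory limit, bind mounts, and uid come from the requirements above.

```shell
cat > docker-compose.yml <<'EOF'
# Two isolated team instances; image name and container paths are assumptions.
services:
  openclaw-engineering:
    image: openclaw:latest
    mem_limit: 2g                 # builds get OOM-killed below 2 GB
    user: "1000:1000"             # uid 1000 must own the mounted volumes
    volumes:
      - ./engineering/openclaw:/home/openclaw/.openclaw
      - ./engineering/workspace:/workspace
    networks: [engineering]
  openclaw-marketing:
    image: openclaw:latest
    mem_limit: 2g
    user: "1000:1000"
    volumes:
      - ./marketing/openclaw:/home/openclaw/.openclaw
      - ./marketing/workspace:/workspace
    networks: [marketing]
networks:          # separate networks, so one team's instance
  engineering: {}  # cannot reach the other's
  marketing: {}
EOF
```

An Nginx container in front of both networks then handles SSL termination and routes each team's hostname to its instance.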
Kubernetes (Teams of 50+, Multi-Region)
The community-maintained openclaw-operator encodes deployment concerns into a single OpenClawInstance custom resource. It handles network isolation, secret management, persistent storage with ReadWriteMany access, health monitoring, and rolling config updates.
Use Kubernetes when you need horizontal scaling, automatic failover across availability zones, or centralized management of dozens of instances. For most organizations, Docker Compose handles the workload without the operational overhead Kubernetes introduces.
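Purely as an illustration of the shape such a resource might take: the apiVersion and every field name below are guesses, so consult the openclaw-operator's own documentation for the real schema.

```shell
cat > openclaw-instance.yaml <<'EOF'
# Illustrative only -- field names are assumptions, not the operator's schema.
apiVersion: openclaw.dev/v1alpha1
kind: OpenClawInstance
metadata:
  name: engineering-team
spec:
  replicas: 1
  storage:
    accessMode: ReadWriteMany   # shared session storage across nodes
    size: 50Gi
  configSecretRef: engineering-team-config
  networkPolicy: isolated       # per-instance network isolation
EOF
```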
Security Hardening Baseline
Openclaw’s security model assumes one trust boundary per gateway. Enterprise deployment means tightening every default.
Network Isolation
Bind the gateway to loopback only: gateway.bind: "loopback" in your config. Access it remotely through SSH tunnels or Tailscale Serve, and never expose port 18789 to the public internet. If you run a reverse proxy, configure gateway.trustedProxies with the proxy's IP and ensure the proxy overwrites (rather than appends to) the X-Forwarded-For header.
Disable mDNS discovery in production: discovery.mdns.mode: "off". The default “minimal” mode is acceptable, but “full” mode broadcasts filesystem paths and SSH ports on your network.
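Together, those settings form a small config fragment. The key names are taken from this guide; the file location and list syntax are assumptions to check against your Openclaw version.

```shell
# Fragment -- merge into your main Openclaw config.
cat > openclaw.yaml <<'EOF'
gateway:
  bind: "loopback"              # reach it via SSH tunnel or Tailscale Serve
  trustedProxies: ["10.0.0.5"]  # example reverse-proxy IP
discovery:
  mdns:
    mode: "off"                 # "full" would broadcast paths and SSH ports
EOF

# On the Nginx side, $remote_addr overwrites X-Forwarded-For, while
# $proxy_add_x_forwarded_for would append to a spoofable client-sent value:
#   proxy_set_header X-Forwarded-For $remote_addr;
```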
Tool Execution Controls
Default to deny for shell execution: security: "deny" with ask: "always" for tools that need approval. Lock down filesystem access with tools.deny: ["group:automation", "group:runtime", "group:fs"] and grant targeted overrides per agent.
Enable the Docker sandbox for any agent processing external inputs: sandbox.mode: "all" runs tool execution inside isolated containers while the gateway stays on the host. This is critical for multi-user deployments where one user’s agent should not be able to affect another’s environment.
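Pulled together, the controls above might look like this config fragment. The top-level key names come from this guide; the per-agent override syntax is an assumption.

```shell
# Fragment -- merge into your main Openclaw config.
cat > openclaw.yaml <<'EOF'
security: "deny"              # deny-by-default shell execution
ask: "always"                 # require approval where a tool is allowed
tools:
  deny: ["group:automation", "group:runtime", "group:fs"]
sandbox:
  mode: "all"                 # tool execution runs in isolated containers
agents:
  ci-agent:                   # example targeted override (syntax assumed)
    tools:
      allow: ["group:runtime"]
EOF
```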
CVE-2026-25253 Mitigation
This CVSS 8.8 command injection vulnerability allows attackers to execute arbitrary system commands through crafted input. It was patched in Openclaw version 1.2.3. If you are running anything older, upgrade before deploying to production. Run openclaw security audit --deep after upgrading to verify the fix.
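A pre-deploy guard for this check might look like the following sketch. The --deep audit flag comes from this guide; the `openclaw --version` output format and the upgrade path are assumptions.

```shell
# Fail fast if the installed Openclaw predates the patched 1.2.3 release.
version_ok() {
  # true when $1 >= 1.2.3
  [ "$(printf '1.2.3\n%s\n' "$1" | sort -V | head -n1)" = "1.2.3" ]
}

current="$(openclaw --version 2>/dev/null || echo 0.0.0)"
if version_ok "$current"; then
  openclaw security audit --deep   # verify the fix after upgrading
else
  echo "Openclaw $current predates the 1.2.3 patch -- upgrade first" >&2
fi
```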
API Key Management and RBAC
Enterprise deployments cannot share a single API key across the organization. Openclaw’s SecretRef system supports three provider types: env, file, and exec. For enterprise, use exec providers that call out to your existing secrets manager (HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault) so keys rotate without restarting instances.
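An exec-provider entry might look like this fragment. The env/file/exec provider types come from the SecretRef system described above; the exact field names and the Vault path are assumptions.

```shell
# Fragment -- merge into your main Openclaw config.
cat > openclaw.yaml <<'EOF'
secrets:
  anthropic-key:
    provider: exec
    # Resolved at read time, so rotating the key in Vault needs no restart
    command: "vault kv get -field=api_key secret/openclaw/anthropic"
EOF
```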
Implement role-based access with three tiers:
- Admin: Full configuration access, user management, all audit logs
- Developer: Agent usage and own-session visibility, no system config changes
- Auditor: Read-only access to all audit logs and session transcripts, no agent execution
Rotate API keys every 90 days. Store encryption keys in environment variables, never in code or config files committed to version control.
Audit Logging and Compliance
Enable the built-in command-logger hook to capture every action: user ID, timestamp, action type, command details, result status, and source IP. Store logs on a separate system that the Openclaw agent cannot modify. An ELK Stack (Elasticsearch, Logstash, Kibana) or your existing SIEM platform works here.
Apply sensitive data masking: passwords, API keys, and JWT tokens should appear as *** in log output. Configure logging.redactSensitive: "tools" (the default) and add custom patterns via logging.redactPatterns for environment-specific values.
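As a config fragment, with the logging keys taken from this guide and the hook wiring syntax assumed:

```shell
# Fragment -- merge into your main Openclaw config.
cat > openclaw.yaml <<'EOF'
logging:
  redactSensitive: "tools"      # default; masks secrets in tool output
  redactPatterns:
    - "INTERNAL-[A-Z0-9]{8}"    # example: mask environment-specific IDs
hooks:
  - command-logger              # built-in audit hook (wiring syntax assumed)
EOF

# Ship the resulting logs to a system the agent cannot write to,
# e.g. via Filebeat into Elasticsearch or your SIEM.
```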
Retain audit logs for a minimum of 90 days to meet ISO 27001 requirements. Session transcripts live in ~/.openclaw/agents/<agentId>/sessions/*.jsonl and should be included in your backup rotation.
Local LLM Routing for Full Data Sovereignty
For maximum data isolation, route all Openclaw inference through a local model via Ollama. Set the primary model to ollama/qwen3:8b or ollama/llama3.1:70b depending on your hardware, and every agent task stays on machines you own.
The practical approach for most enterprises: use a hybrid routing configuration. Route sensitive tasks (document analysis, internal data queries) through the local Ollama model. Route complex reasoning tasks that involve only public information to a cloud provider like OpenAI or Anthropic via OpenRouter. This gives you data sovereignty where it matters and model capability where you need it.
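A hybrid setup might be expressed roughly like this. The model IDs are from this guide, but the routing-rule syntax and the cloud model identifier are assumptions to check against your Openclaw version's documentation.

```shell
# Hypothetical routing fragment -- merge into your main Openclaw config.
cat > openclaw.yaml <<'EOF'
model:
  primary: "ollama/qwen3:8b"                   # sensitive work stays local
  routes:
    - match: "task:public-reasoning"           # rule syntax assumed
      model: "openrouter/openai/gpt-4o"        # example cloud identifier
EOF
```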
NVIDIA’s NemoClaw add-on (released March 2026) automates this split with a privacy router that classifies prompts and routes them accordingly. If your compliance requirements demand formal data classification before model routing, NemoClaw is worth evaluating.
Monitoring and Backup
Health Checks
Openclaw exposes two unauthenticated health endpoints: /healthz (liveness) and /readyz (readiness). Wire these into your existing monitoring stack. Target 99.9% uptime with P95 response latency under 2 seconds and error rate below 0.1%.
Set resource alerts at 70% CPU, 80% memory, and 85% disk utilization. The media/ directory, session JSONL files, and /tmp/openclaw/ logs are the primary disk growth drivers.
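A minimal cron-able probe against the two endpoints, assuming local or SSH-tunneled access to the gateway on port 18789 (the guide's default):

```shell
# Probe one health endpoint and report; wire the FAILED case into alerting.
check() {
  curl -fsS --max-time 2 "http://127.0.0.1:18789$1" >/dev/null \
    && echo "$1 ok" || echo "$1 FAILED"
}

check /healthz   # liveness
check /readyz    # readiness
```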
Backup Strategy
Back up daily: configuration (~/.openclaw/), session data, and audit logs. Keep 30 days of local retention and archive to cold storage for one year. Test restores quarterly. A corrupt backup you have never tested is not a backup.
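A daily backup job along those lines might look like this sketch. Paths and retention follow the guide; the mkdir -p on the source is only so the sketch runs on a fresh machine and should be dropped in production.

```shell
set -eu
SRC="${OPENCLAW_HOME:-$HOME/.openclaw}"   # config, sessions, agent state
DEST="${BACKUP_DIR:-./backups}"
mkdir -p "$DEST" "$SRC"                   # $SRC creation: sketch-only

stamp=$(date +%Y%m%d)
tar -czf "$DEST/openclaw-$stamp.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# 30-day local retention; push older archives to cold storage separately
find "$DEST" -name 'openclaw-*.tar.gz' -mtime +30 -delete
```

Pair this with a quarterly restore drill: extract the newest archive onto a scratch host and confirm the instance starts from it.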
Frequently Asked Questions
Is Openclaw safe enough for enterprise use out of the box?
Not without hardening. The defaults are designed for personal use with a single operator. Enterprise deployment requires binding to loopback, enabling token authentication, setting tool execution to deny-by-default, configuring audit logging, and running openclaw security audit --deep before going live. With those changes applied, the security posture is solid for production use.
Can I run Openclaw without any cloud LLM dependency?
Yes. Pair Openclaw with Ollama running a local model like Qwen3 8B or Llama 3.1 70B, and no data leaves your server. The tradeoff is model capability: local models handle most tasks well, but complex multi-step reasoning may produce weaker results than GPT-4 or Claude. Hybrid routing solves this for most teams.
How much infrastructure do I need per Openclaw instance?
Each instance needs a minimum of 2 GB RAM (the build process gets OOM-killed on 1 GB hosts). For production with a local Ollama model, plan for 16-32 GB RAM depending on model size, 4+ CPU cores, and an SSD with at least 50 GB of free space. A dedicated VPS from Hetzner or Contabo at $20-40/month handles a single team instance comfortably.
Do I need Kubernetes for enterprise Openclaw?
Probably not. Docker Compose with proper network isolation handles teams of up to 50 users across multiple instances without Kubernetes operational overhead. Kubernetes makes sense when you need multi-region failover, centralized management of dozens of instances, or integration with an existing Kubernetes-based platform team. Start with Docker Compose and migrate only when you hit its limits.
What does NVIDIA NemoClaw add that native Openclaw security does not?
NemoClaw adds three controls: a kernel-level deny-by-default sandbox, an out-of-process policy engine that a compromised agent cannot override, and a privacy router for automatic model routing based on data sensitivity. If your compliance team requires formal data classification before LLM routing, or you need sandbox isolation stronger than Docker containers, NemoClaw fills those gaps. For most enterprise deployments, native Openclaw security with proper hardening is sufficient.
Key Takeaways
- Bind the gateway to loopback, enable token auth, and set tool execution to deny-by-default before any enterprise deployment
- Docker Compose with multi-instance isolation covers teams of 5-50 without Kubernetes complexity
- Use Openclaw’s SecretRef system with an exec provider to integrate your existing secrets manager for automatic key rotation
- Route sensitive data through local Ollama models and complex public-data tasks through cloud providers for the best balance of sovereignty and capability
- Run openclaw security audit --deep after every configuration change and before every production release
If your team is evaluating Openclaw for enterprise deployment and needs help with architecture planning, security hardening, or production rollout, SFAI Labs offers enterprise AI deployment consulting tailored to self-hosted agent infrastructure.