The fastest way to switch AI models in OpenClaw is ZenClaw, the managed OpenClaw service: deploy in 9 seconds, then one click on the "Switch model" button in the dashboard. Self-hosting means editing ~/.openclaw/openclaw.json, preparing API keys, and restarting the gateway, where it's easy to hit JSON syntax errors, keys in the wrong field, or a restart that didn't take. This post covers the self-host config edit, the ZenClaw UI switch, and how to pick the right model.
Which models does OpenClaw support?
As of April 2026, OpenClaw can reach mainstream LLMs via built-in providers and LiteLLM: Claude (Haiku / Sonnet / Opus), GPT-4o, Gemini, MiniMax, Kimi, and more. Plan tiers determine which mix of mainstream models (Claude, GPT, Gemini, and others) is available. NVIDIA Nemotron models are offered through the NemoClaw sandbox integration; see the pricing page. See OpenClaw's official docs for the current list. The community adds new providers shortly after new models launch. Each model has a different call format, but OpenClaw abstracts them behind a single API, so switching is a one-field change.
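As a sketch of what that one-field switch looks like (the key name `model` matches the config field this post edits, but check your OpenClaw version's schema for the exact shape):

```json
{
  "model": "claude-sonnet"
}
```

Switching to GPT-4o means changing that value to `"gpt-4o"` and restarting the gateway; nothing else in the config needs to know which provider is behind it.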
A few things to note:
- Claude models go through the Anthropic API — get keys at anthropic.com
- GPT-4o and the o-series go through the OpenAI API
- Gemini goes through Google AI Studio
- Nemotron runs best on NVIDIA infrastructure — ZenClaw plans include NemoClaw sandbox (NVIDIA enterprise sandbox runtime), which has the tightest integration
Self-host: editing openclaw.json
Edit the model field in ~/.openclaw/openclaw.json, drop the provider’s API key into credentials/, restart the gateway. Sounds simple — in practice, the first try almost always snags somewhere. Standard steps:
- Find the config: `ls ~/.openclaw/` shows `openclaw.json`, `sessions`, `agents`, `credentials`, `skills`
- Back up, then edit `openclaw.json` to change `model` to something like `"claude-sonnet"` or `"gpt-4o"`
- Put the provider's key in `credentials/` (filename per OpenClaw convention)
- Restart the gateway (systemd, docker restart, or `openclaw gateway restart`, depending on deploy)
- Send a message to verify
Common snags:
- JSON syntax errors — an extra comma or missing bracket keeps the gateway from starting
- API key in the wrong path — providers use different file paths
- Port 18789 blocked by your own firewall — it’ll look like “nothing happened” when really you can’t reach it
- Permission issues: if `credentials/` files aren't mode 600, the gateway can't read them (or they leak)
See OpenClaw gateway security docs for 127.0.0.1 binding and token length.
ZenClaw dashboard: one click (recommended)
Click the “Switch model” button on the ZenClaw dashboard, pick the model from the dropdown, save. The backend handles keys, reload, and rollback on failure — the entire experience is UI clicks. Actual steps:
- Sign in at zenclaw.ai, click “Hire AI Employees Now”
- If you haven’t deployed, click “Add New OpenClaw Installation” and wait 9 seconds
- On the instance card, find the “Model” section and pick from the dropdown
- Save and immediately test on Telegram / LINE / Microsoft Teams
What ZenClaw already handles:
- Key management: ZenClaw uses its own LiteLLM proxy (`litellm.mixerbox.ai`) to hold the upstream API keys, so you don't apply for or rotate OpenRouter / Anthropic / OpenAI keys yourself
- Automatic config backup: OpenClaw writes a `.bak` before changes; if config gets corrupted, doctor-fix can restore from backup
- Plan credits, predictable billing: Business plans (Starter $400/mo, Growth $800/mo, Scale $1,200/mo) include model usage credits, so you're not burning your own tokens. Self-host more easily hits agent loops or high-frequency skill calls that blow up the bill overnight; ZenClaw caps at the plan credit and stops automatically when the cap is hit
- Plan-tiered model access: tiers provide mainstream model mixes (Claude Haiku / Sonnet / Opus, GPT, Gemini, Nemotron), saving you the vetting time for each provider
How to pick a model: three common scenarios
Most workloads start at mid-tier Sonnet or GPT-4o, drop to Haiku for high-volume cheap workloads, step up to Opus for complex reasoning. Nemotron fits NVIDIA-infrastructure sandbox workloads. Quick table:
| Scenario | Recommended model | Why |
|---|---|---|
| General chatbot CS | Claude Haiku / Sonnet, GPT-4o mini | Fast, cheap, quality good enough |
| Order automation, complex decisions | Claude Sonnet, GPT-4o | Reasoning and tool use are stable |
| Legal doc analysis, long reports | Claude Opus | Long context, stronger reasoning |
| NVIDIA enterprise sandbox | Nemotron (with NemoClaw sandbox) | Tightest integration |
| Multilingual content | Claude Sonnet, Kimi, MiniMax | Strong multilingual support |
On cost, Claude Opus can be several to more than ten times Haiku’s per-token rate (see Anthropic pricing) — worth confirming before running at scale.
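Rough per-day math is enough for that confirmation. The figures below are placeholders, not real prices; plug in your own traffic numbers and the provider's current rate card:

```shell
# Illustrative only: 2,000 messages/day, ~1,500 tokens each (input+output),
# $15 per million tokens. None of these are quoted rates.
msgs_per_day=2000
tokens_per_msg=1500
usd_per_mtok=15
awk -v m="$msgs_per_day" -v t="$tokens_per_msg" -v p="$usd_per_mtok" \
  'BEGIN { printf "~$%.2f/day\n", m * t / 1e6 * p }'
# prints ~$45.00/day
```

Run the same math with the cheaper model's rate and compare the two numbers before committing to a switch.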
Gotchas (self-host vs ZenClaw)
Self-host’s usual culprits: JSON broken, API key not swapped, gateway didn’t reload. ZenClaw’s backend handles all three. Self-host checklist:
- `jq . openclaw.json` to validate syntax
- New provider key in the right directory, mode 600
- `curl http://127.0.0.1:18789/health` to confirm the gateway is up
- Do the per-day budget math before switching to a pricier model like Opus (see API bill runaway prevention)
ZenClaw equivalents:
- Dashboard lists plan-tiered available models — no way to misconfigure
- Plan credits mean bills don’t blow up
- Failures fall back to the previous setting
- Telegram / LINE / Microsoft Teams channel setup is also click-to-bind
Wrap-up
Switching models in OpenClaw is a one-line JSON edit, but self-host still means managing keys, reload, and rollback. ZenClaw’s dashboard is one click. If you don’t want to spend your weekend debugging openclaw.json, use ZenClaw — the “Hire AI Employees Now” button gets you started in 9 seconds, and switching models is always a single click after that.