Frequently Asked Questions

Answers to common questions about OpenClaw

Formerly known as Clawdbot / Moltbot; you may have seen these names elsewhere.

Basics

What is OpenClaw?

Not an app. OpenClaw is an open-source AI agent framework that runs on your computer or server and executes real tasks rather than just chatting. **Real Talk:** It's basically a junior engineer with sudo privileges.

What's the essential difference from ChatGPT / Claude?

In short: ChatGPT 'thinks', OpenClaw 'does'.

- ChatGPT: answers questions, gives advice
- OpenClaw: reads files, runs commands, modifies code, executes workflows

**Real Talk:** ChatGPT is a consultant. OpenClaw is an intern who actually does the work.

Does OpenClaw have its own LLM?

No. It's a 'scheduler' that needs you to connect a model: OpenAI, Claude, or local models. 👉 It doesn't sell models, it just makes models 'work'. **Real Talk:** You're the DJ. OpenClaw is just the mixer.

Is my data private with local models?

**Short answer:** If you use Ollama locally, yes. The model runs on your machine.

**How to verify:** Don't take my word for it. Block outbound traffic (except localhost) via Little Snitch or `ufw`. If OpenClaw can still talk to your local Ollama, it's local. If it hangs, check your `base_url`.

**Caveat:** If you use API providers (DeepSeek, OpenAI, Anthropic), your prompts go to their servers. Read their privacy policies.
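A quicker sanity check you can script yourself, a minimal sketch (the helper name is mine, not part of OpenClaw): does your configured `base_url` even point at your own machine?

```python
from urllib.parse import urlparse

# Hosts that mean "this machine" — prompts sent here never leave the box.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local_endpoint(base_url: str) -> bool:
    """Return True if the configured base_url points at localhost."""
    return urlparse(base_url).hostname in LOCAL_HOSTS

print(is_local_endpoint("http://localhost:11434/v1"))  # True  (local Ollama)
print(is_local_endpoint("https://api.deepseek.com"))   # False (remote API)
```

This only checks where the URL points, not what the process actually does on the wire; the firewall test above is still the ground truth.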

Does it support DeepSeek API?

✅ Yes. DeepSeek's API is OpenAI-compatible, so set `LLM_PROVIDER=openai` and point `LLM_BASE_URL` at DeepSeek.

**Config example (.env)**:

```bash
LLM_PROVIDER="openai"
LLM_BASE_URL="https://api.deepseek.com"
LLM_API_KEY="sk-your-key-here"
LLM_MODEL="deepseek-reasoner"
```

👉 See our **[DeepSeek Config Guide](/guides/how-to-use-deepseek-with-openclaw)** for full setup.

Does it support local DeepSeek (Ollama)?

✅ Yes. Set `LLM_PROVIDER=ollama`.

**Setup (shell)**:

```bash
# Install Ollama & pull the model
curl -fsSL https://ollama.com/install.sh | sh
ollama run deepseek-r1:8b
```

**Config example (.env)**:

```bash
LLM_PROVIDER="ollama"
LLM_BASE_URL="http://localhost:11434/v1"
LLM_MODEL="deepseek-r1:8b"
```

⚠️ **Warning:** Requires heavy hardware. See **[Hardware Reality Check](/guides/fix-openclaw-cuda-oom-errors)**.

What is the relationship between OpenClaw and Ollama?

**Ollama is the engine; OpenClaw is the driver.** Ollama runs the DeepSeek model (loads it into VRAM, handles inference). OpenClaw tells it what to do (reads files, runs commands, executes workflows). If Ollama is down, OpenClaw is useless. If OpenClaw isn't running, Ollama is just a chatbot. You need both to drive the car.
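To see what 'just a chatbot' means: without an agent on top, talking to Ollama is one HTTP round trip of text in, text out. A sketch of the request a client would POST to Ollama's OpenAI-compatible endpoint (sending is skipped so the snippet runs without a live Ollama; the URL and model match the config above):

```python
import json

# Ollama's OpenAI-compatible chat endpoint (default port 11434).
OLLAMA_CHAT_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Build the JSON body a client POSTs to Ollama. Just text — no
    file access, no command execution. That part is the agent's job."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

print(build_chat_request("List the files in my project"))
```

Note the payload: the model never touches your filesystem. An agent like OpenClaw is the layer that turns the model's text replies into actual file reads and shell commands.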

Usage & Installation

Do I need to know programming?

- Basic use: no coding needed, but basic logic required
- Advanced use: knowing some command line / project structure helps

👉 It's not 'zero barrier', but 'low barrier, high ceiling'. **Real Talk:** If you don't know what `chmod +x` means, you're going to have a bad time.

Can it run on Windows / Mac / Linux?

- ✅ Mac: most friendly
- ✅ Linux / server: first choice for production
- ⚠️ Windows: usually via WSL2 (strongly recommended)

**Common symptom on Windows:** `Error: connect ECONNREFUSED 127.0.0.1:11434` (networking issue: inside WSL2, `127.0.0.1` is the WSL2 VM, not the Windows host).

**Real Talk:** Mac users suffer slowly (3.2 tokens/sec). Windows users suffer dramatically (WSL2 drama). Linux users just suffer.

Can OpenClaw run continuously?

Yes. It can:

- run long-term
- retry on failure
- save intermediate state
- stop by rules

This is why it's called an autonomous agent. **Real Talk:** That's also why it's called a 'security risk'. It doesn't know when to quit.
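The retry/checkpoint loop above can be sketched in a few lines (the function, state file, and stop rule are illustrative, not OpenClaw's actual internals):

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # illustrative checkpoint location

def run_with_retries(task, max_attempts: int = 3, delay: float = 0.1):
    """Run `task`, retrying on failure, checkpointing progress to disk,
    and stopping by rule (attempt budget) instead of looping forever."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"attempts": 0}
    for _ in range(max_attempts):
        state["attempts"] += 1
        STATE_FILE.write_text(json.dumps(state))  # save intermediate state
        try:
            return task()
        except Exception:
            time.sleep(delay)  # back off, then retry
    raise RuntimeError(f"gave up after {state['attempts']} attempts")  # stop by rule
```

The checkpoint file is what lets a long-running agent resume after a crash instead of starting from scratch; the attempt budget is the 'knows when to quit' part you have to add yourself.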

Why am I getting JSON parsing errors?

DeepSeek R1 wraps its chain of thought in `<think>...</think>` tags before the actual JSON, and OpenClaw's JSON parser fails on them. **Symptom:** `SyntaxError: Unexpected token <` (the model is 'thinking' out loud). 👉 **Fix it here:** **[JSON Parsing Fix](/guides/fix-openclaw-json-mode-errors)**.
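As a stopgap, the workaround is conceptually one regex: strip the thinking preamble before parsing (a sketch, not OpenClaw's actual parser):

```python
import json
import re

# DeepSeek R1 emits <think>...</think> before its real answer.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def parse_r1_json(raw: str) -> dict:
    """Remove the <think> block, then parse what's left as JSON."""
    return json.loads(THINK_RE.sub("", raw).strip())

raw = '<think>The user wants a file list...</think>\n{"action": "list_files"}'
print(parse_r1_json(raw))  # {'action': 'list_files'}
```

`re.DOTALL` matters because the thinking block usually spans multiple lines.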

Security & Risks

Is OpenClaw safe? How to prevent Prompt Injection?

**Think of OpenClaw as a junior engineer with sudo privileges.** If you wouldn't trust a junior intern with root access to this folder, don't trust the agent.

**Real incidents I've stopped:**

- Agent tried to `rm -rf .` to "clean build artifacts"
- Agent attempted `curl unknown.sh | bash` because it needed a tool

**Mitigations:**

- Run in a Docker container with a read-only filesystem
- Use a dedicated device (Mac Mini, cheap server)
- Block dangerous commands (`rm`, `format`, `dd`, etc.)
- Review EVERY execution log

👉 **Read the full autopsy:** **[CVE-2026-25253 Analysis](/guides/openclaw-security-rce-cve-2026-25253)**.
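The 'block dangerous commands' mitigation can be sketched as a blocklist guard (illustrative only; OpenClaw's real mechanism may differ):

```python
import shlex

# Commands we never let the agent run directly (illustrative list).
BLOCKED = {"rm", "dd", "mkfs", "format", "shutdown"}

def is_blocked(command: str) -> bool:
    """Reject a shell command whose first token is on the blocklist."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in BLOCKED

print(is_blocked("rm -rf ."))     # True
print(is_blocked("ls -la src/"))  # False
```

**Real Talk:** a blocklist is a speed bump, not a wall. It won't catch pipes, subshells, or `sh -c 'rm -rf .'` wrappers, which is why the Docker-plus-read-only-filesystem mitigation comes first.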

Will it 'go rogue'?

If you give it too many permissions, yes. OpenClaw's capabilities ≈ the permissions you give it.

**Correct approach:**

- read-only by default
- specify writable directories explicitly
- block dangerous commands

**Real Talk:** 'Going rogue' is just a fancy way of saying 'it did exactly what you told it to do, not what you meant'.
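The 'specify writable directories' rule can be sketched as a path allowlist (the directory paths are assumptions; `Path.is_relative_to` needs Python 3.9+):

```python
from pathlib import Path

# Only these directories are writable (illustrative paths).
WRITABLE_DIRS = [Path("/home/user/project/output").resolve()]

def can_write(target: str) -> bool:
    """Allow writes only inside explicitly whitelisted directories.
    resolve() collapses '..' so path-traversal tricks don't escape."""
    resolved = Path(target).resolve()
    return any(resolved.is_relative_to(d) for d in WRITABLE_DIRS)

print(can_write("/home/user/project/output/report.md"))  # True
print(can_write("/etc/passwd"))                          # False
```

Note this is the inverse of the command blocklist: deny by default, allow by exception, which is the right default for an autonomous agent.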

Suitable for production?

**Short answer:** Yes. **Honest answer:** Only if you have strict guardrails. Otherwise, expect to wake up at 3 AM.

**Production requirements:**

- You know exactly what the agent can and cannot do
- You have tested EVERY workflow in staging
- You have permission isolation (read-only by default)
- You have logging AND rollback mechanisms
- You have a human reviewing every action

If you're missing any of these, you're not ready for production. 👉 Beginners should NOT start with production.

Read the DeepSeek R1 Guide

Learn how to deploy OpenClaw with DeepSeek R1 locally without running into OOM errors.
