Troubleshooting · 2026-02-03

How to fix OpenClaw JSON Mode parsing errors with DeepSeek R1

OpenClaw fails to parse JSON responses from DeepSeek R1. Learn why the thinking tags break JSON mode and how to fix it with a simple system prompt adjustment.

By: LazyDev
#DeepSeek #OpenClaw #JSON #Troubleshooting #R1

Fix OpenClaw JSON Mode Parsing Errors with DeepSeek R1

Error Confirmation

Error: JSON parsing failed
Expecting value: line 1 column 1 (char 0)
  at JSON.parse (<anonymous>)
  at OpenClawResponse.processResponse (/lib/processor.js:42)

Full Response (The Problem):

The model returns thinking tags before the JSON:

<think>
Okay, the user wants a JSON response. I should format it properly...
</think>
{"result": "actual data here"}

Scope: OpenClaw's JSON parser fails at position 0 because < is not an opening brace. DeepSeek R1 wraps its reasoning in <think>...</think> tags before the actual JSON. OpenClaw receives the full response, including the thinking tags, so JSON.parse() fails on the very first character.

Error Code: Expecting value: line 1 column 1 — the JSON parser encountered the < of a <think> tag instead of {.
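The failure is easy to reproduce outside OpenClaw. A minimal sketch with Python's standard json module (the response string below is a made-up example) shows the parser rejecting the very first character:

```python
import json

# A typical DeepSeek R1 response: reasoning first, JSON last (example string)
raw = '<think>Okay, the user wants a JSON response.</think>{"result": "ok"}'

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    # Fails immediately: '<' at position 0 is not valid JSON
    print(e.msg, "at char", e.pos)  # → Expecting value at char 0
```

This is the same error OpenClaw surfaces; JavaScript's JSON.parse rejects the input at the same position for the same reason.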

Verified Environment

Component: Version (Last Verified)

  • OpenClaw: Latest stable (2026-02-06)
  • DeepSeek R1: 8b, 32b, 70b, distilled (2026-02-06)
  • Ollama: Latest (2026-02-06)
  • Operating System: Linux, macOS, WSL2 (2026-02-06)

Note: The thinking tags behavior is consistent across all DeepSeek R1 variants when JSON mode is expected.


3-Minute Sanity Check

Run these commands to confirm the issue:

# 1. Is DeepSeek R1 running?
ollama list | grep deepseek-r1
# Expected: deepseek-r1:latest (or variant)

# 2. Test raw JSON output (should show thinking tags)
ollama run deepseek-r1:latest 'Output JSON: {"status": "ok"}' | head -5
# Expected: You will see <think> tags before the JSON

# 3. Verify OpenClaw is configured for JSON mode
grep -i "json_mode\|json" ~/.openclaw/config.json 2>/dev/null || echo "Config file not found"
# Expected: json_mode setting or similar

# 4. Check whether a system prompt override removes the tags
#    (`ollama run` has no --system flag; use the HTTP API instead)
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:latest",
  "system": "You are a JSON API. Output ONLY JSON.",
  "prompt": "Output JSON: {\"status\": \"ok\"}",
  "stream": false
}'
# Expected: the "response" field contains clean JSON with NO thinking tags

If step 2 shows NO thinking tags: Your issue may not be DeepSeek thinking tags. Check for other JSON formatting issues.

If step 4 works: The system prompt override fixes the issue. Proceed to Primary Exit Path.
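If you captured the raw output from step 2 into a string, a few lines of Python can triage it. This is a hedged helper of my own (the function name and category labels are not part of OpenClaw or Ollama):

```python
import json
import re

def classify_response(raw: str) -> str:
    """Triage a raw model response captured during the sanity check."""
    if re.search(r"<think>", raw):
        return "thinking-tags"      # the <think> preamble is your problem
    try:
        json.loads(raw.strip())
        return "valid-json"         # parsing should already succeed
    except json.JSONDecodeError:
        return "other-json-issue"   # some other formatting problem

print(classify_response('<think>hmm</think>{"a": 1}'))  # → thinking-tags
```

"other-json-issue" typically means markdown code fences or trailing prose rather than thinking tags; in that case the system prompt override below still helps, but for a different reason.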


Decision Gate


Should you keep debugging JSON parsing locally?

Continue local debugging only if:

You can modify OpenClaw's system prompt configuration
You have access to the Ollama/DeepSeek R1 configuration
You have not already spent more than ~1 hour on this issue

Stop here if any apply:

You already tried the system prompt override and JSON still fails
OpenClaw crashes before sending the request (not a parsing issue)
You are debugging JSON parsing for more than 1 hour
Your OpenClaw version is outdated (>6 months old)

Past this point, debugging cost usually grows faster than results. If the system prompt override doesn't work, consider Secondary Exit Path.


Primary Exit Path: System Prompt Override

Override the default system prompt to explicitly disable thinking tags in DeepSeek R1.

Why this works:

  • DeepSeek R1 respects system prompt instructions
  • Disabling thinking tags at the source eliminates parsing issues
  • No post-processing or code changes required
  • Works across all DeepSeek R1 variants

Time investment: 5 minutes

Steps:

# Method 1: HTTP API override (quickest; `ollama run` has no --system flag)
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:latest",
  "system": "You are a JSON-only API. Output valid JSON directly. Do NOT use <think> tags. Do NOT include any text before or after the JSON.",
  "prompt": "Generate JSON with status ok",
  "stream": false
}'
# Method 2: OpenClaw configuration file
# Add to your project's openclaw_config.py (the snippet below is Python;
# a JSON config at ~/.openclaw/config.json would need the equivalent keys)

system_prompt = """You are a JSON-only API.
Rules:
1. Output ONLY valid JSON
2. No <think> tags
3. No markdown code blocks
4. No explanations before or after JSON

Format: json object like {"key": "value"}"""

# For Ollama integration
ollama_model = "deepseek-r1:latest"
ollama_system_prompt = system_prompt

Verification:

# Test the fix: bake the override into a custom model via a Modelfile
# (`ollama run` has no --system flag, so persist it with `ollama create`)
cat > Modelfile <<'EOF'
FROM deepseek-r1:latest
SYSTEM "You are a JSON-only API. Output ONLY JSON. No thinking tags."
EOF
ollama create deepseek-r1-json -f Modelfile
ollama run deepseek-r1-json 'Output JSON: {"status": "ok"}'

# Expected output: {"status": "ok"}
# No <think> tags, no markdown, clean JSON
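The same override also works from Python through Ollama's HTTP API. This is a sketch assuming Ollama on its default port; the helper names are mine, but the request fields (model, system, prompt, stream, format) are Ollama's. Setting "format": "json" additionally asks Ollama to constrain decoding to valid JSON, which in practice keeps a <think> preamble out of the returned response:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

def build_payload(prompt: str, system: str,
                  model: str = "deepseek-r1:latest") -> dict:
    return {
        "model": model,
        "system": system,   # overrides the model's default system prompt
        "prompt": prompt,
        "stream": False,    # return one complete response object
        "format": "json",   # constrain output to valid JSON
    }

def generate(prompt: str, system: str) -> str:
    data = json.dumps(build_payload(prompt, system)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With this in place, generate('Output JSON: {"status": "ok"}', "You are a JSON-only API.") should return a string that json.loads accepts directly.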

For OpenClaw integration:

# openclaw_config.py
from openclaw import Client

client = Client(model="deepseek-r1:latest")

# Override system prompt to disable thinking tags
client.system_prompt = """You are a JSON-only API.
Rules:
1. Output ONLY valid JSON
2. No <think> tags
3. No markdown code blocks
4. No explanations before/after JSON

Format: json object like {"key": "value"}"""

# Now JSON responses will parse correctly
response = client.generate("Return json object with status ok")
print(response.parsed_json)  # Works!

Secondary Exit Path (Conditional)

Use when: Primary Exit Path fails (system prompt override doesn't work)

Solution: Pre-processing to Strip Thinking Tags

If you cannot modify the system prompt, strip the thinking tags before parsing:

import re
import json

def extract_json_from_deepseek(response: str) -> str:
    """
    Extract JSON from DeepSeek R1 response,
    removing <think> tags if present.
    """
    # Remove thinking tags
    cleaned = re.sub(r'<think>.*?</think>', '', response, flags=re.DOTALL)

    # Extract JSON from markdown code blocks if present
    json_match = re.search(r'```json\s*(\{.*?\})\s*```', cleaned, flags=re.DOTALL)
    if json_match:
        return json_match.group(1)

    # Return cleaned response
    return cleaned.strip()

# Usage
raw_response = "<think>reasoning here</think>{\"result\": \"data\"}"
json_str = extract_json_from_deepseek(raw_response)
data = json.loads(json_str)

When to use this:

  • You don't have control over system prompt configuration
  • OpenClaw runs in a restricted environment
  • You're integrating with an existing deployment that can't be reconfigured

Time investment: 10-15 minutes

Note: This is a workaround, not a fix. Use Primary Exit Path if possible.


Why NOT Other Options

Option: Rejection Reason

  • Switch to a different model: DeepSeek R1 works fine with proper configuration. Switching models avoids the root cause and may introduce new compatibility issues.
  • Post-processing in OpenClaw source: Requires modifying OpenClaw's core parsing logic. Changes break on updates and must be maintained. Fragile.
  • Disable JSON mode entirely: JSON mode is essential for structured output. Disabling it pushes parsing complexity into application code.
  • Use regex in shell pipeline: Fragile and error-prone. JSON structure varies; regex cannot reliably parse nested JSON.
  • Report as OpenClaw bug: Not a bug. OpenClaw correctly rejects invalid JSON. The issue is DeepSeek R1's thinking tags, which are outside the JSON spec.
  • Wait for DeepSeek update: Thinking tags are intentional architecture. No indication this will change. System prompt override works now.

Context Window Truncation

Sometimes the JSON appears "cut off" mid-response:

{"result": "partial data", "status":

Root Cause: DeepSeek R1 hit its context limit mid-generation.

Fix: Increase the context window so the model has room to finish the JSON (Ollama's default num_ctx is only a few thousand tokens):

# Set num_ctx via API options (`ollama run` has no --num_ctx flag)
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:latest",
  "prompt": "Generate JSON with status ok",
  "stream": false,
  "options": {"num_ctx": 8192}
}'
# Interactively: /set parameter num_ctx 8192 inside an `ollama run` session

For hardware limitations: See CUDA OOM Fix Guide.
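To tell truncation apart from the thinking-tag failure programmatically, the decoder's error position is a usable heuristic. The function and labels below are mine, and it will misclassify some edge cases (for example an unterminated string reports the string's start, not the end of input):

```python
import json

def diagnose_parse_failure(raw: str) -> str:
    """Return 'ok', 'truncated' (error at end of input), or 'non-json-prefix'."""
    s = raw.strip()
    try:
        json.loads(s)
        return "ok"
    except json.JSONDecodeError as e:
        if e.pos >= len(s):
            return "truncated"       # decoder ran off the end: cut off mid-generation
        return "non-json-prefix"     # failed mid-input, e.g. a <think> tag at char 0

print(diagnose_parse_failure('{"result": "partial data", "status":'))  # → truncated
```

"truncated" points you at the context-window fix above; "non-json-prefix" points back to the system prompt override.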


VRAM Warning: If your VRAM is already near the limit, fixing JSON only delays the next OOM.


Summary

Check: Command → Pass Criteria

  • DeepSeek R1 installed: ollama list | grep deepseek-r1 → shows deepseek-r1:latest or a variant
  • Thinking tags present: ollama run deepseek-r1:latest 'Output JSON' | head -5 → shows <think> tags
  • System prompt override works: POST to /api/generate with a "system" field (see Primary Exit Path) → clean JSON, no thinking tags
  • OpenClaw JSON mode enabled: check config for a json_mode setting → JSON mode is enabled

Decision:

  • All pass: Use Primary Exit Path (system prompt override). Takes 5 minutes.
  • System prompt override fails: Use Secondary Exit Path (pre-processing). Takes 10-15 minutes.
  • Context truncation: Reduce --num_ctx or upgrade hardware.

Last resort: If you have spent more than 1 hour on this, verify your DeepSeek R1 installation and consider using a model without thinking tags for JSON-only workflows.



FAQ

Q: Will this fix work with all DeepSeek R1 versions?

A: Yes. The thinking tags behavior is consistent across DeepSeek R1 variants (8b, 32b, 70b, distilled versions). The system prompt override works for all of them.

Q: Do I need to reinstall DeepSeek R1?

A: No. This is a configuration issue, not a model issue. The fix is applied at runtime through the system prompt. Your DeepSeek R1 installation is working as designed.

Q: Can I use this with other models like Llama 3?

A: Yes. The system prompt override works with any model. However, the thinking tags issue is specific to DeepSeek R1. Other models (Llama, Mistral, etc.) don't use thinking tags by default.


Still Stuck? Check Your Hardware

Sometimes the code is fine, but the GPU is simply refusing to cooperate. Before you waste another hour debugging, compare your specs against the Hardware Reality Table to see if you are fighting impossible physics.
