[GH-ISSUE #51] v1.3.1: MITM replaces system prompt with dummy, breaking custom personas/context #43

Closed
opened 2026-02-27 15:38:06 +03:00 by kerem · 4 comments

Originally created by @terryops on GitHub (Feb 22, 2026).
Original GitHub issue: https://github.com/NikkeTryHard/zerogravity/issues/51

Problem

After upgrading from v1.2.x to v1.3.1, the MITM modify step replaces the original system prompt content with a "dummy prompt" in the USER_REQUEST wrapper. This strips all custom context injected by the API client (persona definitions, workspace paths, user preferences, tool instructions, etc.).

Evidence from logs

MITM: request modified [remove 4/5 content messages,
  replace dummy prompt in USER_REQUEST wrapper (132984 chars),
  preserved generate_image tool, strip all 20 LS tools,
  inject 1 custom tool group(s), override toolConfig VALIDATED → AUTO,
  append 96 tool round(s) as functionCall/Response pairs (no model turns found),
  inject thinkingBudget=2048 includeThoughts=true,
  inject generationConfig: maxOutputTokens=32000]

The original system prompt was 132,984 chars of custom context (persona, workspace config, user preferences, memory files). After MITM modification, this was replaced with a generic dummy prompt.

Impact

  • The model loses all custom persona/character instructions → responds with generic behavior instead of the configured personality
  • Workspace paths are lost → model generates wrong file paths (e.g. /home/user/ instead of the actual workspace)
  • Tool usage instructions are lost → model cannot follow client-specific tool conventions
  • This worked correctly in v1.2.x where the system prompt content was passed through

Reproduction

  1. Configure a client (e.g. OpenClaw) with a custom system prompt containing persona instructions
  2. Send a request through ZG v1.3.1 MITM proxy
  3. Observe the model responds without any awareness of the custom system prompt content

Expected behavior

The original system prompt / USER_REQUEST content from the client should be preserved (or at least an option to preserve it). The MITM layer can still do its other modifications (tool injection, thinking config, etc.) without replacing the user's custom context.
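One possible shape for such an option, sketched as a pure function. This is purely illustrative: `preserveSystemPrompt`, `buildUserRequestContent`, and the dummy text are hypothetical names, not ZG's actual code or config.

```javascript
// Hypothetical sketch only -- not ZG's real implementation. It shows the
// requested behavior: keep the MITM step's other modifications, but reuse the
// client's original content when a preserve option is enabled.
const DUMMY_PROMPT = "You are a helpful assistant."; // placeholder stand-in

function buildUserRequestContent(originalSystemPrompt, options = {}) {
  // With the option on (and a non-empty client prompt), pass it through.
  if (options.preserveSystemPrompt && originalSystemPrompt) {
    return originalSystemPrompt;
  }
  // Current v1.3.1 behavior: replace with the dummy prompt.
  return DUMMY_PROMPT;
}
```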

Environment

  • ZeroGravity: v1.3.1 (Docker ghcr.io/nikketryhard/zerogravity:latest)
  • Client: OpenClaw via OpenAI-compatible /v1/chat/completions endpoint
  • Model: opus-4.6
kerem closed this issue 2026-02-27 15:38:06 +03:00

@DarKWinGTM commented on GitHub (Feb 22, 2026):

Maybe we need to attach context from multiple sources (the dummy prompt plus the custom user context) rather than cutting content from the other context source.


@NikkeTryHard commented on GitHub (Feb 22, 2026):

on it


@terryops commented on GitHub (Feb 22, 2026):

Workaround: System-prompt-to-user proxy

We found a workaround by placing a small HTTP proxy between our API client and ZeroGravity. The proxy moves the system message content into the first user message (wrapped in <system_context> tags) before forwarding to ZG.

Since ZG preserves the USER_REQUEST wrapper content but replaces the system prompt, embedding our context in the user message ensures it survives the MITM modification.
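The transform the proxy applies can be sketched as a standalone pure function (extracted for clarity; `mergeSystemIntoUser` is our name for it, not part of any client or ZG API):

```javascript
// Same transform as the proxy: collect all system-role messages, drop them,
// and prepend their joined content to the first user message inside
// <system_context> tags so it survives ZG's USER_REQUEST rewrite.
function mergeSystemIntoUser(messages) {
  const systemParts = [];
  const rest = [];
  for (const msg of messages || []) {
    if (msg.role === "system") {
      // Only string system content is carried over.
      if (typeof msg.content === "string" && msg.content) systemParts.push(msg.content);
    } else {
      rest.push(msg);
    }
  }
  if (systemParts.length > 0 && rest.length > 0 && rest[0].role === "user") {
    const user = typeof rest[0].content === "string" ? rest[0].content : "";
    rest[0] = {
      ...rest[0],
      content: `<system_context>\n${systemParts.join("\n\n")}\n</system_context>\n\n${user}`,
    };
  }
  return rest;
}
```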

Verification

Before proxy (system prompt stripped):

replace dummy prompt in USER_REQUEST wrapper (673 chars)

→ Model only saw the bare user message, lost all persona/workspace context

After proxy (system prompt injected into user message):

replace dummy prompt in USER_REQUEST wrapper (154255 chars)

→ Model saw the full 154k chars of context and correctly followed persona instructions, used correct file paths, etc.

Proxy code (Node.js)

// Minimal forwarding proxy: rewrites POST /v1/chat/completions bodies,
// passes everything else through untouched.
const http = require("http");

const LISTEN_PORT = 8740;   // the API client connects here
const ZG_HOST = "127.0.0.1";
const ZG_PORT = 8741;       // ZeroGravity listens here

const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url?.includes("/v1/chat/completions")) {
    // Buffer the full body so the messages array can be rewritten before forwarding.
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      try {
        const data = JSON.parse(body);
        const systemMsgs = [];
        const otherMsgs = [];
        for (const msg of data.messages || []) {
          if (msg.role === "system") {
            const content = typeof msg.content === "string" ? msg.content : "";
            if (content) systemMsgs.push(content);
          } else {
            otherMsgs.push(msg);
          }
        }
        // Prepend the joined system content to the first user message so it
        // survives ZG's USER_REQUEST rewrite.
        if (systemMsgs.length > 0 && otherMsgs.length > 0 && otherMsgs[0].role === "user") {
          const sysContent = systemMsgs.join("\n\n");
          const userContent = typeof otherMsgs[0].content === "string" ? otherMsgs[0].content : "";
          otherMsgs[0].content = `<system_context>\n${sysContent}\n</system_context>\n\n${userContent}`;
          data.messages = otherMsgs;
        }
        body = JSON.stringify(data);
      } catch (e) {
        // Not valid JSON: forward the raw body unchanged.
      }
      // The body was re-buffered, so send an explicit length and drop any
      // incoming transfer-encoding header that would conflict with it.
      const headers = { ...req.headers, host: `${ZG_HOST}:${ZG_PORT}` };
      delete headers["transfer-encoding"];
      headers["content-length"] = Buffer.byteLength(body);
      const proxyReq = http.request(
        { hostname: ZG_HOST, port: ZG_PORT, path: req.url, method: req.method, headers },
        (proxyRes) => { res.writeHead(proxyRes.statusCode, proxyRes.headers); proxyRes.pipe(res); }
      );
      proxyReq.on("error", () => { res.writeHead(502); res.end("Bad Gateway"); });
      proxyReq.end(body);
    });
  } else {
    // Everything else (other endpoints, GETs, streaming responses) is piped through as-is.
    const proxyReq = http.request(
      { hostname: ZG_HOST, port: ZG_PORT, path: req.url, method: req.method,
        headers: { ...req.headers, host: `${ZG_HOST}:${ZG_PORT}` } },
      (proxyRes) => { res.writeHead(proxyRes.statusCode, proxyRes.headers); proxyRes.pipe(res); }
    );
    proxyReq.on("error", () => { res.writeHead(502); res.end("Bad Gateway"); });
    req.pipe(proxyReq);
  }
});

server.listen(LISTEN_PORT, "127.0.0.1");

Then point your API client's baseUrl from http://localhost:8741/v1 to http://localhost:8740/v1.
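For example, in an OpenAI-style client configuration (key names vary by client; these are illustrative only):

```javascript
// Illustrative client settings -- actual key names depend on your client.
const clientConfig = {
  // The proxy's port (8740), instead of ZG directly (http://localhost:8741/v1).
  baseUrl: "http://localhost:8740/v1",
  apiKey: process.env.ZG_API_KEY ?? "unused", // forwarded through untouched
};
```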

Downsides

  • The request body inflates significantly (saved_pct=-453, i.e. 4.5x larger) since the system prompt is now duplicated in the user message
  • ZG still replaces the system prompt with its own "Antigravity" prompt on top, adding redundant instructions
  • Adds an extra proxy hop

A native option in ZG to preserve the original system prompt / USER_REQUEST content would be the ideal fix.


@NikkeTryHard commented on GitHub (Feb 22, 2026):

Should be fixed in the latest version (v1.3.2); please retest.
