[GH-ISSUE #23] [BUG] MITM custom tool injection can omit tools[].custom.input_schema (400 INVALID_ARGUMENT) #20

Closed
opened 2026-02-27 15:37:58 +03:00 by kerem · 6 comments
Owner

Originally created by @DarKWinGTM on GitHub (Feb 19, 2026).
Original GitHub issue: https://github.com/NikkeTryHard/zerogravity/issues/23

Summary

During /v1/messages processing, the MITM rewrite path injects custom tools but can send an invalid tool definition to upstream.

Upstream rejects the request with:

  • HTTP 400 INVALID_ARGUMENT
  • tools.0.custom.input_schema: Field required

This breaks tool-enabled requests even when token/auth is otherwise valid.


Observed error

2026-02-19T09:42:07.611278Z  WARN zerogravity::mitm::proxy: MITM: upstream error response domain="daily-cloudcode-pa.googleapis.com" status=400 body={
  "error": {
    "code": 400,
    "message": "{\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"tools.0.custom.input_schema: Field required\"},\"request_id\":\"req_vrtx_011CYHD5fxSsCWVyHuVpsTqY\"}",
    "status": "INVALID_ARGUMENT"
  }
}

Correlated log lines just before the failure:

2026-02-19T09:42:07.015378Z  INFO zerogravity::mitm::proxy: MITM: forwarding LLM request ...
2026-02-19T09:42:07.016186Z  INFO zerogravity::mitm::modify: MITM: request modified [... inject 1 custom tool group(s), override toolConfig VALIDATED → AUTO, inject thinkingLevel=high]
2026-02-19T09:42:07.611248Z  INFO zerogravity::mitm::proxy: MITM: streaming response ... status=400

Why this is a bug

tools[].custom.input_schema is required by upstream validation. If MITM injects custom tools, each injected custom tool must include a valid input_schema.

Current behavior indicates that the injected tool payload can be incomplete.


Impact

  • Tool-enabled /v1/messages requests fail hard with 400
  • Intermittent behavior depending on rewrite path/tool injection
  • Appears as proxy instability for valid client requests

Repro (high level)

  1. Run zerogravity with MITM/tool injection path active
  2. Send /v1/messages request that triggers custom tool injection
  3. Observe upstream 400: tools.0.custom.input_schema: Field required

Expected behavior

  • Any custom tool injected by MITM includes a valid input_schema
  • Upstream accepts request schema (no 400 for missing field)
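
For reference, a minimal custom tool definition that should satisfy the upstream check might look like the following. The exact field layout is assumed from the error path tools[].custom.input_schema reported above, not confirmed against upstream documentation:

```json
{
  "tools": [
    {
      "custom": {
        "name": "web_search",
        "description": "Search the web",
        "input_schema": {
          "type": "object",
          "properties": {
            "query": { "type": "string" }
          },
          "required": ["query"]
        }
      }
    }
  ]
}
```

The key point is that input_schema must be present as an object (a JSON Schema), even if it declares no properties.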

Info Diagram

Client /v1/messages
  → zerogravity::api::messages
  → zerogravity::mitm::modify
      (inject custom tool group)
      [BUG: tools[0].custom missing input_schema]
  → upstream validation
  → 400 INVALID_ARGUMENT
      "tools.0.custom.input_schema: Field required"
  → proxy returns failure

Suggested fix direction

  • Validate all injected tool objects before forwarding:
    • if a custom tool exists, enforce the presence and shape of input_schema
  • Add a schema guard in the MITM modify stage
  • Add a regression test for tool injection payload validity
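
The suggested schema guard could be sketched as follows. This is a minimal illustration, not zerogravity's actual code: the function name and the payload shape (a `custom` object per tool, per the error path tools.N.custom.input_schema) are assumptions.

```python
def validate_injected_tools(tools):
    """Reject any injected custom tool that lacks an input_schema object,
    mirroring the upstream error path (tools.N.custom.input_schema).

    Hypothetical guard for the MITM modify stage; names are illustrative.
    """
    for i, tool in enumerate(tools):
        custom = tool.get("custom")
        if custom is not None and not isinstance(custom.get("input_schema"), dict):
            raise ValueError(f"tools.{i}.custom.input_schema: Field required")


# Mirrors the failing payload: a custom tool with no schema at all.
bad = [{"custom": {"name": "web_search"}}]
try:
    validate_injected_tools(bad)
except ValueError as e:
    print(e)  # tools.0.custom.input_schema: Field required

# Minimal valid shape: an empty-but-present JSON Schema object.
good = [{"custom": {"name": "web_search",
                    "input_schema": {"type": "object", "properties": {}}}}]
validate_injected_tools(good)  # no exception raised
```

Running such a guard before forwarding would turn the upstream 400 into a local, debuggable error (or a chance to default the schema) instead of an opaque proxy failure.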
kerem closed this issue 2026-02-27 15:37:58 +03:00

@NikkeTryHard commented on GitHub (Feb 19, 2026):

v1.1.6-beta.1 includes fixes to the MITM tool injection pipeline (thought_signature persistence, parallel call recording, and cache cleanup). While this issue is specifically about missing input_schema, the fixes touch the same MITM modify/proxy paths that handle tool injection.

Please test with the beta to see if this is resolved:

Binary:

curl -L -o zerogravity https://github.com/NikkeTryHard/zerogravity/releases/download/v1.1.6-beta.1/zerogravity-linux-x86_64
chmod +x zerogravity

Docker:

docker pull ghcr.io/nikketryhard/zerogravity:v1.1.6-beta.1
docker run -p 8741:8741 ghcr.io/nikketryhard/zerogravity:v1.1.6-beta.1

If the issue persists after testing, please share the full trace (zg trace) so we can see the exact payload being sent upstream.


@DarKWinGTM commented on GitHub (Feb 19, 2026):

Update after retesting on v1.1.6-beta.1 with zg trace (as requested). I converted the evidence into redacted text/JSON and inlined it here (no binary attachment).

Environment (redacted)

  • Container: zerogravity
  • Image: zerogravity:1.1.6-beta.1-local
  • Binary checksum proof (inside container):
    • d89bfeb2ae205247dd2148cc5631c86349b8c4592b408d868508fad1db5999c7 /usr/local/bin/zerogravity
    • e0826647d237bd22726582f8fcb43fcf3fd5512de4b2768e9b68581b27936c33 /usr/local/bin/zg
  • Matches published v1.1.6-beta.1 x86_64 release checksums.

1) zg trace ls (latest window)

Traces today (2026-02-19): 17 total, showing last 10
8. 18-56-38.859_0bba0693 cascade=0bba0693 endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=309ms stream=false
9. 18-56-48.611_fc0575fd cascade=fc0575fd endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=311ms stream=false
10. 18-57-35.704_de8fc971 cascade=de8fc971 endpoint=POST /v1/messages model=opus-4.6 outcome=end_turn duration=2365ms stream=true
11. 18-57-35.707_640a4ca9 cascade=640a4ca9 endpoint=POST /v1/messages model=opus-4.6 outcome=tool_use duration=8009ms stream=true
12. 18-57-43.867_245cc37f cascade=245cc37f endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=235ms stream=true
13. 18-57-44.167_d29bcfde cascade=d29bcfde endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=963ms stream=false
14. 18-57-45.695_94aa2a67 cascade=94aa2a67 endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=300ms stream=false
15. 18-57-47.147_6e745e3f cascade=6e745e3f endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=292ms stream=false
16. 18-57-49.635_98eddbc6 cascade=98eddbc6 endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=315ms stream=false
17. 18-57-54.479_01d1bc9a cascade=01d1bc9a endpoint=POST /v1/messages model=opus-4.6 outcome=upstream_error duration=319ms stream=false

2) zg trace (latest failing trace)

Latest trace: /root/.config/zerogravity/traces/2026-02-19/18-57-54.479_01d1bc9a
# Trace: 01d1bc9a — POST /v1/messages
Outcome: upstream_error
Duration: 319ms

Errors:
- upstream 400: ... invalid_request_error ... tools.0.custom.input_schema: Field required

Injected Tools (1):
| # | Name       | Params |
|---|------------|--------|
| 0 | web_search | 0      |

3) zg trace errors (multiple repeated failures)

-- 18-56-48.611_fc0575fd --
cascade=fc0575fd ... outcome=upstream_error ...
errors.json:
"... invalid_request_error ... tools.0.custom.input_schema: Field required ..."

-- 18-57-43.867_245cc37f --
cascade=245cc37f ... outcome=upstream_error ...
errors.json:
"... invalid_request_error ... tools.0.custom.input_schema: Field required ..."

-- 18-57-54.479_01d1bc9a --
cascade=01d1bc9a ... outcome=upstream_error ...
errors.json:
"... invalid_request_error ... tools.0.custom.input_schema: Field required ..."

13 error trace(s) found.

4) Runtime log evidence (/tmp/zerogravity-issue23.log)

... MITM: request modified [... inject 1 custom tool group(s) ...]
... MITM: streaming response ... status=400
... MITM: upstream error response ...
"message": "{\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"tools.0.custom.input_schema: Field required\"},...}"

This 400 pattern repeats across many cascades (144be912, d32da2fd, 9b9d5100, b175b8c2, 245cc37f, d29bcfde, 94aa2a67, 6e745e3f, 98eddbc6, 01d1bc9a, etc.).

5) Redacted payload evidence (schema omitted)

Trace 18-56-48.611_fc0575fd

{
  "request": {
    "tools": [
      {
        "functionDeclarations": [
          { "name": "web_search", "description": "" }
        ]
      }
    ],
    "toolConfig": { "functionCallingConfig": { "mode": "AUTO" } }
  },
  "tools_summary": [
    { "name": "web_search", "param_count": 0 }
  ],
  "error_excerpt": "tools.0.custom.input_schema: Field required"
}

Trace 18-57-54.479_01d1bc9a

{
  "request": {
    "tools": [
      {
        "functionDeclarations": [
          { "name": "web_search", "description": "" }
        ]
      }
    ],
    "toolConfig": { "functionCallingConfig": { "mode": "AUTO" } }
  },
  "tools_summary": [
    { "name": "web_search", "param_count": 0 }
  ],
  "error_excerpt": "tools.0.custom.input_schema: Field required"
}

Conclusion

Issue is still reproducible on v1.1.6-beta.1 in my environment.

The traces consistently show injected tool declarations with no schema (param_count: 0) and upstream rejects with tools.0.custom.input_schema: Field required.

I kept this update fully text-based and redacted (no full raw trace bundle) to avoid leaking local/private context.
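
One possible reading of the payload evidence above: the rewrite path emits Gemini-style functionDeclarations with empty parameters, and the mapping to Anthropic-style custom tools never synthesizes an input_schema. A hedged sketch of a converter that defaults the schema (the function name and mapping are hypothetical, not zerogravity's actual code):

```python
def declaration_to_custom_tool(decl):
    """Map a Gemini-style functionDeclaration to an Anthropic-style custom
    tool, defaulting input_schema to an empty object schema when the
    declaration carries no parameters.

    Assumed mapping for illustration only.
    """
    schema = decl.get("parameters") or {"type": "object", "properties": {}}
    return {
        "name": decl["name"],
        "description": decl.get("description", ""),
        "input_schema": schema,
    }


# The failing trace shows a declaration with no parameters at all;
# a defaulting converter would still produce a valid (empty) schema.
tool = declaration_to_custom_tool({"name": "web_search", "description": ""})
print(tool["input_schema"])  # {'type': 'object', 'properties': {}}
```

If this is where the schema is being dropped, defaulting at conversion time would satisfy the upstream requirement even for zero-parameter tools.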


@NikkeTryHard commented on GitHub (Feb 19, 2026):

Hold on, I'm porting litellm logic, which might be able to solve this in v1.1.6, hopefully.


@DarKWinGTM commented on GitHub (Feb 19, 2026):

Update: retested with stable v1.1.6 release binaries. Core endpoints are healthy, but inference endpoints still hang/time out.

Environment / Version Proof

  • Time (UTC): 2026-02-19T21:21:52Z
  • Container: zerogravity
  • Image tag: zerogravity:1.1.6-local
  • Running image ID: sha256:78d0b9a5b6b944588b4423fd493c05f71fd5fd0537e520bc83601aaca0464a2a

In-container binary checksums:

  • dfff1e49dc03b1a5a52ed213e77eee1ef83e65e8c89a20e8912bd2c3dd7506cf /usr/local/bin/zerogravity
  • 70ffbca23987f114fa4c361c36a2c47db6ccb127cb4edcf42abe12b9b344f526 /usr/local/bin/zg

These match the published v1.1.6 x86_64 checksums:

  • zerogravity-linux-x86_64: dfff1e49...7506cf
  • zg-linux-x86_64: 70ffbca2...44f526

Direct Endpoint Tests (host -> http://127.0.0.1:8741)

health            200   0.00s   {"status":"ok"}
models            200   0.00s   {"data":[...models list...]}
messages_opus46   ERR  45.04s   TimeoutError('timed out')
chat_gemini3flash ERR  45.05s   TimeoutError('timed out')

Observed behavior:

  • /health and /v1/models return immediately.
  • /v1/messages and /v1/chat/completions consistently block until client timeout.

Runtime Logs (excerpt)

During the timeout tests, the daemon receives requests and registers cascades, but no completion payload is returned to the client:

... INFO zerogravity::api::messages: POST /v1/messages model=opus-4.6 stream=false
... INFO zerogravity::mitm::store: Registered request context cascade=... 
... INFO zerogravity::api::completions: POST /v1/chat/completions model=gemini-3-flash stream=false
... INFO zerogravity::mitm::store: Registered request context cascade=...

Also seen repeatedly at startup:

... WARN zerogravity::warmup: Warmup RegisterGdmUser timed out
... WARN zerogravity::warmup: Warmup GetUserStatus timed out
... WARN zerogravity::warmup: Warmup GetCascadeModelConfigData timed out
... WARN zerogravity::warmup: Warmup GetWebDocsOptions timed out

Trace Commands Output (important)

Despite inference timeout reproductions, trace commands currently return empty:

$ zg trace ls
No traces today (2026-02-19)

$ zg trace
No traces today (2026-02-19)

$ zg trace errors
No traces today (2026-02-19)

So for this v1.1.6 run, I cannot provide non-empty trace artifacts; only daemon logs and direct endpoint timing are available.

Summary

  • v1.1.6 binaries are confirmed by checksum.
  • Service is up (/health, /v1/models OK).
  • Inference endpoints (/v1/messages, /v1/chat/completions) still hang and time out.
  • zg trace* outputs are empty in this run, even while timeouts are reproducible.

If you want, I can run another pass with a longer capture window and attach full raw log slices for the exact timeout intervals.


@NikkeTryHard commented on GitHub (Feb 19, 2026):

The v1.1.6 stable version has just been released. Please test again; this should be fixed.


@DarKWinGTM commented on GitHub (Feb 20, 2026):

Update: production-like test now works on stable v1.1.6 in my environment

This is a real re-test after the latest local update flow (including cookie-derived token sync into /v1/token).

Environment / Build proof

  • Time (UTC): 2026-02-20T00:07:14Z
  • Container: zerogravity (running)
  • Image: zerogravity:1.1.6-local
  • Runtime status: Up

Binary checksums (inside container):

  • zerogravity: cbbfa6799bff10f1863fdb42c3f16009e5559ac5ff217a712eb899fd830d01b6
  • zg: 70ffbca23987f114fa4c361c36a2c47db6ccb127cb4edcf42abe12b9b344f526

These match current published v1.1.6 x86_64 artifacts.

Direct endpoint validation (http://127.0.0.1:8741)

  • GET /health → 200 (~0.00s)
  • GET /v1/models → 200 (~0.00s)
  • GET /v1/quota → 200 (~0.00s, non-empty quota payload)
  • POST /v1/messages (model=opus-4.6, stream=false) → 200 (~6.53s), response contains thinking: ok + text: ok
  • POST /v1/chat/completions (model=gemini-3-flash, stream=false) → 200 (~1.94s), content ok

Trace status

zg trace ls now shows successful outcomes (today):

  • POST /v1/messages → end_turn (multiple)
  • POST /v1/chat/completions → completed

zg trace errors:

  • No error traces today.

Current conclusion

For this latest run, the previously reported failures (hang/timeout, upstream 401, tools.0.custom.input_schema) are not reproduced.
System is currently usable in real flow on my side.

I’ll continue monitoring and will report back if regression appears again.
