[GH-ISSUE #10] MCP server processes leak — zombie instances write TBs to disk #4

Open
opened 2026-03-03 12:01:28 +03:00 by kerem · 7 comments
Owner

Originally created by @alecmarcus on GitHub (Mar 2, 2026).
Original GitHub issue: https://github.com/ForLoopCodes/contextplus/issues/10

Originally assigned to: @ForLoopCodes on GitHub.

Bug

Context+ MCP server processes (contextplus via bunx) do not get cleaned up when the parent Claude Code session ends. Zombie instances accumulate and write unbounded data to disk.

Environment

  • macOS 15.4 (Darwin 25.3.0)
  • Claude Code 2.1.63
  • Context+ installed via bunx contextplus in .mcp.json
  • Ollama backend (qwen3-embedding:8b, gemma2:27b)

Observed behavior

Found 4 stale contextplus processes from previous Claude Code sessions that had never been terminated:

| PID   | Cumulative disk written |
|-------|-------------------------|
| 43552 | 2.64 TB                 |
| 6049  | 147 GB                  |
| 87096 | 99 GB                   |
| 11188 | 20 GB                   |

All four were running at:

node /var/folders/.../bunx-501-contextplus@latest/node_modules/.bin/contextplus

System storage grew from normal to 505+ GB before I noticed. After kill -9 on all four processes, ~440 GB was reclaimed (unlinked files held open by the zombie processes).

Expected behavior

When a Claude Code session ends, the MCP server process it spawned should terminate. If the parent process dies, the MCP server should detect the broken pipe (stdin closed) and exit gracefully.

Root cause hypothesis

The contextplus process likely does not monitor stdin for closure or the parent PID for exit. When Claude Code terminates, the MCP server keeps running indefinitely. If it has any periodic background work (embedding, indexing, cache writes), that work continues and accumulates disk I/O.

Suggested fix

  1. Monitor stdin — when stdin closes (parent died), exit immediately
  2. Alternatively, monitor the parent PID and exit when it disappears
  3. Add a watchdog timer — if no MCP request received within N minutes, exit
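
Taken together, the three suggestions could look something like this in a Node-based stdio server. This is a sketch, not contextplus's actual code; the function names, the 5-second poll interval, and the 10-minute idle default are all illustrative:

```javascript
// 1. Exit when stdin closes (the parent closed the pipe or died).
function exitOnStdinClose() {
  process.stdin.resume(); // keep the stream flowing so 'end' can fire
  process.stdin.on("end", () => process.exit(0));
  process.stdin.on("close", () => process.exit(0));
}

// 2. Poll the parent PID. Sending signal 0 performs no action; it only
// checks whether the process exists.
function isProcessAlive(pid) {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

function exitWhenParentDies(intervalMs = 5000) {
  const parent = process.ppid; // captured at startup, before reparenting
  setInterval(() => {
    if (!isProcessAlive(parent)) process.exit(0);
  }, intervalMs).unref(); // unref so the timer alone doesn't keep us alive
}

// 3. Watchdog: exit if no MCP request has arrived within maxIdleMs.
// The request handler would call touchWatchdog() on every message.
let lastRequest = Date.now();
function touchWatchdog() { lastRequest = Date.now(); }
function startWatchdog(maxIdleMs = 10 * 60 * 1000) {
  setInterval(() => {
    if (Date.now() - lastRequest > maxIdleMs) process.exit(0);
  }, 30_000).unref();
}
```

The `.unref()` calls matter: without them the monitoring timers themselves would keep the event loop alive even after all real work is done.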

Workaround

pgrep -f contextplus | xargs kill -9
kerem 2026-03-03 12:01:28 +03:00
  • closed this issue
  • added the bug label
Author
Owner

@ForLoopCodes commented on GitHub (Mar 2, 2026):

thank you for reporting this serious issue

Author
Owner

@ForLoopCodes commented on GitHub (Mar 2, 2026):

let me know if this fixes it

Author
Owner

@ForLoopCodes commented on GitHub (Mar 2, 2026):

i'm so sorry for the trouble caused

kerem reopened this issue 2026-03-15 15:59:25 +03:00
Author
Owner

@jojoprison commented on GitHub (Mar 7, 2026):

Confirming similar behavior on v1.0.6 (built from source)

Environment

  • MacBook Pro M4 Pro (14 cores: 10P + 4E), 24 GB RAM
  • macOS 15.4 (Darwin 25.3.0)
  • Claude Code (2 concurrent sessions)
  • Context+ v1.0.6 — built from source (commit ac7deb7)
  • Ollama 0.17.5 with nomic-embed-text (running as brew service)
  • 67 projects in the working directory

Observed behavior

Two Claude Code sessions spawned 2 independent contextplus instances, each consuming extreme resources:

| PID   | %CPU | RSS (MB) | Command                           |
|-------|------|----------|-----------------------------------|
| 5002  | 99%  | 2,970    | node contextplus/build/index.js . |
| 23559 | 99%  | 2,810    | node contextplus/build/index.js . |
| 862   | 30%  | 49       | ollama (main process)             |
| 52630 | 15%  | 355      | ollama (runner)                   |

Total impact: ~245% CPU + ~6.4 GB RAM just for contextplus + Ollama. The MacBook was extremely hot with fans at maximum.

Key difference from original report

The zombie/stdin fix (commit 2a6816b) is included in my build. However, the processes were still alive and consuming 99% CPU each — they weren't zombies in the traditional sense (parent Claude Code sessions were still running). The issue here seems to be sustained 100% CPU during normal operation, not just leaked processes after session close.

My build also does not include PR #14 (pre-truncate oversized embedding input), which was merged after v1.0.6. The Ollama JS SDK hang described in #14 could be a contributing factor to the sustained CPU usage — the SDK hangs indefinitely when input exceeds the context window, keeping the event loop busy.
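For reference, the pre-truncation idea from #14 amounts to clamping input length before the embedding call ever reaches the SDK. A minimal sketch; the 8192-character budget and the function name are assumptions for illustration, not the merged PR's actual code:

```javascript
// Assumption: a rough character budget standing in for the model's
// real context window; the actual limit is model-specific.
const MAX_EMBED_CHARS = 8192;

function truncateForEmbedding(text, maxChars = MAX_EMBED_CHARS) {
  // Clamp before the backend call so an oversized chunk can never
  // make the embedding request hang indefinitely.
  return text.length > maxChars ? text.slice(0, maxChars) : text;
}
```

A token-aware budget would be more precise, but even a crude character cap removes the hang-forever failure mode.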

Resolution

  1. Killed both contextplus processes
  2. Stopped Ollama (brew services stop ollama)
  3. Removed Ollama from autostart
  4. CPU dropped from ~245% to ~20% immediately

Suggestion

Even with the stdin fix, contextplus seems to have inherently high CPU usage during indexing/embedding. For machines with limited RAM (24 GB) and many projects, consider:

  • A CONTEXTPLUS_MAX_CPU or --max-workers option to throttle indexing
  • Lazy indexing (only index when a tool is actually called, not at startup)
  • Documenting minimum hardware requirements (RAM, CPU) in README
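
The `--max-workers` idea above could be a simple concurrency cap around embedding jobs rather than a scheduler change. A sketch, assuming a hypothetical `CONTEXTPLUS_MAX_WORKERS` environment variable (not an existing option):

```javascript
// Hypothetical knob: cap parallel embedding jobs, default 4.
const maxWorkers = Math.max(
  1,
  parseInt(process.env.CONTEXTPLUS_MAX_WORKERS ?? "", 10) || 4
);

// Run fn over items with at most `limit` in flight at once,
// preserving input order in the results.
async function mapLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // single-threaded event loop: no race here
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}
```

Feeding all embedding chunks through `mapLimit(chunks, maxWorkers, embed)` would bound CPU and backend pressure instead of firing every request at startup.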
Author
Owner

@ForLoopCodes commented on GitHub (Mar 8, 2026):

will be working on fixing this from source, i have an exam tomorrow, give me 2 days

Author
Owner

@jojoprison commented on GitHub (Mar 9, 2026):

> will be working on fixing this from source, i have an exam tomorrow, give me 2 days

gl on exam bro

Author
Owner

@ForLoopCodes commented on GitHub (Mar 10, 2026):

yo, can you confirm that is context+ still writing tbs to the disc on the mac operating system or it is just the cpu/mem issue now
