mirror of
https://github.com/ForLoopCodes/contextplus.git
synced 2026-04-26 06:25:50 +03:00
[GH-ISSUE #10] MCP server processes leak — zombie instances write TBs to disk #4
Originally created by @alecmarcus on GitHub (Mar 2, 2026).
Original GitHub issue: https://github.com/ForLoopCodes/contextplus/issues/10
Originally assigned to: @ForLoopCodes on GitHub.
Bug
Context+ MCP server processes (`contextplus` via `bunx`) do not get cleaned up when the parent Claude Code session ends. Zombie instances accumulate and write unbounded data to disk.

Environment
`bunx contextplus` in `.mcp.json`

Observed behavior
Found 4 stale `contextplus` processes from previous Claude Code sessions that had never been terminated. All four were running at:
System storage grew from normal to 505+ GB before I noticed. After `kill -9` on all four processes, ~440 GB was reclaimed (unlinked files held open by the zombie processes).

Expected behavior
When a Claude Code session ends, the MCP server process it spawned should terminate. If the parent process dies, the MCP server should detect the broken pipe (stdin closed) and exit gracefully.
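In a Node stdio server, that graceful exit can be sketched like this (a minimal sketch assuming the server reads MCP messages from stdin; not the actual contextplus code, and the `exit` parameter exists only to make the sketch testable):

```javascript
// Minimal sketch, assuming a Node stdio MCP server: register handlers so
// the process exits once the parent closes the stdin pipe.
function exitWhenStdinCloses(stream, exit = (code) => process.exit(code)) {
  // 'end' fires when the writer side of the pipe closes (the parent died).
  stream.on('end', () => exit(0));
  // A broken pipe may also surface as a stream error (e.g. EPIPE).
  stream.on('error', () => exit(0));
  stream.resume(); // flowing mode, so the 'end' event can be delivered
}

// In the server entrypoint:
// exitWhenStdinCloses(process.stdin);
```

With this in place, closing the Claude Code session closes the child's stdin, and the server shuts itself down instead of lingering.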
Root cause hypothesis
The `contextplus` process likely does not monitor stdin for closure or the parent PID for exit. When Claude Code terminates, the MCP server keeps running indefinitely. If it has any periodic background work (embedding, indexing, cache writes), that work continues and accumulates disk I/O.

Suggested fix
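One hedged shape for such a fix is a parent-PID watchdog that complements the stdin check (illustrative names; not the contextplus source):

```javascript
// Hedged sketch (not the contextplus implementation): poll the parent PID
// and exit when it disappears. Signal 0 performs an existence check
// without delivering anything to the target process.
function parentAlive(pid) {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false; // ESRCH: no such process
  }
}

function watchParent(pid = process.ppid, intervalMs = 5000) {
  const timer = setInterval(() => {
    if (!parentAlive(pid)) process.exit(0); // orphaned: stop all work
  }, intervalMs);
  timer.unref(); // don't let the poll itself keep the event loop alive
}
```

Polling covers the case where stdin is inherited oddly or never closes; the interval is a trade-off between exit latency and wakeup overhead.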
Workaround
@ForLoopCodes commented on GitHub (Mar 2, 2026):
thank you for reporting this serious issue
@ForLoopCodes commented on GitHub (Mar 2, 2026):
let me know if this fixes it
@ForLoopCodes commented on GitHub (Mar 2, 2026):
i'm so sorry for the trouble caused
@jojoprison commented on GitHub (Mar 7, 2026):
Confirming similar behavior on v1.0.6 (built from source)
Environment

- v1.0.6 built from source (commit `ac7deb7`)
- Ollama with `nomic-embed-text` (running as a brew service)

Observed behavior
Two Claude Code sessions spawned 2 independent contextplus instances, each consuming extreme resources:

- `node contextplus/build/index.js .`
- `node contextplus/build/index.js .`
- `ollama` (main process)
- `ollama` (runner)

Total impact: ~245% CPU + ~6.4 GB RAM just for contextplus + Ollama. The MacBook was extremely hot with fans at maximum.
Key difference from original report
The zombie/stdin fix (commit `2a6816b`) is included in my build. However, the processes were still alive and consuming 99% CPU each; they weren't zombies in the traditional sense (the parent Claude Code sessions were still running). The issue here seems to be sustained 100% CPU during normal operation, not just leaked processes after session close.

My build also does not include PR #14 (pre-truncate oversized embedding input), which was merged after v1.0.6. The Ollama JS SDK hang described in #14 could be a contributing factor to the sustained CPU usage: the SDK hangs indefinitely when input exceeds the context window, keeping the event loop busy.
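For illustration, the pre-truncation guard that PR #14 describes might look roughly like this (the constant, its value, and the function name are assumptions, not the PR's actual code):

```javascript
// Sketch of the pre-truncation idea from PR #14: clamp input before the
// embed call so the request can never exceed the model's context window,
// which is the condition that makes the Ollama JS SDK hang.
const MAX_EMBED_CHARS = 8000; // assumed character budget, tune per model

function truncateForEmbedding(text, limit = MAX_EMBED_CHARS) {
  return text.length > limit ? text.slice(0, limit) : text;
}
```

If the hang theory is right, a guard like this would keep each embed request bounded and let the event loop drain instead of spinning.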
Resolution
- Killed both contextplus processes
- Stopped Ollama (`brew services stop ollama`)

Suggestion
Even with the stdin fix, contextplus seems to have inherently high CPU usage during indexing/embedding. For machines with limited RAM (24 GB) and many projects, consider a `CONTEXTPLUS_MAX_CPU` env var or `--max-workers` option to throttle indexing.

@ForLoopCodes commented on GitHub (Mar 8, 2026):
will be working on fixing this from source, i have an exam tomorrow, give me 2 days
@jojoprison commented on GitHub (Mar 9, 2026):
gl on exam bro
@ForLoopCodes commented on GitHub (Mar 10, 2026):
yo, can you confirm: is context+ still writing TBs to disk on macOS, or is it just the CPU/mem issue now?