Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code (github.com/mksglu)
45 points by mksglu 2 hours ago | 14 comments
Every MCP tool call dumps raw data into Claude Code's 200K context window. A Playwright snapshot costs 56 KB; 20 GitHub issues cost 59 KB. After 30 minutes, 40% of your context is gone.

I built an MCP server that sits between Claude Code and these outputs. It processes them in sandboxes and only returns summaries. 315 KB becomes 5.4 KB.

It supports 10 language runtimes, SQLite FTS5 with BM25 ranking for search, and batch execution. Session time before slowdown goes from ~30 min to ~3 hours.

MIT licensed, single command install:

/plugin marketplace add mksglu/claude-context-mode

/plugin install context-mode@claude-context-mode

Benchmarks and source: https://github.com/mksglu/claude-context-mode

Would love feedback from anyone hitting context limits in Claude Code.

Really cool. A tangential task that seems to be coming up more and more is masking sensitive data in these calls for security and privacy. Is that something you considered as a feature?

You're talking about context savings but quoting them in kilobytes; can you confirm the actual token savings data?

And when you say it only returns summaries, does that mean there are LLM calls happening in the sandbox?


For your second question: No LLM calls. Context Mode uses algorithmic processing — FTS5 indexing with BM25 ranking and Porter stemming. Raw output gets chunked and indexed in a SQLite database inside the sandbox, and only the relevant snippets matching your intent are returned to context. It's purely deterministic text processing, no model inference involved.
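
Roughly, the indexing and search step looks like this (a simplified TypeScript sketch using better-sqlite3; the table name, chunk size, and function names are illustrative, not the actual schema):

  import Database from "better-sqlite3";

  const db = new Database(":memory:");
  // One FTS5 table holds chunked tool output; Porter stemming matches word variants.
  db.exec("CREATE VIRTUAL TABLE chunks USING fts5(source, content, tokenize = 'porter')");

  // Index raw tool output in fixed-size chunks.
  function indexOutput(source: string, raw: string, chunkSize = 2000): void {
    const insert = db.prepare("INSERT INTO chunks (source, content) VALUES (?, ?)");
    for (let i = 0; i < raw.length; i += chunkSize) {
      insert.run(source, raw.slice(i, i + chunkSize));
    }
  }

  // Return only the top BM25-ranked snippets for an intent (lower bm25() = better match).
  function search(intent: string, limit = 5): string[] {
    const rows = db
      .prepare("SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks) LIMIT ?")
      .all(intent, limit) as { content: string }[];
    return rows.map((r) => r.content);
  }

Only what search() returns ever reaches the conversation; everything else stays in the database.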

Excellent, thank you for your responses. Will be putting it through a test drive.

Sure, thank you for your comment!

Hey! Thank you for your comment! There are test examples in the README. Could you please try them? Your feedback is valuable.

The BM25+FTS5 approach without LLM calls is the right call - deterministic, no added latency, no extra token spend on compression itself.

The tradeoff I want to understand better: how does it handle cases where the relevant signal is in the "low-ranked" 310 KB, but you just haven't formed the query that would surface it yet? The compression is necessarily lossy - is there a raw mode fallback for when the summarized context produces unexpected downstream results?

Also curious about the token count methodology - are you measuring Claude's tokenizer specifically, or a proxy?


Great questions.

--

On lossy compression and the "unsurfaced signal" problem:

Nothing is thrown away. The full output is indexed into a persistent SQLite FTS5 store — the 310 KB stays in the knowledge base, only the search results enter context. If the first query misses something, you (or the model) can call search(queries: ["different angle", "another term"]) as many times as needed against the same indexed data. The vocabulary of distinctive terms is returned with every intent-search result specifically to help form better follow-up queries.
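
A multi-query search against the same store is just repeated BM25 lookups with deduped results, roughly (multiSearch is an illustrative name, not the tool's real API):

  import Database from "better-sqlite3";

  // Re-query the same persistent index with several phrasings and merge the hits.
  function multiSearch(db: Database.Database, queries: string[], limit = 5): string[] {
    const stmt = db.prepare(
      "SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks) LIMIT ?"
    );
    const seen = new Set<string>();
    for (const q of queries) {
      for (const row of stmt.all(q, limit) as { content: string }[]) {
        seen.add(row.content); // dedupe snippets surfaced by more than one query
      }
    }
    return [...seen];
  }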

The fallback chain: if intent-scoped search returns nothing, it splits the intent into individual words and ranks by match count. If that still misses, batch_execute has a three-tier fallback — source-scoped search → boosted search with section titles → global search across all indexed content.
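
The word-level part of that fallback amounts to something like this (simplified sketch; fallbackSearch and the re-ranking details are illustrative, and the three-tier batch_execute chain isn't shown):

  import Database from "better-sqlite3";

  // If the full intent query misses, split it into terms, OR them together,
  // and re-rank candidate chunks by how many distinct terms they contain.
  function fallbackSearch(db: Database.Database, intent: string, limit = 5): string[] {
    const terms = intent.toLowerCase().split(/\s+/).filter(Boolean);
    const rows = db
      .prepare("SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks) LIMIT ?")
      .all(terms.join(" OR "), limit * 4) as { content: string }[];
    return rows
      .map((r) => ({
        content: r.content,
        hits: terms.filter((t) => r.content.toLowerCase().includes(t)).length,
      }))
      .sort((a, b) => b.hits - a.hits)
      .slice(0, limit)
      .map((s) => s.content);
  }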

There's no explicit "raw mode" toggle, but if you omit the intent parameter, execute returns the full stdout directly (smart-truncated at 60% head / 40% tail if it exceeds the buffer). So the escape hatch is: don't pass intent, get raw output.
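
The truncation itself is nothing fancy, roughly (the buffer size below is an example value, not the real default):

  // Keep ~60% from the head and ~40% from the tail when output exceeds the buffer.
  // Character slicing approximates the byte budget; good enough for logs.
  function smartTruncate(raw: string, maxBytes = 50_000): string {
    if (Buffer.byteLength(raw, "utf8") <= maxBytes) return raw;
    const head = Math.floor(maxBytes * 0.6);
    const tail = Math.floor(maxBytes * 0.4);
    return raw.slice(0, head) + "\n...[truncated]...\n" + raw.slice(raw.length - tail);
  }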

On token counting:

It's a bytes/4 estimate using Buffer.byteLength() (UTF-8), not an actual tokenizer. Marked as "estimated (~)" in stats output. It's a rough proxy — Claude's tokenizer would give slightly different numbers — but directionally accurate for measuring relative savings. The percentage reduction (e.g., "98%") is measured in bytes, not tokens, comparing raw output size vs. what actually enters the conversation context.
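
Concretely, the estimate is just:

  // Rough token estimate: ~4 UTF-8 bytes per token. A proxy, not Claude's tokenizer.
  function estimateTokens(text: string): number {
    return Math.ceil(Buffer.byteLength(text, "utf8") / 4);
  }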


Nice trick. I’m going to see how I can apply it to tool calls in pi.dev as well

That means a lot, thank you! Would love to hear your feedback once you try it — and an upvote would be much appreciated if you find it useful

Looks pretty interesting. How could I use this with other MCP clients, e.g. OpenCode?

Hey! Thank you for your comment! It's a standard MCP server, so it should work with other clients, but I haven't tested that yet. I'll look into it as soon as possible. Your feedback is valuable.

nice, I'd love to see it for codex and opencode

Thanks! Context Mode is a standard MCP server, so it works with any client that supports MCP — including Codex and opencode.

Codex CLI:

  codex mcp add context-mode -- npx -y context-mode

Or in ~/.codex/config.toml:

  [mcp_servers.context-mode]
  command = "npx"
  args = ["-y", "context-mode"]

opencode:

In opencode.json:

  {
    "mcp": {
      "context-mode": {
        "type": "local",
        "command": ["npx", "-y", "context-mode"],
        "enabled": true
      }
    }
  }

We haven't tested yet — would love to hear if anyone tries it!


