I've been using Claude Code and Codex daily for months. They're some of the best programming tools I've tried. But there's something nobody tells you when you start: context runs out fast, and because every message resends everything before it, the cumulative cost grows quadratically.
The real problem isn't the message you're sending
When you're 50 messages into a session and you send message 51, your CLI doesn't just send that message. It sends all 51. The entire conversation, from the beginning, with every single request.
On top of that, Claude Code's system prompt is 13,000 characters — also sent with every message. Every command result the AI has run, every file it read, every search it performed — all of it is in the history, resent again and again.
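The arithmetic behind that is easy to sketch. A toy model, with illustrative token counts rather than measurements from either tool:

```python
# Illustrative model: each request carries the system prompt plus the
# entire history so far, so cumulative tokens grow quadratically.

def cumulative_tokens(turns, tokens_per_message, system_prompt_tokens):
    total = 0
    for n in range(1, turns + 1):
        # Request n resends the system prompt and all n messages so far.
        total += system_prompt_tokens + n * tokens_per_message
    return total

# 50 turns at ~500 tokens/message with a ~3,500-token system prompt:
print(cumulative_tokens(50, 500, 3500))  # 812500
```

Double the session length and the total far more than doubles, which is why long sessions get expensive so quickly.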
Why existing tools don't fix this
There's a very popular tool for this problem: RTK (Rust Token Killer), with over 16,000 GitHub stars. It does exactly what it promises: it works as a shell wrapper that intercepts the stdout of each command before it enters the context. When the AI runs git diff, RTK filters the output before the result is stored in the history.
Once a command result has entered the history, RTK can't touch it anymore. And on message 51, those 50 previous messages — with all their results, logs, file reads — are resent in full to the API. RTK has no visibility into the accumulated history.
In numbers: in a 50-turn session with 150,000 total tokens, RTK saves approximately 1.6%. It can only act on the current turn.
What I built
Squeezr is a local HTTP proxy that intercepts each request before it reaches the API. It operates at a different level than RTK: not on the stdout of a single command, but on the complete HTTP request — it sees and compresses the entire conversation on every send.
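Conceptually, the interception point looks like this. A minimal sketch, not Squeezr's actual code; compress_conversation is a placeholder for the real logic:

```python
# Sketch of a proxy's rewrite step: parse the outgoing request body,
# compress the messages array, and re-serialize it for the real API.
import json

def compress_conversation(messages):
    # Placeholder: the real logic summarizes old turns and
    # filters tool output before anything reaches the API.
    return messages

def handle_request(raw_body: bytes) -> bytes:
    payload = json.loads(raw_body)
    payload["messages"] = compress_conversation(payload.get("messages", []))
    return json.dumps(payload).encode()  # body forwarded upstream
```

Because the rewrite happens on the full request, it can touch every message in the history, not just the newest one.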
The system prompt is compressed once and cached. From 13,000 chars down to ~650. On the next request, and the one after, it comes straight from cache — no recompression.
Command and tool results are filtered before they accumulate in the history. When the AI runs npm test and gets 200 lines back, Squeezr extracts only the failing tests. When it reads a file, it keeps what's relevant. When it searches, it compacts the results. Git commands, Docker, kubectl, compilers, linters — each has its own specific pattern. And unlike RTK, Squeezr also compresses file reads and search results, not just bash output.
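A deterministic filter of that kind might look like this sketch. The regex is illustrative only; the real per-tool patterns are Squeezr's own:

```python
# Sketch: keep only the lines of test-runner output that indicate
# failures, and drop the passing noise.
import re

FAIL = re.compile(r"(FAIL|✕|✗|Error:|failing)", re.IGNORECASE)

def filter_test_output(stdout: str) -> str:
    kept = [line for line in stdout.splitlines() if FAIL.search(line)]
    return "\n".join(kept) or "all tests passed"
```

On a 200-line run with two failures, only those two lines survive into the history.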
The full history is compressed with every request. Older messages are summarized automatically. Message 51 doesn't resend 50 full conversations — it resends 48 compressed ones and the last 3 intact.
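The keep-recent, summarize-older split can be sketched like this. The policy and cutoff are assumptions, and summarize is a stub:

```python
# Sketch: pass the last KEEP_INTACT messages through untouched,
# replace everything older with a summary.
KEEP_INTACT = 3

def summarize(message: dict) -> dict:
    return {**message, "content": message["content"][:100]}  # stand-in

def compress_history(messages: list) -> list:
    old, recent = messages[:-KEEP_INTACT], messages[-KEEP_INTACT:]
    return [summarize(m) for m in old] + recent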
The result on an 85,000-char request: 25,000 chars. 71% less, on every message. In long sessions, cumulative savings reach 89%.
No quality loss
Compression is reversible, not destructive. All original content is stored locally. If the AI needs more detail from something that was compressed, it calls squeezr_expand() and gets the full original back instantly — no cost, no API call.
The AI gets the same information. Without the filler.
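The expand mechanism implies a local store keyed by block ID. A hypothetical sketch, not the actual implementation:

```python
# Sketch: each compressed block keeps an ID in the context that
# maps back to the locally stored original.
import uuid

_originals = {}

def store_and_compress(original: str, compressed: str) -> str:
    block_id = str(uuid.uuid4())
    _originals[block_id] = original
    return f"[compressed {block_id}]\n{compressed}"

def expand(block_id: str) -> str:
    return _originals[block_id]  # full original, no API call
```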
AI compression uses the cheapest model you already have — no extra cost
When a block is too long for deterministic patterns, Squeezr uses an AI model to summarize it — always the cheapest one from the provider you're already using: Haiku if you're on Claude, GPT-4o-mini if you're on Codex, Flash if you're on Gemini. And if you work with local models through Ollama or LM Studio, it uses local models too. No extra API keys, no additional cost.
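The provider-to-model selection reduces to a small lookup. A sketch using the model names mentioned in this post, not a verified config:

```python
# Sketch: pick the cheapest summarizer from the provider already
# in use; a local model (Ollama / LM Studio) takes priority.
CHEAPEST = {
    "anthropic": "claude-haiku",
    "openai": "gpt-4o-mini",
    "google": "gemini-flash",
}

def summarizer_model(provider: str, local_model: str = "") -> str:
    if local_model:  # local setups reuse the local model
        return local_model
    return CHEAPEST[provider]
```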
What changed in practice
Sessions last much longer. The AI keeps track because the context isn't filled with noise. And token spending dropped considerably.
Works today with Claude Code, Codex, Aider, and Gemini CLI. Cursor support is coming soon.
MIT. https://squeezr.es
If you try it, squeezr gain will tell you exactly how much you're saving.