Hey HN,

I built this because I kept losing hours of AI coding context every time a rate limit hit.
The scenario: You're 3 hours into a session with Claude Code. It finally understands your codebase, your decisions, your architecture. Then — rate limit. Session dead. You can't even run a save command.
You open Cursor. "How can I help you today?" Back to square one. 15 minutes re-explaining everything.
I looked for existing tools. Ruler (~2,500 stars) is great at syncing rules across AI tools; if that's what you need, use it. SaveContext captures sessions, but it relies on SQLite and pulls in dependencies. None of them solved the specific problem: when the rate limit hits, you can't run anything. Your context has to already be saved.
So I built ctx. It installs git hooks that auto-save your context on every commit and pre-generates resume prompts for 11 tools (Claude, Cursor, Codex, Copilot, Windsurf, Cline, Aider, Continue, Amazon Q, Zed, Antigravity). When a rate limit hits, you open .ctx/resume-prompts/cursor.md, paste, and keep working, all in about ten seconds.
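The recovery flow is just shell. A runnable sketch (the prompt text below is a stand-in I made up so the snippet works standalone; only the .ctx/resume-prompts/ path comes from ctx's layout):

```shell
# Simulate the file ctx's commit hook writes (the prompt text is a
# stand-in; the path is ctx's documented layout):
mkdir -p .ctx/resume-prompts
printf 'You were mid-refactor of the auth module...\n' > .ctx/resume-prompts/cursor.md

# When the rate limit kills your session, grab the pre-generated
# prompt for the tool you're switching to and paste it in:
cat .ctx/resume-prompts/cursor.md
```

On macOS, `cat .ctx/resume-prompts/cursor.md | pbcopy` puts it straight on the clipboard (xclip or wl-copy on Linux).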
The interesting engineering challenge was token-aware compilation: each tool has wildly different size limits (Windsurf: 6K per file, 12K total; Codex: 32 KiB; Claude: ~100K chars). The compiler prioritizes session context over rules and compresses aggressively for tight-budget tools.
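The core idea can be sketched in a few lines. This is illustrative only, not ctx's actual code: the budget values mirror the limits above, but the hard-truncation strategy is an assumption (a real compiler would summarize rather than cut):

```typescript
// Illustrative token-aware compilation: fit session context plus rules
// into a per-tool character budget, giving session context first claim
// and truncating rules to whatever budget remains.
interface ToolBudget {
  name: string;
  maxChars: number; // per-file character budget
}

const BUDGETS: ToolBudget[] = [
  { name: "windsurf", maxChars: 6_000 },  // 6K per file
  { name: "codex", maxChars: 32 * 1024 }, // 32 KiB
  { name: "claude", maxChars: 100_000 },  // ~100K chars
];

function compile(session: string, rules: string, budget: ToolBudget): string {
  // Session context gets first claim on the budget.
  const sessionPart = session.slice(0, budget.maxChars);
  const remaining = budget.maxChars - sessionPart.length;
  if (remaining <= 0) return sessionPart;
  // Rules get whatever is left; hard truncation stands in for
  // real compression here.
  return sessionPart + rules.slice(0, remaining);
}

// Tight budgets drop most of the rules; wide budgets keep everything.
const session = "S".repeat(5_000);
const rules = "R".repeat(10_000);
console.log(compile(session, rules, BUDGETS[0]).length); // 6000
console.log(compile(session, rules, BUDGETS[2]).length); // 15000
```

The priority order matters: for a tool like Windsurf, rules are the first thing sacrificed so the session narrative survives intact.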
Honest disclosure: I'm a project manager, not a CS grad. I built this entire thing through vibe coding with Claude Code. TypeScript, zero production dependencies, 115 tests. It works on itself (I used ctx during its own development when sessions died).
New in the latest version: ctx init --external stores all context outside the project directory (in ~/.ctx-global/), leaving zero files in the repo. I built this after I accidentally pushed private session data to a public repo.
npm i -g ai-context-bridge
cd your-project
ctx init # private repos
ctx init --external # public/open-source repos
Zero deps. MIT licensed. Would love feedback, especially from people who use multiple AI coding tools.