> The Problem
> You: "Remember that auth bug we fixed?"
> Claude: "I don't have memory of previous conversations."
> You: "We spent 3 hours on it yesterday"
> Claude: "I'd be happy to help debug from scratch!"
People are twisting themselves into knots trying to solve issues like this. Let's use the mental model of "coding agents are not magic, treat them like humans." The boring old Jira MCP [0] completely solves the posed problem.
> You: "Remember that auth bug we fixed?"
> Claude: *calls Atlassian Rovo*
> Claude: "Yes, I see PROJ-123 with commit SHAs, comments on what we decided and why. How would you like to proceed?"
If all LLMs disappear, you still have human-readable Jira tickets to continue working on your project, instead of something like QR codes in an MP4.
[0] I am not trying to promote Jira or MCPs here, use whatever you want. I went back to Jira because its usage patterns are very well represented in the training data, and their MCP is not in beta.
Even simpler: wedow/ticket with in-repo tracking of tasks. No MCP, no Jira.
(This is a markdown-only replacement for Beads.)
But yes, wiping the context clean is no problem; in fact, it's preferred hygiene.
> # One-time setup (if you haven't used GitHub plugins before)
> git config --global url."https://github.com/".insteadOf "git@github.com:"
Are you sure?? This sounds like terrible advice. That setting rewrites every SSH GitHub remote to HTTPS, so won't it break pushing for anyone who authenticates with SSH keys?
Discussed 10 months ago here: https://news.ycombinator.com/item?id=44125598
Back then the consensus was that the idea was absurd; I'm surprised they're now trying to make it into a product.
> Is it private?
> 100% local. Nothing leaves your machine. Ever.
Except for the part where it gets added to the context window and sent to Anthropic's servers?
This is a bit strange to point out explicitly, since that ship sailed anyway.
Boastful lies like this are a telltale sign of vibe-coded projects. Roughly speaking, an AI makes word-association guesses from its context window and arranges those guesses into grammatical forms that human RLHF reviewers find impactful. Frequently the lies are obvious if you have a mental model of the project, which the AI doesn't have.
This post's title is wildly misleading; mods should correct it. While the GitHub project is called "claude brain", it is not published by Anthropic, and the repo is a thin wrapper around a service called "Memvid". Calling this a "Memvid MCP" would be more accurate. Maybe the title should be "Memvid MCP provides alternative memory for Claude".
Seems like vibe-coded garbage, as well as being irrelevant given the latest Claude Code features, which include a memory file.
Oh, also note that this isn't a free OR open-source tool. It's a wrapper around a ludicrously priced service: https://memvid.com/pricing
Which is why the repo says "Written in Rust" but contains only a thin JavaScript/TypeScript layer around the underlying service.
How does this work under the hood? What is so different from the OpenClaw approach of being able to do a semantic search over past sessions?
As a sales pitch this is unconvincing. It's easy to save data to a file. Why this file format?
Tsk tsk you are so asking the wrong question.
It’s not “saving data to a file”, it is:
“The Knowledge Layer for AI The AI platform that puts your company's knowledge to work, powering enterprise search, AI agents, and workflow automation. All in a single file you own.”
(From their site: https://memvid.com)
Get with the program!
Okay, so this project encodes memories as QR codes within an MP4 file. And if I'm correct, I believe it's doing some sort of vector search based on the text embedding of the data to find the frame within the MP4 file.
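If that guess is right, the lookup step is just nearest-neighbour search over embeddings. A toy sketch of the idea (the vectors and the frame index here are made up for illustration; a real system would use a text-embedding model):

```python
import math

# Map a query embedding to the closest frame by cosine similarity.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# frame_index -> toy embedding (stand-in for real text embeddings)
index = {0: [1.0, 0.0], 1: [0.9, 0.4], 2: [0.0, 1.0]}
query = [1.0, 0.1]

best_frame = max(index, key=lambda i: cosine(query, index[i]))
print(best_frame)  # the frame whose embedding is closest to the query
```

Nothing about this step requires the frames to be QR codes, which is what makes the format choice puzzling.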
The one thing I don't understand about this project is how encoding data as a QR code can be more efficient than just storing the same data as compressed text.
Also, if you're storing the data as QR codes, aren't you just wasting space? QR codes are specifically designed to be read in the wild via a camera, so tracking markers and error correction are built in. Those markers and that error correction are redundant when you no longer need to decode a banged-up code with a smartphone camera.
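To put a rough number on that waste, using capacity constants from the QR spec for the largest symbol (version 40):

```python
# Back-of-envelope: how much of a QR symbol is structure and error
# correction rather than payload. Constants are from the QR spec
# (version 40, the largest symbol).
modules = 177 * 177          # black/white modules in a version-40 symbol
raw_bits = modules           # one bit per module
payload_bits = 2953 * 8      # max binary payload at error-correction level L

overhead = 1 - payload_bits / raw_bits
print(f"{overhead:.1%}")     # share of the symbol that is not payload
```

So roughly a quarter of the bits are overhead even at the lowest error-correction level, before the video codec adds its own container and compression costs.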
Is there actually something novel I'm missing about this with the QR code MP4 bit? Because to me this just seems silly.
Edit: So I found the original reddit post https://www.reddit.com/r/ClaudeAI/comments/1ky1y7z/i_acciden...
It just reads like someone who doesn't understand how to index and read data from a disk. I don't understand why anything streaming data from disk would need 8 GB of RAM, as stated in the post. A regular vector database could easily stream from disk.
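For instance, even a brute-force scan over embeddings memory-mapped from disk keeps resident memory small, since the OS pages rows in on demand. A sketch (file name and vector dimensions invented for illustration):

```python
import numpy as np

# Write a few toy embeddings to disk once...
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
vecs.tofile("embeddings.f32")

# ...then memory-map the file for search; rows are paged in on demand
# instead of the whole index being loaded into RAM.
store = np.memmap("embeddings.f32", dtype=np.float32).reshape(-1, 2)

query = np.array([0.9, 0.1], dtype=np.float32)
best = int((store @ query).argmax())  # nearest row by dot product
print(best)
```

Real vector databases layer ANN indexes on top of exactly this kind of disk-backed storage.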
This is all it takes to create a business in the age of AI? Some elaborate Rube Goldberg machine for text storage and retrieval?
"Okay, so this project encodes memories as QR codes within an MP4 file. And if I'm correct, I believe it's doing some sort of vector search based on the text embedding of the data to find the frame within the MP4 file."
I'm just as puzzled as you. What the heck, did people forget how to encode data?
Unless it's actually smart, in the sense that QR codes carry a limited amount of data, MP4 compression introduces artifacts and loses some data, and QR codes can recover from some (well, from a lot of) data loss, repeat. So it's a DB with natural low-pass filtering of the data. Conceptually cool, kinda like how the memory update process worked for Historians in the TV series "Travelers".
Or, you know, I hear SQLite is kinda nice and doesn't need encoding/decoding to and from QR videos.
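A minimal sketch of that saner alternative, using nothing but Python's bundled sqlite3 (table layout and contents invented for illustration):

```python
import sqlite3

# Memory snippets in a single SQLite file, queried with ordinary SQL
# instead of QR-video decoding.
con = sqlite3.connect(":memory:")  # use a path like "memory.db" to persist
con.execute("CREATE TABLE notes (ts TEXT, body TEXT)")
con.execute(
    "INSERT INTO notes VALUES ('2025-06-01', 'fixed auth bug: token refresh race')"
)

hits = con.execute(
    "SELECT body FROM notes WHERE body LIKE '%auth%'"
).fetchall()
print(hits)  # [('fixed auth bug: token refresh race',)]
```

Still "all in a single file you own", and builds with FTS5 get real full-text search on top for free.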
QR codes can only recover data because the QR code itself is built with redundancy. It's always storing more data than is actually needed. But if you lose too much of the QR code, it becomes impossible to read.
Think about the most basic example: it is impossible to recover 100 bytes of data from a binary file, even with error correction, if you only have 80 bytes of it.
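A toy illustration with a single XOR parity chunk (not what QR actually uses, which is Reed-Solomon, but the principle is the same): redundancy buys back exactly as many losses as you paid for, never more.

```python
# Toy erasure code: one XOR parity chunk can rebuild ONE missing chunk.
# Lose two chunks and no error correction brings them back, because the
# information simply isn't there anymore.
a = b"hello"
b_ = b"world"
parity = bytes(x ^ y for x, y in zip(a, b_))  # the redundancy we paid for

# Chunk `a` is lost; XOR the survivors to reconstruct it.
recovered = bytes(x ^ y for x, y in zip(b_, parity))
print(recovered)  # b'hello'
```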
Also, QR codes are only storing the bits. There's absolutely no way H.264/H.265 stores bits in images more efficiently than just writing the bits to a regular binary file.
You don't have to explain that to me :D I'm agreeing with your critical comment above.
I'm just trying to make sense of why someone thought memory -> QR -> MP4 encoding would be a good idea, instead of some sane format.
I suspect it's because Claude said "You're absolutely right! QR codes are an excellent way to store data with their redundancy and small size".
In that case, soon we will need licenses for AI, just like with guns... There's no way a smart person would build this seriously rather than as a joke or an art project.
"photographic" memory.
I can't tell if it's a joke or not.
I'm looking for something like this to use per project for pi.dev or oh-my-pi.
edit: oh-my-pi has https://github.com/can1357/oh-my-pi/blob/main/docs/memory.md which seems pretty close; maybe I should try to use that more.
ugh, the declarative short pseudo-sentences, markdown arrows everywhere, and asinine metaphors. the trillion-dollar future is here and it's just markdown files shouting how they've solved machine intelligence by propagating a different flavor of markdown files. maybe it is useful, but seeing the field reduced to this leaves a sour taste in my mouth