The Ebbinghaus decay approach is really interesting for agent memory. I've been working with a file-based memory system for a long-running AI agent (daily logs + curated long-term memory file), and the biggest challenge is exactly what you're tackling: knowing what to forget.
One thing I've found in practice: importance isn't always knowable at write time. Something that seems trivial today becomes critical context a week later. Have you considered a retrieval-based boost where accessing a memory resets or extends its half-life? That way naturally useful memories self-reinforce.
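For concreteness, here's a minimal sketch of what I have in mind, assuming simple exponential decay with a per-memory half-life. The class, the 7-day default, and the 1.5× boost factor are all illustrative numbers, not anything from your post:

```python
import time

class DecayingMemory:
    """One memory entry with an Ebbinghaus-style exponential retention score.

    Retrieval extends the half-life, so frequently accessed memories
    decay more slowly and effectively self-reinforce.
    """

    def __init__(self, content: str, half_life_days: float = 7.0):
        self.content = content
        self.half_life_days = half_life_days  # illustrative default
        self.last_access = time.time()

    def retention(self, now: float = None) -> float:
        """Current retention in (0, 1]: exactly 0.5 after one half-life."""
        if now is None:
            now = time.time()
        age_days = (now - self.last_access) / 86400
        return 0.5 ** (age_days / self.half_life_days)

    def touch(self, boost: float = 1.5) -> None:
        """On retrieval: reset the clock and lengthen the half-life."""
        self.last_access = time.time()
        self.half_life_days *= boost  # boost factor is a guess; tune it
```

Pruning then just drops anything whose retention falls below a threshold, e.g. `[m for m in memories if m.retention() > 0.05]`, and a memory that keeps getting touched becomes effectively permanent.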
Also curious about the compression/merging strategy — how do you handle contradictory memories (e.g., "user prefers X" followed by "user now prefers Y")?