Save a TikTok video from your phone via the default share sheet, then go back to what you were doing. Open Claude on your laptop later, ask "what was that tutorial I saved about harness building?" and it pulls back the saved item via MCP, with the post and step-by-step list already extracted.
A save endpoint that accepts TikTok videos, YouTube videos, URLs, text, files, screenshots, audio, and PDFs from the iOS/Android share sheets, a Chrome extension, or plain HTTP. A worker fetches the source, transcribes video and audio, runs OCR on images and Mozilla Readability on web pages, and summarizes.
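As a rough illustration of the HTTP path, here's a minimal sketch of how a client might shape a save request before posting it. The field names and type detection are assumptions for illustration, not the app's actual API.

```javascript
// Hypothetical payload builder for a save endpoint; field names
// ("type", "content", "savedAt") are assumptions, not the real schema.
function buildSavePayload(input) {
  // Treat anything that looks like a link as a URL save, else plain text.
  const isUrl = /^https?:\/\//i.test(input);
  return {
    type: isUrl ? "url" : "text",
    content: input,
    savedAt: new Date().toISOString(),
  };
}

const a = buildSavePayload("https://www.tiktok.com/@user/video/123");
const b = buildSavePayload("remember to buy milk");
console.log(a.type, b.type); // prints "url text"
```

In the real worker, the detected type would decide which pipeline runs (transcription, OCR, or Readability).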
The same retrieval drives a chat tab inside the app for people who don't run their own AI tooling or MCP client.
A small extraction module format. Sandboxed JavaScript with a restricted API surface and a static scanner before publish. A module says "for saves matching this pattern, extract these fields and surface these outbound links." A movie module pulls year, watch providers in your country, IMDb, and Rotten Tomatoes. A book module pulls Goodreads, Amazon, and a Libby search tied to whichever library card you've added in settings. A recipe module pulls ingredients with amounts and a step list. There's a small store inside the app with defaults. Anyone can publish a module after a static scan plus an approval review, or keep your own extractions private.
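To make the module format concrete, here's a hedged sketch of what a recipe module could look like. The `match`/`extract` shape, the sample domains, and the line-based parsing are all assumptions for illustration, not the app's real module API.

```javascript
// Hypothetical extraction module; the object shape is an assumption,
// not the app's actual module interface.
const recipeModule = {
  // Run this module for saves whose URL matches the pattern
  // (example domains, purely illustrative).
  match: /allrecipes\.com|seriouseats\.com/,

  // Given the already-extracted page text, pull structured fields.
  extract(text) {
    const ingredients = [];
    for (const line of text.split("\n")) {
      // Lines like "2 cups flour" become { amount, item } entries.
      const m = line.match(/^([\d/.]+\s*\w*)\s+(.+)$/);
      if (m) ingredients.push({ amount: m[1].trim(), item: m[2] });
    }
    return { ingredients };
  },
};

const page = "2 cups flour\n1 tsp salt\nMix everything together.";
const out = recipeModule.extract(page);
console.log(out.ingredients.length); // prints 2
```

A movie or book module would follow the same pattern but return outbound links (watch providers, Libby search) instead of parsed fields.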
Because of the modules, saves point outward. The detail page for a movie shows rows for actually watching it. A book save deep links into Libby. The relevant link surface grows as people write modules, instead of me wiring integrations one at a time.
I wouldn't pay for another non-frontier AI service, and I don't want to have to discontinue the app when AI prices change. So the model is bring your own AI API key for unlimited use, or, if you don't want to bother with that, 50 scans a day. Either way it's free.
Stack: Svelte 5 wrapped in Capacitor for iOS and Android, Firebase Functions for the API, Cloud Run for the worker, Firestore for storage. The default is Gemini for ingest and embeddings. Bring-your-own-key works with Claude, Gemini, OpenAI, xAI, or DeepSeek; the same key list drives the MCP server and the chat tab. Sign in with Apple or Google.
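Since one key list drives both the MCP server and the chat tab, there's presumably a single routing step that picks a provider from whatever keys the user has added. Here's a hedged sketch of that idea; the base URLs are the providers' real API hosts, but the function and its fallback behavior are assumptions, not the app's actual code.

```javascript
// Provider endpoints (real API hosts); routing logic below is illustrative.
const PROVIDERS = {
  claude: "https://api.anthropic.com",
  gemini: "https://generativelanguage.googleapis.com",
  openai: "https://api.openai.com",
  xai: "https://api.x.ai",
  deepseek: "https://api.deepseek.com",
};

// Pick the preferred provider if the user has a key for it,
// otherwise fall back to the first provider they do have a key for.
function resolveProvider(keys, preferred) {
  if (keys[preferred] && PROVIDERS[preferred]) {
    return { name: preferred, base: PROVIDERS[preferred] };
  }
  const name = Object.keys(keys).find((k) => PROVIDERS[k]);
  return name ? { name, base: PROVIDERS[name] } : null;
}

const choice = resolveProvider({ openai: "sk-test" }, "gemini");
console.log(choice.name); // prints "openai"
```

The same resolution can serve the chat tab and the MCP server, which is one way a single key list can back both.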
Rough edges: video processing can run 30 to 90 seconds, and public profile pages still need work.
If you've thought about plugging your own content library into an LLM through MCP, what would you want from it?
I'm also curious what people want extracted from their saves; I think this gets more interesting as people create different extraction modules. Ideas welcome!
Screenshots and what extraction looks like: https://clippsapp.com/show
There's an Obsidian plugin as well.
Web: https://clippsapp.com
iPhone: https://apps.apple.com/us/app/clipps/id6761316060
Android: closed beta, email waitlist on clippsapp.com