ArcBrain is a persistent memory layer for VS Code Copilot, Cursor, Claude Desktop, and Windsurf. Your decisions, configs, snippets, and context — remembered forever, across every session.
A full-featured MCP server that stores, searches, and surfaces your development context exactly when you need it.
ArcBrain doesn't just store memories — it maps the relationships between them. The Constellation Map renders your entire knowledge graph as an interactive visual network, live in your dashboard.
Every fact, decision, snippet, and error becomes a node. Edges show how they reference each other. The more connections a memory has, the brighter and more prominent it glows.
ArcBrain runs as a local MCP server. Your AI editor talks to it automatically — no prompts, no manual saving.
Every memory ArcBrain captures — whether from a live AI session right now or imported from ChatGPT history you exported six months ago — enters the same aging pipeline. There are no second-class memories. No silos. One system, one truth.
Start with a full-featured 14-day free trial. Pick the plan that fits — monthly flexibility or a one-time lifetime deal.
We compared ArcBrain against built-in AI memory tools (ChatGPT, Claude, Copilot, Cursor) and direct memory competitors (Mem0, Zep, Letta, MemState, Giga). Every cell verified against public docs — April 2026.
ArcBrain vs. the built-in memory of ChatGPT, Claude Projects, GitHub Copilot, and Cursor — every feature verified
ArcBrain vs. direct AI memory competitors: Mem0, Zep, Letta (MemGPT), MemState, and Giga — every feature verified against public docs
10 things ArcBrain does that no other AI memory system offers today
Most "AI memory" tools are locked to one model, one tool, one session. ArcBrain was built on a different assumption: your knowledge belongs to you, not to any one AI product.
ArcBrain connects to VS Code Copilot, Cursor, Claude Desktop, and Windsurf simultaneously via MCP. Fix a bug in Copilot — Cursor knows about it in 5 minutes. The same memory store powers every tool.
ChatGPT memory is ChatGPT-only. Cursor memory lives in Cursor only. Claude Projects are Claude-only. ArcBrain is the only system where knowledge genuinely flows across all your AI tools without you doing anything.
ArcBrain doesn't care which AI model you use. GPT-4o in Copilot today, Claude Sonnet in Cursor tomorrow, Gemini in Windsurf on your laptop — they all pull from the same persistent brain.
Your memory isn't locked to OpenAI, Anthropic, or Google. Switch models or providers any time. The knowledge is yours, not theirs.
Every memory — regardless of where it came from — is stored in the same structured schema: key, value, category, confidence score, source, provenance chain, vector embedding, and timestamp.
A decision your AI captured from Cursor today is structurally identical to a fact parsed from a 2-year-old ChatGPT export. Unified format means unified search, unified aging, and unified retrieval — no translation layer, no lossy conversion.
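The unified schema described above can be sketched as a single record type. The field list follows the text exactly; the concrete names, types, and example values below are illustrative assumptions, not ArcBrain's actual storage definition:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Memory:
    """One memory record; every source produces this same shape (hypothetical sketch)."""
    key: str                 # short identifier, e.g. "db.choice"
    value: str               # the fact, decision, or snippet itself
    category: str            # "decision" | "fact" | "snippet" | "error"
    confidence: float        # 0.0-1.0, updated as the fact is referenced or ignored
    source: str              # e.g. "cursor-live" or "chatgpt-import"
    provenance: list[str] = field(default_factory=list)   # chain of origins
    embedding: list[float] = field(default_factory=list)  # vector for semantic search
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# A live capture and a years-old import share one schema:
live = Memory("db.choice", "Use PostgreSQL for the orders service",
              "decision", 0.9, "cursor-live")
imported = Memory("db.choice.old", "Tried MongoDB, rolled back",
                  "decision", 0.6, "chatgpt-import")
assert type(live) is type(imported)  # same search, aging, and retrieval path
```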
ArcBrain's memory isn't a bucket you pour facts into. It has a metabolism. Facts decay over time if unused. Facts strengthen when referenced repeatedly. Contradictions are detected and scored. Nightly distillation promotes high-confidence facts to canonical status.
Older imported memories age exactly like live memories. Import 18 months of ChatGPT history and those facts start competing, reinforcing, and distilling — just like anything captured today.
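That aging behavior can be illustrated with a toy decay-and-reinforcement model. The half-life, boost, and promotion threshold below are invented for illustration; ArcBrain's real constants are not public:

```python
# Illustrative memory "metabolism": unused facts decay, referenced facts
# strengthen. All constants here are assumptions for the sake of the sketch.
HALF_LIFE_DAYS = 30.0        # confidence halves after a month unused
REFERENCE_BOOST = 0.1        # each reference nudges confidence up
CANONICAL_THRESHOLD = 0.8    # nightly distillation promotes above this

def decayed(confidence: float, days_unused: float) -> float:
    """Exponential decay: confidence halves every HALF_LIFE_DAYS idle."""
    return confidence * 0.5 ** (days_unused / HALF_LIFE_DAYS)

def reinforced(confidence: float) -> float:
    """Each reference moves confidence toward 1.0, capped at 1.0."""
    return min(1.0, confidence + REFERENCE_BOOST)

conf = 0.6
conf = decayed(conf, days_unused=30)   # 0.3 after a month of silence
for _ in range(6):                     # then six references in one session
    conf = reinforced(conf)
print(conf >= CANONICAL_THRESHOLD)     # True: eligible for promotion
```

The same two functions run on every record, which is why an imported fact and a live capture age identically.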
All data lives in SQLite + ChromaDB on your machine. The MCP server is a local process. AI summarization runs locally via Ollama if you choose — zero cloud dependency.
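To make "local-first" concrete, here is a minimal sketch of the SQLite side using Python's built-in sqlite3 module. The table schema is invented for illustration (ArcBrain's real schema and the ChromaDB vector side are not shown), and an in-memory database stands in for the on-disk file:

```python
import sqlite3

# Hypothetical sketch: one local SQLite table, no network involved.
conn = sqlite3.connect(":memory:")  # ArcBrain would open a file on disk
conn.execute("""
    CREATE TABLE memories (
        key        TEXT PRIMARY KEY,
        value      TEXT NOT NULL,
        category   TEXT,
        confidence REAL,
        source     TEXT
    )""")
conn.execute(
    "INSERT INTO memories VALUES (?, ?, ?, ?, ?)",
    ("build.tool", "Use uv instead of pip", "decision", 0.9, "cursor-live"))
conn.commit()

row = conn.execute(
    "SELECT value FROM memories WHERE key = ?", ("build.tool",)).fetchone()
print(row[0])  # → Use uv instead of pip
```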
PII detection scans every memory at ingestion. Emails, phone numbers, API keys, and tokens are flagged before storage, with optional automatic redaction. You decide what gets stored.
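Ingestion-time flagging with optional redaction might look roughly like this. The regex patterns are deliberately simplified illustrations, not ArcBrain's actual detection rules:

```python
import re

# Simplified stand-ins for real PII detectors (illustrative only).
PII_PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":   re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "api_key": re.compile(r"\b(?:sk|pk|ghp)_\w{16,}\b"),
}

def flag_pii(text: str) -> dict[str, list[str]]:
    """Return every PII category found in the text, with its matches."""
    hits = {kind: rx.findall(text) for kind, rx in PII_PATTERNS.items()}
    return {kind: found for kind, found in hits.items() if found}

def redact(text: str) -> str:
    """Optional automatic redaction: replace matches with placeholders."""
    for kind, rx in PII_PATTERNS.items():
        text = rx.sub(f"[{kind.upper()} REDACTED]", text)
    return text

memo = "Ping alice@example.com, key sk_live_abcdefghijklmnop"
print(sorted(flag_pii(memo)))  # → ['api_key', 'email']
```

Flagging happens before storage, so the user can choose per category whether to store, redact, or drop.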
ArcBrain communicates via the Model Context Protocol — an open standard, not a proprietary API. Any MCP-compatible tool can connect. As the ecosystem grows (and it's growing fast), ArcBrain works with new tools automatically.
Competitors force you into their SDK, their rate limits, their per-query pricing. ArcBrain runs on your machine, on your terms, with no per-query costs.
No. ArcBrain runs entirely on your local machine. Your memory is stored in a local SQLite database and a local ChromaDB vector store — both on your computer. The optional AI summarization uses Ollama (also local). The only network call is an optional license validation check.
ArcBrain works with any editor that supports the Model Context Protocol (MCP). Currently that includes VS Code with GitHub Copilot, Cursor, Claude Desktop, and Windsurf. The ArcBrain setup wizard auto-detects your editors and writes the configuration for you — no manual JSON editing required.
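For reference, an MCP server entry in an editor's config typically looks like the fragment below. The server name, command, and flags shown here are illustrative assumptions; the setup wizard writes the real values for you:

```json
{
  "mcpServers": {
    "arcbrain": {
      "command": "arcbrain-mcp",
      "args": ["--data-dir", "~/arcbrain/data"]
    }
  }
}
```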
No — Ollama is optional. The core memory features (storing, searching, and retrieving facts, snippets, and decisions) work without any local AI model. Ollama (llama3.2:3b or similar) is only needed if you want automatic end-of-session summaries. The setup wizard lets you enable or skip it.
Yes — every plan includes a 14-day free trial with no credit card required. You get full Pro access during the trial. After 14 days, choose the Pro Monthly ($25/mo) or Lifetime ($199 beta, normally $299) plan to continue. If you don't upgrade, your local memory data is always yours to keep.
Static rules files require you to manually write and maintain context. ArcBrain automatically captures and persists context as you work — decisions you make, errors you solve, configs you set, todo items, and more. It also supports semantic search so your AI can find relevant memories from months ago without you having to remember to include them.
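Semantic search boils down to comparing vector embeddings rather than matching keywords. A toy sketch, using hand-made 3-dimensional vectors where a real system would use model-generated embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Invented memories with invented embeddings, for illustration only.
memories = {
    "Switched auth to JWT after session bugs": [0.9, 0.1, 0.2],
    "Team lunch moved to Fridays":             [0.1, 0.9, 0.1],
    "Fixed CORS by allowing credentials":      [0.8, 0.2, 0.3],
}

def search(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k memories whose embeddings are closest to the query."""
    ranked = sorted(memories, reverse=True,
                    key=lambda m: cosine(memories[m], query_vec))
    return ranked[:k]

# A query about "login errors" embeds near the auth and CORS memories,
# even though it shares no exact keywords with them:
top2 = search([0.85, 0.15, 0.25])
print(top2)
```

This is why a months-old decision surfaces without you remembering its exact wording: closeness in embedding space, not string overlap, drives retrieval.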
Copy your ~/arcbrain/data/ folder to the new machine — it contains your full SQLite database and ChromaDB vector store. Run the installer on the new machine and it will pick up where you left off. The setup wizard auto-connects your editors, so you're back in business in under a minute.
Yes — completely. Every memory, regardless of source, enters the same three-layer pipeline. A fact parsed from a 2-year-old ChatGPT export starts in Working Memory alongside anything captured by the Live Watcher five minutes ago. From there it ages identically: contradiction detection runs, confidence scores update as the fact gets referenced or ignored, and nightly distillation promotes high-confidence facts to Canonical status. There is no "imported" category. There are no second-class memories. If you import 18 months of Cursor conversations on day one, those facts don't sit inert — they immediately become part of the active knowledge graph, competing, reinforcing, and distilling just like live captures.
Cursor's memory relies on .cursor/rules files and system prompts you write by hand — static, session-scoped, and tied to one editor. ArcBrain automatically captures decisions, errors fixed, configurations set, and project context as you work. Every memory is semantically searchable and shared across VS Code Copilot, Cursor, Claude Desktop, and Windsurf simultaneously. Think of it as a persistent second brain that all your AI tools pull from, rather than a per-editor rules file you have to maintain.
Giga Memory (also called Gigaboost) was a cloud-based AI memory service. ArcBrain is a more powerful self-hosted alternative built by ARC Technologies — no monthly cloud fees, no data leaving your machine. You get the same persistent-context superpowers (and then some: semantic search, multi-editor support, structured fact storage, session summaries) running entirely on your own hardware via a local MCP server.
Currently available on Windows. macOS & Linux coming soon.
irm https://arcbrain.pages.dev/install.ps1 | iex
Windows PowerShell