Using GraphRAG with OpenClaw to build persistent knowledge graph memory for AI agents
I gave my AI assistant a memory. Here’s what it knew about me.
For months, my AI assistant woke up every session with no idea what we’d done the day before. I’d ask about a project we’d built together and get a polite “I’m not sure what you’re referring to.” We’d spent hours on it. It helped write the code. Gone. Clean slate. Every time.
This is the problem people keep glossing over when they talk about agentic AI. Not hallucinations, not cost, not context windows. Memory. Specifically, the complete absence of it between sessions.
The Standard “Fix” Doesn’t Work
The obvious solution is stuffing everything into a long system prompt. I tried this. It breaks down fast. You can’t fit months of session notes into a single prompt window, and even if you could, the model can’t do multi-hop reasoning across 50 documents at once. It retrieves something. It doesn’t think across everything.
Standard vector RAG has the same ceiling. You ask a question, it finds the closest chunks in embedding space, and shoves them into the prompt. That works fine for “what did I say about X last Tuesday.” It falls apart for questions like “how does this new project connect to what I was building in February?” or “what are the recurring patterns in my work this quarter?” Vector search finds relevant passages. It doesn’t find relationships.
What GraphRAG Actually Does Differently
Microsoft’s GraphRAG takes a different approach to the problem. Instead of doing similarity search at query time, it reads your documents at index time and builds an actual knowledge graph. Entities, relationships, community clusters. By the time you ask a question, the structural reasoning is already done. You’re querying a representation of your knowledge, not scanning raw text.
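The "traverse relationships" part is the key difference, and it's easy to see in miniature. Here's a toy sketch of the kind of entity-relationship structure GraphRAG extracts at index time, with a multi-hop traversal that links two entities no single document mentions together. The entity names are invented for illustration; in the real system the graph is built by the LLM indexer, not by hand:

```python
from collections import deque

# Toy knowledge graph: entities and typed relationships, the kind of
# structure GraphRAG extracts at index time. (Names are invented for
# illustration; the real graph comes from the LLM indexer.)
edges = {
    "ProjectAlpha": [("uses", "SmartMemory")],
    "SmartMemory": [("stores", "SessionNotes")],
    "SessionNotes": [("mentions", "FebruaryPrototype")],
}

def connect(graph, start, goal):
    """Multi-hop traversal: find a relationship chain linking two entities."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"-{rel}->", nxt]))
    return None  # no chain of relationships connects the two

path = connect(edges, "ProjectAlpha", "FebruaryPrototype")
print(" ".join(path))
```

A top-k vector search would return the chunks most similar to the query; none of them states the Alpha-to-February link directly, so it never surfaces. The traversal finds it because the link exists as a chain of edges, not as a passage of text.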
The practical difference is significant. Graph search can traverse relationships and synthesize across an entire corpus. That’s not a marketing claim. I tested it.
I pointed GraphRAG at 55 documents: daily session notes, project logs, a curated long-term memory file, reference documentation. The indexer ran on gpt-4o-mini for bulk extraction (cheaper at that volume); queries run on gpt-4o for better reasoning quality. I kicked off the build overnight, though the full index actually finished in 25 minutes for 55 documents. Two hours of config and setup total, which I'll take.
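The cheap-indexer/strong-query split lives in GraphRAG's settings.yaml at the project root. Here's the shape of what I mean; the exact field names vary across GraphRAG versions, so treat this as a sketch of the idea, not a drop-in config:

```yaml
# settings.yaml (abridged; schema differs between graphrag versions)
llm:
  type: openai_chat
  api_key: ${GRAPHRAG_API_KEY}
  model: gpt-4o-mini        # bulk entity/relationship extraction at index time

# query-time search can point at a stronger model
global_search:
  llm:
    model: gpt-4o           # synthesis / "connect everything" questions
```

Extraction touches every chunk of every document, so it dominates cost; queries are comparatively rare, so paying for the better model there is the right trade.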
What the Agent Knew About Me
Here’s the part that got my attention. I asked it: “Give me a comprehensive summary of Glen Rhodes’ life.”
It had never had a conversation with me. It had only read the documents.
What came back was accurate, detailed, and connected things across files from months apart. The legal situation. The consulting work. The products I’m building. Personal history. It surfaced threads I hadn’t consciously linked together because they existed in passing references scattered across completely different documents.
That’s graph retrieval doing something vector search can’t. Vector search would have found the most relevant documents. The graph found how everything connects.
The Current Setup
Two memory servers now run side by side in my OpenClaw environment. SmartMemory on port 8765, my own custom graph system, handles specific fact lookups well. GraphRAG on port 8766 handles synthesis questions. The agent checks GraphRAG before saying it doesn't know something. New files sync every night at 2am and the index updates incrementally via graphrag update --method fast-update, so the nightly reindex doesn't redo everything from scratch.
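The "check GraphRAG before saying I don't know" order is just a fallback chain. A minimal sketch of that routing logic, with stub classes standing in for HTTP calls to the two localhost servers (the class and method names here are mine, not part of either server's API):

```python
# Stand-ins for HTTP clients talking to localhost:8765 / localhost:8766.
class SmartMemoryClient:
    def __init__(self, facts):
        self.facts = facts          # exact-match fact store (port 8765)

    def lookup(self, query):
        return self.facts.get(query)

class GraphRAGClient:
    def __init__(self, synthesized):
        self.synthesized = synthesized   # graph synthesis endpoint (port 8766)

    def query(self, query):
        return self.synthesized

def answer(query, smart_memory, graphrag):
    """Fast fact lookup first; only admit ignorance after GraphRAG misses too."""
    fact = smart_memory.lookup(query)
    if fact is not None:
        return fact
    synth = graphrag.query(query)
    return synth if synth else "I don't know."

sm = SmartMemoryClient({"what port is SmartMemory on?": "8765"})
gr = GraphRAGClient("synthesized: the projects connect via SmartMemory")
print(answer("how do my projects connect?", sm, gr))
```

SmartMemory goes first because it's fast and precise; GraphRAG goes second because a 20-second synthesis query is only worth paying for when the cheap lookup comes up empty.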
Honest Limitations
Query latency sits around 20 seconds. That’s fine for a background agent working on a task. It’s too slow for real-time chat. GraphRAG also doesn’t do temporal reasoning natively. If a document from February contradicts something from April, it’ll surface both rather than knowing which is more recent. You have to build that logic yourself or handle it in your prompting layer.
These are real limitations. They’re also solvable, and the foundation underneath them is solid enough to build on.
The “your AI assistant forgot what you built together” problem has a practical solution now. It took two hours to set up. It runs itself after that. If you’re building or running agents and hitting this wall, this is the most direct path through it I’ve found.
Full config and server code are on my GitHub. Start with pip install graphrag, point it at your documents directory, and let it index overnight. The setup instructions are at the bottom of the full writeup.
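For the impatient, the whole setup compresses to a few commands. This follows the CLI of recent graphrag releases; older versions invoked the same steps as python -m graphrag.index, so check graphrag --help against your installed version:

```shell
pip install graphrag
export GRAPHRAG_API_KEY="<your OpenAI key>"

graphrag init --root ./memory          # scaffolds settings.yaml and prompts
cp ~/notes/*.md ./memory/input/        # GraphRAG reads documents from input/
graphrag index --root ./memory         # builds the graph; let it run overnight

graphrag query --root ./memory --method global \
  "What are the recurring patterns in my work this quarter?"
```

Global search is the synthesis mode that reasons over community summaries of the whole corpus; local search is the one to reach for on specific-entity questions.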
The agent that forgets everything is a choice at this point, not a constraint.
#AIAgents #GraphRAG #KnowledgeGraph #MachineLearning #LLM #OpenClaw #AIMemory
