How AI Agents Remember Things

memory-systems agent-architecture context-management openclaw

How do AI agents remember things between sessions? Every agent forgets everything when a conversation ends, so how do the best ones seem to know you the next time you talk?

I break down the memory architecture behind real AI agents, using OpenClaw (an open-source AI assistant) as a reference implementation. You’ll see how LLM agents write, store, and load persistent memory using plain markdown files, and the four mechanisms that keep context across sessions, including context window management, bootstrap loading, and pre-compaction memory flush.
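The markdown-file approach can be sketched in a few lines. This is a minimal illustration, not OpenClaw's actual code: the `MemoryStore` class and its method names are hypothetical, but the three moves it shows match the mechanisms above — memory lives in a plain markdown file, gets bootstrap-loaded at session start, and gets flushed to disk before the context window is compacted.

```python
import os
import tempfile

class MemoryStore:
    """Hypothetical sketch: persists agent memory as a plain markdown file."""

    def __init__(self, path):
        self.path = path
        self.notes = []

    def bootstrap(self):
        """Load prior memory into context at session start."""
        if os.path.exists(self.path):
            with open(self.path) as f:
                self.notes = [line.rstrip("\n") for line in f if line.strip()]
        return self.notes

    def remember(self, note):
        """Record a fact learned during this session (in memory only)."""
        self.notes.append(f"- {note}")

    def flush(self):
        """Write memory to disk, e.g. just before context compaction."""
        with open(self.path, "w") as f:
            f.write("# Agent memory\n\n")
            f.write("\n".join(self.notes) + "\n")

# Session 1: the agent learns something, then flushes before compaction/exit.
path = os.path.join(tempfile.mkdtemp(), "MEMORY.md")
session1 = MemoryStore(path)
session1.remember("User prefers concise answers")
session1.flush()

# Session 2: a fresh agent bootstraps the file and "remembers" the user.
session2 = MemoryStore(path)
loaded = session2.bootstrap()
print(loaded[-1])  # → - User prefers concise answers
```

Because the store is just markdown, you can open `MEMORY.md` in any editor, diff it, or put it under version control — which is a big part of why plain files work well for agent memory.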

Building an AI agent?

I help teams design and ship agentic systems — from architecture to production.

See how I can help
