
Memory Runtime for AI Agents
Your AI agents lose context every session. ElephantBroker gives them persistent memory and intelligent context — so they remember everything that matters.
The Problem
Every session, your agents start from scratch. Chat history gets cut off. Search retrieves fragments but can't tell what's important. Databases store data but lose the connections between ideas.
The result? Agents that repeat mistakes, forget your preferences, and lose track of complex, multi-step work.
Imagine this:
Session 1: "Use PostgreSQL for the user table, not MongoDB."
Session 47: "Let me set up a MongoDB collection for users..."
Without ElephantBroker, your agent forgot what you decided 46 sessions ago.
How It Works
Every conversation, decision, and artifact is automatically saved and organized — not just dumped into a database, but categorized by type: facts, procedures, decisions, and evidence.
Your agent learns you prefer TypeScript over JavaScript. That preference is stored as a fact and recalled whenever relevant.
Not all memories are equal. ElephantBroker ranks what's most relevant to the current task — weighing recency, importance, and how well it fits the goal at hand.
Working on auth? Your agent surfaces the security decisions from last week, not the CSS discussion from yesterday.
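That weighting can be sketched as a small scoring function. Everything here — the weights, the decay half-life, the `Memory` shape — is an illustrative assumption, not ElephantBroker's actual API.

```typescript
// Illustrative relevance scoring: recency, importance, and goal fit
// combined into a single rank. Weights and shapes are made up.

interface Memory {
  id: string;
  topic: string;      // e.g. "auth", "css"
  importance: number; // 0..1, set when the memory is captured
  ageDays: number;    // days since the memory was last touched
}

// Exponential decay favors recent memories; 7-day half-life (assumed).
function recency(ageDays: number): number {
  return Math.pow(0.5, ageDays / 7);
}

// Toy goal fit: 1 if the memory's topic matches the current goal.
function goalFit(m: Memory, goal: string): number {
  return m.topic === goal ? 1 : 0;
}

function score(m: Memory, goal: string): number {
  return 0.3 * recency(m.ageDays) + 0.3 * m.importance + 0.4 * goalFit(m, goal);
}

function rank(memories: Memory[], goal: string): Memory[] {
  return [...memories].sort((a, b) => score(b, goal) - score(a, goal));
}
```

With goal fit weighted heaviest, a week-old security decision about auth outranks yesterday's CSS chat — exactly the behavior described above.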
Before every response, ElephantBroker assembles the ideal context — fitting the most important information within the model's attention window, like a briefing prepared for an executive.
Instead of 100K tokens of raw chat history, your agent gets a focused 8K-token briefing with exactly what it needs.
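Fitting a briefing into a fixed token budget can be sketched as a greedy pack over already-scored memories — a minimal sketch, not the product's real assembly logic.

```typescript
// Hypothetical budget-constrained assembly: take the highest-scoring
// memories first, skipping any that would overflow the token budget.

interface ScoredMemory {
  text: string;
  score: number;  // relevance score from the ranking stage
  tokens: number; // estimated token count
}

function assembleBriefing(memories: ScoredMemory[], budget: number): string[] {
  const briefing: string[] = [];
  let used = 0;
  for (const m of [...memories].sort((a, b) => b.score - a.score)) {
    if (used + m.tokens <= budget) {
      briefing.push(m.text);
      used += m.tokens;
    }
  }
  return briefing;
}
```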
In the background, memories are continuously refined and consolidated — merging related facts, resolving contradictions, and strengthening important connections. Like how your brain processes memories during sleep.
Three separate conversations about your API design get merged into one coherent architectural summary.
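A consolidation pass like the one described can be sketched as follows. A real system would use semantic similarity to find related facts; this toy version keys on an exact subject and resolves contradictions by keeping the newest value — both simplifying assumptions.

```typescript
// Toy consolidation: facts about the same subject are merged, and when
// two facts conflict, the more recent one wins.

interface Fact {
  subject: string; // e.g. "db.choice"
  value: string;
  timestamp: number;
}

function consolidate(facts: Fact[]): Fact[] {
  const bySubject = new Map<string, Fact>();
  for (const f of facts) {
    const existing = bySubject.get(f.subject);
    // Same subject, newer timestamp -> replace (contradiction resolved).
    if (!existing || f.timestamp > existing.timestamp) {
      bySubject.set(f.subject, f);
    }
  }
  return [...bySubject.values()];
}
```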

Memory that grows with every conversation.
Capabilities
Your agent doesn't just search — it builds a ranked briefing of the most relevant memories, tailored to the current task and fitted within the model's limits.
Facts, procedures, decisions, goals, evidence, and files — each stored with its own lifecycle. Like how your brain separates knowing how to ride a bike from remembering yesterday's meeting.
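Typed memory records like these are a natural fit for a discriminated union, where each kind carries its own fields. The shapes below are illustrative assumptions, not the actual schema.

```typescript
// Sketch of per-type memory records: each kind has its own lifecycle data.

type MemoryRecord =
  | { kind: "fact"; claim: string; confidence: number }
  | { kind: "procedure"; name: string; steps: string[] }
  | { kind: "decision"; summary: string; decidedInSession: number }
  | { kind: "evidence"; claimId: string; source: string };

function describe(r: MemoryRecord): string {
  switch (r.kind) {
    case "fact": return `fact: ${r.claim}`;
    case "procedure": return `procedure: ${r.name} (${r.steps.length} steps)`;
    case "decision": return `decision: ${r.summary} (session ${r.decidedInSession})`;
    case "evidence": return `evidence for ${r.claimId} from ${r.source}`;
  }
}

// Example record: the PostgreSQL decision from earlier in this page.
const dbDecision: MemoryRecord = {
  kind: "decision",
  summary: "Use PostgreSQL for the user table",
  decidedInSession: 3,
};
```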
Humans, supervisors, workers, and services — all tracked as participants with their own goals and permissions. Your agents know who said what and why.
Claims can carry proof. When your agent says "the API rate limit is 1000/min," it can point to the source — and flag when information conflicts.
Context isn't trimmed randomly. The system knows the current goal and preserves what matters, compressing what doesn't — like a skilled assistant preparing a brief.
A coding agent needs different context than a research agent. Profiles tune what gets prioritized, how deep retrieval goes, and what safety checks apply.
The Difference
Without ElephantBroker
You: What database are we using?
Agent: I don't have that context. Could you remind me?
You: Deploy to staging
Agent: What's the staging URL? I don't see it in this conversation.
You: Fix the auth bug from last week
Agent: I'm not sure which bug you're referring to. Can you describe it?
With ElephantBroker
You: What database are we using?
Agent: PostgreSQL — you decided this in session 3 when we compared it against MongoDB for the user table.
You: Deploy to staging
Agent: Deploying to staging.acme.dev. Using the Docker config from the infra setup we did on Tuesday.
You: Fix the auth bug from last week
Agent: Found it — the JWT expiry was set to 5 min instead of 60. Here's the fix based on the auth module we built in session 12.
Under the Hood
ElephantBroker isn't two disconnected tools duct-taped together. It's a single runtime with a shared knowledge layer — so memory and context always stay in sync.
Lightweight integrations that plug into any agent framework. A Memory Plugin for storage and a Context Plugin for retrieval — two surfaces, one brain.
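The two surfaces might look roughly like this. The interface names, methods, and the in-memory implementation are all assumptions for illustration — nothing here is the real plugin API.

```typescript
// Hypothetical integration surfaces: a Memory Plugin that stores what
// happened and a Context Plugin that retrieves a briefing, both backed
// by the same store ("two surfaces, one brain").

interface MemoryPlugin {
  capture(sessionId: string, message: string): Promise<void>;
}

interface ContextPlugin {
  // Returns a briefing fitted to a token budget.
  brief(goal: string, tokenBudget: number): Promise<string>;
}

class InMemoryRuntime implements MemoryPlugin, ContextPlugin {
  private log: string[] = [];

  async capture(sessionId: string, message: string): Promise<void> {
    this.log.push(`[${sessionId}] ${message}`);
  }

  async brief(goal: string, tokenBudget: number): Promise<string> {
    // Naive retrieval: keep messages mentioning the goal, then trim to
    // budget at a rough 4 characters per token.
    const relevant = this.log.filter(m => m.includes(goal));
    return relevant.join("\n").slice(0, tokenBudget * 4);
  }
}
```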
The core engine — managing actors, goals, memory scoring, context assembly, compaction, and safety guardrails across 15+ coordinated modules.
The knowledge substrate — connecting entities in a graph, embedding content for semantic search, running enrichment pipelines, and maintaining ontologies.
Bring your own stack — vector stores, graph databases, relational databases, caches, and blob storage. Observable end-to-end with OpenTelemetry.

The Journey of a Memory
Every message flows through a pipeline: captured, classified, scored for importance, assembled into context, and refined over time. In the background, a consolidation process runs continuously — much as your brain strengthens memories during sleep.
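The stages named above compose naturally as a pipeline. This is a deliberately toy trace — each stage body is a placeholder assumption standing in for the real behavior.

```typescript
// Toy end-to-end trace of the pipeline: capture -> classify -> score -> assemble.

type Stage = (items: string[]) => string[];

// Capture: persist raw messages (no-op in this sketch).
const capture: Stage = msgs => msgs;

// Classify: tag each message as a decision or a plain fact.
const classify: Stage = msgs =>
  msgs.map(m => (m.startsWith("decide:") ? `decision|${m}` : `fact|${m}`));

// Score: decisions outrank facts (stable sort preserves original order otherwise).
const scoreStage: Stage = items =>
  [...items].sort(
    (a, b) => (b.startsWith("decision") ? 1 : 0) - (a.startsWith("decision") ? 1 : 0)
  );

// Assemble: keep only what fits the (tiny) context budget.
const assemble: Stage = items => items.slice(0, 2);

const pipeline = (msgs: string[]): string[] =>
  [capture, classify, scoreStage, assemble].reduce((acc, stage) => stage(acc), msgs);
```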
How Memories Compete
Every piece of stored knowledge competes for a spot in your agent's active context. Multiple signals are weighed together to surface exactly what matters — and different agent profiles (coding, research, management) tune these weights differently.
Think of it like a newsroom editor deciding which stories make the front page. Space is limited, so only the most important and relevant information gets through.
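Profile tuning can be sketched as swapping weight sets over the same signals. The profile names come from the text above; the weight values are invented for the example.

```typescript
// Illustrative per-profile weighting of the same relevance signals.

interface Weights { recency: number; importance: number; goalFit: number }
interface Signals { recency: number; importance: number; goalFit: number }

const profiles: Record<string, Weights> = {
  coding:     { recency: 0.2, importance: 0.3, goalFit: 0.5 }, // stay on-task
  research:   { recency: 0.1, importance: 0.5, goalFit: 0.4 }, // favor durable findings
  management: { recency: 0.4, importance: 0.4, goalFit: 0.2 }, // track what's moving now
};

function rankScore(s: Signals, profile: string): number {
  const w = profiles[profile];
  return w.recency * s.recency + w.importance * s.importance + w.goalFit * s.goalFit;
}
```

The same memory — old but squarely on-goal — ranks higher for a coding agent than for a management agent, because the coding profile weights goal fit most heavily.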
Use Cases
Remembers your codebase patterns, tracks open issues across sessions, and recalls architectural decisions made weeks ago — so it never suggests something you've already ruled out.
Builds a growing knowledge base of findings, cross-references sources automatically, and never wastes time re-reading a paper it has already analyzed.
Tracks delegated tasks across team members and worker agents, maintains project goals, and ensures nothing falls through the cracks — even across weeks of work.
ElephantBroker is currently in private development. Join the waitlist to get early access and shape the future of agent memory.