2026-03-28 · AI agents, memory systems, operational infrastructure
By Stuart Hall

Build the system or lose the plot.

In the film Memento, Leonard Shelby cannot form new memories.

Every time he wakes up, he has no idea what happened the day before. No idea who he spoke to. No idea what he decided. No idea who is on his side and who is not. The last thing he remembers clearly is the night his wife was killed — and from that fixed point, he has to navigate an entirely hostile present.

So he built a system.

Polaroids with notes written on the back. A map covered in locations, names, and warnings. The most critical facts tattooed directly onto his skin where he could not lose them.

Not because he was broken. Because without a system, he could not function at all. The world moves forward whether Leonard remembers it or not. The system was the only way he could stay oriented.

AI agents are Leonard

Every session, an AI agent wakes up fresh.

No memory of what you discussed last week. No recollection of the decision you made about pricing on Tuesday. No idea that you already tried the approach it is about to recommend and it did not work. No awareness that your client changed their brief three days ago.

It starts from zero, every time, with whatever context you give it in that moment.

This is not a flaw you are waiting for them to fix. It is a fundamental architectural reality of how most AI systems are built today. The model does not persist state between sessions. It does not accumulate operating memory. It is not keeping notes on your behalf unless you explicitly build that infrastructure.

Leonard is not going to suddenly remember. You have to build the system.

The agents that fail

Most businesses deploy AI agents without thinking about this at all.

They connect a model to a workflow, run a few sessions, and find it genuinely useful. Then a week later they have to re-explain everything. The agent starts making recommendations that ignore decisions already made. It asks questions that were answered two conversations ago. It produces work that does not reflect current context.

The business starts to lose confidence in the agent. It feels inconsistent. Unreliable. Like you always have to supervise it carefully because it might go off in the wrong direction.

The problem is not the agent. The problem is that nobody built the system.

The agent is doing exactly what it can do: operating on the information in front of it. If that information is incomplete, stale, or missing entirely, the output reflects that. Garbage in, garbage out — but the version of garbage that looks like helpfulness until it quietly leads you in the wrong direction.

What a Leonard setup actually looks like

Leonard's system was not complicated. It was disciplined.

Every piece of critical information had a home. Every session started by checking the notes. Every new decision got recorded before it could be lost. He did not trust his mind to hold things. He trusted the system.

For AI agents, the equivalent infrastructure looks like this:

A persistent context document

A single source of truth that gets loaded at the start of every session. Who the business is. What it sells. Who it serves. Current priorities. Active constraints. Recent decisions and the reasoning behind them. This document lives outside the model and gets updated continuously.
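As a minimal sketch of what "lives outside the model and gets loaded every session" can mean in practice: a small JSON file read at session start and rendered into the opening prompt. The file name and field names here are illustrative assumptions, not a standard schema.

```python
import json
from pathlib import Path

# Hypothetical location and schema for the persistent context document.
CONTEXT_FILE = Path("business_context.json")

def load_context(path: Path = CONTEXT_FILE) -> dict:
    """Load the persistent context document, or start empty if missing."""
    if path.exists():
        return json.loads(path.read_text())
    return {"business": "", "offer": "", "audience": "",
            "priorities": [], "constraints": [], "recent_decisions": []}

def render_system_prompt(ctx: dict) -> str:
    """Turn the context document into the preamble for a new session."""
    decisions = "\n".join(
        f"- {d['date']}: {d['decision']} (because: {d['reasoning']})"
        for d in ctx["recent_decisions"][-5:]  # most recent five only
    )
    return (
        f"Business: {ctx['business']}\n"
        f"Offer: {ctx['offer']}\n"
        f"Audience: {ctx['audience']}\n"
        f"Current priorities: {', '.join(ctx['priorities'])}\n"
        f"Active constraints: {', '.join(ctx['constraints'])}\n"
        f"Recent decisions:\n{decisions}"
    )
```

The point is not the format. It is that the document is a file the agent never owns and the business always updates.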

Decision logs

When you make a meaningful decision in or with an AI session, write it down somewhere structured. Not in the chat window — in a file, a database, a note that persists. The model will not remember. You have to record it so it can be retrieved.
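"Somewhere structured" can be as simple as an append-only log file. A sketch, assuming a JSON Lines file (one decision per line); the file name and entry fields are hypothetical.

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical append-only decision log, one JSON object per line.
LOG_FILE = Path("decision_log.jsonl")

def record_decision(decision: str, reasoning: str,
                    log: Path = LOG_FILE) -> dict:
    """Append one structured decision so it survives the chat window."""
    entry = {"date": date.today().isoformat(),
             "decision": decision,
             "reasoning": reasoning}
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def recall_decisions(log: Path = LOG_FILE, limit: int = 10) -> list[dict]:
    """Read back the most recent decisions for loading into a session."""
    if not log.exists():
        return []
    lines = log.read_text().splitlines()
    return [json.loads(line) for line in lines[-limit:]]
```

Append-only matters: decisions are never edited in place, so the log doubles as a history of how the reasoning evolved.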

Session handoffs

At the end of any significant working session, generate a short summary of what was covered, what was decided, and what the next steps are. That summary feeds into the next session so it can pick up without starting from zero.
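That handoff can be two small functions: one that writes the summary at the end of a session, one that renders it at the start of the next. A sketch under the assumption of a single JSON handoff file; the file name is illustrative.

```python
import json
from pathlib import Path

# Hypothetical location for the most recent session summary.
HANDOFF_FILE = Path("last_session.json")

def write_handoff(covered: list[str], decided: list[str],
                  next_steps: list[str], path: Path = HANDOFF_FILE) -> None:
    """Persist the end-of-session summary for the next session to read."""
    path.write_text(json.dumps({
        "covered": covered,
        "decided": decided,
        "next_steps": next_steps,
    }, indent=2))

def read_handoff(path: Path = HANDOFF_FILE) -> str:
    """Render the previous summary as the opening context of a new session."""
    if not path.exists():
        return "No previous session on record. Starting fresh."
    h = json.loads(path.read_text())
    return ("Last session covered: " + "; ".join(h["covered"]) + "\n"
            "Decisions made: " + "; ".join(h["decided"]) + "\n"
            "Next steps: " + "; ".join(h["next_steps"]))
```

In practice the summary itself can be drafted by the agent at the end of the session; the discipline is that a human confirms it before it is written down.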

Scoped prompts

Every agent interaction should load the context relevant to that task. Not everything — just what is needed. Sales context for sales tasks. Delivery context for delivery tasks. The agent cannot know what it does not receive. Make loading the right context part of the process, not an afterthought.
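One simple way to make that selection part of the process rather than an afterthought: a fixed mapping from task type to the context sections it is allowed to load. The task types and section names below are illustrative assumptions.

```python
# Map each task type to only the context sections it needs.
# Task types and section names here are illustrative, not prescriptive.
CONTEXT_SCOPES = {
    "sales": ["business", "offer", "pricing", "pipeline"],
    "delivery": ["business", "active_projects", "client_briefs"],
    "marketing": ["business", "audience", "brand_voice"],
}

def scope_context(full_context: dict, task_type: str) -> dict:
    """Return only the slices of context relevant to this task type."""
    keys = CONTEXT_SCOPES.get(task_type)
    if keys is None:
        raise ValueError(f"No context scope defined for task type: {task_type!r}")
    return {k: full_context[k] for k in keys if k in full_context}
```

Failing loudly on an unknown task type is deliberate: an agent silently running with no scoped context is exactly the failure mode this whole system exists to prevent.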

Why most businesses skip this

It feels like overhead.

When AI is working well in a demo, it feels fluid and smart. Building a context system feels like adding friction to something that already seems to work. Why set up all this infrastructure when the model just answers questions?

Because the demo is a single session with a clean prompt. The real work happens across days and weeks, with context that evolves, decisions that compound, and a business that does not reset to zero every morning even when the agent does.

The cost of skipping it is not visible on day one. It becomes visible at week four, when you realize your AI agent is working from a version of reality that no longer exists.

Memory is an operational problem, not a technology problem

This is the shift in thinking that most businesses miss.

They treat AI memory as a capability gap — something the technology will eventually solve. And eventually, it may. But right now, the gap exists. And the businesses waiting for the technology to solve it are operating with agents that are perpetually disoriented.

The businesses that are making AI work reliably have accepted that memory is an operational problem. It needs an operational solution. Process, structure, and discipline — not a feature update.

Leonard did not wait for his brain to heal. He built a system that worked given the constraints he actually had.

nVelocity point of view

The gap between AI that feels useful and AI that actually changes how a business operates is almost always a context problem.

The model is capable enough. The integrations are close enough. What is missing is the infrastructure that makes each session coherent with the one before it — and with the business reality the agent is supposed to be operating inside.

Most of our work with clients is building exactly this. Not the agent itself. The system around it that makes the agent reliable over time.

Current state documented. Decisions recorded. Context loaded consistently. Outputs reviewed and fed back into the knowledge base.

It is not glamorous. It is not what the demos show. But it is the difference between an AI agent that impresses people for a week and one that is still adding value six months later.

Build the system or lose the plot.

Leonard already showed you what happens when you skip it.