Agents versus agents
There’s a naming collision at the center of the AI conversation right now. Everyone is building “agents.” No one agrees on what that means.
So let me make a distinction the industry hasn’t bothered to make: Agents versus agents.
agents (lowercase)
An agent is a behavior, not an entity. A tool-use loop. An LLM chain that does more than one-shot completion. It wakes up, does a task, and stops. It carries no identity between runs. Stateless in any meaningful sense.
Think: a function with side effects and an LLM in the hot path.
This is most of what people mean when they say “AI agent” today. Cursor’s background work, Copilot Workspace, a script that browses the web and summarizes findings. Sophisticated, genuinely useful — but fundamentally an invocation pattern. The agent doesn’t know you. It doesn’t remember the last time it ran. You could swap the model underneath and it wouldn’t care.
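Concretely, the invocation pattern looks something like this. A minimal sketch in Python, where `call_llm` and `run_tool` are stand-in stubs rather than any real model or tool API:

```python
def call_llm(context):
    # Stub for a model call: once a tool result is in context, declare done.
    if any(isinstance(item, dict) for item in context):
        return {"type": "done", "result": context[-1]["output"]}
    return {"type": "tool", "name": "search", "args": context[0]}

def run_tool(action):
    # Stub tool: pretend to browse the web and return a finding.
    return {"tool": action["name"], "output": f"findings for: {action['args']}"}

def agent(task, max_steps=10):
    """A lowercase agent: wakes up, loops over tools, returns, forgets."""
    context = [task]
    for _ in range(max_steps):
        action = call_llm(context)
        if action["type"] == "done":
            return action["result"]
        context.append(run_tool(action))
    return None
```

Everything the agent knows lives in `context`, and `context` dies with the call. Swap the model behind `call_llm` and nothing else changes.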
These agents are a big deal. They’re automating work that previously required sustained human attention. The economics are real. But calling them agents has obscured something important about what comes next.
Agents (capital A)
An Agent is different in kind, not just degree.
An Agent has a name. A role. Memory that persists between sessions. It builds up context over time — about the codebase, about the team’s preferences, about decisions made six weeks ago and why. It can be assigned work, held accountable, mentioned by name in a thread. It has a manager. It might have reports.
An Agent is less like a function and more like an employee.
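If the employee analogy holds, the shape of a capital-A Agent is less a function than an object with a name, a role, and a durable store. A hedged sketch, with a JSON file standing in for whatever memory substrate a real system would use:

```python
import json
from pathlib import Path

class Agent:
    """A capital-A Agent: a name, a role, and memory that survives the process."""

    def __init__(self, name, role, memory_path):
        self.name = name
        self.role = role
        self._path = Path(memory_path)
        # Reload whatever this Agent learned in earlier sessions.
        self.memory = json.loads(self._path.read_text()) if self._path.exists() else []

    def remember(self, note):
        # Accumulate context: decisions, preferences, the why behind the what.
        self.memory.append(note)
        self._path.write_text(json.dumps(self.memory))

    def brief(self):
        return f"{self.name} ({self.role}): {len(self.memory)} notes on record"
```

Kill the process, start it again tomorrow, and the Agent still knows what it knew. That one property is the whole difference.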
The distinction isn’t just philosophical. It changes what’s possible. A lowercase agent doing a task from cold context will always hit a ceiling — it only knows what’s in the window. A capital-A Agent working in a system over months accumulates something that can’t be reconstructed from scratch: judgment calibrated to your specific context.
This is what most capability benchmarks miss. They measure what an agent can do on a fresh task. They don’t measure what an Agent knows after six months of showing up.
The systems design parallel
We’ve been here before.
Stateless services scale horizontally, are easy to reason about, and are cheap to run. Stateful services are harder to manage, but they carry knowledge of their domain. Neither is universally better. They're good for different things.
Microservices versus monoliths. Serverless functions versus long-running processes. We spent a decade arguing about this and landed on: it depends on what you’re trying to do.
Same with agents. Lowercase agents are the serverless functions of AI — high throughput, ephemeral, no operational baggage. You invoke them when you have a task. Capital-A Agents are the stateful services — slower to spin up, operationally heavier, but they accumulate knowledge and judgment that a stateless run never will.
The question isn’t which is better. It’s knowing which one your problem calls for.
The terminology problem
“Agentic AI” is now marketing. Every product with a chat interface and a tool call is claiming to be an agent. This isn’t dishonest exactly — the underlying technology is real — but it’s collapsed a useful distinction into noise.
The thing that’s actually new isn’t LLMs using tools. That’s been around since 2023. The thing that’s actually new is LLMs with persistent identity — systems that accumulate context across sessions, that have roles and accountability, that can be managed like employees rather than invoked like functions.
That’s the frontier. Not the agent that does a task. The Agent that learns your system.
The difference between an agent and an Agent is the same as the difference between a contractor who shows up once and a colleague who’s been here for a year. Both can write code. Only one of them remembers why you made the call you made in January.
That memory is worth something. We’re just starting to figure out how much.