If you’ve spent any time trying to make sense of agentic AI recently, you’ve probably noticed a familiar pattern. You start with a simple question—what exactly is an AI agent?—and quickly find yourself navigating a growing ecosystem of frameworks, abstractions, and design patterns.
There are libraries for building autonomous agents, orchestration layers for multi-step workflows, patterns for planning and reflection, and increasingly elaborate ways to connect models to tools and memory. All of this is exciting. It also feels… a bit disorienting.
This post is a proposal to introduce the term agentics as a way of organizing these ideas—a possible scientific lens for understanding agents, not just building them.
A Proliferation of Agentic Solutions
Today, “agentic AI” often shows up as a collection of practical solutions:
- Frameworks that wrap large language models with planning, tool use, and memory
- Workflow engines that manage multi-step reasoning and execution
- Design patterns like plan–execute–reflect, self-critique, or task decomposition
- Multi-agent setups where agents collaborate, debate, or specialize by role
If you squint a little, many of these approaches look surprisingly similar, even when they come from different teams or communities. My own team at eBay, for example, built an internal platform called Mercury to power agentic recommendation experiences.
That convergence is probably not accidental. It suggests that people are independently rediscovering the same underlying ideas.
At the same time, it’s not always clear how to reason about these systems beyond “it works” or “it doesn’t.”
The Conceptual Gap Behind “Agentic AI”
In practice, agentic AI has become a convenient umbrella term for systems that combine:
- a reasoning core (often an LLM),
- access to tools and external resources,
- some form of memory,
- and a loop that allows planning, acting, and revising.
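Put concretely, that combination can be reduced to a very small loop. The sketch below is illustrative only: `call_llm`, the `tools` dictionary, and the "tool name: input" convention are hypothetical placeholders rather than any particular framework's API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for the reasoning core (an LLM call in a real system)."""
    raise NotImplementedError


def run_agent(goal: str, tools: dict, max_steps: int = 10) -> str:
    memory: list[str] = []  # some form of memory, here just a transcript
    for _ in range(max_steps):
        # Plan: ask the reasoning core what to do next, given goal and history.
        plan = call_llm(f"Goal: {goal}\nHistory: {memory}\nNext action?")

        # Act: if the plan names a known tool ("tool_name: input"), run it;
        # otherwise treat the plan as a final answer.
        tool_name, _, tool_input = plan.partition(":")
        if tool_name.strip() in tools:
            observation = tools[tool_name.strip()](tool_input.strip())
        else:
            return plan

        # Revise: record what happened so the next planning step can see it.
        memory.append(f"{plan} -> {observation}")
    return memory[-1] if memory else ""
```

Real frameworks layer a great deal on top of this (typed tool schemas, retries, parallel branches, richer memory), but the skeleton is usually recognizable.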
That’s a useful abstraction. But it also leaves many questions unanswered.
For example, two systems can both be called “agentic” while differing wildly in how much autonomy they have, how long they persist, or how much authority they exercise. Agency often gets treated as a binary property—either a system is an agent or it isn’t—when in reality it seems more graded and contextual.
Or take agentic frameworks themselves. Most of them are very good at answering how questions:
- How do I let a model call tools?
- How do I structure planning and execution?
- How do I keep state or memory across steps?
- How do I coordinate multiple agents?
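To make that concrete, most answers to those how questions reduce to a pattern like the one sketched below: register functions with machine-readable descriptions, and dispatch the model's tool calls to them. The names and schema here are illustrative stand-ins, not any specific framework's API.

```python
import json


def search_orders(customer_id: str) -> str:
    """Example tool: look up recent orders for a customer (stubbed out here)."""
    return json.dumps({"customer_id": customer_id, "orders": []})


# Most frameworks boil down to something like this: a registry mapping tool
# names to functions, plus a machine-readable description the model can see.
TOOLS = {
    "search_orders": {
        "fn": search_orders,
        "description": "Look up recent orders for a customer.",
        "parameters": {"customer_id": "string"},
    },
}


def dispatch(tool_call: dict) -> str:
    """Run a model-emitted call such as
    {'name': 'search_orders', 'args': {'customer_id': '42'}}
    against the registry."""
    spec = TOOLS[tool_call["name"]]
    return spec["fn"](**tool_call["args"])
```

Questions like these have well-understood, mechanical answers.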
The same frameworks are less helpful when you start asking deeper questions:
- Why does one system feel more autonomous than another?
- At what point does a workflow start to resemble an agent?
- What are the trade-offs between autonomy, reliability, and control?
- How should we compare two agents built in very different ways?
These questions tend to sit just outside the scope of any single tool or library.
This is where the idea of agentics comes in.
(Re)Introducing Agentics
The term agentics isn’t entirely new; it has appeared occasionally in past work. But it has never really settled into a clear, shared meaning.
Agentics, as I’m using the term, would be the study of artificial agents and agent systems: their properties, their degrees of agency, and the ways they interact with each other and with humans.
A helpful analogy might be the relationship between computer science and software engineering. Agentic AI, like software engineering, is largely about building systems. Agentics, if it turns out to be useful, would be more like computer science: an attempt to understand the principles those systems seem to be instantiating.
Whether this deserves to be called a “field” is an open question. But the distinction itself may be helpful.
Agency Is More Than a Workflow
Many current agentic systems are, at their core, sophisticated workflows with feedback. There’s nothing wrong with that. But workflows don’t fully capture ideas like intention, delegation, persistence, or responsibility.
From an agentics perspective, questions like these start to matter:
- When does a system move from reacting to deliberating?
- What does it mean for an agent to “own” a goal?
- How should authority and override be modeled?
- How does agency change over time?
These are not questions that any single framework can answer. They require stepping back and comparing patterns across systems.
Standing on Existing Traditions
None of this is happening in a vacuum. Work on agents already spans:
- Artificial intelligence and reinforcement learning
- Multi-agent systems and distributed AI
- Human–computer interaction and mixed-initiative systems
- Cognitive science and decision theory
- Economics, game theory, and organizational studies
Agentics, if it takes shape at all, would simply try to connect these threads and treat agents and agency as the primary object of study, as opposed to something explored in a fragmented way within those other domains.
Why This Might Matter
The rapid growth of agentic AI frameworks suggests a shift in expectations. We increasingly want AI systems that don’t just respond, but act—sometimes over long horizons, sometimes alongside other agents, and often in collaboration with humans.
Without some shared way of thinking about agency, it’s easy to:
- Reproduce the same patterns without understanding their limits,
- Conflate autonomy with intelligence,
- Or treat “agentic” as a buzzword rather than a property we can reason about.
Agentics won’t solve these problems on its own. But it might give us better language and better questions.
– SriG