From Agent to Agentic AI
This article was translated by AI; if you spot any errors, please let me know.
Recently, work required me to evaluate AI Agents, and I ended up comparing them with Agentic AI along the way. If you're also evaluating or adopting agent-related AI technologies, note that these two concepts look similar but are quite different.
Autonomous Operation
If you’ve been following LLMs (Large Language Models) and their subsequent development since ChatGPT became widely known, you’ve probably noticed that AI can now operate autonomously for longer periods without human prompting.
This is indeed the ideal direction for Artificial Intelligence — someday we’ll be able to let AI handle many tasks for us, making life easier. While the current trajectory doesn’t quite seem to be heading that way, the duration of autonomous operation has definitely increased.
The key factors enabling longer autonomous operation are improvements in language models and the emergence of Reasoning Models, which make generated output more stable and predictable. Through intermediate reasoning steps, models can handle Thinking- and Orchestration-like work themselves, reducing the need for humans to step in and adjust direction along the way.
This makes Agentic AI increasingly viable.
Agentic AI
The key difference between Agent and Agentic AI lies in the ability to operate autonomously. Agents can mostly only handle individual tasks, while Agentic AI can independently determine what needs to be done and orchestrate other Agents to accomplish tasks.
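The distinction above can be made concrete with a toy sketch: each Agent handles a single task type, while the Agentic layer decides which steps are needed and dispatches them. All function names and the fixed two-step plan here are made up purely for illustration.

```python
# Toy sketch of Agent vs. Agentic AI: single-task agents below,
# an orchestrating layer that plans and dispatches above.

def search_agent(goal: str) -> str:
    # An Agent: does one thing (here, a stand-in for research/lookup).
    return f"findings for '{goal}'"

def writer_agent(material: str) -> str:
    # Another single-task Agent (a stand-in for drafting a report).
    return f"report based on {material}"

AGENTS = {"search": search_agent, "write": writer_agent}

def agentic_run(goal: str) -> str:
    """Decide the steps, then chain single-task agents to reach the goal."""
    plan = ["search", "write"]  # a real system would plan this dynamically
    result = goal
    for step in plan:
        result = AGENTS[step](result)
    return result
```

The point is the division of labor: the individual agents stay simple, and the "agentic" part is the planning-and-dispatch loop around them.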
For example, early Coding Agents required complete descriptions of modification and editing workflows, along with relatively detailed Codebase background information, to successfully complete changes. While writing instructions took time, it was still significantly faster than manually modifying code — the focus was on the quality of Prompt Engineering.
As models gained reasoning capabilities and Context Windows expanded, the focus shifted from individual prompts to the overall Context throughout the operation, and how many Tokens the language model could process while maintaining expected capabilities. With larger Context Windows, operation time could gradually increase, but better Context management became necessary — entering the era of Context Engineering.
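One small, concrete example of a Context Engineering concern is keeping a long-running conversation within a token budget by dropping the oldest turns while preserving the system message. The helper names and the rough 4-characters-per-token heuristic below are illustrative assumptions, not any specific framework's API.

```python
# Sketch: trim conversation history to fit a token budget,
# keeping the system message and the most recent turns.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_context(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system message plus the newest turns that fit the budget."""
    system, turns = messages[0], messages[1:]
    kept: list[dict] = []
    used = estimate_tokens(system["content"])
    for msg in reversed(turns):  # walk newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

messages = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "Refactor module A." * 50},  # a huge old turn
    {"role": "assistant", "content": "Done. Next?"},
    {"role": "user", "content": "Now update the tests."},
]
trimmed = trim_context(messages, budget=60)
```

Real systems use smarter strategies (summarizing old turns instead of dropping them, pinning key facts), but the budget-driven loop is the core idea.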
By late 2025, after rapid growth in large language models, the Context Window was sufficient for most scenarios. Reasoning model capabilities had also reached a point where errors were minimal in most situations, and tool usage and invocation had become quite stable. This marked the transition to the Agentic AI phase — the question of “can it operate autonomously for extended periods without human intervention.”
Harness Engineering
Finally, I believe Harness Engineering may become the next hot topic in the first half of this year, following Context Engineering.
In fact, the concept of Harness isn’t entirely new. Anthropic published Effective harnesses for long-running agents last year, and OpenAI recently showcased the Symphony project — both exploring Harness capabilities.
If we're looking for the most representative Agentic AI product today, it would probably be Claude Code, which we're currently using: you input development goals (specifications) and, quality aside, it can explore, plan, and complete an implementation on its own.
As for a concrete embodiment of Harness Engineering, I'd point to the still rough-around-the-edges Agent Team feature in Claude Code. Getting multiple Agents to collaborate and keep running for extended periods to complete tasks is a significant challenge.
On another note, I also consider the recently popular OpenClaw (commonly known as “The Lobster”) to be an attempt at Harness. By using mechanisms like Heartbeat, it keeps Agentic AI operational with properly assigned tasks that can continue running without relying on human triggers.
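A Heartbeat-style trigger can be sketched in a few lines: wake the agent on a fixed interval, let it check its task queue, and do work without any human prompt. The `run_agent_turn` function, queue contents, and millisecond interval below are hypothetical placeholders, not OpenClaw's actual mechanism.

```python
# Sketch of a Heartbeat loop: a timer tick, not a human message,
# is what triggers each agent turn.
import time
from collections import deque

task_queue = deque(["summarize inbox", "check CI status"])
log: list[str] = []

def run_agent_turn(task: str) -> None:
    # Placeholder for a real model call / tool invocation.
    log.append(f"completed: {task}")

def heartbeat(ticks: int, interval: float = 0.01) -> None:
    """Fire on every tick; do work only when a task is pending."""
    for _ in range(ticks):
        if task_queue:
            run_agent_turn(task_queue.popleft())
        time.sleep(interval)  # in production this would be minutes, not ms

heartbeat(ticks=3)
```

The design choice worth noting is that idle ticks are cheap: the loop keeps the agent "alive" continuously, while actual model calls happen only when the queue has work.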
Autonoe was my attempt last year, inspired by Anthropic’s article. Recently, inspired by Paperclip.ai, I’ve finally figured out how to refactor Autonoe to remove the Agent SDK dependency, which will likely bring it closer to a type of Harness Engineering in practice.
In the AI era, things are evolving extremely fast. At least now we don’t need to spend as much effort on “implementation” and can focus more on “engineering” problems — thinking about what designs and architectures can solve problems.
The journey from Agent to Agentic AI may not look like progress toward AGI (Artificial General Intelligence), but things are definitely becoming more autonomous.
Enjoyed this article? Buy me a milk tea 🧋