01 August 2025


Imagine an artificial intelligence that doesn't wait for orders. It doesn't follow rigid instructions or sit idle until someone presses a button. It understands the context, sets its own goals, and acts on its own initiative. A Hollywood dystopia? No: it is the reality of Agentic AI.
These autonomous artificial intelligence systems are designed to perceive their environment, analyse the context and draw logical conclusions from it, plan actions, and achieve goals independently. Unlike traditional virtual assistants – such as chatbots or voice services – which follow rigid scripts or respond only to explicit commands, Agentic AI can make decisions without direct human supervision.
The term “agentic” derives from “agent”: an entity able to observe, learn and act in pursuit of a goal. In practice, while conventional software executes the instructions it receives, Agentic AI decides for itself which instructions to follow, adapting them to the context.
This difference amounts to a new level of flexibility and proactivity. An agentic system can recalibrate its actions as conditions change, learn from experience, and improve its performance over time. It is no longer an algorithm specialised in a single task, but a versatile assistant capable of managing entire processes and acting in complex, unpredictable environments.
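To see the difference in practice, consider a minimal sketch of the perceive–plan–act loop that such systems are built around. Every name in it (the Agent class, its plan method, the shape of the observations) is an illustrative assumption, not a reference to any real product:

```python
# Minimal, illustrative agentic loop: the agent chooses its own next
# actions from the observed context instead of executing a fixed script.
# All classes, fields and actions here are hypothetical placeholders.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # experience carried between cycles

    def perceive(self, environment: dict) -> dict:
        # Observe the current context (in reality: sensors, APIs, logs...).
        return {"goal": self.goal, **environment}

    def plan(self, context: dict) -> list[str]:
        # Decide which actions to take given the context; a scripted
        # program would run the same fixed instruction list regardless.
        if context.get("anomaly"):
            return ["investigate", "mitigate"]
        return ["monitor"]

    def act(self, actions: list[str]) -> None:
        for action in actions:
            print(f"executing: {action}")
            self.memory.append(action)  # retain experience for later cycles


agent = Agent(goal="keep the system healthy")
for observation in [{"anomaly": False}, {"anomaly": True}]:
    context = agent.perceive(observation)
    agent.act(agent.plan(context))
```

The point of the sketch is the branching inside plan: the sequence of actions is not written in advance but derived, each cycle, from what the agent has just observed.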
In the field of cybersecurity, this step change is particularly relevant. An agentic AI can act as an independent guardian: monitoring networks and systems, analysing logs and telemetry, recognising warning signs and activating countermeasures in real time, without waiting for human intervention.
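As an illustration of that watch-and-respond pattern, here is a hedged sketch of an agent that scans a telemetry stream for repeated failed logins and triggers a countermeasure on its own; the threshold, the event format and the response are all invented for the example:

```python
# Hypothetical sketch of an agentic security monitor: it watches
# telemetry, flags warning signs, and reacts without waiting for a
# human decision. Nothing here reflects a real product or log schema.
FAILED_LOGIN_THRESHOLD = 5  # assumed tolerance before acting


def detect_warning_signs(events: list[dict]) -> list[str]:
    """Scan telemetry for source IPs with too many failed logins."""
    alerts = []
    failures_by_ip: dict[str, int] = {}
    for event in events:
        if event["type"] == "failed_login":
            ip = event["source_ip"]
            failures_by_ip[ip] = failures_by_ip.get(ip, 0) + 1
            if failures_by_ip[ip] == FAILED_LOGIN_THRESHOLD:
                alerts.append(ip)  # flag each offender exactly once
    return alerts


def countermeasure(ip: str) -> None:
    # In a real system this might update a firewall rule or isolate a host.
    print(f"blocking {ip} and opening an incident ticket")


telemetry = [{"type": "failed_login", "source_ip": "203.0.113.7"}] * 6
for suspicious_ip in detect_warning_signs(telemetry):
    countermeasure(suspicious_ip)
```

The loop from detection to countermeasure runs with no human in it, which is exactly what makes the questions of control raised below so pressing.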
This is an unprecedented scenario. If artificial intelligence acts on its own, who can truly claim to control it? And what happens if someone manages to manipulate it? It is a situation that allows no neutrality: the implications do not stop with those who choose to adopt these technologies. They affect everyone, even those who think they can do without them, because such powerful tools can be used to circumvent barriers, evade controls and infiltrate critical infrastructure. And when that happens, the consequences can be disastrous.
The full article (in Italian) by David Casalini, Head of TEHA Lab, is available on the website of Harvard Business Review Italia.