Over the last 18 months, we’ve witnessed a fundamental shift in the AI landscape. We have gone from asking models for information to giving them the keys to our systems. This move from “Generative” to “Agentic” AI is not just a technical upgrade; it is a massive change in our organisational risk profile.
The Singapore Government’s new Model AI Governance Framework for Agentic AI, from the Infocomm Media Development Authority (IMDA), is better than most. While other regions are stuck in high-level ethical debates, this is the first dedicated government framework refreshingly preoccupied with the mechanics of control. It deserves your attention because it finally starts asking the right questions about accountability and action.
What makes this framework stand out?
For the first time, a government body has moved past ivory-tower theorising to focus specifically on the Action-Space. The framework identifies that an agent’s risk is a direct function of what it is allowed to touch. It correctly advocates for Agent Identity: the idea that every semi-autonomous actor in your system needs a unique ID tied back to a human supervisor. This is a critical step toward ensuring that when things happen at machine speed, we still have a trail for forensic accountability.
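To make that idea concrete, here is a minimal sketch (my own illustration, not anything the framework prescribes): every agent is minted an identity record that names its accountable human, and every action is stamped with that identity in an append-only trail.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """A unique ID for a semi-autonomous actor, tied back to a human supervisor."""
    agent_id: str
    supervisor: str  # the accountable human, e.g. an employee email (illustrative)

@dataclass
class AuditLog:
    """Append-only trail so machine-speed actions remain forensically traceable."""
    entries: list = field(default_factory=list)

    def record(self, identity: AgentIdentity, action: str) -> dict:
        entry = {
            "agent_id": identity.agent_id,
            "supervisor": identity.supervisor,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        self.entries.append(entry)
        return entry

# Usage: when the agent acts, the log already knows which human answers for it.
agent = AgentIdentity(agent_id=str(uuid.uuid4()), supervisor="alice@example.com")
log = AuditLog()
log.record(agent, "invoice.approve")
```

The point of the sketch is the coupling: the supervisor field travels with every logged action, so forensic questions ("who was responsible when this fired at 3 a.m.?") have a mechanical answer.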
It also tackles the often-ignored issue of Tradecraft. Automating entry-level tasks poisons your future leadership pipeline by removing the training ground for junior staff. The IMDA framework suggests we must deliberately maintain these foundational skills even as we deploy agents for the heavy lifting. This shows rare strategic foresight in public policy.
However, even a document this practical faces significant hurdles in the real world.
First, the economic paradox of Human-in-the-Loop. The framework is right that humans must stay involved to prevent automation bias. But if you have a thousand agents performing micro-tasks every minute and require human sign-off for each one, your ROI disappears. The pressure to move toward Human-out-of-the-Loop will be immense. The framework provides a solid target, but it does not solve for maintaining vigilance without killing velocity.
Second, the problem of Agent Sprawl. Just as we struggled with microservices, we are about to face a world where agents from different organisations interact in ways we cannot predict. What happens when your procurement agent negotiates with a supplier’s sales agent? Real-world risk lives in the messy, unmanaged boundaries between systems that this framework treats as contained lifecycles.
Finally, Protocol Security. While the framework mentions the Model Context Protocol (MCP), it does not hit hard enough on the fact that we are essentially building an internet for agents. If the underlying authentication for these tool-calls is not bulletproof, we are opening an entirely new attack surface for prompt injections to turn into system-level breaches.
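As a sketch of what "bulletproof" could mean at the tool-call layer, consider signing each call so that a prompt-injected mutation of the payload fails verification. This is a generic HMAC illustration, not MCP's actual authentication scheme; real deployments would use rotated keys or mTLS rather than a hard-coded secret.

```python
import hashlib
import hmac
import json

SECRET = b"per-agent shared secret"  # illustrative only; never hard-code in practice

def sign_tool_call(payload: dict, secret: bytes = SECRET) -> str:
    """Produce an HMAC-SHA256 signature over a canonicalised tool-call payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_tool_call(payload: dict, signature: str, secret: bytes = SECRET) -> bool:
    """Constant-time check that the payload was signed by a holder of the secret."""
    return hmac.compare_digest(sign_tool_call(payload, secret), signature)

# A legitimate call verifies; a tampered one does not.
call = {"tool": "crm.update_record", "args": {"id": 42}}
sig = sign_tool_call(call)
tampered = {"tool": "crm.delete_record", "args": {"id": 42}}
```

The tool names here are hypothetical. The design point is that authentication binds to the *content* of the call, so an injection that rewrites the request mid-flight cannot reuse the original signature.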
So, how do we fix this without grinding to a halt?
We have to move from “reactive oversight” to “architectural constraints.” This means building our systems so that agents don’t just have an identity, but are structurally incapable of exceeding their narrow Action-Space. We need to treat Agent Identity like we treat zero-trust networking: every action must be continuously verified, not just checked at a one-time gate.
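In code, the difference between a one-time gate and continuous verification is where the check lives. A minimal sketch, with a hypothetical procurement agent and tool names of my own invention: the allow-list is consulted on every dispatch, so an out-of-scope call fails structurally rather than relying on someone noticing it later.

```python
class ActionSpaceViolation(Exception):
    """Raised when an agent attempts a tool outside its declared Action-Space."""

# Hypothetical registry: each agent ID maps to the only tools it may invoke.
ACTION_SPACE = {
    "procurement-agent-7": {"catalog.search", "quote.request"},
}

def dispatch(agent_id: str, tool: str, execute):
    """Zero-trust dispatch: verify EVERY call against the agent's Action-Space,
    not just once at onboarding."""
    allowed = ACTION_SPACE.get(agent_id, set())
    if tool not in allowed:
        raise ActionSpaceViolation(f"{agent_id} may not call {tool}")
    return execute()

# In scope: permitted.
dispatch("procurement-agent-7", "catalog.search", lambda: "results")

# Out of scope: blocked at the architecture, not by after-the-fact review.
try:
    dispatch("procurement-agent-7", "payments.transfer", lambda: "money moved")
except ActionSpaceViolation:
    pass
```

Note that `payments.transfer` never executes: the constraint holds even if the agent is tricked into wanting it, which is exactly the property reactive oversight cannot guarantee at machine speed.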
We also need to rethink our talent development. If we are going to automate the “boring” work, we must intentionally create new training paths that give junior staff the same foundational exposure without the manual drudgery. Governance cannot just be a set of rules; it has to be part of the engineering culture.
The IMDA framework is an excellent guide for your first few agents and sets a benchmark for other nations to follow. But as we scale, it becomes a dangerous map for your thousandth. It assumes a level of human attention that rarely exists in a high-pressure corporate environment.

