~ 3 min read
Plan, Don't Execute: Agentic Workflows in Zero Trust Environments

We are not even half a decade into the immersion of generative AI and other AI-powered disciplines into our daily lives, and it is already hard to imagine a world without AI. Yet some low-trust environments will not be able to fully embrace AI agents and AI-centric workflows, while at the same time they will not be able to stay relevant without them. This is the paradox of zero-trust environments, such as those in defense, government, and other highly sensitive industries, where the risk of misuse and abuse of AI agents is high but the need for efficiency and innovation is just as pressing.
What are zero-trust environments? From a cybersecurity perspective, zero-trust environments are those that assume no user or system can be trusted by default. It is a "trust no one" approach that builds on the long-standing security practice of the principle of least privilege. In these environments, every action and request is treated with suspicion and requires strict verification and validation. This is in contrast to traditional computing environments, where trust is granted based on network location or user credentials.
Examples of computing environments that operate in highly sensitive verticals and require a zero-trust approach include:
- Defense and military systems
- Government and public sector systems
- Financial institutions and banking systems
- Healthcare and medical systems
- Critical infrastructure systems (e.g., power grids, water supply)
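To make the "verify every request" posture concrete, here is a minimal sketch of a default-deny authorization check. The `Policy` class, its grants, and the principal and action names are all hypothetical illustrations; a real deployment would back this with an identity provider and an audit log.

```python
# A minimal sketch of per-request, default-deny authorization.
# The Policy class and the grant names are hypothetical illustrations,
# not a reference to any specific library.
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Explicit allow-list of (principal, action) grants; anything
    # not listed here is denied -- the principle of least privilege.
    grants: set[tuple[str, str]] = field(default_factory=set)

    def authorize(self, principal: str, action: str) -> None:
        if (principal, action) not in self.grants:
            raise PermissionError(f"{principal} is not granted {action!r}")

policy = Policy(grants={("analyst-7", "read:reports")})
policy.authorize("analyst-7", "read:reports")  # explicitly granted, allowed

try:
    policy.authorize("analyst-7", "delete:reports")  # never granted
except PermissionError as err:
    print(f"Denied: {err}")
```

The important property is the default: an action that was never explicitly granted is rejected, no matter who or what is asking.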
How can such environments and organizations leverage AI and agentic workflows without compromising security and trust?
I believe the answer lies in two key principles: local-first AI agents and semi-agentic workflows.
Local-First AI Agents
Just as these highly sensitive environments have been slow to adopt cloud computing and have more often than not reached for on-premise solutions, so too will they be slow to adopt cloud-based AI agents served by the remote models of vendors such as OpenAI or Anthropic. Instead, they will prefer local-first AI agents that run on-premise, ensuring that data and operations remain within the organization’s control. These are hard requirements: data must not leak outside the organization, and the underlying vendors and suppliers of AI models must not train or fine-tune their models on the data being processed by the AI agents.
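In practice, this often means pointing the agent at a self-hosted inference server instead of a vendor’s cloud API. The sketch below assumes a locally hosted, OpenAI-compatible endpoint (for example, one served by Ollama or vLLM on localhost); the URL and model name are placeholders for whatever the organization actually runs.

```python
# A minimal sketch assuming a local, OpenAI-compatible inference server
# (e.g. Ollama at http://localhost:11434/v1). No data leaves the machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # self-hosted endpoint, not a cloud vendor
    api_key="not-needed-locally",          # placeholder; the local server ignores it
)

response = client.chat.completions.create(
    model="llama3",  # whichever model the local server actually hosts
    messages=[{"role": "user", "content": "Summarize today's audit log."}],
)
print(response.choices[0].message.content)
```

Because the client only ever speaks to localhost, the same agent code can be vetted once and reused across isolated or air-gapped deployments.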
Semi-Agentic Workflows
Despite the rapid advancement of agentic development workflows, in which AI agents assume autonomy in executing tasks and making decisions, such workflows are likely to be too risky for highly sensitive environments. The risk of AI agents making decisions that lead to security breaches, data leaks, or other unintended consequences is simply too high, and will draw too much scrutiny from regulators and security teams (yes, your CISO is not going to like the idea of AI agents running off on their own).
As such, what does it mean to embrace agentic workflows in these environments? First and foremost, it means adopting a "plan, don’t execute" approach.
At its core, a semi-agentic workflow requires a human in the loop (HITL) for oversight and overall control-flow validation. Such workflows position AI agents as tools for planning and inferring intent rather than for executing tasks autonomously. AI agents analyze data, generate insights, and propose actions, but the actual execution of tasks happens through pre-defined workflows that are carefully designed and vetted by human operators.
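Here is a minimal sketch of what "plan, don’t execute" could look like in code. The action registry, the step names, and the `propose_plan` stand-in for the agent are all hypothetical; the point is that the agent only proposes step names, a human approves the plan, and only pre-vetted actions can ever run.

```python
# A minimal sketch of a "plan, don't execute" HITL workflow.
# All names here are hypothetical illustrations.
from typing import Callable

# Pre-defined, human-vetted actions: the only code that can ever run.
ACTION_REGISTRY: dict[str, Callable[[], str]] = {
    "generate_report": lambda: "report generated",
    "flag_anomaly": lambda: "anomaly flagged for review",
}

def propose_plan(task: str) -> list[str]:
    # Stand-in for the AI agent: it analyzes the task and proposes
    # step *names*, never executable code.
    return ["generate_report", "flag_anomaly"]

def run_with_approval(task: str) -> None:
    plan = propose_plan(task)
    print(f"Proposed plan for {task!r}: {plan}")
    # Human in the loop: nothing runs without explicit approval.
    if input("Approve this plan? [y/N] ").strip().lower() != "y":
        print("Plan rejected; nothing was executed.")
        return
    for step in plan:
        # Unknown steps are refused outright rather than improvised.
        if step not in ACTION_REGISTRY:
            raise ValueError(f"Step {step!r} is not a vetted action")
        print(ACTION_REGISTRY[step]())

run_with_approval("review yesterday's access logs")
```

The key design choice is that the registry, not the agent, defines the boundary of what can execute: the agent can be arbitrarily creative in its plans without ever gaining new capabilities.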