Agentic AI Becomes Enterprise Infrastructure Under Defined Ownership Models

The Data Wire - News Team | February 27, 2026

Jai Sisodia, Managing Director of Cybersecurity, Privacy, and AI at Cycore, explains why enterprises need structured governance frameworks and functional ownership to safely scale agentic AI from experimentation into core infrastructure.

Credit: Outlever

Key Points
  • Agentic AI has moved enterprises past the experimentation phase by enabling end-to-end task execution, but governance has not kept pace with deployment.

  • Jai Sisodia, Managing Director of Cybersecurity at Cycore, outlines a three-pillar guardrail framework built on purpose and strategy, fairness and reliability, and application security, with ISO 42001 or NIST AI RMF as the structural backbone.

  • Functional department heads, not IT, should own AI systems because they understand the processes and risks involved, coordinating with security, legal, and compliance as support functions.

With agentic AI, we have moved into a different league altogether where AI agents can do the task from end to end.

Jai Sisodia

Managing Director of Cybersecurity, Privacy, and AI
Cycore

Agentic AI has changed the math on enterprise adoption. Until recently, most companies were stuck in an experimentation phase limited to content generation and basic analysis. Now, AI agents complete tasks from end to end, and that capability is pushing organizations to treat AI as core infrastructure rather than a pilot project. The problem is that governance has not kept pace, and for most organizations, the question of who actually owns these systems remains unanswered.

To understand how to navigate this transition, we spoke with Jai Sisodia, the Managing Director for Cybersecurity, Privacy, and AI at the fractional CISO and compliance consultancy Cycore. With expertise honed in senior roles at a Big 4 firm, a multinational healthcare organization, and a UK-based fintech, he holds key industry certifications including CISSP, CISA, and CDPSE.

"With agentic AI, we have moved into a different league altogether where AI agents can do the task from end to end," says Sisodia. That shift from content generation to autonomous task execution is what Sisodia sees accelerating adoption, as companies are now replacing certain human tasks entirely and moving those workers into oversight and monitoring roles. But the governance infrastructure most enterprises have in place was never designed for systems that make decisions, touch sensitive data, and integrate with critical processes.

  • Guardrails start with capability: Sisodia frames the governance challenge around two factors: what the AI system can access and the decisions it can make. "How does it integrate to critical systems? What data sources does it have access to? And what sort of decisions can it take? Are these decisions customer-facing? Are they impacting financial statements?" he says. The answers determine the scope and intensity of the guardrails required, and those vary significantly by industry and risk profile.
  • Three pillars: From those inputs, Sisodia builds guardrails around three pillars. The first is purpose and strategy, defining why the agent exists and what business objectives it serves. The second is fairness and reliability, ensuring the system's configuration and algorithms are free from bias in the decisions it makes. The third is security, covering DLP, encryption, and application-level controls. "As long as we have these three pillars in mind, we can build appropriate guardrails depending on the industry or the criticality of these agents."
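As a rough illustration only (not Cycore's tooling, and with hypothetical names throughout), the capability questions Sisodia raises could be captured in a simple data model that maps an agent's access and decision scope to a guardrail tier:

```python
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    # Capability inputs Sisodia describes: integrations, data access, decision scope.
    name: str
    critical_systems: list = field(default_factory=list)
    data_sources: list = field(default_factory=list)
    customer_facing: bool = False
    affects_financials: bool = False

def guardrail_intensity(agent: AgentProfile) -> str:
    """Map capability answers to a coarse guardrail tier (illustrative heuristic)."""
    score = len(agent.critical_systems) + len(agent.data_sources)
    if agent.customer_facing or agent.affects_financials:
        score += 3  # decisions touching customers or financial statements raise the stakes
    if score >= 5:
        return "high"    # full three-pillar review: purpose, fairness, security
    if score >= 2:
        return "medium"
    return "low"

bot = AgentProfile("invoice-agent",
                   critical_systems=["ERP"],
                   data_sources=["billing-db"],
                   affects_financials=True)
print(guardrail_intensity(bot))  # → high
```

The scoring weights here are arbitrary; the point is the structure Sisodia describes, where the answers about access and decision impact, not the agent's technology, determine how heavy the guardrails need to be.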

The regulatory picture adds another layer of complexity. Sisodia counts more than 113 AI-related regulations globally, and many of them are still changing. Rather than tracking each one individually, he advises clients to adopt a structured framework like ISO 42001 or NIST AI RMF that covers the majority of requirements through a single governance lens.

  • Framework as foundation: "When you adopt a framework, you can take care of most of the regulatory requirements that apply to you. These frameworks provide a structured thought process for how you should govern AI systems as a whole," Sisodia explains. "Otherwise, keeping track of regulatory changes happening across the globe on a daily basis, you would actually need a couple of AI agents just to do that itself." Every credible framework follows the same logic, he continues. Define the scope, perform a risk assessment, build controls, and then iterate continuously. "As long as you're able to iterate and evolve with the development of your processes and reflect that in your overall framework, you should be good from a scaling perspective."
  • Why not IT: On the question of ownership, Sisodia is direct: functional department heads should own AI systems, not IT. The reasoning is that AI agents are embedded in specific business processes, and the people who understand those processes are best positioned to assess risk and measure performance. "IT does not want to own this. They don't know how engineering is using it or how finance is using it," Sisodia says. "These systems are not just technical in nature. They are very functional. The system owner should be the functional head so that they can understand the risks associated and measure the performance and security of the system over time." That ownership, he adds, works best in coordination with security, legal, and compliance as supporting functions providing a 360-degree view.
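The framework loop Sisodia describes (define the scope, perform a risk assessment, build controls, iterate) can be sketched in a few lines. This is a toy model under assumed names and scoring, not an implementation of ISO 42001 or NIST AI RMF:

```python
# Illustrative sketch of the governance cycle: scope -> risk assessment ->
# controls -> iterate. All field names and thresholds are hypothetical.

def define_scope(systems):
    # Scope the program to the systems that actually use AI.
    return [s for s in systems if s["uses_ai"]]

def assess_risk(system):
    # Toy scoring: weight data sensitivity and decision autonomy (1-3 each).
    return system["data_sensitivity"] * system["autonomy"]

def select_controls(risk):
    if risk >= 6:
        return ["DLP", "encryption", "human review", "bias testing"]
    if risk >= 3:
        return ["encryption", "logging"]
    return ["logging"]

def governance_cycle(systems):
    report = {}
    for system in define_scope(systems):
        risk = assess_risk(system)
        report[system["name"]] = {"risk": risk, "controls": select_controls(risk)}
    return report  # in practice, each cycle's output feeds the next iteration

inventory = [
    {"name": "support-agent", "uses_ai": True, "data_sensitivity": 3, "autonomy": 2},
    {"name": "hr-portal", "uses_ai": False, "data_sensitivity": 2, "autonomy": 1},
]
print(governance_cycle(inventory))
```

The value of the structure, as Sisodia argues, is that it iterates: rerunning the cycle as processes evolve keeps the framework aligned with the systems it governs, instead of chasing each of the world's regulations one by one.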

For enterprises still stuck in experimentation, Sisodia sees one recurring mistake: deploying AI without clear guidance or monitoring, then wondering why adoption stalls or data leaks occur. His prescription is straightforward. Define an acceptable-use policy that people actually follow, pair it with practical training, and repeat it regularly. "If the humans who are going to use these AI systems have appropriate awareness, it provides confidence not just to the end users, but to the board of directors, so they're comfortable deploying AI into critical processes," Sisodia says. "Like everything technical, the people at the end of the day are the ones who make it work."
