Banking Embraces Tiered Autonomy Model to Balance AI Outcomes with Risk and Oversight
Dheeraj Maken of Everest Group advocates for a tiered AI model in banking, where AI handles low-risk tasks and humans oversee critical decisions.

Key Points
The financial industry is shifting to AI-driven decision-making with a focus on tangible outcomes and robust governance.
Dheeraj Maken, Everest Group Practice Director, advocates for a tiered model where AI handles low-risk tasks and humans oversee critical decisions.
He explains why translating complex banking rules into AI systems can be a significant hurdle, often requiring clear guardrails for unexpected scenarios.
Regulatory acceptance hinges on transparent decision logs and defined escalation paths for AI systems, Maken concludes.

JPMorgan Chase is rolling out an AI assistant to 140,000 employees. Wells Fargo's virtual assistant has already managed over 200 million interactions. Now, the banking industry is accelerating past routine automation, hurtling toward a new "system of execution" where AI makes the decisions. Built on a tiered autonomy model, the approach prizes real outcomes over sheer efficiency, with governance acting as the non-negotiable guardrail.
For insight into how to manage the transition, we spoke with Dheeraj Maken, Practice Director and Financial Crime and Compliance Leader at global research and advisory firm Everest Group. His background at firms like Accenture and Wipro places him at the epicenter of the foundational technologies that underpin this evolution: robotic process automation, artificial intelligence, and machine learning. To balance innovation with control, banks can adopt a disciplined framework that separates tasks based on their potential impact, he advises.
- Tiers of trust: Maken proposes a three-tiered model. "You need a tiered, risk-based framework. Low-risk, transactional tasks can be done entirely by agentic systems. Higher-risk tasks require a human in the loop for final decision-making or critical judgments. Finally, critical decisions that might impact a large group of shareholders will definitely remain with humans only."
- The road to autonomy: Much of the immediate value lies in the second tier, where agents handle complex, nonlinear scenarios that once required a human from start to finish, Maken explains. Consider a simple corporate onboarding process that instantly blows up when a client's ownership ties trigger mandatory international sanctions and watchlist screenings. "The agentic system should handle the variability in these processes, learning from the scenarios it encounters. It also needs to collaborate with humans so it becomes more adaptable and, over time, more robust as we move toward greater autonomy."
The real challenge is not raw IT infrastructure, which Maken considers a "largely solved problem," but integration. For most organizations, the hardest part is embedding the nuanced rules of banking into a functional AI and establishing the guardrails to manage the unexpected.
- Planning for unknowns: "Banks need to ensure domain-level knowledge is translated accurately for AI agents. Guardrails must be clear enough to handle exceptions, because unknown scenarios will always emerge. Getting the domain right is ultimately a tougher challenge than the IT infrastructure itself," Maken says.
- The human backstop: By letting agents do the heavy lifting while reserving final judgment for humans, banks can innovate within a framework that regulators find acceptable. "Large banks like Citi and JPMorgan have started with low-hanging fruit such as adverse media screening. The AI handles crawling, context analysis, and summarization. But humans still file the final reports, keeping regulators comfortable with the process."
- Rules of the road: Regulators come with a clear set of non-negotiable demands, and the tiered system helps banks meet them, Maken explains. "Regulators want every decision to have an auditable trail, clear logs, and defined escalation guardrails. Once those conditions are in place, they are far more willing to accept agentic AI systems acting independently."
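The pattern running through these points — route tasks by risk tier, escalate what the agent cannot decide, and log everything for auditors — can be sketched in a few lines of code. The sketch below is purely illustrative: the names (`TieredRouter`, `RiskTier`, the classification callback) are hypothetical and not drawn from any system Maken or the banks cited actually use; it only shows how tiered autonomy and an auditable decision trail fit together.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class RiskTier(Enum):
    LOW = "autonomous"          # agent decides and executes on its own
    ELEVATED = "human_in_loop"  # agent recommends, a human signs off
    CRITICAL = "human_only"     # routed straight to a human decision-maker

@dataclass
class Decision:
    task: str
    tier: RiskTier
    outcome: str
    decided_by: str

class TieredRouter:
    """Routes tasks by risk tier and keeps an auditable decision log."""

    def __init__(self, classify: Callable[[str], RiskTier]):
        self.classify = classify          # hypothetical risk classifier
        self.audit_log: list[Decision] = []

    def handle(self, task: str, agent_decision: str,
               human_review: Optional[Callable[[str], str]] = None) -> Decision:
        tier = self.classify(task)
        if tier is RiskTier.LOW:
            decision = Decision(task, tier, agent_decision, decided_by="agent")
        elif tier is RiskTier.ELEVATED:
            # Defined escalation path: agent proposes, human approves or overrides.
            outcome = human_review(agent_decision) if human_review else "escalated"
            decision = Decision(task, tier, outcome, decided_by="human+agent")
        else:
            decision = Decision(task, tier, "pending human judgment",
                                decided_by="human")
        self.audit_log.append(decision)   # every decision leaves an auditable trail
        return decision
```

In this toy setup, a low-risk task such as a routine address update would be closed out by the agent alone, while anything the classifier flags as critical never receives an automated outcome at all — the same separation of duties the tiered model describes.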
A workable model exists today, but the path to greater autonomy remains a moving target, Maken concludes. Frameworks that fit current use cases will be stretched as technology advances and business ambitions rise. The focus has already shifted, he says. "The talk now is more about achieving tangible outcomes. It's about reducing false positives, ensuring a 360-degree view of the customer, and avoiding repeated outreach for the same information. As we get into more complex use cases, we'll have to see how the regulators adapt to that."
