How Financial Services Leaders are Shifting Internal Audits Upstream to Ensure Responsible AI Deployment

The Data Wire - News Team | February 27, 2026

Xin “Cindy” Tu, Head of Data Governance and IT Audit at Capital One, shares her playbook for embedding governance in AI from day one and transforming audit into a strategic partner for responsible innovation.

Key Points
  • The rise of AI has created a foundational shift for internal audit, demanding a new approach to governance that moves beyond a traditional, adversarial role.

  • Xin "Cindy" Tu, Head of Data Governance and IT Audit at Capital One, argues that the window for audit to lead is now, requiring a move from a "police function" to a strategic partner.

  • She explains that an effective partnership requires deep technical literacy and a formal training program that ensures everyone is following the latest guidance.

If internal audit shows up after AI is deployed, we’ve already missed the point. Governance has to be embedded from day one.

Xin Tu

Head of Data Governance & IT Audit
Capital One

AI adoption is turning internal audit into a strategic control tower for enterprise risk. As organizations race to scale AI, audit teams have a narrow window to shift from post-deployment review to day-one design, embedding controls, shaping guardrails, and influencing strategy before risk calcifies. The function long defined by hindsight now has a chance to help write the rules in real time, redefining its value across governance, risk, and compliance.

Xin “Cindy” Tu is a responsible AI innovator with over 18 years of experience in AI, data, and IT risk. As the current Head of Data Governance and IT Audit at Capital One, with a career spanning top financial services companies including Discover and Fannie Mae, Tu has spent the last decade building the very risk frameworks that many organizations are now working to implement. For her, the mission is clear: audit’s window of opportunity to lead is now, and it starts with a foundational principle. "If internal audit shows up after AI is deployed, we’ve already missed the point. Governance has to be embedded from day one," says Tu.

  • Badge to boardroom: For Tu, the first step is to change the narrative and evolve the role from a police function into a strategic partner. "It’s not about simply rejecting their proposals. It's about saying, ‘Yes, we can deploy that AI agent by Q3. Here’s how you can do it responsibly.’ That is a very different perspective, and it positions audit as an enabler who can actually help, not a blocker."
  • Adapt or fall behind: Tu's second pillar is a mandate for every audit professional to develop digital fluency. She points out that audit reports are now dominated by technology-related issues, from the data flowing "from system A to system B to system C" to the AI models themselves. In an environment like this, a lack of AI, data, and IT literacy becomes a serious professional handicap, as deep technical understanding strengthens internal controls and trust. "As the risk landscape shifts from manual to AI-assisted processes, understanding the underlying technology, including how it works and how to audit it, is essential to stay relevant. It is every auditor's responsibility to pick up these new skills now."

The third pillar confronts the operational reality of making governance work. For Tu, designing a framework is one thing; enforcing it across fragmented business lines is often the greater challenge. As AI governance becomes non-negotiable, institutionalizing it requires an ongoing process of education built on iterative learning.

  • Stopping the presses: "The fact that we've had to stop non-compliant AI projects from going into production proves there is a greater need for ongoing education," she says. "We must constantly communicate the reasoning behind the governance framework because the consequence is that you don't want your organization to be the one that ends up on the front page of the Wall Street Journal."
  • Test, learn, train: Tu says this is best done by implementing an ongoing training program that evolves as new insights on how to use AI are discovered. "Because governance is an iterative, test-and-learn process, you must have a formal training program to keep all stakeholders informed. That’s how you ensure that as the framework matures, everyone is following the latest and greatest guidance."

Her focus on foundational data tees up an urgent, unresolved question for the industry. As she details in her own work on an agentic AI framework, the rise of autonomous systems challenges conventional thinking in IT audit, where process ownership has traditionally determined responsibility. The result is a new frontier of risk and a liability that few in the organization are eager to hold, not only because of blame, but because the shared responsibility model makes ownership genuinely complex.

Tu concludes that the profound uncertainty around AI is itself an opportunity: a rare opening that extends beyond the organization to every individual auditor. As the rules of the industry are rewritten, the challenge becomes deeply personal, but so does the scale of the opportunity. "It feels like the floor is shifting underneath us, and that can be scary. But to stay relevant and maintain your competitive advantage, you must learn something new and contribute to this movement. If you invest the time to learn, you will see the rewards."
