
A Northern Trust Senior VP Defines the New Framework for AI Governance and Trust

The Data Wire - News Team | November 5, 2025

Armel Roméo Kouassi, a Senior Vice President at Northern Trust, explains why the rise of AI in global finance is creating new risks and how a new approach to governance can help.

Key Points
  • In global finance, AI is creating new risks to institutional trust that demand a new approach to governance.

  • Armel Roméo Kouassi, a Senior Vice President at Northern Trust, explains how leaders can establish ethical mandates from the top, applying the same rigor to responsible AI as they do to financial compliance.

  • His framework protects client data as a sacred trust and anchors the system in human accountability, where users are responsible for the results AI generates.

In global finance, the rules for managing AI are changing. As responsible AI evolves from a compliance function into a pillar of governance, institutional integrity increasingly depends on data sovereignty and trust. The emerging consensus is that ethical AI must be addressed on three fronts: the data itself, the decisions it informs, and the accountability of the people who use it.

To understand how financial institutions are building the necessary guardrails, we spoke with Armel Roméo Kouassi, Senior Vice President and Global Head of Asset Liability Management at Northern Trust Corporation. With a career spanning senior roles at Citi, State Street, and Merrill Lynch, his message is clear: establishing ethical guardrails is the priority, not rushing to implementation.

“The mandate of AI has to be set at the top. The promise of this tool is exponential because it’s happening at a time when other technologies like the internet and massive data already exist. If ethical principles aren’t established at the beginning, it will be much more difficult to stop issues from arising,” Kouassi says. But to be effective, a top-down mandate must account for risks beyond abstract concerns.

  • Sycophantic systems: AI can amplify bias in how it targets customer segments or screens résumés, Kouassi explains. “I can ask AI to generate information that looks accurate but is designed to please my executive management. Just as a person might massage information for their boss, AI can be prompted to do the same.”

However, leaders must also address more subtle internal risks, from deliberate misuse to operational paradoxes. “If being ESG-driven is part of a bank’s principles, there should be consistency in the tools it uses,” Kouassi says.

  • Crown jewels: Financial institutions now apply the same rigor to responsible AI as they do to financial compliance, often rebuilding modern tools to guarantee data sovereignty. “It’s the bread and butter of our business, and it’s a sacred trust to keep the integrity and confidentiality of all our clients’ information.”

Tying the expense to the already high cost of regulatory compliance, Kouassi makes the business case plainly: “If we lose that trust and confidence, what is the value of the firm?”

  • Human elements: While the investment is offset by productivity gains, such as faster product launches and real-time data for the C-suite, the real justification lies in a core principle. “Compliance and ethics are human characteristics. There are right and wrong, good and bad. A machine may not necessarily understand that. The cost is worth it,” Kouassi explains.

Looking ahead, Kouassi expects the ethical challenges to keep growing: AI-driven productivity could displace 20-30% of the workforce within the next decade, raising new questions about the role of human oversight. With human accountability as the anchor, leadership responsibility must cascade down to every user, he concludes. “If you are a user of AI, you are responsible for the prompts, and you must have ownership of the results. That is what ensures accountability.”
