Creating a Strong Foundation for Scalable AI Means Operationalizing Responsibility

The Data Wire - News Team | April 10, 2026

Kenza Rchi, a Senior IT Analyst for AI Scaling at Philip Morris International, highlights the role of principles and policy in scaling AI infrastructure, and how enterprises must ground deployment in solid fundamentals.

Key Points
  • Corporate AI ethics policies frequently stall because teams fail to translate high-level principles into ground-level engineering workflows.

  • Kenza Rchi, a Senior IT Analyst for AI Scaling at Philip Morris International, argues that shared accountability often leads to endless reviews in which no single person truly owns the deployment.

  • Rchi concludes that without clear ownership and traceable data pipelines, ungoverned models remain stuck in proof-of-concept purgatory and fail to scale.

Responsible AI is the only AI that will scale—you can innovate all you want, but without responsibility, it won't deliver real value.

Kenza Rchi

Senior IT Analyst, AI Scaling
Philip Morris International

Corporate AI has a translation problem. Companies are great at writing high-level ethics policies, but often struggle to turn those documents into actual engineering workflows. Survey data suggests that well-intentioned principles frequently stall before reaching ground-level execution, leaving teams unclear on how to apply them to real projects. The gap usually comes down to a lack of clear ownership. Without assigned accountability, governance remains a theoretical concept rather than a functioning operating model.

Kenza Rchi is an expert in enterprise architecture and AI scaling. Holding an MSc in Big Data Analytics and serving as a Global Ambassador for the WomenTech Network, she currently works as a Senior IT Analyst for AI Scaling at Philip Morris International. Rchi’s approach to scaling intelligent systems focuses on how policy and ideals are translated into operational reality.

"Responsible AI is the only AI that will scale," she says. "You can innovate all you want, but without responsibility, it won't deliver real value." Teams tend to establish standards at the "principle level," or the level where policies and ethical requirements live, without successfully translating those ideals into distinct lifecycle ownership.

  • Diluted by design: Weak leadership accountability often stalls expensive enterprise tech projects. "The biggest gap between the policy and making it work is operational ownership, not just assigning it to teams in the air. Most teams get it wrong because they define responsible AI at a principle level, but fail to translate that into who owns what across the lifecycle."
  • All noise, no signal: In Rchi's experience, communal sign-offs can unintentionally create a structural anti-pattern: when accountability is entirely shared, no one truly owns the deployment. "Responsible AI requires multiple perspectives, of course, but accountability cannot be a shared value. If everyone is involved but no one is clearly responsible, governance becomes just noise rather than real control." Communal accountability often produces slow, circular reviews in which no single person is confident enough to authorize trade-offs. That failure mode is one reason high-stakes sectors implement stricter governance controls that directly shape the engineering lifecycle.
  • Mandates meet reality: The fix usually involves building a lean, cross-functional task force capable of establishing enterprise data standards that are shaped in both directions, top-down and bottom-up. Rchi stresses that while executive direction sets the tone, the substance has to come from practitioners. "Policy ultimately comes from leadership, but the details defining those standards need to come from the ground. The people actually interacting with AI on a daily basis are the ones who truly understand the use cases and the governance process."
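
Rchi's "who owns what across the lifecycle" point can be made concrete with a small sketch. The stage names, roles, and the rule of exactly one accountable owner per stage are illustrative assumptions, not PMI's actual operating model:

```python
# Hypothetical lifecycle ownership map. Stage and role names are
# illustrative; the invariant is the point: exactly one accountable
# owner per stage, however many parties are consulted.
from dataclasses import dataclass, field

@dataclass
class Stage:
    accountable: list           # should hold exactly one named role
    consulted: list = field(default_factory=list)

LIFECYCLE = {
    "data_sourcing": Stage(["Data Steward"], ["Legal", "InfoSec"]),
    "model_build":   Stage(["ML Lead"], ["Compliance"]),
    "deployment":    Stage(["Product Owner", "Compliance"]),  # diluted
    "monitoring":    Stage(["Platform Ops"], ["ML Lead"]),
}

def diluted_stages(lifecycle):
    """Stages where accountability is shared or missing: noise, not control."""
    return [name for name, s in lifecycle.items() if len(s.accountable) != 1]

print(diluted_stages(LIFECYCLE))  # -> ['deployment']
```

A check like this would flag "deployment" above: two accountable names means, in Rchi's framing, that no one actually owns the release.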

Rchi advises bringing in specialized IT counsel to define legal bounds, alongside information security teams who already possess the required risk acumen and can treat new models as an additional layer on existing controls. The room often also includes HR to assess workforce impact and capability-building teams to design future training—all guided by comprehensive data governance frameworks.

That organizational alignment bleeds directly into the technical architecture. Product teams are often incentivized by speed, while compliance teams focus on risk reduction, which can sometimes create friction as new initiatives are introduced. Embedding models into mature IT processes natively aligns those competing incentives without reinventing the entire governance stack.

  • Boring builds better: Rchi tends to resolve that misalignment by framing the technology as the next logical layer on top of traditional IT infrastructure. "We already have IT processes and conduct risk assessments for standard use cases, and AI is just existing on top of that. It must be treated as a component that sits on top of an existing layer." The major test for these governance structures often arrives when a live model hits the production environment: governed models built on a solid IT foundation tend to prove more durable at scale, which helps protect the bottom line. For many organizations, stable infrastructure and controlled inference costs are prerequisites for safely swapping out fast-moving tools at the top of the stack.
  • The drift dilemma: Moving from theoretical discussions to operational practice requires, Rchi says, traceable data pipelines to track performance and allow for quick adjustments. "We are moving from a general belief in fairness and transparency to establishing exactly who is accountable if a model drifts next month."
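
The drift scenario Rchi describes can be sketched as a minimal monitoring check over a pipeline's logged score windows. The metric (Population Stability Index) and the 0.2 alert threshold are conventional but assumed choices, not a description of PMI's pipeline:

```python
# Minimal drift check between two logged windows of model scores.
# PSI as the metric and 0.2 as the threshold are assumed conventions.
import math

def psi(reference, live, bins=10):
    """Population Stability Index between two samples of model scores."""
    lo = min(min(reference), min(live))
    hi = max(max(reference), max(live))
    width = (hi - lo) / bins or 1.0   # guard against a zero-width range

    def share(sample, b):
        lower, upper = lo + b * width, lo + (b + 1) * width
        count = sum(1 for x in sample
                    if lower <= x < upper or (b == bins - 1 and x == hi))
        return max(count / len(sample), 1e-4)  # clamp to avoid log(0)

    return sum((share(reference, b) - share(live, b))
               * math.log(share(reference, b) / share(live, b))
               for b in range(bins))

def drift_alert(reference, live, threshold=0.2):
    """True means: page the accountable owner named for this model."""
    return psi(reference, live) > threshold

scores_then = [i / 100 for i in range(100)]   # last month's outputs
scores_now = [x + 0.5 for x in scores_then]   # distribution has shifted
print(drift_alert(scores_then, scores_now))   # -> True
```

The point is less the statistic than the routing: an alert only becomes "real control" when it lands with a named owner rather than a shared inbox.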

Navigating the reality of enterprise business frequently involves calculated trade-offs between revenue targets and ethical boundaries. With new regulatory frameworks building on earlier data privacy laws, the financial stakes of non-compliance are now a routine operational concern. Mature companies tend to focus on managed risk, using official sponsors to authorize trade-offs within defined tolerances. Strict compliance baselines empower engineering teams to move faster within safe zones, removing the anxiety of ambiguity.

  • Calculated trade-offs: Rchi notes that establishing clear authority on foundational rules is what allows a company to move forward confidently. "Sometimes you have a low risk that is acceptable, where the business owner accepts it and is the sponsor of the risk, and it is registered. Sometimes the value is proven, and you have a small risk that you are willing to take, but you need to decide it's a trade-off."
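
The registration flow Rchi describes (an accepted low risk, a named business sponsor, a recorded decision) can be sketched as follows; the field names and the 0-to-1 tolerance scale are assumptions for illustration:

```python
# Illustrative risk-register entry: a named business sponsor accepts a
# scored risk inside a defined tolerance; anything above it escalates
# rather than being silently accepted. All field names are assumed.
from dataclasses import dataclass

RISK_TOLERANCE = 0.3   # hypothetical enterprise ceiling on a 0-1 scale

@dataclass
class RiskEntry:
    use_case: str
    score: float    # assessed likelihood x impact, normalized to 0-1
    sponsor: str    # the business owner who signs off on the risk

def register(entry, tolerance=RISK_TOLERANCE):
    if not entry.sponsor:
        return "rejected: no accountable sponsor"
    if entry.score > tolerance:
        return "escalated: exceeds tolerance"
    return f"registered: accepted by {entry.sponsor}"

print(register(RiskEntry("churn scoring", 0.15, "Head of Sales")))
# -> registered: accepted by Head of Sales
```

The three outcomes mirror the interview: a risk is either owned and registered, pushed up for a deliberate trade-off decision, or blocked for lack of a sponsor.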

She also draws a line between risks created by new models and risks inherent in the business use case itself. When the risk stems from the use case, governance is about recognizing that the underlying case is not workable, regardless of how the model performs.

The tools will keep changing. The models will keep improving. The regulatory landscape will keep shifting. But the organizations positioned to ride that wave will be the ones that built their foundations on accountability and disciplined risk management long before the next disruption arrived. Rchi sees adaptability itself as a reward for doing the hard governance work early: strong IT foundations allow teams to layer on whatever comes next without questioning the base. "The fundamentals you build won't change. Because if you have the AI layer built and the control and everything, what's coming next will be on top of it. You will be sure that it is standing on the right base."

The views and opinions expressed are those of Kenza Rchi and do not represent the official policy or position of any organization.
