
How Governance Leaders are Defining AI's 'Standard Of Care' for Regulated Industries

The Data Wire - News Team
|
February 13, 2026

Nikki Mehrpoo, AI Governance Architect and Founder of iGovernAI, reframes AI adoption as a professional duty, calling for documented standards that withstand legal and regulatory scrutiny.

Key Points
  • AI tools now shape professional judgment in regulated industries, yet many organizations focus on speed and cost savings without building defensible governance around their use.

  • Nikki Mehrpoo, AI Governance Architect and Founder of iGovernAI, outlines an AI Standard of Care built on Educate, Empower, and Elevate to anchor AI use in professional duty.

  • She calls on leaders to formalize oversight, demand vendor transparency, and document AI-assisted decisions so every action withstands future legal and regulatory scrutiny.


AI has quickly evolved from a tool that helps complete basic tasks into a partner that shapes the decision-making process itself, for better or worse. That shift has refocused attention in regulated industries, away from the technology itself and back toward professional responsibility. For these professionals, AI governance isn't about the tools. It's about duty.

Nikki Mehrpoo has built her practice around this principle. As an AI Governance Architect and the Founder of iGovernAI, she advocates for establishing an AI Standard of Care for regulated industries. Her authority is built on over two decades of legal experience, including her time as a Workers' Compensation Judge for the State of California and her distinction as the first and only attorney in the state to be dual-certified in both Workers’ Compensation and Immigration Law.

"Any time AI is utilized, it's influencing your judgment from the first second it gets involved," says Mehrpoo. The issue is especially pressing for regulated professionals such as attorneys. To address it, she developed a framework built on three pillars: Educate, Empower, and Elevate.

  • Duty over devices: "Educate" reframes training, elevating oversight from a simple procedural step into a developed skill that combines AI literacy with ethical discernment. "When I say 'educate,' I'm not talking about what AI is or how to use it. I'm talking about the governance risks and compliance issues you must understand to use it effectively and properly as a professional."

  • Demand for transparency: The second pillar, "Empower," moves the burden of responsibility upstream. For Mehrpoo, true empowerment comes from proactively governing the tools an organization uses. This often requires a cultural change in which leadership and boards formally structure their governance and demand transparency from AI vendors about the inherent risks of their products. "We must demand that the innovators and suppliers of AI tools show us the risks. They have a responsibility to explain how we should teach our workforce about the issues their AI could have. If they can't tell us what their product's risks are, how can we protect ourselves against it?"

  • Start with workflow: The final pillar, "Elevate," is about creating a legally defensible logic trail. Ungoverned AI is a business risk, and in the legal profession and other licensed fields that risk demands a documented, risk-based governance framework. The principle extends across regulated sectors, where clear documentation becomes the key to managing compliance risk. "Start small. Take one tool in your system and figure out where it fits in your workflow and your decision-making process." A professional must be able to justify every AI-assisted decision, explaining what they did and why they relied on the tool. "What if someone questioned me about this five years from now? I must be able to answer what I did and why I relied on it," Mehrpoo insists.

Mehrpoo adds that the entrance of AI into workflows hasn't changed the established duties and requirements of professionals. "Our job as professionals has always been to prove why we made a decision when asked. It’s the same thing with using AI. You must be able to prove what you prompted and what your verification mechanisms were in the process."

A key part of the global conversation on AI regulation, Mehrpoo argues, is that organizations must clear one final hurdle for this framework to be effective: a tendency within leadership to prioritize short-term ROI over long-term responsibility. "What has surprised me most is how often leaders and CEOs are focused only on using AI to save money, instead of thinking about the consequences of bringing it into their world. It's irresponsible for a leader to focus only on ROI over ensuring AI is used responsibly by professionals in regulated industries," she concludes. "Govern before you innovate. Govern before you automate."
