Executives Take Ownership of AI Risk as Accountability Shifts Beyond IT Teams
Rod Schatz, Chief AI Transformation Officer at Alidade Strategies Inc., explores the connections among AI risk, executives' financial stake, and the demand for clear leadership to guide AI projects to success.

Key Points
High-profile data leaks and HR incidents prove that deploying generative AI without proper guardrails poses severe risks to enterprise intellectual property and brand reputation.
To combat these risks, Rod Schatz, Chief AI Transformation Officer at Alidade Strategies, advises companies to stop treating AI like a standard software application and start holding the C-suite financially accountable.
He also emphasizes that leaders need to address the "quiet freak out" over AI-driven job losses through empathetic communication and clear reskilling paths.
This is really about how you incentivize leadership to really pay attention. And the only way I know how to do it is through their pocketbook. All executives need performance goals tied directly to data quality, AI adoption, and AI safety.

Historically, many traditional software rollouts operated on the assumption that IT could manage risk independently. Deploying software usually meant installing an app, adding users, and walking away. Generative AI breaks that model. As generative AI adoption accelerates in the workplace, more companies are recognizing that the quality of their data infrastructure determines whether AI governance becomes a strategic advantage or an operational liability.
Rod Schatz, Chief AI Transformation Officer at Alidade Strategies, a leading digital and data transformation company, has spent over 15 years navigating enterprise architecture and information management. Having delivered more than 70 enterprise technology projects that generated over $65 million in operational savings, he treats AI risk and data quality as core business metrics. In his experience, the most effective way to enforce those metrics is to tie them directly to performance: "This is really about how you incentivize leadership to really pay attention. And the only way I know how to do it is through their pocketbook. The executives all need to have performance goals tied directly to data quality, AI adoption, and AI safety."
Public AI models are often a two-way street for data. When employees casually paste proprietary information into these tools, they can inadvertently feed trade secrets into systems that retrain on user prompts. That risk is well-documented. In 2023, Samsung engineers pasted confidential source code into ChatGPT, prompting an internal ban and a reassessment of how staff should use public models. When a simple copy-and-paste can expose intellectual property, data management instantly graduates from an IT help-desk ticket to a C-suite problem. To manage that reality, some boards are adopting structured AI risk frameworks with clear expectations for executive accountability.
- Spilling the secret sauce: The data leakage risk becomes concrete when Schatz describes how a competitor could extract proprietary intelligence simply by prompting a public model that has already ingested another company's inputs. "If someone's using it for working on a top secret project, and now all of a sudden, a competitor types in, 'What is Competitor A doing in this and this and this?' And it regurgitates back to them. That's why these conversations need to be at the boardroom table."
But many senior leaders often treat AI as just another software application, delegating oversight to legacy tech departments that lack the infrastructure—let alone the data-observability tooling—to monitor what generative models ingest and produce. Without proper infrastructure, employees often adopt tools like ChatGPT and Claude on their own, outside formal oversight. The CIOs who have invested in unified, well-governed data platforms find themselves in a fundamentally different position: they can see what is flowing through their systems, enforce guardrails at the infrastructure layer, and move forward with confidence rather than caution. To close that gap, some organizations are encouraging executive-level AI experimentation so leaders can understand the tools firsthand. Others are treating governance as a product strategy backed by investments in monitoring and data infrastructure.
- Passing the tech buck: Schatz describes a pattern he has seen repeatedly at executive tables, where leaders dismiss AI as a technology concern and assume someone else will handle it. "I've sat at many executive tables, and typically my peers have always said, 'Oh, yeah. That's an IT thing. I don't really care.' The unfortunate part is that, because of how all this works, step one is that leaders need to actually understand this stuff. So they actually need to be using the tools themselves."
- Watching the wires: Once governance structures are in place, Schatz argues, the boardroom's job shifts from policy-setting to active monitoring—and the metrics that matter go far beyond training completion rates. "Once you get governance in place, the boardroom needs to be re-asking for reports. How are we doing? Where are we failing? And it's not just how many people we trained. It is literally: Do we have data observability in place to check whether our trade secrets, IP, or financials have leaked?"
For Schatz, an organization's "innovation quotient" often dictates its readiness for AI. He has observed a clear distinction between companies built to adapt and those that view innovation as a painful workaround to rigid processes. In historically conservative sectors such as law, where firms face well-documented AI adoption challenges, established procedures can slow experimentation even as AI capabilities are advancing at breakneck speed. Leaders uncomfortable with that pace may instinctively try to preserve the status quo.
For employees, clarity often matters more than comfort. Schatz has observed that vague assurances about reskilling can sometimes deepen anxiety, whereas offering specific paths tends to make the transition more manageable.
- The soft side of tech: The real work of AI adoption, Schatz emphasizes, is communicative. Breaking down barriers means answering the questions employees are actually asking and addressing the fear that most leaders fail to acknowledge. "What's in it for you? Why should you worry or not? How can we help you? It's really just breaking down those barriers. Because, behind the scenes, people are quietly freaking out," he says. "It's about the soft side. The people side." The way to manage that fear is to be direct. Ambiguity, he finds, does more damage than difficult truths.
Ultimately, Schatz concludes that continuous leadership education is the mandatory first step for any successful AI rollout, and that commitment follows naturally once executives' compensation and professional success are tied to it. Without leaders who are willing to learn the technology, understand the risks, and guide their teams through the transition, the AI culture that could propel an innovative company forward instead becomes haphazard and opaque. "To me, this is all about leadership. I'm focusing a lot on executive education because my experience has been that leaders earn their university degrees and then never learn again. And that's one of the fundamental problems: it's hard for them to see where the journey goes and what people are also experiencing if they don't understand the technology."
