
Future of Data Management

AI Turns Data Pipelines Into Supply Chain Attack Surfaces As Asset Inventory Breaks Down

The Data Wire - News Team | May 15, 2026

Mohamed Azard Rilvan, Cybersecurity Consultant at Verizon Enterprise Solutions, explains why ETL is now a runtime environment, why inherited risk from third-party code will not go away, and why internal documentation matters more than any new tool.



Data pipelines used to move information from one system to another. Now they execute code. Third-party packages, AI model dependencies, and software maintenance layers all run inside the pipeline as part of normal operations, which means a compromised dependency no longer just corrupts data. It becomes executable inside the stack. The organizations that still treat pipeline security as a data protection problem are defending the wrong surface.
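The shift is easy to see in a typical pipeline step. The sketch below is illustrative, not drawn from any specific stack: pandas and SQLAlchemy are real libraries, while some_enrichment_plugin stands in for a hypothetical vendor or internal add-on. Every import is code that executes with the pipeline's credentials the moment the job starts.

# Illustrative ETL step: every import is third-party code executing inside
# the pipeline's runtime with the pipeline's credentials.
import pandas as pd                 # transform logic
import sqlalchemy                   # warehouse connectivity
import some_enrichment_plugin       # hypothetical vendor/internal add-on

def run_step(source_uri: str, target_uri: str) -> None:
    engine_in = sqlalchemy.create_engine(source_uri)
    engine_out = sqlalchemy.create_engine(target_uri)
    df = pd.read_sql("SELECT * FROM raw_events", engine_in)
    # Third-party code runs directly on live data; a compromised release
    # of this package is compromised execution, not just corrupted rows.
    df = some_enrichment_plugin.enrich(df)
    df.to_sql("events_enriched", engine_out, if_exists="append", index=False)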

Mohamed Azard Rilvan, Cybersecurity Consultant at Verizon Enterprise Solutions, leads SOC automation initiatives focused on reducing mean time to respond through SIEM and SOAR integration across finance, healthcare, transportation, retail, and technology sectors.

"AI adoption is extending supply chain risk," Rilvan says. "The threat vectors are extended, and we are seeing it across all the stacks, from the data layer, to how it is collected and processed, to the modeling layer, and then the application layer where users are interacting."

The attack surface spans the entire pipeline

Rilvan traces risk through each layer. Data collection introduces exposure through sources and ingestion. Processing executes code from third-party packages. Modeling depends on training data and compute infrastructure, often in cloud environments where misconfiguration remains the primary vector. And the application layer inherits every vulnerability introduced below it.

"The challenge is that monitoring assets and identities is already hard," Rilvan says. "But now you're expanding into multiple layers of AI that require monitoring which traditional security operations don't focus on."

SOC teams cannot do this alone

SOC teams were not built to monitor software dependencies or model training pipelines. AppSec, DevOps, and data engineers each own pieces, but none has full visibility. "With AI capabilities, just those teams alone can't do their jobs," Rilvan says. "The SOC should integrate with these teams because the visibility is far and wide."

The starting point is orchestration: getting business owners, application owners, and security teams into one framework with shared policies. "Getting them together so we can consolidate our security solutions is very challenging," Rilvan says. "But that is the starting point. Afterwards is where the monitoring happens."

Rilvan draws a direct line from Log4Shell to the current problem. A single logging library embedded across thousands of applications created a discovery and patching crisis because most organizations did not know which systems used it. The same dynamic applies to AI pipelines with more layers and more dependencies. "Inherited risk is not going to go away," Rilvan says. "Applying a risk-based solution and focusing investments onto key technologies is the approach."
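The inventory question Log4Shell forced, which systems actually use this library, can at least be asked programmatically. The sketch below assumes Python projects checked out under a local repos directory with requirements.txt manifests; a real inventory would also have to cover lockfiles, container images, and transitive dependencies resolved by an SBOM tool.

# Which checked-out repositories declare a given package? Assumes Python
# projects with requirements*.txt manifests under ./repos; a real inventory
# also needs lockfiles, container images, and transitive dependencies.
import pathlib
import re
import sys

def repos_using(package: str, root: str = "repos") -> list[str]:
    pattern = re.compile(rf"^{re.escape(package)}\b", re.IGNORECASE)
    hits = set()
    for req in pathlib.Path(root).rglob("requirements*.txt"):
        for line in req.read_text(encoding="utf-8", errors="ignore").splitlines():
            if pattern.match(line.strip()):
                hits.add(str(req.parent))
                break
    return sorted(hits)

if __name__ == "__main__":
    # e.g. python find_dependency.py requests
    for repo in repos_using(sys.argv[1]):
        print(repo)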

Shared intelligence is the fastest response

Detecting unknown threats inside trusted code is the hardest problem. Rilvan argues that the most effective answer is sector-level shared intelligence: organizations within the same industry sharing indicators and detection logic so that when one is compromised, the rest automate a response before the attack reaches them. "Once we know one customer is impacted, we share that information to the pool and implement our automation solutions to respond," he says.
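In practice that pattern reduces to a feed and a check. The sketch below is an assumption-laden illustration: the feed URL and JSON schema are invented, real sector feeds typically use STIX/TAXII, and the response step would hand off to a SOAR playbook rather than print an alert.

# Pull a (hypothetical) sector feed of compromised package versions and
# check them against what this environment is actually running. Real feeds
# usually use STIX/TAXII; the response step would trigger a SOAR playbook.
import json
import urllib.request
from importlib import metadata

FEED_URL = "https://example-sector-isac.org/compromised-packages.json"  # placeholder

def fetch_indicators(url: str = FEED_URL) -> dict:
    with urllib.request.urlopen(url) as resp:
        feed = json.load(resp)
    # Assumed schema: [{"package": "name", "versions": ["1.2.3", ...]}, ...]
    return {item["package"].lower(): set(item["versions"]) for item in feed}

def find_matches(indicators: dict) -> list:
    matches = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in indicators.get(name, set()):
            matches.append((name, dist.version))
    return matches

if __name__ == "__main__":
    for name, version in find_matches(fetch_indicators()):
        # Placeholder for the automated response: quarantine the pipeline,
        # open a ticket, or hand off to the SOAR playbook.
        print(f"ALERT: compromised dependency in runtime: {name}=={version}")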

The foundation underneath is still documentation. Rilvan sees breaches traced back to undocumented dependencies, departed developers, and systems running without anyone knowing what libraries are inside. "Many times we see breaches happen because the internal knowledge is not being documented," he says. "Employees left and the organization doesn't even know what dependencies are running. That becomes the entry point."
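Even a minimal habit helps here: snapshotting what a service actually runs into a manifest committed next to the code, so the knowledge survives turnover. The sketch below writes such a file for the current Python environment; the file name is arbitrary, and a mature program would generate a full SBOM (for example CycloneDX) instead.

# Snapshot the currently installed dependencies of a service into a
# manifest committed next to the code. The file name is arbitrary; a
# mature program would emit a full SBOM (e.g. CycloneDX) instead.
import json
from datetime import datetime, timezone
from importlib import metadata

def write_manifest(path: str = "DEPENDENCIES.json") -> None:
    deps = sorted(
        {(dist.metadata["Name"] or "unknown"): dist.version
         for dist in metadata.distributions()}.items()
    )
    manifest = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "dependencies": [{"name": n, "version": v} for n, v in deps],
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(manifest, f, indent=2)

if __name__ == "__main__":
    write_manifest()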
