Future of Data Management

When Enterprise AI Fails, It's Often Not the Model, but a Mess of Data Lurking Beneath

The Data Wire - News Team | March 26, 2026

Ravish Kumar of BCS Technology International makes the case that data discipline, not model quality, determines AI success.

Key Points
  • Most enterprise AI projects fail not because the model is wrong, but because the data architecture underneath was never built to handle real-world scale.

  • Ravish Kumar, Practice Lead for Data & AI at BCS Technology International, says the fix requires data leaders who communicate in business terms, set honest expectations, and treat stakeholders as partners.

  • The solution starts with disciplined data architecture, a communication-first leadership approach, and a carefully scoped semantic layer that gives AI systems a single source of truth.

Architecture is the real bottleneck. If you don't architect it properly, it is bound to fail at scale.

Ravish Kumar

Practice Lead for Data & AI
BCS Technology International

Enterprise AI projects don't usually fail because the model is wrong. The failure point is almost always the data underneath. Leaders eager for results bolt AI onto fragile infrastructure, and the system collapses the moment it meets real-world scale. Organizations assume the technology is the problem and reach for a better model, when the actual bottleneck is the architecture that no one wanted to fix first.

Ravish Kumar is the Practice Lead for Data & AI at BCS Technology International, a global technology services firm specializing in data and AI delivery. A global delivery expert with over a decade of experience across data engineering, architecture, and AI, he has led data teams at Accenture and Perficient, and spearheaded the Data and AI Pre-Sales Practice for the APAC and Middle East regions at Total eBiz Solutions. Kumar's view is that the hardest problems in AI are not technical; they are organizational, communicative, and architectural. "Architecture is the real bottleneck. If you don't architect it properly, it is bound to fail at scale," says Kumar.

  • Tacking on vs. building in: The problem is often invisible until it is too late. Small-scale projects rarely surface the gaps that break systems at enterprise scale: guardrails are few, PII validation is rarely required, and data quality checks that can be safely skipped in a proof of concept become critical failure points in production. "When you have to do that at scale, everything falls apart," says Kumar.
  • Lost in translation: The solution starts with how data professionals communicate. Technical and business teams are both systems thinkers, but they operate in different languages, and the responsibility to bridge that gap falls on the data side. "Talk to them in the language that they can not only understand, but also relate to. Talk about concepts like data modeling and data transformation, but avoid going into the weeds with technical terms like Spark clusters," Kumar says.

Research consistently shows that the majority of enterprise AI projects are abandoned before delivering meaningful value, and the primary culprit is rarely the model. Industry analysts point to data fragmentation, poor governance, and infrastructure that was never built to operate at machine speed as the leading causes of failure. The market is beginning to respond, with enterprise data platforms increasingly repositioning around AI readiness as the core value proposition, recognizing that compute access is no longer the bottleneck it once was.

  • Hired for talk, not tech: At the senior level, communication is not a soft skill but the core job requirement. Drawing on his experience hiring senior data scientists, Kumar is direct about what the role demands. "At that level, you are not supposed to be purely technical. You are supposed to be a very business-focused, communication-forward person who can understand both sides of the story," he says. That means explaining what is possible while being equally clear about constraints. "They see where the bottleneck is and either reduce the scope or work toward a solution that works for both parties." 
  • Trust is a team sport: For teams working with experimental technology, where setbacks are inevitable, trust is a strategic asset. Kumar's approach to building it starts well before a project delivers results. "Don't overpromise. Make them understand that these are experimental things we are doing," he says. The goal is to bring business stakeholders into the process as active contributors. "You have to let them know what you are trying to do, where you are, and that you require their business expertise to help get the project over the finish line. If they are working with you, they are part of the team. The trust will build."

The through line across Kumar's entire framework is discipline. Not the discipline of following a rigid process, but the discipline of knowing what not to do. Nowhere is that more evident than in his views on the semantic layer, one of the most consequential developments in enterprise AI. A well-built semantic layer transforms raw data into a reliable, queryable source of truth, allowing multiple AI agents and applications to draw from the same foundation and produce consistent outputs across the organization. The potential is significant, but so is the risk of overbuilding. "Be very mindful of not putting everything in it. You have to be proactive by first considering which applications will require this kind of semantic layer, and then build from there. Otherwise, it will simply become another noisy data lake." 
