How AI-Enabled GM Engineers Move Quality Upstream As Continuous Validation Replaces End-Stage Testing

The Data Wire - News Team | March 25, 2026

Satyabrata Pradhan, Senior Program Manager at General Motors, on how continuous AI validation is catching defects earlier and raising the quality floor for software engineering.

Credit: Outlever
Key Points
  • Software development has long pushed validation to the end of the process, allowing defects to compound through the pipeline until they reach deployment, or worse, customers.

  • Satyabrata Pradhan, Senior Program Manager at General Motors, explains how continuous AI validation is improving software stability and reducing the defects customers experience.

  • He outlines how continuous AI validation catches defects early at scale, while human judgment and enterprise guardrails ensure responsible adoption and handle real-world edge cases.

We’re not just fixing bugs faster. We’re improving the overall stability and quality of the software before it ever reaches the real world, and that reduces the number of issues that customers actually experience.

Satyabrata Pradhan

Senior Program Manager
General Motors

In software development, quality assurance is moving upstream. Instead of just catching bugs, AI is helping engineers improve code quality long before it ever reaches a customer. Across the industry, teams are using AI to break requirements into test cases, analyze logs, and run validation with a consistency that manual processes cannot match. The gains show up long before deployment.

Satyabrata Pradhan, Senior Program Manager at General Motors, is an IEEE Senior Member with over 14 years of experience specializing in the integration of ADAS, infotainment, and cybersecurity systems. Having spent the bulk of his career leading component validation and cybersecurity programs at GM, Pradhan brings a ground-level understanding of what it takes to certify that software is safe, functional, and ready for the road. Pradhan's work is foundational to building next-generation vehicles on a bedrock of reliable software.

"We're not just fixing bugs faster. We're improving the overall stability and quality of the software before it ever reaches the real world, and that reduces the number of issues that customers actually experience," says Pradhan. The shift begins at the requirement level, where AI breaks down specs into test cases, analyzes logs, and validates outputs with a consistency that manual processes cannot sustain.

This upstream improvement begins by using AI to break down engineering requirements into specific, robust test cases, then comparing expected outcomes against actual test logs with an objectivity and consistency that manual review cannot match. Run continuously across hundreds of builds, that same capability becomes the engine of a new validation model, one where issues are caught long before they reach deployment.

  • Apples to apples: When a test log comes in, the question is no longer whether a human's reading of it is accurate. "Instead of a human thinking about whether it matches or not, the AI can look at the situation, at the actual software or test log we got, and do a comparison to see if it matches. This will be consistent across software validation," notes Pradhan.
  • The thousand-test day: That consistency enables a move toward a model of continuous validation. By leveraging digital twins, teams can run thousands of regression tests on new software builds daily, providing a new layer of enterprise AI observability and serving as an effective form of internal audit to certify that the software is robust, functional, and fully compliant. "In the past, we developed code and had to wait until the final deployment to validate everything. Now, with AI-driven regression tests running on numerous builds every day, we catch issues much earlier," adds Pradhan. "We might run a thousand test cycles and find only one failure, which is improving software stability and reducing the issues customers experience."
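The comparison loop described above can be sketched in a few lines. This is a minimal illustration of the pattern, not GM's tooling: all names here (`Requirement`, `run_regression`, the `signal=value` log format) are assumptions made for the example. The point is that the expected outcome lives in data derived from the requirement, so the same check runs identically on every build.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A hypothetical requirement with its expected outcomes."""
    req_id: str
    expected: dict  # signal name -> expected value from the spec

def parse_log(log_text: str) -> dict:
    """Parse 'signal=value' lines from a raw test log into a dict."""
    actual = {}
    for line in log_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            actual[key.strip()] = value.strip()
    return actual

def validate(req: Requirement, log_text: str) -> list:
    """Return mismatches between expected outcomes and the actual log."""
    actual = parse_log(log_text)
    failures = []
    for signal, expected in req.expected.items():
        got = actual.get(signal)
        if got != expected:
            failures.append(f"{req.req_id}: {signal} expected {expected!r}, got {got!r}")
    return failures

def run_regression(reqs: list, build_logs: dict) -> dict:
    """Apply every requirement's check to every build, identically."""
    return {
        build: [f for req in reqs for f in validate(req, log)]
        for build, log in build_logs.items()
    }

reqs = [Requirement("REQ-101", {"ignition_state": "ON"})]
logs = {
    "build_501": "ignition_state=ON",
    "build_502": "ignition_state=OFF",
}
report = run_regression(reqs, logs)
```

In this toy run, `build_502` surfaces the lone mismatch, the kind of single failure in a large batch that Pradhan describes continuous validation catching before deployment.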

But this technological advance is still steered by human judgment. AI's validation capabilities have limits when facing the messy unpredictability of the real world, and those limits surface in production. That reality makes the case for a governance framework that treats AI output as probabilistic, and for a clear understanding of where machine capability ends and human expertise must take over. At the enterprise level, that understanding doesn't stay philosophical. It gets codified into policy.

  • Accounting for chaos: Validation environments can model expected behavior, but the range of human actions in the real world resists full simulation. "There may be some issues that will still be there because we don't know how the customer will behave. We try to replicate some scenarios, but it's possible there are edge cases," says Pradhan. "For example, if a small kid comes and repeatedly presses an ignition button, the system may behave differently than how an actual driver would."
  • Machine versus MD: The same limitation applies wherever AI operates without the full context that human expertise provides. "If an AI flags something about a patient's heartbeat, an actual human doctor is needed to provide a better idea of what's really going on. A machine can't give you all of that information on its own," notes Pradhan. At large enterprises like GM, the human-in-the-loop model is not just a philosophy, it's policy. Strict corporate AI guardrails dictate a deliberate, process-driven approach to new technology. Tools like GitHub Copilot earn internal approval only when they operate within stringent data policies, a clear departure from the unfettered access associated with many public AI models.

As AI's role in software development expands, so does the range of tasks engineers can direct it toward. The same tools that validate code can be turned toward cross-organizational analysis. This allows engineers to compare their vehicle's technical architecture against a competitor's, using AI to generate a detailed, bit-by-bit breakdown of the differences to flag risks and identify opportunities. But AI's expanding capability also reveals something less intuitive. It demonstrates a capacity for data-driven reasoning that operates differently from human instinct, and that is reshaping what it means to work alongside these systems.

  • Logical leap: Where human instinct relies on experience and habit, AI draws on continuous streams of data to predict, calculate, and act in ways that can seem counterintuitive by comparison. "I was in a Waymo robotaxi that came upon a long queue of traffic. Instead of waiting, the car automatically calculated a detour around the block to avoid blocking other people, based on the reasoning that waiting would waste their time. How many humans would do that?" he asks. "The car is constantly collecting data and predicting its next move based on a level of logic that is different from a human's."
  • Adapt or be replaced: That gap between how AI reasons and how humans reason doesn't make engineers obsolete. It makes mastering these tools the new differentiator. Pradhan believes that AI fluency is simply becoming a new baseline skill, creating a new dynamic between AI strategy, engineers, and business experts. And for those who adapt, the efficiency jumps are clear. "AI is not going to replace engineers. The engineer who knows how to use AI is going to replace the one who doesn't," says Pradhan. "If you don't feed data to the AI, it won't know anything. The person who knows how to use AI effectively is the one who will have the advantage."

The future of software development is continuous validation at scale, where AI helps catch issues earlier and reduces the defects customers experience. The engineers who get the most out of this change will be those who know how to direct these systems with skill and judgment. "If you go back to the early 2000s, everybody had to know the internet. It is going to be the same way," Pradhan adds. "If you know AI, you'll replace the person who doesn't know AI."