Why Governed Infrastructure And Validation Underpin Responsible Probabilistic AI
Enterprises struggle to govern probabilistic AI with deterministic mindsets. Google AI Infrastructure Architect Yujun Liang explains the leadership challenge and shares his three-layer framework for managing risk through secure infrastructure, data quality, and human validation.

Key Points
Enterprises are adopting probabilistic AI with governance models built for deterministic software, creating a leadership challenge where the primary risk is treating AI's random outputs as certain fact.
Yujun Liang, a Forward Deployed AI Infrastructure Architect at Google and distinguished "Golden Kubestronaut," argues that compute scarcity has shifted most companies from AI builders to AI consumers whose central responsibility is risk management.
His three-layer framework for AI governance focuses on fortifying security infrastructure, ensuring data quality to avoid misleading models, and mandating human validation as the final guardrail against AI hallucinations.
"We cannot require that a calculation will always return the same result. How we interpret that result is important. Some problems cannot be solved by applying AI. Using it in the wrong context will mislead the direction of the business." — Yujun Liang

Enterprises are increasingly swapping predictable, deterministic software for probabilistic AI. But their governance models are often unprepared for the transition, creating a gap between business processes and technological reality that can leave companies exposed. The primary challenge isn't an engineering curiosity; it's a matter of leadership and decision quality. The core mistake is treating AI's probabilistic outputs as deterministic facts.
Yujun Liang works at the foundation of enterprise AI infrastructure. A Forward Deployed AI Infrastructure Platform Engineering Architect, he holds deep, hands-on expertise across Google Cloud, AWS, and Azure. A distinguished "Golden Kubestronaut," he is one of the few professionals in the world to have achieved all 14 certifications offered by the Cloud Native Computing Foundation. His pioneering work includes architecting systems in the high-stakes financial services industry for institutions like Deutsche Bank, JPMorgan Chase, and UBS. From this vantage point, he argues that the conversation about AI risk needs to move beyond the technical and toward the human elements of judgment and interpretation.
His central principle for leaders is simple: get comfortable with uncertainty. "We cannot require that a calculation will always return the same result. It will be random. How we interpret that result is important. Some problems cannot be solved by applying AI. Using it in the wrong context will mislead the direction of the business," says Liang.
Liang argues that the popular fascination with prompts and models overlooks the fundamental change happening one layer deeper. He explains that AI concepts have existed since the 1950s, but previous hype cycles fizzled because traditional data centers couldn’t support the computational load. The economics of AI are driven by the immense and rising costs of specialized hardware, creating a market situation where compute scarcity forces a focus on governance. For most enterprise consumers, this reality suggests that managing risk becomes a more central responsibility than building the models themselves.
- The forgotten foundation: "When people talk about AI, they are thinking of prompts and models. Few people think about the infrastructure, which is essential for this revolution. The difference now is the cloud, which gives us theoretically infinite scalability. That's how AI took off," notes Liang.
- The scalability tax: "The hardware itself, the GPUs and TPUs, is expensive. As a matter of fact, AWS just raised the price of a GPU by 15%," explains Liang. "It all comes down to whether a company can support the resource-intensive workloads for model training and inference. The reality is that not every company can afford this. Only a few giants like Google, AWS, Microsoft, and OpenAI can truly play that game. Even with OpenAI, their balance sheet shows they aren't profitable, and they rely on continuous investment to survive."
For these "consumer" organizations, Liang frames the challenge with a three-layer framework. It begins with his analogy for securing the foundation:
- Locking the doors: "It's just like a luxury house, where you need locks on the doors. A multi-million dollar home also needs a fence and a gate that you keep closed. It's the same for businesses using AI. Those consumers need to have a good security foundation," states Liang. The analogy points to serious, real-world consequences: an unsecured connection can become a direct pipeline for data exfiltration, and as global AI regulations diverge, modern AI guardrails are becoming a key component of any enterprise AI ambition.
- Garbage in, garbage out: In Liang's view, securing the data pipes is only the first step. The quality of what flows through them is an important second layer. "The principle of 'garbage in, garbage out' is more critical than ever with AI. If your own data doesn't represent reality, you will build an unrealistic model, and the results will be misleading. It's essential to clean the data, remove the outliers, and ensure it fits the business model." A clear strategy for data readiness is therefore a key factor in avoiding building intelligent systems on a flawed foundation.
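The cleaning step Liang describes, removing outliers before data reaches a model, can be sketched with a standard rule. This is an illustrative example only: the Tukey-fence method, the `k=1.5` threshold, and the sensor-reading values are all assumptions of this sketch, not specifics from Liang.

```python
from statistics import quantiles

def remove_outliers(values, k=1.5):
    """Drop points outside k * IQR of the quartiles (Tukey's fence).

    A common, simple cleaning rule; the choice of rule and threshold
    would depend on the business model the data must fit.
    """
    q1, _, q3 = quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# Hypothetical sensor readings with two implausible spikes.
readings = [12.1, 11.8, 12.4, 11.9, 97.0, 12.2, 12.0, -40.0]
clean = remove_outliers(readings)
print(clean)  # the 97.0 and -40.0 readings are dropped
```

The key design point is that cleaning happens before training, so the model never sees values that "don't represent reality."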
- Validate, validate, validate: Secure pipes and clean data are foundational, but the probabilistic nature of AI calls for a final, necessary guardrail: human oversight. The new reality changes the role of the expert, making them the final validator of AI-generated output. Liang praises the productivity benefits of AI but offers one vital requirement. "As someone with English as a second language, when I used to write a technical document, I always worried about having incorrect grammar. But now, because of generative AI, I can just ask AI to review and correct all those grammatical issues. It has helped me a lot. But what's important is that we need to validate the output."
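Liang's mandate that a human validates AI output before it ships can be expressed as an explicit gate in a publishing pipeline. A minimal sketch, assuming a hypothetical `reviewer` callable as a stand-in for whatever review workflow an organization actually uses; none of these names come from the article.

```python
def publish(draft: str, reviewer) -> str:
    """Ship AI-generated text only after explicit human sign-off.

    `reviewer` is any callable returning (approved, corrected_text).
    The point is structural: there is no code path that publishes
    an unvalidated draft.
    """
    approved, text = reviewer(draft)
    if not approved:
        raise ValueError("Draft rejected in human review; not published")
    return text

# A toy reviewer that flags drafts containing unverifiable filler.
def human_review(draft: str):
    red_flags = ["citation needed", "as everyone knows"]
    ok = not any(flag in draft.lower() for flag in red_flags)
    return ok, draft.strip()

print(publish("Q3 figures were checked against the source ledger.", human_review))
```

In practice the reviewer would be a person, not a keyword filter; the sketch only shows where the guardrail sits in the flow.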
This final step of human validation is so important because the consequences of skipping it are increasingly public. As issues like AI hallucination move from academic theory to real-world failures, Liang notes that these missteps offer a clear lesson against blind trust. "I read in the news about some big-name firms that just asked AI to write a report for their clients, and it turned out to be untrue. That's not the right way to do it. The output has to be validated," states Liang.
Navigating this transition demands a cultural move that extends to the entire workforce. Liang frames the current anxiety around AI and job loss within a larger historical context, not as an endpoint, but as another chapter in a long history of technological disruption. "In the 1800s, labor-intensive work was replaced by machines and many people who did those jobs manually lost their jobs. But then people learned how to operate those machines and found new employment," he says.
He suggests that, similar to past job market transformations, the advantage will likely go to those who learn to master the new tools. "If you stick to an older job that doesn't require a human anymore, you will probably find that those roles are no longer available, making your job hunt very challenging," cautions Liang. "But if you adapt to the new way of work and find an area with demand, it will be easier to get re-employed."




