G2's Chief Innovation Officer On Closing The AI Trust Gap To Unlock Growth
Tim Sanders, Chief Innovation Officer at G2, explains why AI's value is unlocked not by new tech, but by trusting agents with the autonomy to do their work.

Key Points
While many organizations invest in AI agents, most struggle to realize the technology's full potential because they hesitate to grant the agents autonomy.
Tim Sanders, Chief Innovation Officer at G2, explains that the real risk is not in giving agents too much freedom, but in giving them too little.
By compressing workflows to fewer human checkpoints and using guardrail services, companies can empower AI to drive growth and create new opportunities for their people.

A new frontier is opening in enterprise AI, and it has less to do with cutting-edge models and more to do with a foundational human element: trust. As AI evolves from offering suggestions to taking action, companies are investing in powerful "agents" that can execute tasks independently. But a hesitation to grant these tools true autonomy is preventing many from reaping the rewards. Now, the organizations pulling ahead are empowering their teams by closing this "trust gap" and preparing their workforces for a new way of operating.
Tim Sanders, Chief Innovation Officer at software marketplace G2 and Executive Fellow at Harvard Business School, has spent decades studying how enterprises adopt new technology. A former Chief Solutions Officer at Yahoo and the New York Times bestselling author of books including Love is the Killer App, Sanders has built his career at the intersection of technology and business transformation.
Today, Sanders offers a candid perspective on the rise of agentic AI. While many companies will adopt the technology, he says, the real benefits will go to those who learn to integrate it wisely. In his view, it all comes down to educating teams and building the confidence to let AI do the work it was designed to do.
"Leaders always think the danger lies in giving agents too much autonomy. The real danger lies in giving them too little," Sanders says. "When you invest in this much capability but keep it behind human approvals, you limit your potential from the start." In fact, nearly half of organizations still route every AI-driven action through a manual review, he explains, an approach that slows down work and yields only minor efficiency gains.
From checkpoints to progress: The companies that pull ahead are those that examine each step in a workflow and ask whether it truly needs human sign-off, Sanders explains. Is a decision reversible? Can an agent handle it? "Some organizations will smother agents with so many guardrails that teams drown in 'evaluator debt,' spending all their time approving machine-led work. The winning companies are compressing a seventeen-step workflow into a single human checkpoint. Those are the organizations that see 2 or 3x boosts in velocity, not just 10 or 15%."
Building a safety net: For companies to deploy agents widely, they need a reliable safety net. This is where the new wave of third-party guardrail services comes in, Sanders says. "The agent guardrail industry is the trust enabler that makes wide deployment possible, which is why it's growing at a 65% compound annual rate." For HR, these guardrails can ensure compliance and fairness in automated processes, making wider adoption not just possible, but responsible. "The upside is so significant that it makes automatic sense to pay for these services."
For decades, Parkinson's Law—the adage that work expands to fill the time available—has limited the impact of new productivity tools. Because these tools still relied on people to act, the pace of work remained stubbornly human.
Actions, not words: "Agents help solve for Parkinson's Law because they operate without a sense of deadlines," Sanders claims. "For the first time, a CFO can make a real growth projection based on a technology purchase. Until now, AI has been about providing insights—making suggestions. Agentic AI is different. It’s about execution—it does the thing."
A lesson from the cloud: But the resistance to automation is not new, Sanders admits. The slow adoption of cloud computing, for example, revealed how deeply organizations can struggle with technological trust. "For fifteen years, companies resisted the cloud, and the deadweight loss from staying on-premise reached $44 billion a year," he says. "If organizations struggled to trust the cloud for data storage, imagine asking them to trust an AI agent with sensitive tasks like managing payroll or screening candidates. This is where the trust gap can feel immense."
For HR leaders concerned about job displacement, Sanders offers an optimistic counter-narrative. He believes the most successful companies will reinvest the efficiency gains from AI to drive growth, expand their workforces, and elevate their people from task-doers to strategic thinkers. "As a researcher, I always follow the job cuts, and they rarely come from the areas using strong AI," he says. "Blaming AI is a red herring. It distracts from the real story, which is that AI is poised to create more opportunities for meaningful, high-impact work."
Ultimately, Sanders believes the choice is simple. Companies that educate their teams and learn to trust AI will accelerate, while those that keep every action behind a human checkpoint will fall behind. The goal isn't to use agents with blind boldness, but to avoid using them so timidly that the investment never translates into meaningful outcomes. "While many companies will invest in AI agents, those that truly thrive will be the ones that learn to trust them," Sanders concludes. "For HR leaders, this is the key to unlocking compounding value and moving beyond 'evaluator debt' toward a future of accelerated growth and opportunity."
