AI Transforms Medical Communications By Elevating Expert Interpretation And Trust
Steve Tulk, Chief Technology Officer at Vaniam Group, shows how AI increases the value of human judgment in regulated work by turning experts into the trusted front end of interpretation.

Key Points
Regulated industries face a trust problem when AI becomes a black box, especially as employees use public tools with sensitive data.
Steve Tulk, Chief Technology Officer at Vaniam Group, explains how a human-centered approach keeps experts in control of interpretation, judgment, and credibility.
His solution is to build secure data environments, train teams in critical thinking, and use AI as a starting point that strengthens expert insight rather than replacing it.
"Our goal is not to create AI that produces endpoints, it's to produce starting points for our experts to interpret, validate, and apply in a way that meets the needs of our clients and their strategies."

In regulated fields like medical communications, artificial intelligence isn’t sidelining experts. It’s raising the value of their judgment. The technology can sweep up mountains of information, but it takes a human to decide what any of it means. That shift frees specialists from the grind of gathering data and puts their time where it matters most: interpreting evidence, applying nuance, and earning trust.
Steve Tulk, Chief Technology Officer for medical communications leader Vaniam Group, is one of the architects of this new reality. An inventor with multiple patents, he is a practical builder whose perspective on AI comes from a career of creating it: he led the development of the Vaniam Intelligence™ suite of tools for scientific engagement and data-driven insights. Tulk's philosophy is to engineer technology that serves as a collaborative partner.
"We build with the human at the center from the start. Our goal is not to create AI that produces endpoints, it's to produce starting points for our experts to interpret, validate, and apply in a way that meets the needs of our clients and their strategies," says Tulk. In his view, AI’s primary function is to handle the grunt work of data aggregation, freeing up experts for the uniquely human tasks that build credibility and trust.
Trust is the differentiator: Tulk calls the result "amplification at scale." By processing information at a volume no human could manage, AI can surface uniquely deep and relevant insights. "Everything can look polished, but credibility comes from the human who can explain the context, validate the output, and ultimately take responsibility for that final call."
From podcasts to predictions: In practice, this can give his teams a predictive edge. Tulk explains how they sift through massive, disparate datasets—from clinical trial results to niche global broadcasts—to uncover patterns that would be invisible to human analysts. "We're looking at everything from press releases to obscure podcasts about oncology. Generative AI is great at pattern recognition, and it connects dots that might seem unimportant to a human. But in the aggregate, a pattern emerges that might indicate a competitor is making a label change," Tulk continues. "We're no longer reacting, we're seeing what's happening based on evidence. That capability makes the human expert that much more important in the long-term."
But to capture this value, Tulk argues, leaders must directly confront two key challenges: data security and output reliability. The first stems from shadow AI, where employees feed sensitive data into public AI tools. The second is the technology's well-documented tendency to hallucinate. His solution for both is a culture of verification built on transparency and critical thinking.
In the shadows: "A recent study found that 57 percent of employees are using AI secretively and claiming the work as their own. But the bigger problem is that they're taking potentially client information, company information, proprietary information, and sharing that with a public foundational model. It's a huge problem," says Tulk. The risk grows when teams move faster than the guardrails meant to protect them. "You've got to trust but verify and really be a good critical thinker."
Know your core mission: To realize the benefits while mitigating the challenges, Tulk offers a straightforward framework for fellow technology leaders. The first step, he says, is establishing a secure walled garden to protect one of the organization's most valuable assets: its data. From there, he emphasizes the need to focus on the core business by leaning on expert partners for commodity technology. "It’s easy to lose sight of what business you’re really in. We’re not in the tech business or just a data business. We’re in the business of helping experts translate complex science into trusted understanding that informs decisions and shapes markets. That trust comes from people who can interpret evidence, explain context, and ultimately take responsibility for the decisions being made. Technology is simply the catalyst that allows that expertise to reach its full potential."
Tulk sees a future where AI slips into the background and becomes as unremarkable as Wi-Fi. The technology will handle the structure and speed, but people will stay out front as the source of trust, interpretation, and accountability. "AI just becomes invisible infrastructure. It's always there in the background, and humans become the front end for the trust and the interpretation," Tulk concludes.




