In Healthcare, Threat of Shadow AI Outpaces Security as Clinician Adoption Accelerates
Nate Moore, Founder of Enlite IT Solutions Inc., explains how to combat Shadow AI in healthcare by enabling safe AI adoption with sanctioned sandboxes.

Key Points
As burnt-out healthcare clinicians turn to AI, the security risk of unsanctioned tools called "Shadow AI" threatens to expose sensitive patient data.
Nate Moore, Founder of Enlite IT Solutions Inc., explains why leadership should offer secure and pre-approved AI tools for their teams instead of preventing innovation.
The solution is to build controlled environments called "AI sandboxes" that allow clinicians to experiment safely while the organization monitors usage and protects patient data.
Healthcare innovation can't slow down, and neither can security. The goal isn't to slow the pace of innovation, but to create safe lanes with boundaries and guardrails that are ready for teams to actively use.

In response to widespread burnout, a growing number of healthcare clinicians are using AI to improve efficiency. However, with these new tools also comes a significant risk—especially for organizations handling highly sensitive data. Now, a fast-growing threat known as Shadow AI is adding even more pressure to healthcare leaders. How can organizations encourage the use of AI when adoption so often outpaces the ability to secure it properly?
To answer that question, we spoke with Nate Moore, a PMP-, CISSP-, and CCSP-certified technology consultant and Founder of Enlite IT Solutions Inc. With over 15 years of experience in secure digital transformation, Moore has a proven track record of leading large-scale modernization projects at companies like Comcast and Intel. Today, he specializes in securing AI solutions for regulated industries. According to Moore, it's time to shift the focus from fear around AI tooling to the enablement drivers behind it.
"Healthcare innovation can't slow down, and neither can security," Moore says. "The goal isn't to slow the pace of innovation, but to create safe lanes with boundaries and guardrails that are ready for teams to actively use." In his experience, the gap between the speed of innovation and the pace of governance is driven by burnout.
Burnout burden: Understanding this motivation is the first step toward finding a solution, Moore explains. "Our clinicians are burning out. They're looking to AI tools to extract context and give them credible direction. They are experimenting with AI to ease that burnout and improve documentation, but governance can't keep up with the sheer scale and rate of that adoption."
But the problem is bigger than individual clinicians, Moore continues. It's a systemic issue that must be addressed across the entire organization. Here, he identifies a regulatory lag: foundational rules like HIPAA haven't kept pace with technological advancements.
A policy prescription: Often, that regulatory vacuum pressures hospital leadership to self-regulate, Moore explains. "HIPAA has to catch up with the pace of innovation. You don't want to get into a situation where the tech outruns the rules and regulations, which is what's happening. That's the reason Shadow AI exists."
Meanwhile, Shadow AI can severely disrupt a hospital's operational and financial stability, Moore continues. The task is made more complex by a vendor visibility problem, as when major EHR providers embed AI into their platforms without complete transparency. Moore also points to the unauthorized use of public-facing AI tools. "We can't use public chatbots like ChatGPT or Gemini. Once you put a prompt in, that data goes to the AI model, leaving digital breadcrumbs and traces back to the protected health information we are trying to secure," he explains.
The secure solution: For Moore, offering a secure alternative is the most effective solution. "Leading health systems are building sanctioned AI sandboxes, which are governed environments where clinicians can safely test pre-vetted models. As they are used, logs are generated, and guardrails like data loss prevention are implemented. This allows leadership to measure sanctioned versus unsanctioned use, generate KPIs, and identify if governance is truly enabling or hindering innovation."
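To make the sandbox idea concrete, here is a minimal sketch of the pattern Moore describes: a gateway that sits in front of AI tools, keeps an allow-list of pre-vetted models, redacts PHI-like content before anything is logged or forwarded, and tallies sanctioned versus unsanctioned use for KPIs. Everything here (the model names, the regex patterns, the class and method names) is hypothetical; a real deployment would use a vetted DLP service and an audited policy engine, not ad-hoc regexes.

```python
import re
from collections import Counter
from dataclasses import dataclass, field

# Illustrative PHI patterns only -- a production system would rely on a
# vetted DLP service, not hand-rolled regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

# Assumed allow-list of pre-vetted, organization-approved models.
SANCTIONED_MODELS = {"internal-clinical-llm"}

@dataclass
class SandboxGateway:
    usage: Counter = field(default_factory=Counter)
    audit_log: list = field(default_factory=list)

    def submit(self, model: str, prompt: str) -> str:
        """Apply DLP redaction, log the request, and enforce the allow-list."""
        sanctioned = model in SANCTIONED_MODELS
        self.usage["sanctioned" if sanctioned else "unsanctioned"] += 1

        # Redact PHI-like spans before the prompt is stored or forwarded.
        redacted = prompt
        for label, pattern in PHI_PATTERNS.items():
            redacted = pattern.sub(f"[REDACTED-{label.upper()}]", redacted)

        self.audit_log.append(
            {"model": model, "sanctioned": sanctioned, "prompt": redacted}
        )
        if not sanctioned:
            return "BLOCKED: model is not on the approved list"
        return redacted  # in a real system, forwarded to the vetted model

    def kpis(self) -> dict:
        """Sanctioned-use rate: one signal of whether governance is enabling."""
        total = sum(self.usage.values()) or 1
        return {"sanctioned_rate": self.usage["sanctioned"] / total}
```

The point of the sketch is the shape of the control, not the code itself: every request leaves an audit trail, PHI never reaches an unapproved model, and leadership gets a measurable sanctioned-use rate rather than guessing at adoption.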
Yet, even in the face of external inertia, Moore says, organizations must take control of their own security posture from within. For him, the most effective solution strikes a balance between top-down control and bottom-up empowerment. In closing, he advocates for a 90-day framework to help transform generalist cybersecurity teams into AI-ready defenders. But the ultimate goal is to create a culture change where security teams stop siloing information and start actively integrating clinicians. "The goal is to turn curiosity into capability. When you do that, governance becomes something people understand, not a hindrance that slows them down."




