At Law Firms, AI Success Is Defined Through Security, Governance, and Irreplaceable Human Oversight
Scott Cohen, EVP at Lineal, explains why law firms should focus on change management and human workflows for successful enterprise AI.

Key Points
* Scott Cohen, EVP at Lineal, explains why law firms should focus on change management and human workflows for successful AI adoption.
* He introduces a "practice-first" strategy that integrates AI into legal work to augment lawyers and solve high-friction problems.
* With many AI vendors mishandling sensitive client data, information governance is a significant legal concern and a potential dealbreaker for lawyers.

The real challenge for law firms adopting AI is people. In the search for the perfect tool, many organizations forget about change management altogether. Today, only the firms with a solid grasp of the specific legal tasks AI can automate are likely to succeed.
For an expert's take, we spoke with Scott Cohen, Executive Vice President at legal technology solutions company Lineal. As the former Managing Director of eDiscovery & Information Governance at Winston & Strawn LLP, Cohen was tasked with the difficult job of convincing lawyers to use AI. There, he spent eight years on the front lines of innovation, advocating for a practice-first approach to adoption.
"AI tools for lawyers should never be considered technology rollouts," Cohen says. Rather than being relegated to administrative tasks, his philosophy calls for integrating AI into actual legal work. For him, the process typically begins with identifying a receptive team and a high-value, high-friction problem.
- Augment, don't replace: At Winston & Strawn, that team was in private equity, and their issue was fund review. The goal was to empower attorneys with a clear win for both the firm and its clients, Cohen explains. The key was to offer a tool that would immediately "move the needle" on their work without threatening their core function. "My goal was never to replace a lawyer's judgment. It was to remove the burden from their work. By eliminating non-essential labor, we also reduced the billable hours to a level the clients found more reasonable. It was a true win-win. The firm didn't lose revenue. It gained efficiency, and clients were happier with the cost."
The initial win became the cornerstone of a staged rollout, Cohen says. Its success served as an internal case study to drive adoption in other groups. But a successful launch is only the beginning, he continues.
- Continuous touch points: To avoid the common pitfall of a one-time training event, Cohen champions a model of relentless follow-up. For him, regular check-ins serve a dual purpose: they reinforce the initial training, and they ensure the tool evolves in line with the team's needs. By maintaining an open dialogue, Cohen's team can introduce new vendor features and gather crucial feedback to improve the product. "Most organizations fail because they treat adoption as a one-and-done activity. They do an intensive training, and then they hope for the best. That model doesn't work. You need continuous touch points and a constant feedback loop. Sometimes that just means nagging. You have to be persistent and keep showing up, because these people are busy."
A hands-on approach also solves another critical problem: discovering new use cases. Since the technology team can't possibly remember every nuance, continuous conversations are key. "We need them to tell us if there are particular things that we should focus on," Cohen says.
But as AI tools proliferate, a more ominous challenge has emerged. In Cohen's view, it's a risk many vendors have dangerously ignored: information governance. "People load documents up into these tools, and they're there forever." The very nature of cloud-based AI tools poses a significant security and compliance risk for law firms that handle sensitive client information.
In fact, information governance is something most vendors haven't considered, Cohen concludes. "Their business model is to get people to send content to their cloud products, which creates an automatic governance nightmare for a law firm. For our highly regulated clients, that's a non-starter. Our position is straightforward: if a tool falls short in terms of security and governance, we will not use it. It doesn't matter how good the features are. We won't engage until they fix the gaps. Law firms have a duty to protect client data, and that comes first."