Artificial intelligence is no longer a fringe topic for legal departments and law firms. Interest has clearly shifted from whether lawyers should engage with AI to how they can do so responsibly and effectively.
What emerged from the panel discussion hosted by ACC Georgia, featuring Mariette Clardy-Davis, Assistant Vice President and Assistant General Counsel at Primerica; Celeste Gaines, Senior Counsel at McKesson; Regan Lemke, Chief Practice Strategy Officer and Chief of Staff at Polsinelli; Larissa Marshall, Director of Practice Innovation at Polsinelli; and Leslie F. Spasser, Managing Partner at Polsinelli, was not a race to adopt the newest tools, but a shared recognition that sustainable AI use requires structure, discipline, and — above all — clarity of purpose.
“AI doesn’t just implement itself,” Lemke said, capturing a reality many legal leaders are now confronting. Without intentional strategy and change management, even well-funded AI initiatives risk becoming underused — or worse, unmanaged.
Start with intent, not technology
Across both law firm and in-house perspectives, panelists stressed the importance of resisting hype-driven adoption. For Clardy-Davis, the foundational question is straightforward but often overlooked: “What is the problem you’re actually looking to solve?”
Too often, she noted, legal teams find themselves “riding the AI train” without deciding whether AI is truly a solution or simply a distraction. That gap between enthusiasm and execution is especially pronounced in legal departments, where lawyers are already stretched thin and new tools are frequently introduced without a clear path to usability.
Gaines framed this early decision-making as defining an organization’s “AI identity” — including its risk tolerance and adoption posture. “Are you an early adopter? What is your risk appetite?” she asked, emphasizing that these questions should guide everything from tool selection to governance.
Governance still matters, AI or not
Despite AI’s novelty, panelists were quick to demystify its legal risk profile. “AI is shiny, but it is not new,” Gaines observed. The same fundamentals apply: data privacy, appropriate controls, contractual protections, and exit strategies must be addressed regardless of whether a product is marketed as “AI-powered.”
Marshall underscored that point from an operational standpoint. Before any pilot begins, tools must clear a basic hurdle: “a security review … making sure that we can actually upload data to it.” Gaines added a related — and often overlooked — question: whether the organization even has the right to use the data being fed into an AI system.
Adoption lives or dies with training
If governance sets the guardrails, training determines whether AI tools ever leave the station.
Marshall described an approach built around continuous communication, recurring training, and data-driven feedback on adoption — rather than one-time rollouts. Usage metrics, she explained, help identify where tools are actually adding value and where best practices can be shared across teams.
For in-house teams, Clardy-Davis was even more direct. Simply giving lawyers a tool and expecting them to “figure it out,” she warned, “is a recipe for a frustrated lawyer and an unused tool.” Her solution centers on hands-on, scenario-based learning that shows “the art of the possible” while grounding AI use in real, day-to-day legal work.
Build for change, not perfection
As laws, regulations, and tools continue to evolve, panelists agreed that rigidity is the enemy of responsible AI adoption. Gaines summed it up succinctly: “You’ve got to have a plan to change a plan.”
Lemke echoed that sentiment from a law firm leadership perspective, noting that innovation requires discipline to balance speed with risk management. “To be innovative, you have to be … structured and disciplined.”
The bottom line for legal leaders
Effective AI adoption in legal is less about chasing technology and more about aligning tools with people, processes, and professional obligations. Lawyers are not as resistant as stereotypes suggest — “lawyers actually do use AI,” Clardy-Davis noted — but they need guidance, guardrails, and practical pathways to competence.
For legal teams willing to invest in structure, training, and adaptability, AI can move from a shiny experiment to a durable capability — one that enhances, rather than undermines, the core judgment and responsibility at the heart of legal practice.
Disclaimer: The information in any resource on this website should not be construed as legal advice or as a legal opinion on specific facts, and should not be considered as representing the views of its authors, its authors’ employers, its sponsors, and/or ACC. These resources are not intended as a definitive statement on the subject addressed. Rather, they are intended to serve as a tool providing practical guidance and references for the busy in-house practitioner and other readers.
