How HR and Business Leaders Should Think About This
The takeaway from Heppner isn’t “stop using AI.” These tools are powerful, and organizations that use them well will outperform organizations that don’t. The takeaway is that AI use needs to be governed by the same discipline you already apply to any other channel that creates a record.
A few practical principles:
Treat public AI tools the way you’d treat email to an outside party. If you wouldn’t put the information in an email to someone outside the organization, don’t put it in a public AI prompt. The legal exposure is the same. A simpler version of the rule: if you wouldn’t CC a stranger on an email to your lawyer, don’t put it into a public AI tool.
Distinguish between public tools and enterprise-grade tools. Consumer platforms — the free version of ChatGPT, the standard subscription tier, public-facing tools — are third parties. Enterprise tools with contractual confidentiality protections, data isolation, and policies that prevent your inputs from being used to train external models are a different category. Some law firms, including Ogletree Deakins, use AI tools housed on their own servers, where the data stays internal and nothing the model learns from it leaves the firm. That’s a meaningfully different risk profile — and one worth understanding when your organization evaluates which tools to deploy.
Direct sensitive AI use through counsel. When the question is genuinely sensitive — an investigation, a potential legal exposure, a regulatory question — the path that preserves privilege runs through your attorney. AI used at counsel’s direction, with appropriate tools, sits in different legal territory than AI used by an employee acting on their own.
Train your people on the framework, not just the features. Most AI training focuses on productivity — how to write better prompts, how to draft faster, how to summarize meetings. The Heppner ruling makes clear that organizations also need to train people on when not to use a public tool, what categories of information should never go into one, and what to do when an AI response surfaces a potential issue that requires action.
If you’re going to ask the question, be prepared to act on the answer. Asking AI to evaluate your handbook, audit your policies, or review a situation creates a record of what you were told. Ignoring the response — or acting on part of it and not the rest — is a record of its own.
“The court did not say AI is bad. It said the individual used it without legal direction — and shared information with a third party that stores your data and learns from it.”
— Burt Garland, Shareholder, Ogletree Deakins