AI Chats Not Privileged Business Risks: What Companies Should Do Now
- Todd Nurick

A lot of companies are already using ChatGPT, Claude, and similar AI tools for real business work.
They are using them to summarize issues, draft messages, pressure-test legal positions, analyze contracts, organize facts, and think through disputes before a lawyer is even looped in.
That creates a problem many companies have not fully appreciated yet.
A recent federal ruling in New York has become a warning shot for businesses and their lawyers: some AI chats may not be privileged, and in the wrong setting they may be discoverable. Reuters reported on April 15, 2026 that the ruling has prompted major U.S. law firms to warn clients that conversations with chatbots could be demanded in both criminal and civil matters.
Todd Nurick of Nurick Law Group, LLC is a Pennsylvania and New York business attorney with approximately 30 years of civilian business law and litigation experience and a former Army officer. He helps companies translate fast-moving legal developments into practical contract, confidentiality, governance, and risk-management decisions before those issues become litigation problems.
These risks are not limited to companies using AI as a substitute for a lawyer. They can affect ordinary businesses whose employees, executives, or in-house teams drop sensitive facts, draft legal theories, internal communications, customer disputes, or strategic questions into consumer AI tools without adequate guardrails. Reuters reports that law firms are now adding warnings to client contracts and advisories because sharing legal advice or legal communications with a third-party AI platform may waive privilege.
AI Chats Not Privileged Business Risks: why this matters now
This matters because the issue has moved from theory to actual case law.
Reuters reported that in United States v. Heppner, Judge Jed Rakoff in the Southern District of New York ruled in February that a defendant had to turn over 31 documents generated through Anthropic’s Claude. Reuters also reported that the court concluded no attorney-client relationship exists, or could exist, between an AI user and a platform such as Claude.
The written Heppner opinion, which is now publicly available, said the AI-generated documents lacked at least two, “if not all three,” of the elements required for attorney-client privilege. The court’s reasoning included that Claude was not an attorney, that the user had communicated with a third-party AI platform, and that the user could have had no reasonable expectation of confidentiality in those communications.
That does not mean every AI interaction automatically loses all possible protection in every context. Reuters also reported that on the same day as Rakoff’s ruling, a federal judge in Michigan held that a pro se litigant did not have to turn over her ChatGPT discussions, treating them as her own work product. A Paul, Weiss client memo discussing that decision says the Michigan court reasoned that generative AI programs are “tools, not persons,” and that work-product waiver generally requires disclosure to an adversary or in a way likely to get into an adversary’s hands.
So the immediate takeaway is not “the law is settled.” The real takeaway is more practical: companies should assume that careless AI use can create discoverability and confidentiality risk, and they should stop treating consumer AI chats as inherently private.
What companies are getting wrong about AI use
A lot of businesses still treat AI tools like enhanced search engines or harmless brainstorming assistants.
That is too casual.
The problem is not only whether the tool gives a good answer. The problem is what employees are putting into the tool, whether that information includes privileged material, confidential business information, trade secrets, internal legal strategy, or customer-specific facts, and whether the company has any policy governing that behavior. Reuters reported that both OpenAI and Anthropic terms state the companies can share user data with third parties, and that both require users to consult qualified professionals before relying on their tools for legal advice.
If an employee pastes internal legal advice into a chatbot, that can raise a different and more dangerous question than whether the chatbot’s answer is right. It can raise whether the company just disclosed protected information to a third party in a way that undermines privilege or confidentiality. That risk is exactly why firms are now warning clients to proceed cautiously and, in some cases, to avoid putting legal matters into consumer AI tools at all.
AI Chats Not Privileged Business Risks in privilege, work product, and confidentiality
This is where the business-law exposure becomes real.
Based on the current cases and reporting, the risk may show up in at least three different ways:
attorney-client privilege problems when legal advice or lawyer communications are shared with consumer AI tools
work-product disputes over whether AI-assisted documents or chats can be discovered later
confidentiality and trade-secret risk when sensitive company information is disclosed to third-party platforms
Heppner is especially important because the court focused on traditional privilege elements, including whether the communications were with an attorney and whether they were kept confidential. The court also emphasized the role of the platform’s privacy terms in finding that there was no reasonable expectation of confidentiality.
Even where a company is not in active litigation, those same facts can matter. If sensitive internal information is being fed into public or consumer AI systems without direction, policy, or controls, that may create problems well before a discovery dispute ever starts. Reuters’ reporting and the Heppner reasoning both support treating this as a live business-risk issue, not a niche litigation issue.
AI Chats Not Privileged Business Risks in ordinary business operations
This is not just a white-collar criminal defense issue or a problem for law firms.
Ordinary companies can create exposure when people use AI tools for:
drafting responses to customer disputes
summarizing demand letters or threatened claims
analyzing contract language
brainstorming employment decisions or HR responses
organizing facts after an internal incident
developing litigation or negotiation strategy before counsel is fully involved
revising internal communications about sensitive matters
In other words, the risk shows up exactly where businesses are already most tempted to use these tools.
That is why this topic is likely to matter to companies even if they never touch a courtroom. The question is not only what the AI says back. The question is whether the company has just created a record that could later become a discovery fight, a confidentiality issue, or a governance problem.
AI Chats Not Privileged Business Risks in outside counsel and internal policy
This is a strong outside-counsel issue because most companies do not need an academic memo on privilege doctrine. They need a workable operating rule.
They need to know:
what employees can and cannot paste into consumer AI tools
when legal or compliance issues must be handled without public AI tools
whether the company should use enterprise or closed AI environments instead
how vendor terms, privacy policies, and internal policies should be aligned
how to preserve confidentiality when AI is being used under legal supervision
Reuters reported that some firms are now advising clients to use closed AI systems designed for corporate use, and that some lawyers are suggesting users expressly state in prompts when research is being conducted at the direction of counsel. Reuters also reported that firms are revising client contracts to warn that disclosure to third-party AI platforms may constitute a waiver of privilege.
That does not guarantee protection. But it does show where sophisticated counsel and clients are already moving.
What companies should do now
If your company is using ChatGPT, Claude, or similar tools in any meaningful way, there are practical steps worth taking now:
prohibit employees from pasting privileged legal advice, draft legal analyses, or sensitive litigation strategy into consumer AI tools
identify whether customer disputes, HR issues, internal investigations, or regulatory matters are being run through public AI systems
review whether the company is using consumer AI, enterprise AI, or a closed internal environment, and understand the actual terms governing each
adopt a written internal AI-use policy that addresses confidentiality, legal matters, and escalation to counsel
train managers and business teams that AI chats may not be private, and may later be discoverable
involve counsel earlier when employees want to use AI for legal, compliance, dispute, or investigation-related work
document approved workflows if AI is being used at counsel’s direction, rather than letting employees improvise
Those steps are worth taking even while courts are still sorting out the legal boundaries. The current split in the cases does not support complacency. It supports discipline.
Why this should get board and executive attention
This is one of those issues that can look small until it is very expensive.
A single executive or employee using a chatbot casually can create a problem that later touches privilege, discovery, cyber hygiene, confidentiality, trade secrets, vendor diligence, and internal policy all at once.
That is why the risk that AI chats are not privileged should be treated as a governance issue, not just a technology issue. Companies that already have AI policies should revisit them. Companies that do not have them should stop waiting. The practical trend in the market, reflected in current law-firm guidance reported by Reuters, is clearly toward more caution, more internal controls, and more differentiation between consumer tools and more controlled AI environments.
Conclusion
The real lesson from the recent AI privilege cases is not that every AI chat will automatically be discoverable.
It is that companies should stop assuming their AI chats are safely private.
The New York ruling in Heppner, the Michigan split on work product, and the rapid response from major law firms all point in the same direction: businesses need clearer rules for how AI tools are used when legal, confidential, or strategically sensitive issues are involved.
If your company is using AI in connection with legal, compliance, HR, contract, investigation, or dispute-related work, Todd Nurick and Nurick Law Group, LLC can help evaluate the risk, tighten internal guardrails, and build practical policies and workflows before a useful tool becomes an avoidable legal problem.
Sources
Reuters, AI ruling prompts warnings from US lawyers: Your chats could be used against you, April 15, 2026.
United States v. Heppner, S.D.N.Y. opinion discussing privilege and work-product issues for AI-generated documents.
Harvard Law Review, United States v. Heppner, summarizing Judge Rakoff’s reasoning and the privilege analysis.
Paul, Weiss, Federal Courts Reach Different Outcomes on Whether AI-Generated Materials Warrant Work Product Protection, discussing the Michigan decision and the split in treatment of AI-assisted materials.
Disclaimer: This article is for informational purposes only and is not legal advice. Reading it does not create an attorney-client relationship. Todd Nurick and Nurick Law Group are not your attorneys unless and until there is a fully executed written fee agreement with Todd Nurick or Nurick Law Group.


