
AI Hallucination Liability Business Risks: What Companies Should Review Now

  • Todd Nurick

Business leaders and counsel reviewing AI output accuracy, disclosures, and liability risks

“Hallucinations” still sounds like an AI buzzword to me, and likely to you as well.


It should now sound more like a business-law problem. Italy’s antitrust authority said on April 30, 2026 that it had closed investigations into three AI companies after obtaining binding commitments tied to hallucination risk, including permanent disclaimers in chatbot services and, in one case, investment in technology to reduce hallucinations while acknowledging they cannot be eliminated entirely.


Todd Nurick of Nurick Law Group, LLC, a Pennsylvania and New York business attorney with approximately 30 years of civilian business law and litigation experience, and a former Army officer, helps companies assess fast-moving legal developments affecting contracts, governance, compliance, technology risk, and outside general counsel strategy before those issues become operational or litigation problems.


AI Hallucination Liability Business Risks are not limited to AI developers. They matter to companies buying AI tools, embedding AI into products, relying on AI-generated outputs, or making claims to customers about what those systems can do. The current signal from regulators is not subtle: if your AI tool can generate inaccurate or misleading content, your disclosures, claims, controls, and contracts matter. Italy’s authority expressly treated hallucination risk as a consumer-protection issue, while the FTC has separately pursued AI-related deceptive-claims cases in the United States.


AI Hallucination Liability Business Risks: why this matters now

The Italian development is useful because it turns a familiar AI criticism into a concrete compliance point. Reuters reported that Italy’s antitrust authority, which also polices consumer rights, closed investigations into DeepSeek, Mistral, and NOVA AI after the companies agreed to better inform users about hallucination risks through websites and apps, including permanent chatbot disclaimers. Reuters also reported that DeepSeek agreed to invest in technology to reduce hallucination risk while acknowledging current technology cannot prevent it entirely.

That matters because it shows regulators are not waiting for a perfect theory of AI harm. They are looking at what users are told, what risks are disclosed, and whether AI companies are overselling reliability. That same logic can extend to businesses that market AI tools internally or externally as accurate, compliant, automated, or dependable without enough support behind those claims. The FTC’s AI enforcement materials show a similar U.S. concern with unsupported or misleading AI representations.


AI Hallucination Liability Business Risks in consumer protection and marketing claims

A lot of companies still talk about hallucinations as if they were merely a technical limitation that users should intuitively understand. That is no longer a safe assumption.


The Italian authority’s announcement makes clear that risk disclosures about hallucinations were important enough to resolve active investigations. The authority said the companies made commitments aimed at improving transparency around AI systems on websites and apps and at different stages before purchase or registration. Reuters likewise reported that the regulator targeted allegedly unfair commercial practices involving generative AI and the risk of inaccurate or misleading content.


In the United States, the FTC has already shown it is willing to challenge AI-related claims when businesses overstate what the technology can do. The FTC announced a 2024 crackdown on deceptive AI claims, and in 2025 and 2026 it pursued cases involving inflated or unsupported AI promises, including claims about website accessibility compliance, AI-content detection accuracy, and business-growth and earnings claims.


That does not mean every inaccurate AI output creates liability. It does mean companies should stop treating hallucinations as a purely engineering issue when they are also making product, marketing, compliance, or business-use claims tied to those outputs.


Why ordinary companies should care, even if they are not building models

Most businesses are not training frontier models. Many are still exposed, however.


If your company uses AI to summarize information, generate customer communications, review documents, answer support questions, score leads, assist employees, or generate substantive outputs for clients, then hallucination risk can migrate into contract risk, customer-dispute risk, compliance risk, and reputational risk. Italy’s action is useful precisely because it was not framed as a distant research concern. It was framed as a transparency and consumer-information problem.


That is where the business-law angle gets real. The question becomes whether your company is promising too much, disclaiming too little, documenting too little, or relying too casually on outputs that can be wrong in ways that matter. That is an inference, but it is directly supported by the regulators’ focus on inaccurate or misleading AI outputs and related disclosures.


AI Hallucination Liability Business Risks in contracts, vendors, and procurement

This is also a contract issue. If a company buys AI tools from a vendor, counsel should not assume that “AI-powered” means “legally defensible.” A vendor agreement should be reviewed for what it says, and does not say, about accuracy, limitations, disclaimers, data use, support obligations, cooperation, and responsibility when outputs are wrong. The current enforcement climate around AI claims makes those provisions more important, not less.


At a minimum, companies should understand whether the vendor is promising accuracy, whether the company is passing those promises through to customers, and whether anyone has actually validated the tool for the use case at issue. The FTC’s Workado matter is a useful warning here because the agency alleged that a “98 percent” accuracy claim for an AI detector was false, misleading, or unsubstantiated, and that independent testing showed much lower accuracy on general-purpose content.


What companies should not assume

Companies should not assume that a disclaimer solves everything. They should not assume that a vendor’s marketing language is good enough legal support for the company’s own claims. And they should not assume that, because AI systems are known to make mistakes, customers or regulators will simply absorb that risk without asking harder questions.


The Italian commitments show that regulators may want permanent, visible warnings about hallucination risk. The FTC’s AI cases show that U.S. regulators may also test whether businesses had support for the claims they made. Those are different legal settings, but they point in the same direction: companies need better discipline around AI outputs, risk disclosures, and customer-facing promises.


AI Hallucination Liability Business Risks in practical next steps

If your company is using AI in a way that touches customers, regulated functions, external communications, or meaningful business decisions, there are practical steps worth taking now:

  • inventory where AI outputs are being used externally or in consequential internal workflows

  • review website copy, sales language, and customer-facing materials for unsupported AI claims

  • tighten vendor agreements around accuracy, limitations, cooperation, and responsibility allocation

  • make sure disclaimers match real risk and are actually visible where users need them

  • require human review for outputs that affect legal, compliance, financial, customer, or reputational decisions

  • document internal testing and validation for the specific use cases the company is relying on

  • avoid using broad phrases like “fully automated,” “highly accurate,” “compliant,” or “reliable” unless the company can actually support them


Those are worthwhile steps even if no regulator ever calls. They reduce the mismatch between what the business says, what the tool does, and what the company can defend later. That is an inference, but it follows directly from the current enforcement pattern around hallucination and AI-claims risk.


Why this is a strong outside-counsel issue right now

This is exactly the kind of issue clients tend to underestimate until a regulator, customer, or plaintiff frames it for them. Many businesses don’t need a lecture on what hallucinations are. But they do need answers to more practical questions:

  • What can we say about our AI tool or vendor tool?

  • What do we need to disclose?

  • What should the contract say?

  • What outputs require human review?

  • Are we creating exposure by treating AI content as more reliable than it is?


Those are outside-counsel questions because they sit at the intersection of marketing, contracts, governance, compliance, and operational reality. The Italian developments and the FTC enforcement materials make this a current, not a theoretical, business-law topic.


Conclusion

AI hallucinations are no longer just a punchline or a product gripe. They are becoming a legal and commercial risk category that regulators are willing to scrutinize through consumer-protection and deceptive-practices frameworks. Italy’s closure of active investigations through binding transparency commitments, combined with the FTC’s existing U.S. AI-claims cases, is a useful warning for companies that market, buy, deploy, or rely on AI tools in real business settings.


AI Hallucination Liability Business Risks matter because they force companies to review the gap between what their AI tools actually do and what the company is telling users, customers, or regulators. If your business is using AI in products, workflows, sales, support, or decision-making, Todd Nurick and Nurick Law Group, LLC can help assess the exposure, tighten the contracts and disclosures, and bring more discipline to a fast-moving risk area.


Sources

  • Reuters, Italy closes antitrust probes into AI firms after commitments on ‘hallucination’ risks, April 30, 2026.

  • Italian Competition Authority (AGCM), The Italian Competition Authority secures transparent information on “hallucination” risks from AI companies DeepSeek, Mistral and NOVA AI, April 30, 2026.

  • FTC, FTC Announces Crackdown on Deceptive AI Claims and Schemes, September 25, 2024.

  • FTC, FTC Order Requires Workado to Back Up Artificial Intelligence Detection Claims, April 28, 2025.

  • FTC, Air AI and its Owners will be Banned from Marketing Business Opportunities to Settle FTC Charges the Company Misled Many Entrepreneurs and Small Businesses, March 24, 2026.

  • FTC, FTC Order Requires Online Marketer to Pay $1 Million for Deceptive Claims that its AI Product Could Make Websites Compliant with Accessibility Guidelines, January 3, 2025.


Disclaimer: This article is for informational purposes only and is not legal advice. Reading it does not create an attorney-client relationship. Todd Nurick and Nurick Law Group are not your attorneys unless and until there is a fully executed written fee agreement with Todd Nurick or Nurick Law Group.

 


 
