When AI Gets It Wrong: Legal Risk from Automated Business Decisions

  • Todd Nurick
  • Dec 19, 2025
  • 4 min read


Artificial intelligence is now embedded in everyday business operations. Companies use automated tools to screen job applicants, forecast revenue, flag compliance risks, analyze contracts, monitor cybersecurity threats, and guide strategic decisions. The appeal is obvious. AI promises speed, efficiency, and scale.

But when AI gets it wrong, the legal consequences do not fall on the software. They fall on the business.


As a business attorney licensed in Pennsylvania and New York, Todd Nurick of Nurick Law Group works with companies nationwide that are integrating AI into their operations without fully appreciating where legal responsibility remains firmly human.


Automation Does Not Eliminate Legal Accountability

One of the most common misconceptions is that automated decisions somehow dilute responsibility. They do not.

Courts and regulators consistently treat AI as a tool, not an independent actor. If an AI system produces a discriminatory hiring outcome, a misleading disclosure, a flawed compliance decision, or an inaccurate report relied upon by third parties, the company remains legally responsible.

This principle applies across federal law, Pennsylvania law, and New York law.

Using AI does not create a defense. It creates an additional layer of risk that must be managed.


Common Business Areas Where AI Failures Create Exposure

AI-related legal risk most often appears in predictable places.

Hiring and Employment

Automated screening tools can unintentionally disadvantage protected classes. Regulators have made clear that employers are responsible for the impact of algorithmic decisions, even if the model was purchased from a third party.

Financial Forecasting and Disclosures

Businesses increasingly rely on AI-driven projections and analytics. If those outputs are included in investor materials, loan negotiations, or transaction documents, inaccuracies can trigger claims for misrepresentation or breach of contract.

Contract Review and Management

AI-assisted contract analysis can miss key provisions, misinterpret obligations, or fail to identify non-standard terms. Relying blindly on automated summaries can lead to missed deadlines, unfulfilled obligations, or events of default.

Compliance and Risk Monitoring

AI tools are often used to flag regulatory issues or compliance gaps. If the system fails to detect a problem and the business relies on that output, regulators will still hold the company accountable.

Cybersecurity and Data Protection

AI-driven security tools are increasingly common. When a breach occurs, regulators and insurers examine whether the company reasonably relied on its systems and whether appropriate oversight existed.


Vendor Tools Do Not Shift the Risk

Many businesses assume that because an AI system is provided by a large, well-known vendor, liability must rest with the provider. That assumption is usually wrong.

Most AI vendor agreements include broad disclaimers, limited warranties, and strict caps on liability. In practice, the business using the tool often bears most of the legal exposure.


This becomes especially important in Pennsylvania and New York, where courts look closely at whether businesses exercised reasonable care in selecting, supervising, and relying on third-party systems.


Governance Failures Are the Real Problem

AI errors are rarely the root cause of legal exposure. Governance failures are.

Businesses run into trouble when they fail to:

  • Understand how AI tools function at a high level

  • Identify where human review is required

  • Document decision making processes

  • Train employees on proper use

  • Limit AI use in high-risk contexts

  • Escalate questionable outputs for review

Boards and executives increasingly have a duty to oversee how AI is used, particularly when it affects employment, financial reporting, compliance, or consumer interactions.


The Business Judgment Rule Has Limits

Some leaders assume that reliance on advanced tools will be protected by business judgment principles. That protection is not unlimited.

If a company delegates critical decisions entirely to automated systems without reasonable oversight, courts may view that as a failure of care rather than sound judgment. This is especially true when warning signs were ignored or risks were not assessed.

AI does not replace judgment. It tests whether judgment exists at all.


Practical Steps Businesses Should Take Now

Businesses using AI should consider:

  • Conducting an inventory of AI tools currently in use

  • Identifying which decisions rely on automated outputs

  • Implementing human review requirements

  • Reviewing vendor contracts for risk allocation

  • Updating policies governing AI use

  • Training management and staff

  • Documenting oversight and escalation processes


As outside general counsel, Todd Nurick and Nurick Law Group assist businesses in Pennsylvania, New York, and across the country with building AI governance frameworks that reduce legal exposure while preserving operational benefits.


Final Thoughts

Artificial intelligence can be a powerful business tool, but it does not change the fundamental rules of legal responsibility. When AI gets it wrong, regulators, courts, investors, and counterparties will look to the company and its leadership.

Businesses that treat AI as a tool requiring oversight, governance, and legal awareness are far better positioned than those that treat automation as a substitute for judgment.


Todd Nurick and Nurick Law Group work with business owners and executives to ensure that AI adoption strengthens operations without creating unnecessary legal risk.

This article is for informational purposes only and is not legal advice. Reading it does not create an attorney–client relationship. Todd Nurick and Nurick Law Group are not your attorneys unless and until there is a fully executed written fee agreement with Todd Nurick or Nurick Law Group.


Sources

U.S. Equal Employment Opportunity Commission, guidance on artificial intelligence and employment decision tools

Federal Trade Commission, business guidance on artificial intelligence and accountability

National Institute of Standards and Technology, AI Risk Management Framework

White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights

New York City Local Law 144 regarding automated employment decision tools

Pennsylvania Unfair Trade Practices and Consumer Protection Law

 
