
Florida OpenAI Investigation Business Risks: What Companies Should Know Now

  • Todd Nurick
  • 4 days ago
  • 7 min read

Business leaders and counsel reviewing AI governance and liability risks after a public investigation

When a state attorney general opens a criminal investigation into an AI company after a mass shooting, most business leaders have the same first reaction: this must be a problem for OpenAI, regulators, and maybe the criminal courts, not for ordinary companies.


That reaction is understandable, but too narrow.


Florida Attorney General James Uthmeier announced on April 21, 2026, that the Office of Statewide Prosecution had opened a criminal investigation into OpenAI and ChatGPT after prosecutors reviewed chat logs tied to the 2025 Florida State University shooting. According to the Attorney General’s office and Reuters, the state is examining whether OpenAI could bear criminal responsibility under Florida’s aider-and-abettor principles, and prosecutors have subpoenaed OpenAI for policies, training materials, and records relating to threats, self-harm, law-enforcement cooperation, and public statements about the incident.


Todd Nurick of Nurick Law Group, LLC, a Pennsylvania and New York business attorney and former Army officer with approximately 30 years of civilian business-law and litigation experience, helps companies assess fast-moving legal developments in technology, risk management, contracts, and outside general counsel matters before those developments become expensive operational problems.


The Florida OpenAI investigation business risks do not depend on whether Florida ultimately proves a criminal case against OpenAI. The bigger business-law issue is that the investigation signals where scrutiny may go next: product design, guardrails, threat-response policies, logging and record retention, public-safety reporting, and what companies know, or should know, about how users are employing AI tools. That matters well beyond one company and well beyond one tragic event.


Florida OpenAI Investigation Business Risks: what is actually happening

At this stage, this is an investigation, not an adjudicated finding of liability. That distinction matters, and it should frame any serious analysis.


The Attorney General’s office says the criminal investigation follows an initial review of chat logs between ChatGPT and the alleged gunman, and the office is expressly testing whether Florida law on aiding, abetting, or counseling a crime could apply. Reuters separately reports that OpenAI disputes responsibility, says the shooting was a tragedy, and says ChatGPT provided factual information available from public sources and did not encourage or promote illegal or harmful activity. Reuters also reports that OpenAI said it proactively shared information tied to an account believed to be associated with the suspect with law enforcement.


So the right legal posture is not to assume OpenAI is liable, and not to assume the investigation is frivolous. The fairer view is that Florida is testing a novel theory in a fact pattern that has drawn public attention and political urgency, while OpenAI is taking the position that factual responses drawn from public information should not create criminal responsibility.


Florida OpenAI Investigation Business Risks in duty, causation, and product-use theories

The hardest legal questions here are not political. They are doctrinal.


If this moves beyond an investigation, the state would likely have to prove more than the existence of harmful prompts and responses. It would have to address familiar legal questions in a new setting, including duty, foreseeability, causation, intent, intermediary conduct, and whether a generative AI system can be treated like a person who aided, abetted, or counseled a crime under the relevant statute. Florida’s public release makes clear that the office is looking specifically at aider-and-abettor style liability, while OpenAI’s public response, as reported by Reuters, is that the system did not promote illegal activity.


That is one reason this story matters to companies. Even if the exact criminal theory does not succeed, the investigation itself may accelerate civil litigation theories, regulatory expectations, and internal governance changes across the AI ecosystem. Once a state starts asking for internal policies, escalation procedures, reporting rules, and training materials, companies using or building AI should assume that those documents may become central in future disputes of many kinds.


Why ordinary companies should care

A lot of businesses will read this story and think, “We are not OpenAI, and we are not building a public chatbot, so this is not our issue.” That is a mistake.


For companies, the real lesson is not just about one developer’s exposure after one horrific event. It is about the legal and operational expectations that may now attach to AI-enabled products and workflows. If a company uses AI in customer-facing systems, employee-facing tools, support channels, risk triage, safety reporting, monitoring, content generation, or recommendation systems, then questions about logging, escalation, moderation, retention, safety review, and response to dangerous use are no longer theoretical. Florida’s subpoena categories, standing alone, tell businesses what regulators and prosecutors may want to see later: policies, internal training, reporting practices, organizational responsibility, and consistency over time.


That creates a practical outside-counsel issue for companies using AI, even if they never face a criminal probe. If the wrong incident happens, regulators or plaintiffs may ask what your company knew, what controls were in place, what employees were trained to do, and whether the company’s written policies actually matched its operational reality. That is exactly the kind of risk that is easier to reduce before an event than after one. This is an inference from the subpoena categories and the investigation’s framing, but it is a grounded one.


Florida OpenAI Investigation Business Risks in governance, vendor diligence, and documentation

This is where the post becomes useful for ordinary companies.

Whether you build AI tools, buy them, or integrate them into operations, the same practical questions are now much harder to ignore:

  • who owns AI governance internally

  • what the company’s escalation rules are for dangerous or threatening content

  • what logs are retained, for how long, and under whose control

  • whether the vendor’s safety claims match the contract and the actual product

  • whether employees know when to escalate safety, legal, or law-enforcement issues

  • whether the company can explain its AI-related policies clearly and consistently if a regulator asks


Those are not just “AI ethics” questions anymore. They are business-law, litigation, and risk-allocation questions. The Florida subpoena shows direct interest in internal training materials, reporting protocols, policy changes over time, and organizational responsibility. Businesses should treat that as a practical signal, not just a headline.
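To make those questions concrete, here is a minimal, hypothetical sketch of how a company might encode its escalation and log-retention rules as a machine-readable policy, so an internal audit can check whether the written policy matches operational practice. Every name in it (AIGovernancePolicy, the content categories, the retention period, the escalation contacts) is an illustrative assumption, not a reference to any real system, statute, or vendor API, and nothing in it is legal advice.

# Hypothetical sketch: encoding AI governance rules as data, so written
# policy and operational practice can be compared in an internal audit.
# All names, categories, and retention periods are illustrative assumptions,
# not legal or regulatory requirements.
from dataclasses import dataclass, field

@dataclass
class AIGovernancePolicy:
    owner: str                      # who owns AI governance internally
    log_retention_days: int         # how long AI interaction logs are kept
    escalation_contacts: dict = field(default_factory=dict)

    def escalation_path(self, category: str) -> str:
        """Return the documented escalation contact for a content category."""
        # Falling back to the policy owner means no category is silently unrouted.
        return self.escalation_contacts.get(category, self.owner)

# Example: a documented policy that an audit (or a regulator) can read.
policy = AIGovernancePolicy(
    owner="general_counsel",
    log_retention_days=365,
    escalation_contacts={
        "threat_of_violence": "security_team_then_law_enforcement",
        "self_harm": "safety_team",
        "legal_request": "general_counsel",
    },
)

print(policy.escalation_path("threat_of_violence"))  # security_team_then_law_enforcement
print(policy.escalation_path("spam"))                # general_counsel (default owner)

The point of a sketch like this is not the code itself; it is that rules expressed as data can be diffed, versioned, and shown to a regulator, where rules living only in a PDF or in institutional memory cannot.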


What this does not mean

It does not mean that every AI company is suddenly criminally liable for harmful user conduct.

It does not mean that every factual response generated by an AI system is legally equivalent to human counseling of a crime.


And it does not mean that every company using AI now faces the same exposure as a public chatbot developer.


Those distinctions matter because overreaction is not good legal analysis either. The better lesson is narrower and more useful: serious incidents can cause regulators and prosecutors to test aggressive liability theories, and those theories will often turn on governance, documentation, safety controls, and what the company actually did when risk signals appeared. Reuters’ report and Florida’s release support that more measured reading.


Florida OpenAI Investigation Business Risks in contracts and outside counsel work

This is also a contract and vendor-management issue.

If a business is using third-party AI tools in any meaningful way, outside counsel should be reviewing:

  • vendor promises about safety, moderation, and reporting

  • indemnity and limitation-of-liability language

  • audit or information-rights provisions

  • retention and logging provisions

  • incident-notification language

  • internal policies governing when employees may use public AI tools and when they may not


A lot of companies still treat AI adoption as a procurement or productivity decision. That is too casual for the current environment. The more realistic approach is to treat it as a governance and risk-management issue that requires legal, operational, and technical alignment. The Florida investigation makes that plain even before any court rules on the merits. This is an inference, but it follows directly from the current investigative focus.


What companies should do now

If your company is using AI in products, services, customer support, internal operations, or employee workflows, there are practical steps worth taking now:

  • review AI governance ownership and escalation paths

  • identify where public or third-party AI tools are being used in sensitive contexts

  • compare internal policies to actual operational practice

  • review vendor contracts for safety, reporting, logging, and cooperation obligations

  • make sure threat-escalation and law-enforcement cooperation rules are clear and documented

  • preserve version history for AI policies and training materials (a simple snapshot approach is sketched after this list)

  • involve counsel before a serious incident forces the company to explain its systems under pressure
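On the version-history point in particular, here is a minimal illustrative sketch, using nothing beyond the Python standard library: each time an AI policy document changes, record a timestamped, content-hashed snapshot, so the company can later demonstrate what the policy said on a given date. The directory layout and file names are hypothetical assumptions for illustration only.

# Hypothetical sketch: preserving a verifiable version history for AI policy
# documents with the Python standard library. Paths and file names are
# illustrative assumptions.
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("policy_archive")

def snapshot_policy(policy_file: Path) -> dict:
    """Copy the policy into the archive and log its hash and timestamp."""
    ARCHIVE.mkdir(exist_ok=True)
    data = policy_file.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    copy_name = ARCHIVE / f"{stamp}_{policy_file.name}"
    shutil.copy2(policy_file, copy_name)
    record = {"file": str(copy_name), "sha256": digest, "captured_utc": stamp}
    # Append-only log: one JSON line per snapshot, easy to produce later.
    with (ARCHIVE / "snapshot_log.jsonl").open("a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Example usage (assumes an ai_use_policy.md file exists in the working directory):
# print(snapshot_policy(Path("ai_use_policy.md")))

A company that does this in a shared document system rather than in code achieves the same thing; what matters is that the history is automatic, append-only, and not dependent on anyone remembering to save a copy.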


That does not require panic. It requires discipline. The companies that handle this best will not be the ones that guess how the Florida investigation ends. They will be the ones that use the investigation as a prompt to tighten governance now.


Conclusion

The Florida Attorney General’s investigation into OpenAI over the FSU shooting is not, at least yet, a legal determination that OpenAI is liable.


It is something more immediate for business leaders: a warning that AI-related liability theories are getting more aggressive, more fact-specific, and more focused on policies, controls, reporting practices, and internal documentation.


The Florida OpenAI investigation business risks matter because they show where scrutiny is moving. Companies do not need to be building ChatGPT to learn from that. They only need to be using AI in ways that touch customers, employees, operations, safety, or decision-making. That is why now is the right time to review governance, contracts, escalation rules, and documentation before a difficult incident turns those issues into litigation or regulatory exposure.


If your company is using AI in public-facing or operationally sensitive ways, Todd Nurick and Nurick Law Group, LLC can help assess governance gaps, tighten contracts and policies, and turn a fast-moving headline into practical risk reduction.


Sources

  • Florida Attorney General James Uthmeier, Attorney General James Uthmeier Launches Criminal Investigation into OpenAI, ChatGPT, April 21, 2026.

  • Reuters, Florida launches criminal probe into OpenAI and ChatGPT over deadly shooting, April 21, 2026.


Disclaimer: This article is for informational purposes only and is not legal advice. Reading it does not create an attorney-client relationship. Todd Nurick and Nurick Law Group are not your attorneys unless and until there is a fully executed written fee agreement with Todd Nurick or Nurick Law Group.

 
