News Alert

Product Liability Watch: Perspectives on the Nippon Life v. OpenAI Litigation

April 2026


Nippon Life Insurance Company of America has filed a closely watched lawsuit in the Northern District of Illinois that may mark the beginning of a new era in AI-related litigation. The complaint – Nippon Life Insurance Co. of America v. OpenAI Foundation, et al. – arises from a former disability claimant’s reliance on ChatGPT to generate dozens of pro se court filings, including fabricated case law and motions seeking to reopen a long-settled dispute. Nippon Life pleads multiple liability theories, including tortious interference, abuse of process, and unauthorized practice of law, but the facts raise broader questions about the potential scope of product liability claims AI developers may face from the use and misuse of generative AI tools.

Background

The underlying disability dispute – concerning the termination of long-term disability benefits – was fully resolved in January 2024, when the claimant, Graciela Dela Torre, signed a release and her claims were dismissed with prejudice. As Nippon Life’s complaint recounts, one year later Dela Torre became dissatisfied with her settlement and turned to ChatGPT for guidance, uploading correspondence with her former attorney and asking whether she was being “gaslighted.” According to the filing, ChatGPT responded affirmatively and advised that her attorney’s communications had “invalidated” her feelings and “deflected” responsibility for her dissatisfaction. Dela Torre then terminated her attorneys and used ChatGPT as her “de facto legal advisor,” generating at least 44 filings, including a motion to reopen the dismissed case that relied on a fabricated citation (i.e., Carr v. Gateway, Inc.).

Nippon Life asserts three causes of action. First, it claims tortious interference with its contract with Dela Torre (the settlement agreement), alleging that ChatGPT encouraged Dela Torre to breach it by pursuing claims she had released. Second, it alleges abuse of process based on the volume of meritless filings generated by the AI system. Finally, it alleges unauthorized practice of law (UPL), arguing that ChatGPT provided individualized legal advice though it is “not admitted to practice law in the State of Illinois or in any other jurisdiction.”

Nippon Life seeks $300,000 in compensatory damages, $10 million in punitive damages, declaratory relief, and an injunction barring OpenAI from providing legal advice to the claimant.

The Product Liability Perspective

Although Nippon Life did not plead a product liability cause of action, we assess the facts through a product liability framework.

  1. Whether AI Systems Are “Products” Under Strict Liability Law

A threshold issue in product liability litigation is whether generative AI qualifies as a “product.” Courts have traditionally excluded services from strict product liability. In Jackson v. Airbnb, Inc., a federal court in California reaffirmed that “strict products liability law does not apply to services.” The Restatement (Third) of Torts similarly limits “products” to tangible personal property. Courts have also held that content-generating entities are not subject to strict liability for ideas or information, as in Winter v. G.P. Putnam’s Sons, which emphasized that strict liability principles are designed for physical goods, not expressive content.

Plaintiffs, however, may argue that AI platforms more closely resemble software – which has been treated as a product in certain contexts – and that the interactive, automated generation of legal arguments or other persuasive content materially differs from traditional publishing. More critically, plaintiffs may contend that an AI application can be deemed a defective product where harm arises from its design choices rather than from expressive content itself. In Garcia v. Character Technologies, Inc., for example, the court allowed strict liability claims to proceed because the chatbot’s design allegedly rendered the application defective, even though the harm involved expressive interactions.

Moreover, as seen in product liability claims arising from social media use, plaintiffs may argue that where an AI platform is designed to encourage engagement, fabricate authority, or persuade user action, the gravamen of the claim lies in defective product design rather than in the service, advice, or speech itself. See Bogard v. TikTok and In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation. Key defenses to such claims will likely include evidence of user awareness of the technology’s limitations, the feasibility of alternative designs, and the adequacy of warnings provided to end users.

  2. Negligent Design and Training Claims

If AI systems are deemed products, negligent design may become a primary avenue of liability. Plaintiffs will argue that AI developers owe a duty of reasonable care in designing and training their models, particularly where foreseeable misuse may cause legal or financial harm. Courts have recognized such duties for manufacturers whose products place users in a foreseeable zone of danger, as in Garcia and Syrie v. Knoll International. Applied to AI, this theory could include allegations that a model was negligently trained or coded in ways that predictably produce false legal citations. This theory most closely aligns with Nippon Life’s characterization of ChatGPT as a “de facto legal advisor.”

Nevertheless, courts have held that a social media platform’s recommendation algorithms constitute “content-neutral tools” that do not create a duty to warn users. See Dyroff v. Ultimate Software Group, Inc. This precedent supports the argument that AI companies do not owe a duty to prevent users from misusing generative outputs.

  3. Failure-to-Warn Claims

Plaintiffs may also assert that an AI company knew or should have known of the risk of hallucinated legal authority or other problematic advice from the AI tool and failed to provide adequate warnings. In Daughtry v. Silver Fern Chem., Inc., the court noted that a failure-to-warn claim requires a showing that the absence of a warning rendered the product unreasonably dangerous. Nippon Life’s complaint highlights OpenAI’s October 2024 policy update as evidence that the company recognized the risk but relied on disclaimers rather than architectural safeguards.

Those disclaimers, however, would also serve as a defense to a failure-to-warn theory of liability. In addition, third-party lawsuits such as Nippon Life’s raise standing and foreseeability issues concerning whether AI companies can be held liable to nonusers for harm purportedly caused to the nonuser by a customer’s use of an AI platform.

  4. Section 230 Immunity Challenges

AI companies may invoke immunity under § 230 of the Communications Decency Act, but courts have held that such immunity does not apply where the platform “created or developed” the harmful content. In essence, platforms remain immune unless they “materially contribute” to the unlawful content. Recommendation algorithms alone are insufficient to defeat immunity. See Dyroff v. Ultimate Software Group, Inc.

Plaintiffs, however, may argue that hallucinated case law is generated – not merely hosted – by the AI system. In Garcia, the court held that an AI chatbot’s anthropomorphic design was not protected by § 230 because the harm arose from the chatbot’s own outputs rather than from third-party content. Under this reasoning, ChatGPT’s hallucinated legal citations (e.g., Carr v. Gateway, Inc.) could be characterized as first-party content, making § 230 immunity inapplicable. In this context, OpenAI would likely rely on Dyroff and Nippon Life on Garcia, and as more AI-related litigation involving § 230 takes shape, we expect that courts will develop alternative frameworks to determine liability for hallucinated content. The extent to which generative AI platforms learn from users and repackage existing information could be central to the viability of any § 230 defense.

Limitations to Third-Party Claims

As touched upon above, product liability theories against AI tools are likely to face limitations where the allegedly injured party is a third party rather than the product’s user. Although traditional tort law allows recovery by bystanders foreseeably injured by a defective product, courts have been more cautious where alleged defects involve software, algorithmic design, or expressive outputs rather than physical hazards.

Berrier v. Simplicity Mfg. illustrates the broadest version of bystander standing, holding that a manufacturer owed a duty to a child “even though [she] was an innocent bystander and not an intended user.” That case, however, involved a physical product posing a direct physical danger. Courts have not automatically extended that framework to digital systems whose harms arise through user-driven conduct.

Recent social media decisions from 2023 to 2025 illustrate how courts are beginning to draw these boundaries. In the Social Media Adolescent Addiction MDL cases, school districts – nonusers – were permitted to proceed because their injuries were a foreseeable result of the platforms’ alleged intentional design choices. At the same time, courts dismissed claims tied to non-foreseeable third-party conduct, emphasizing that liability cannot extend to harms too remote from a platform’s own design. This distinction is critical for AI litigation: Nippon Life’s harm theoretically stems from ChatGPT’s own outputs, but arguably those outputs were prompted, adopted, and filed by the user.

Applied here, standing would depend on whether Nippon Life was foreseeably within the zone of risk created by ChatGPT’s design. The complaint alleges that ChatGPT acted as a “de facto legal advisor” and generated filings that predictably imposed litigation costs on both parties. Plaintiffs would most likely argue that such harms were a foreseeable consequence of design choices that encourage reliance on authoritative-sounding but erroneous legal content.

Defendants, by contrast, would argue that the chain of causation is too remote to impose liability. Here, Dela Torre chose to terminate her attorneys, rely on ChatGPT, and file 44 pro se motions. As courts have held in the social media context, where harm flows from “non-foreseeable third-party conduct,” liability should not attach. OpenAI would contend that user misuse – not system design – was the operative cause.

As generative AI becomes increasingly embedded in legal, financial, and medical workflows, courts will be forced to determine whether these systems function as tools, products, or service providers whose outputs fall outside traditional product liability frameworks. The Nippon Life suit may provide early insight into how those questions will be resolved.