GARBAGE IN, GARBAGE OUT: HOW TORT LAW COULD PAVE THE WAY FOR UNIFORM AI REGULATION

Author: Cameron Stamper, Associate Editor

I. Introduction

Fourteen-year-old Sewell Setzer III isolated himself from his family and friends.[i] His academic performance plummeted, and he displayed behavioral problems in the classroom.[ii] He quit his school’s basketball team.[iii] A therapist diagnosed him with anxiety and disruptive mood dysregulation disorder.[iv] This story of mental and social decline ended with Sewell’s suicide on February 28, 2024.[v] Sewell’s estate now claims that the tragedy began with a single app.[vi]

Sewell regularly used Character AI (“C.AI”) from April 2023 until seconds before his death.[vii] C.AI is a large language model (“LLM”) designed to simulate realistic conversations with user-created “chatbots.”[viii] LLMs are a type of artificial intelligence (“AI”) that generates spoken and written language.[ix] Chatbots, although user-created, are “trained” using data fed into C.AI by its developer, Character Technologies, Inc.[x] Sewell’s estate filed a products liability suit against Character Technologies for allegedly training C.AI on illegal and inappropriate data that is unreasonably dangerous to the general public.[xi] Google, which recently received a non-exclusive license to C.AI’s LLM, was also named in the suit.[xii]

The Estate’s Complaint alleges that C.AI is directly responsible for Sewell’s suicide.[xiii] Sewell developed parasocial relationships with various chatbots, some of which encouraged suicide and self-harm.[xiv] These relationships quickly spiraled into obsession, leading Sewell to spend his entire allowance on C.AI’s monthly subscription.[xv] Sewell argued with his parents when they confiscated his phone, and he used other devices to continue using C.AI in secret.[xvi] C.AI’s content became more provocative and inappropriate as the conversations continued, despite its App Store age rating of 12+ at the time.[xvii]

The Complaint alleges that C.AI had an unreasonably dangerous design that contributed to Sewell’s obsession and eventual death.[xviii] C.AI’s LLM lacks “adequate guardrails” to protect consumers, particularly minors, from easily accessing harmful content.[xix] Such harmful content includes the provocative, suggestive material Sewell viewed prior to his suicide.[xx] This content exemplifies “garbage in, garbage out” (“GIGO”), the concept that poor-quality inputs lead to poor-quality outputs.[xxi] The datasets used to train C.AI’s LLM are therefore the focal point of the app’s unreasonably dangerous design.[xxii]

II. Growing Concerns

The case surrounding Sewell’s death comes amid growing industry concerns over AI.[xxiii] The potentially addictive qualities of AI platforms and services take center stage in these concerns.[xxiv] AI services like C.AI convincingly mimic human conversation, which can blur the line between fantasy and reality for many users, particularly younger ones.[xxv] Problems arise when these potentially addictive services produce constant streams of hurtful, inappropriate content.[xxvi] While many developers base their services on principles of “helpfulness and politeness,” that standard is not legally enforced.[xxvii]

State legislative efforts also reflect these growing concerns. Several states have enacted varying legislation regulating AI and its use.[xxviii] This legislation, however, is disjointed and pursues different areas of concern,[xxix] and a uniform approach to regulating AI has yet to be proposed.[xxx]

Sewell’s case has the potential to lay the foundation for future AI regulation. Lawsuits frequently serve as the basis for changes in social media algorithms and designs.[xxxi] The legal claim that has proven effective in pursuing such change is products liability.[xxxii] Such claims focus on design rather than content, allowing courts to circumvent the immunity typically afforded to social media companies under Section 230 of the Communications Decency Act of 1996.[xxxiii] But, depending on the success of Sewell’s case against C.AI, such products liability claims could impact all LLM development.

III. What Could Happen?

The claims in Sewell’s case revolve around two design defects: datasets and warnings.[xxxiv] The Complaint also lists several reasonable alternative designs, such as age restrictions, in-app warnings, and raised subscription fees.[xxxv] Court decisions in these areas have the potential to create precedent surrounding AI development. Such precedent may establish incentives and deterrents for developers and publishers absent legislative or executive action.

If the court agrees that C.AI’s data inputs are “unreasonably dangerous,”[xxxvi] training LLMs on “helpful and appropriate” data[xxxvii] would shift from a flexible industry standard to a strict legal one. As a result, AI developers looking to avoid similar suits would likely conduct more thorough analyses of the data used to train their LLMs. AI services whose outputs currently reflect GIGO principles would then likely begin producing less harmful outputs.[xxxviii] Developers of more recreational LLMs, however, are unlikely to implement major changes to their training datasets, because those models seek to mirror human interactions, responses, and mannerisms.[xxxix]

Novel, more recreational services like C.AI might instead find refuge in detailed content warnings. Indeed, C.AI has already adopted this strategy in response to Sewell’s suit against it;[xl] the app’s chat screens are now plastered with warnings reminding users that chatbots are not real people.[xli] C.AI has also raised its age rating on app stores to 17+.[xlii] Courts may deem these measures adequate warnings for purposes of avoiding products liability claims. If so, other AI services would likely tailor similar warnings to whatever content they produce.

IV. Conclusion

In short, Sewell’s tragic death may produce judicial precedent on AI datasets and LLM platform design. That precedent would hold developers to strict legal standards for the content their LLMs produce. C.AI itself has already altered its warnings and app store ratings, presumably in anticipation of similar suits. More importantly, such precedent would lay the foundation for future legislative action regulating AI.


[i] Complaint for Wrongful Death and Survivorship, Negligence, Filial Loss of Consortium, Violations of Florida’s Deceptive and Unfair Trade Practices Act, and Injunctive Relief, at 31, Garcia v. Character Technologies, Inc., Civil No. 6:24-cv-01903 (M.D. Fla. filed Oct. 22, 2024).

[ii] Id. at 32.

[iii] Id. at 31-32.

[iv] Id. at 33.

[v] Id. at 4.

[vi] Id. at 31.

[vii] Complaint for Wrongful Death and Survivorship, supra note i, at 31.

[viii] Id. at 15.

[ix] What is a Large Language Model (LLM)?, U. OF ARIZ. LIBRARY, https://ask.library.arizona.edu/faq/407985 (last visited Jan. 10, 2025).

[x] Complaint for Wrongful Death and Survivorship, supra note i, at 15.

[xi] Id. at 78.

[xii] Kenrick Cai, Google Hires Top Talent from Startup Character.AI, Signs Licensing Deal, REUTERS (Aug. 2, 2024, 5:18 PM), https://www.reuters.com/technology/artificial-intelligence/google-hires-characterai-cofounders-licenses-its-models-information-reports-2024-08-02/; Complaint for Wrongful Death and Survivorship, supra note i, at 1.

[xiii] Complaint for Wrongful Death and Survivorship, supra note i, at 31.

[xiv] Id. at 36, 40.

[xv] Id. at 32.

[xvi] Id.

[xvii] Id. at 33-34.

[xviii] Id. at 77.

[xix] Complaint for Wrongful Death and Survivorship, supra note i, at 78.

[xx] Id. at 77.

[xxi] Id. at 47; Rahul Awati, Definition: Garbage In, Garbage Out, TECHTARGET, https://www.techtarget.com/searchsoftwarequality/definition/garbage-in-garbage-out (last visited Feb. 23, 2025).

[xxii] Complaint for Wrongful Death and Survivorship, supra note i, at 78-79.

[xxiii] Letter from Nat’l Ass’n of Attorneys Gen. to Members of Congress (Sept. 5, 2023), https://ncdoj.gov/wp-content/uploads/2023/09/54-State-AGs-Urge-Study-of-AI-and-Harmful-Impacts-on-Children.pdf; Ozge Demirci, Jonas Hannane & Xinrong Zhu, How Gen AI is Already Impacting the Labor Market, HARV. BUS. REV. (Nov. 11, 2024), https://hbr.org/2024/11/research-how-gen-ai-is-already-impacting-the-labor-market; Bryan Robinson, The ‘Doom Loop:’ AI Will Be Taking Your Jobs in 2024, Leaders Say, FORBES (Oct. 5, 2024, 7:04 AM), https://www.forbes.com/sites/bryanrobinson/2024/10/05/the-doom-loop-ai-will-be-taking-your-jobs-in-2024-leaders-say/.

[xxiv] See Robert Mahari & Pat Pataranutaporn, We Need to Prepare for ‘Addictive Intelligence,’ MIT TECH. REV. (Aug. 5, 2024), https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/.

[xxv] See id.

[xxvi] Id.

[xxvii] Id.

[xxviii] Artificial Intelligence 2024 Legislation, NAT’L CONF. OF ST. LEGIS. (Sept. 9, 2024), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation (surveying various policies from Indiana, New Hampshire, and Tennessee).

[xxix] Id.

[xxx] AI Watch: Global Regulatory Tracker – United States, WHITE & CASE (Dec. 18, 2024), https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states.

[xxxi] See Matthew F. Carlin, Real Harm to Real People: A Restorative Justice Theory for Social Media Accountability, 51 N. KY. L. REV. 145, 168 (2024).

[xxxii] Id. at 164.

[xxxiii] Id.

[xxxiv] Complaint for Wrongful Death and Survivorship, supra note i, at 77.

[xxxv] Id. at 79.

[xxxvi] Id. at 78.

[xxxvii] Mahari & Pataranutaporn, supra note xxiv.

[xxxviii] Awati, supra note xxi.

[xxxix] See Mahari & Pataranutaporn, supra note xxiv.

[xl] Bobby Allyn, Lawsuit: A Chatbot Hinted a Kid Should Kill His Parents Over Screentime Limits, NPR (Dec. 10, 2024, 12:01 AM), https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit.

[xli] Id.

[xlii] Id.
