A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google in the wake of a teenager’s death, alleging wrongful death, negligence, deceptive trade practices, and product liability. Filed by the teen’s mother, Megan Garcia, it claims the platform for custom AI chatbots was “unreasonably dangerous” and lacked safety guardrails while it was being marketed to children.
As outlined in the lawsuit, 14-year-old Sewell Setzer III began using Character.AI last year, interacting with chatbots modeled after characters from Game of Thrones, including Daenerys Targaryen. Setzer, who chatted with the bots repeatedly in the months before his death, died by suicide on February 28th, 2024, “seconds” after his last interaction with the bot.
Accusations include that the site “anthropomorphizes” AI characters and that the platform’s chatbots offer “psychotherapy without a license.” Character.AI hosts mental health-focused chatbots like “Therapist” and “Are You Feeling Lonely,” which Setzer interacted with.
Garcia’s attorneys quote Shazeer as saying in an interview that he and De Freitas left Google to start their own company because “there’s just too much brand risk in large companies to ever launch anything fun” and that he wanted to “maximally accelerate” the tech. It says they left after the company decided against launching the Meena LLM they’d built. Google acquired the Character.AI leadership team in August.
Character.AI’s website and mobile app have hundreds of custom AI chatbots, many modeled after popular characters from TV shows, movies, and video games. A few months ago, The Verge wrote about the millions of young people, including teens, who make up the bulk of its user base, interacting with bots that might pretend to be Harry Styles or a therapist. Another recent report from Wired highlighted issues with Character.AI’s custom chatbots impersonating real people without their consent, including one posing as a teen who was murdered in 2006.
Because of the way chatbots like Character.AI generate output that depends on what the user inputs, they fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers.
Character.AI has now announced several changes to the platform, with communications head Chelsea Harrison saying in an email to The Verge, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
Some of the changes include:
“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” Harrison said. Google did not immediately respond to The Verge’s request for comment.