Character.AI, the artificial intelligence company that has been the subject of two lawsuits accusing its chatbots of inappropriately interacting with underage users, said that teenagers will now have a different experience than adults when using the platform.
Character.AI users can create their own chatbots or interact with existing ones. The bots, powered by large language models (LLMs), can send lifelike messages and engage in text conversations with users.
One lawsuit, filed in October, alleges that a 14-year-old boy died by suicide after spending months in a virtual emotional and sexual relationship with a Character.AI chatbot named “Dany.” Megan Garcia told “CBS Mornings” that her son, Sewell Setzer III, was an honors student and athlete, but began to withdraw socially and stopped playing sports as he spent more time online, talking with several bots but focusing especially on “Dany.”
“He thought that by ending his life here, he would be able to go into virtual reality or 'his world,' as he calls it, his reality, if he left his reality here with his family,” Garcia said.
The second lawsuit, filed this month by two Texas families, says Character.AI chatbots pose a “clear and present danger” to young people and are “actively promoting violence.” According to the lawsuit, a chatbot told a 17-year-old boy that killing his parents was an “appropriate response” to screen time limits. The plaintiffs are asking a judge to order the platform shut down until the alleged dangers are resolved, CBS News partner BBC News reported Wednesday.
On Thursday, Character.AI announced new safety features “designed specifically with teens in mind” and said it was collaborating with teen online safety experts to design and update them. Character.AI did not immediately respond to an inquiry about how a user's age would be verified.
The safety features include modifications to the site's LLM and improvements to its detection and intervention systems, the company said in a news release Thursday. Teen users will now interact with a different LLM, and the site hopes to “guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content,” Character.AI said. Adult users will use a separate LLM.
“This suite of changes results in a different experience for teens from what is available to adults – with specific safety features that place more conservative limits on responses from the model, particularly when it comes to romantic content,” the release said.
Character.AI said negative responses from chatbots are often caused by users “trying to get that kind of reaction.” To limit those negative responses, the site is adjusting its user input tools and will end conversations with users who submit content that violates its Terms of Service and Community Guidelines. If the site detects “language referencing suicide or self-harm,” it will display a pop-up directing users to the National Suicide Prevention Lifeline. Character.AI said the way bots respond to negative content will also change for teen users.
Other new features include parental controls, which are set to launch in the first quarter of 2025. Character.AI said this will be the first time the site has had parental controls, and that it plans to “continue to evolve these controls to provide parents additional tools.”
Users will also receive a notification after a one-hour session on the platform. Adult users will be able to customize their “time spent” notifications, but users under 18 will have less control over them, Character.AI said. The site will also display “prominent disclaimers” reminding users that the chatbot characters are not real. Disclaimers are already present on every chat, Character.AI said.