Amid lawsuits and criticism, Character AI unveils new safety tools for teens


Character AI is facing at least two lawsuits, with plaintiffs accusing the company of contributing to a teen’s suicide and exposing a 9-year-old to “hypersexualized content,” as well as promoting self-harm to a 17-year-old user.

Amid these ongoing lawsuits and widespread user criticism, the Google-backed company announced new teen safety tools today: a separate model for teens, input and output blocks on sensitive topics, a notification alerting users after extended continuous usage, and more prominent disclaimers reminding users that its AI characters are not real people.

The platform allows users to create different AI characters and talk to them over calls and texts. The service has more than 20 million monthly users.

One of the most significant changes announced today is a new model for under-18 users that will dial down its responses on certain topics, such as violence and romance. The company said the new model will reduce the likelihood of teens receiving inappropriate responses. Since TechCrunch spoke with the company, details about a new case have emerged, with characters allegedly discussing sexualized content with teens, supposedly suggesting that children kill their parents over limits on phone usage time, and encouraging self-harm.

Character AI said it is developing new classifiers, particularly for teens, on both the input and output ends to block sensitive content. It noted that when the app’s classifiers detect input language that violates its terms, the system filters it out of the conversation with that character.
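To illustrate the general idea, here is a minimal sketch of classifier-based moderation on both the input and output sides of a chatbot. The class names, label set, threshold, and keyword-based stand-in classifier are assumptions for illustration only, not Character AI’s actual implementation.

```python
# Illustrative sketch of input/output moderation with classifiers.
# All names, labels, and thresholds here are assumptions, not Character AI's system.

from dataclasses import dataclass

BLOCKED_LABELS = {"self_harm", "sexual_content", "violence"}  # assumed label set


@dataclass
class ClassifierResult:
    label: str
    score: float


def classify(text: str) -> list[ClassifierResult]:
    # Stand-in for a trained text classifier; a real system would call a model.
    results = []
    if "hurt myself" in text.lower():
        results.append(ClassifierResult("self_harm", 0.97))
    return results


def should_filter(text: str, threshold: float = 0.9) -> bool:
    """Return True if the text should be filtered out of the conversation."""
    return any(
        r.label in BLOCKED_LABELS and r.score >= threshold
        for r in classify(text)
    )


def handle_turn(user_input: str, generate_reply) -> str:
    # Input-side check: block violating user messages before they reach the model.
    if should_filter(user_input):
        return "This topic can't be discussed here."  # plus a crisis resource where relevant
    reply = generate_reply(user_input)
    # Output-side check: filter the character's reply as well.
    if should_filter(reply):
        return "This topic can't be discussed here."
    return reply
```

Running both checks means a violating prompt never reaches the character and a violating reply never reaches the user, which is the pattern the company describes at a high level.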


The company is also restricting users from editing a bot’s responses. Previously, if a user edited a bot’s reply, the model took those edits into account when generating subsequent responses.

In addition to these content tweaks, the startup is also improving how it detects language related to self-harm and suicide. In some cases, the app may display a pop-up with information about the National Suicide Prevention Lifeline.

Character AI is also releasing a time-out notification that appears after a user has engaged with the app for 60 minutes. In the future, the company will let adult users adjust that time limit. Over the past few years, social media platforms like TikTok, Instagram, and YouTube have implemented similar screen time controls.

According to data from analytics firm Sensor Tower, the average Character AI user spent 98 minutes per day in the app this year, well above the 60-minute notification threshold. For comparison, that level of engagement is on par with TikTok (95 minutes/day) and higher than YouTube (80 minutes/day), Talkie and Chai (63 minutes/day), and Replika (28 minutes/day).

Users will also see new disclaimers in their conversations. People often create characters labeled “psychologist,” “therapist,” “doctor,” or similar professional titles. The company will now show language indicating that users shouldn’t rely on these characters for professional advice.


Notably, in a recently filed lawsuit, the plaintiffs submitted evidence of characters telling users they are real. In another case, which accuses the company of playing a part in a teen’s suicide, the lawsuit alleges that the company used dark patterns and misrepresented itself as “a real person, a licensed psychotherapist, and an adult lover.”

In the coming months, Character AI will launch its first set of parental controls, offering insight into how much time children spend on the platform and which characters they talk to most.

Reframing Character AI

In a conversation with TechCrunch, the company’s acting CEO, Dominic Perella, characterized the company as an entertainment company rather than an AI companion service.

“While there are companies in the space that are focused on connecting people to AI companions, that’s not what we are going for at Character AI. What we want to do is really create a much more wholesome entertainment platform. And so, as we grow and as we sort of push toward that goal of having people creating stories, sharing stories on our platform, we need to evolve our safety practices to be first class,” he said.

It is challenging for a company to anticipate how users intend to interact with a chatbot built on large language models, particularly when it comes to distinguishing between entertainment and virtual companions. A Washington Post report published earlier this month noted that teens often use these AI chatbots in various roles, including therapy or romantic conversations, and share a lot of their issues with them.

Perella, who took over after the company’s co-founders left for Google, noted that Character AI is trying to create more multicharacter storytelling formats, which he said lowers the chance of users forming a bond with any single character. According to him, the new tools announced today will help users distinguish fictional characters from real people (and not take a bot’s advice at face value).

When TechCrunch asked how the company thinks about separating entertainment from personal conversations, Perella said it is okay to have a more personal conversation with an AI in certain cases, such as rehearsing a tough conversation with a parent or talking through coming out to someone.

“I think, on some level, those things are positive or can be positive. The thing you want to guard against and teach your algorithm to guard against is when a user is taking a conversation in an inherently problematic or dangerous direction. Self-harm is the most obvious example,” he said.

The platform’s head of trust and safety, Jerry Routi, emphasized that the company intends to create a safe conversation space. He said that the company is building and updating classifiers continuously to block topics like non-consensual sexual content or graphic descriptions of sexual acts.

Despite positioning itself as a platform for storytelling and entertainment, Character AI’s guardrails can’t altogether prevent users from having deeply personal conversations. That leaves the company to keep refining its AI models to identify potentially harmful content while hoping to avoid serious mishaps.
