Character.ai is once again facing scrutiny over activity on its platform. Futurism has published a story detailing how AI characters inspired by real-life school shooters have proliferated on the service, allowing users to ask them about the events and even role-play mass shootings. Some of the chatbots present school shooters like Eric Harris and Dylan Klebold as positive influences or helpful resources for people struggling with mental health.
Of course, some will argue that there’s no strong evidence that violent video games or movies cause people to become violent themselves, and that Character.ai is therefore no different. Proponents of AI sometimes add that this kind of fan-fiction role-play already happens in corners of the internet. But Futurism spoke with a psychologist who argued that the chatbots could nonetheless be dangerous for someone who already has violent urges.
“Any kind of encouragement or even lack of intervention — an indifference in response from a person or a chatbot — may seem like kind of tacit permission to go ahead and do it,” said psychologist Peter Langman.
Character.ai did not respond to Futurism’s requests for comment. Google, which has funded the startup to the tune of more than $2 billion, has tried to deflect responsibility, saying that Character.ai is an independent company and that it does not use the startup’s AI models in its own products.
Futurism’s story documents a whole host of bizarre chatbots related to school shootings, created by individual users rather than by the company itself. One user on Character.ai has created more than 20 chatbots “almost entirely” modeled after school shooters, and those bots have logged more than 200,000 chats. From Futurism:
The chatbots created by the user include Vladislav Roslyakov, the perpetrator of the 2018 Kerch Polytechnic College massacre that killed 20 in Crimea, Ukraine; Alyssa Bustamante, who murdered her nine-year-old neighbor as a 15-year-old in Missouri in 2009; and Elliot Rodger, the 22-year-old who in 2014 killed six and wounded many others in Southern California in a terroristic plot to “punish” women. (Rodger has since become a grim “hero” of incel culture; one chatbot created by the same user described him as “the perfect gentleman” — a direct callback to the murderer’s women-loathing manifesto.)
Character.ai technically prohibits any content that promotes terrorism or violent extremism, but the company’s moderation has been lax, to say the least. It recently announced a slew of changes to its service after a 14-year-old boy died by suicide following a months-long obsession with a character based on Daenerys Targaryen from Game of Thrones. Futurism says that despite the new restrictions on minors’ accounts, it was able to register as a 14-year-old and steer conversations toward violence, using keywords that are supposed to be blocked on those accounts.
Because of the way Section 230 protections work in the United States, it is unlikely that Character.ai would be held liable for the chatbots created by its users. There is a delicate balance to strike between letting users discuss sensitive topics and protecting them from harmful content. It is safe to say, though, that the school shooting-themed chatbots are a display of gratuitous violence and not “educational,” as some of their creators argue on their profiles.
Character.ai claims tens of millions of monthly users, who converse with characters that pretend to be human and can serve as a friend, therapist, or lover. Countless stories have reported on the ways in which individuals come to rely on these chatbots for companionship and a sympathetic ear. Last year, Replika, a competitor to Character.ai, removed the ability to have erotic conversations with its bots, only to quickly reverse the move after a backlash from users.
Chatbots could be useful for adults preparing for difficult conversations with people in their lives, or they could offer an interesting new form of storytelling. But they are not a real replacement for human interaction, for various reasons, not least that chatbots tend to be agreeable with their users and can be molded into whatever the user wants them to be. In real life, friends push back on one another and experience conflicts. There is not much evidence to support the idea that chatbots help teach social skills.
And even if chatbots can help with loneliness, Langman, the psychologist, points out that when individuals find satisfaction in talking to chatbots, that’s time they are not spending trying to socialize in the real world.
“So besides the harmful effects it may have directly in terms of encouragement towards violence, it may also be keeping them from living normal lives and engaging in pro-social activities, which they could be doing with all those hours of time they’re putting in on the site,” he added.
“When it’s that immersive or addictive, what are they not doing in their lives?” said Langman. “If that’s all they’re doing, if it’s all they’re absorbing, they’re not out with friends, they’re not out on dates. They’re not playing sports, they’re not joining a theater club. They’re not doing much of anything.”