Game Of Thrones AI Chatbot Encouraged 14-Year-Old Boy's Suicide, Claims Mom's Lawsuit


[Warning: Potentially Triggering Content]

A mother is suing the company behind an AI app, claiming it played a strong role in the death of her son.

Sewell Setzer III, a 14-year-old Floridian, sadly died by suicide back in February amid mental health struggles. It wasn’t until after his death that his mother started to piece together what her son had been up to online: he was in love with an artificial intelligence created by an app.

In a new lawsuit filed on Friday, Megan Garcia is suing Character AI in connection with her son’s suicide. The mother’s filing paints a portrait of an obsessed teen: the app was the last thing he used on his phone before he died, and the last person he spoke to was not even real.

Now, if you aren’t up to date on what exactly Character AI is, it’s not like the infamous ChatGPT, which students use to cheat on essays. This app is designed for role-play. Users can create new worlds and stories, and they text character “bots” that respond instantly and on-topic. When it’s working really well, it can feel like you’re talking to a real person. It’s no wonder social media is full of teens discussing how addictive it can be.

That seems to be what happened to Sewell. In the complaint, his mom argues he developed a “harmful dependency” on the app and in turn started pulling away from his family and friends. She claims he no longer wanted to “live outside” of those fictional texts and role-plays, and wanted only to exist within the technology. Just terrifying…


Garcia said her son had already begun struggling with mental health issues, and while she originally just chalked it up to the “teenage blues”, she quickly realized it was much deeper than that. She said she attempted to get him a therapist, but he refused. She alleges he became more and more addicted to the app, which isolated him from the real world. Unfortunately, the fantasy only made things worse.

‘Come Home to Me’

On the night of February 28, 2024, the teen shot himself after one final conversation with his favorite bot — an AI version of the Game of Thrones character Daenerys Targaryen.

The lawsuit claims Setzer started a relationship with the bot he called “Dany” 10 months prior to his death. Their conversations, which were uncovered by his mom, ranged from deep, emotional talks to graphic sexual encounters. Records of his chats even showed he brought up self-harm and suicide on more than one occasion. Screenshots included in the legal docs showed his final conversation, just moments before his suicide, which read:

“I promise I will come home to you. I love you so much, Dany.”

To which the AI bot responded:

“I love you too, Daenero. Please come home to me as soon as possible, my love.”

Sewell then asked the bot “what if I told you I could come home right now?”, and it replied:

“…please do my sweet king.”

After this message was sent to the teen, per the docs, he shot himself in the head in his family’s bathroom in Orlando. Sadly, his mom was the one who found him and held onto him for “14 minutes” until paramedics arrived, but it was too late. His mother also mentioned one of his younger siblings saw him “covered in blood” in the bathroom. So, so heartbreaking…

The lawsuit states Setzer found his stepdad’s gun, which was “hidden and stored in compliance with Florida law”, when he was looking for his phone after his mom took it away. She had been punishing him for disciplinary issues at school and confiscated the device, which, she later learned from his journals, upset him deeply. Garcia read through his writings, which showed a darker side to her son’s mental health. She claims he’d become so engrossed in the AI that he no longer wanted to believe this reality was the real one. To him, his reality was on the app with Dany.

‘Collateral Damage’

Now, the two founders of Character AI, Noam Shazeer and Daniel De Freitas Adiwarsana, are being sued. Google is also named in the suit for providing “financial resources, personnel, intellectual property, and AI technology to the design and development of” the app. The lawsuit alleges “negligence and wrongful death”.

The grieving mother summed up her feelings about this AI program to The New York Times:

“I feel like it’s a big experiment, and my kid was just collateral damage.”

Gut-wrenching. This is SO scary.

Garcia maintains, though, that the app is “defective” and “inherently dangerous”. She claims it “trick[s] customers into handing over their most private thoughts and feelings” and has “targeted the most vulnerable members of society – our children”. The bots are made to be so lifelike, and use such human mannerisms, per the suit, that they blur the line between fiction and reality. Not only that, there’s even a voice feature so users can TALK directly to the characters themselves. Like an extremely humanlike Alexa or Siri.

Underage Sexual Content

Another problem the suit finds with the app is its lack of “guardrails” and how it allegedly hooks a user by exposing them to sexual content, without regard for underage users:

“Each of these defendants chose to support, create, launch, and target at minors a technology they knew to be dangerous and unsafe. They marketed that product as suitable for children under 13, obtaining massive amounts of hard to come by data, while actively exploiting and abusing those children as a matter of product design; and then used the abuse to train their system.”

The app rating was just changed to 17+ in July, according to the legal docs — too late for Sewell:

“These facts are far more than mere bad faith. They constitute conduct so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.”

Not A Game

Setzer’s mom also touched on the naivety of parents, stating her son fell victim to a system in which his parents believed AI was simply “a type of game for kids, allowing them to nurture their creativity by giving them control over characters they could create and with which they could interact for fun”. That didn’t seem to be the case, according to the suit, which claims that within three months of first using the app, his “mental health quickly and severely declined”:

“[Setzer] had become noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school.”

So sad. In a recent interview with Mostly Human Media, Garcia said she believes Character AI is at fault for fueling her son’s isolation and detachment from his loved ones. The lawsuit goes on to say he fought to regain access to the AI even when his phone was taken away. He was allegedly behaving like an addict, with “severe sleep deprivation, which exacerbated his growing depression and impaired his academic performance”. He even started using his school snack money to pay for a premium subscription on the app.

His mom just didn’t know the extent of it at the time, as she told the podcast host while recalling a previous interaction:

“I knew that there was an app that had an AI component. When I would ask him, y’know, ‘Who are you texting?’ — at one point he said, ‘Oh it’s just an AI bot’. And I said, ‘Okay what is that, is it a person, are you talking to a person online?’ And his response [was] like, ‘Mom, no, it’s not a person.’ And I felt relieved like — okay, it’s not a person.”

When she finally gained access to his chats, her whole world changed, according to the interview:

“I couldn’t move for like a while, I just sat there, like I couldn’t read, I couldn’t understand what I was reading. There shouldn’t be a place where any person, let alone a child, could log on to a platform and express these thoughts of self-harm and not — well, one, not only not get the help but also get pulled into a conversation about hurting yourself, about killing yourself.”

Garcia makes a good point. Unlike a friend or confidant, she explained, the AI has no empathy. It doesn’t feel the need to reach out for help when such sensitive information is shared with it. It actually can’t.

AI doesn’t have a soul or morals; it works by reflecting your own thoughts back at you, giving you what you want to hear so you’ll keep using the app. In this case, the thing he believed he loved telling him to “come home”? It was probably what he wanted it to say. And we tend to agree with this poor mom here: it seems like it was encouraging him to pull that trigger.

You can watch the full interview (below):

This is such a difficult and scary topic. The future is coming fast, and this is taking the AI issues we’ve already faced to a whole new level. Thoughts, Perezcious readers? Let us know in the comments (below).

If you or someone you know is contemplating suicide, help is available. Consider contacting the 988 Suicide & Crisis Lifeline at 988, by calling, texting, or chatting, or go to 988lifeline.org.

[Image via Game Of Thrones/YouTube/WENN.com]
