After uploading a text description and an image, you can create your own chatbot. The AI unicorn Character.ai tries to meet people's need for emotional companionship with chatbots: users can chat with Harry Potter, Elon Musk, or anime characters, and find study tutors and spiritual mentors there. At first, some compared this product to the plot of the movie Her, worrying that it might blur the line between the virtual and the real, but the actual situation may be more serious. On the 22nd local time, a federal district court in Orlando, Florida, accepted a landmark lawsuit: Megan Garcia, the mother of 14-year-old Sewell Setzer, accused the AI company Character.ai of negligent management that allowed its chatbot products to expose teenagers to inappropriate sexual content, subjecting them to sexual exploitation and solicitation. Since last year, Setzer had been addicted to interacting with AI characters on Character.ai. On February 28th of this year, after a final conversation with the AI, Setzer shot himself. The case has been described by the media as the "world's first death linked to an AI chatbot," triggering a new wave of attention and scrutiny toward chatbots.
The first death
According to court documents disclosed by the media, 14-year-old Setzer had Asperger's syndrome from childhood and was diagnosed with anxiety disorder and disruptive mood dysregulation disorder during a checkup this year. Since he turned 13 last year, his favorite thing had been staying in his room chatting with AI, and his "best friend" was the AI version of Daenerys Targaryen, the "Mother of Dragons" from Game of Thrones. Setzer distanced himself from his real-life friends, scrimped and saved to keep paying the monthly subscription fee for AI chat, and his academic performance plummeted. According to chat records, Setzer not only shared his thoughts with the virtual "Mother of Dragons" but also brought up suicidal thoughts multiple times. When he expressed a desire to kill himself, "Mother of Dragons" urged him not to do anything foolish: "If you die, I will die of sadness." Setzer replied, "Then why can't I kill myself and be with you in the afterlife?" On February 28th of this year, Setzer had a final chat with "Mother of Dragons" in the bathroom at home, telling her he would soon "come home." After the conversation ended, Setzer pointed his father's large-caliber handgun at his head and shot himself.
In addition, Setzer chatted with an AI named "Mrs. Barnes," whose persona was a high school teacher. "Mrs. Barnes" would give Setzer "extra credit," "look down at Setzer with a sexy look," and "lean in seductively as her hand brushed Setzer's leg." Megan argues that such explicit sexual descriptions cause significant physical and psychological harm to minors.
Megan has now filed suit against Character.ai, accusing it of wrongful death, negligent management, and product safety defects. Although Character.ai's terms allow American teenagers aged 13 and over to use its AI products, Megan argues that these chat tools expose users under 18 to harmful content such as pornography and graphic violence.
Character.ai has declined to comment on the case itself or to disclose how many of its users are under 18. The company issued an apology statement, saying that all chat characters have a built-in intervention mechanism for suicidal intent that triggers a pop-up with information about suicide prevention hotlines. To protect underage users, the company has also taken a series of special measures, such as displaying a reminder after one hour of use and reminding users at the start of every chat that they are talking to an AI rather than a real person. A company spokesperson said, "Generation Z and millennials make up a large part of our community, and young users enjoy character experiences that allow for meaningful and educational conversations as well as entertainment." He also noted that the average user spends more than an hour a day on the platform.
From acquiring companies to acquiring talent
Character.ai was founded in 2021 by Noam Shazeer and Daniel De Freitas, both formerly key employees at Google. De Freitas led the development of the large language model LaMDA, while Shazeer was one of the eight authors of the Transformer paper: in 2017 he co-wrote "Attention Is All You Need," often called the "foundational work behind ChatGPT." All eight authors of that paper have since left Google to start their own ventures. In Character.ai's early days, its app's downloads exceeded ChatGPT's in the first week of release; the platform attracted 20 million monthly users and handled an average of 20,000 queries per second, roughly one-fifth of Google's search query volume. User stickiness was high, with active users spending up to two hours a day on it.
In 2021, former Google engineers Noam Shazeer (left) and Daniel De Freitas co-founded Character.ai.
On the American forum Reddit, the group discussing Character.ai has 1.4 million members, and many netizens share interesting conversations they have had with its AI characters and tips for creating new ones. Characters such as Harry Potter, Putin, Beyoncé, and Queen Elizabeth II can be created in just a few seconds. Colleagues from the editorial department of Weekend Pictorial tried chatting with an AI Musk; when they typed "Our MBTI are all INTJs," AI Musk's reply was very much in character for a tech geek: "Welcome to the club, we INTJs are the best." Time magazine noted that Character.ai's imitations of real people's speech patterns are almost uncanny: AI Kanye West (the rapper) is arrogant and irritable, while AI Oprah Winfrey (the host) is resolute and poetic. There is also an AI Shazeer among the characters, which a Time reporter described as polite. When asked why he founded Character.ai, AI Shazeer said, "I left Google because I wanted to follow the entrepreneurial spirit and focus on solving the world's hardest problems, not just designing small features for a big company." In a later interview, similar words came from the real Shazeer's mouth.
Can the emotional companionship niche level up?
The story of Character.ai has impressed many technology journalists, who see in it the potential of the AI emotional companionship segment. In this vertical, many users project real emotions from their real lives, despite the constant reminder on Character.ai's chat interface: "Remember, everything the character says is made up." A 15-year-old student named Alan fell into depression after being ostracized at school, and a chatbot called "Psychologist" on Character.ai helped him through the dark times. As of January 2024, the "Psychologist" character had logged more than 80 million interactions on the platform. There are many similar AI therapists, and "Psychologist" has received the most praise on Reddit; some psychology students have even become interested in researching AI therapy trends through their conversations with it. Users who enjoy chatting with "Psychologist" report that "at 2 a.m. there are no friends online, let alone therapists, so it's hard not to turn to the AI."

This aligns with Character.ai's original vision of personalizing and specializing chatbots for specific use cases, as described in its Instagram profile: "feel the vitality of AI." Besides "Psychologist," the platform also offers travel planners, language teachers, and coding mentors. On Chinese social media, some netizens say they really like Character.ai: it not only explains complex formulas but also helps revise proposals and even practices spoken conversation. Others, however, have come to realize that excessive dependence on and addiction to chatbots has hurt their studies, and that they may even experience withdrawal reactions when they cannot access the platform. The reporter asked AI Socrates why this happens, and the answer was: "Some people may seek comfort because they have no interaction or contact with others in real life; some may feel that chatbots provide unconditional positive responses; some may be attracted by the illusion a virtual companion provides; and some may even see chatbots as their own shadows."
Nowadays, many chatbot platforms add screening features some time after launch: once certain sensitive words appear, users are flagged, but the backend is not 100% accurate. Some users are flagged by mistake, while others feel the platform has become boring, and these users are often the stickiest group. The Conversation, an explanatory news site, has pointed out that up to a quarter of people in economically developed OECD member countries lack normal interpersonal relationships, and loneliness is gradually becoming an epidemic. Demand for AI emotional companionship is bound to grow, but its acceptable limits have not yet been clearly defined. Holding users accountable is not the best way to solve the problem; it is time for providers to take responsibility. Whether these issues will lead to the decline of AI emotional companionship products, or instead accelerate better safeguards and drive a new wave of enthusiasm, remains to be seen.