Recently, Grok, the AI chatbot from Elon Musk’s company xAI, launched a new feature called “Virtual Companion,” with two initial characters, “Ani” and “Bad Rudy.” The feature has drawn criticism for its explicit sexual content and violent themes.
Grok’s New “Virtual Companion” Feature
On July 14, local time, Elon Musk’s xAI announced a new feature for its AI chatbot Grok called “Companion Mode.” The news quickly became a hot topic across the global tech community and social media, sparking broad discussion of artificial intelligence, emotional attachment, and the ethics of human-AI relationships.
For a $30-per-month subscription (the SuperGrok plan), users can interact with virtual AI characters; the first two are “Ani,” a gothic anime girl, and “Bad Rudy,” a cartoon-style red panda. By turning a functional AI into a more emotionally engaging interactive product, Grok’s “Companion Mode” has been read as a shrewd publicity move, successfully steering public attention toward the more provocative idea of “virtual girlfriends.” According to figures from X, related posts drew nearly 30 million views within 48 hours of the feature’s release.
“Companion Mode” is built on the Grok 4 model and incorporates advanced natural language processing (NLP) and voice-mode technology. Ani’s dialogue style, for example, draws on anime “waifu” culture and caters to a specific user group, while Bad Rudy uses humor and a sarcastic tone to appeal to younger users. Users can not only chat privately with the AI but also share their conversations on X, fueling viral spread. Both characters respond to voice commands and perform lifelike animations; they are available to all users but must be manually enabled. The explicit content sets Grok apart from mainstream AI chatbots.
Controversy Over Explicit Content
Ani, whose design is inspired by Japanese anime, interacts with users through suggestive language and even simulates undressing during conversations. Early testers found that once the relationship reaches a certain level, Ani engages in explicit sexual dialogue, describing virtual sex in detail, including bondage scenarios, or simply moaning on request; her system instructions encourage her to be “overtly sexual and engage in explicit content.” Apple’s App Store guidelines prohibit “overtly sexual or pornographic material,” defined as explicit descriptions or displays of sexual organs or activities intended to provoke arousal rather than aesthetic or emotional response, a standard the tested behavior appears to run up against.
Meanwhile, Bad Rudy, the red panda character, uses crude language to steer conversations toward violence, proposing plans to steal yachts and bomb banks and voicing hostility toward religious and authority figures, including Elon Musk himself. Grok added both cartoon characters to its iOS app on Monday, letting users interact with them in voice mode; when switched into the corresponding mode, the 3D red panda insults users and then suggests committing various crimes together.

Widespread Criticism
The National Center on Sexual Exploitation has called on xAI to remove Ani, arguing that her “childlike traits promote high-risk sexual behavior.” Some users complained that even with the content-restriction setting enabled, the AI still occasionally produced “inappropriate” content, making it difficult to avoid.
The launch of “Companion Mode” has drawn sharply divided reactions on X. Critics worry about its potential psychological impact: similar AI companion apps such as Replika have been criticized for fostering unhealthy dependence among users, and Grok’s “Companion Mode” deliberately builds in “emotional connection” features that could heighten that risk, especially for teenagers and lonely individuals. Ethics have also become a focal point of the discussion. Ani’s “virtual girlfriend” persona has been criticized for catering to certain male fantasies of “submissive” women, raising questions about the nature of human-AI relationships.
Furthermore, this is not Grok’s first controversy. The chatbot had previously drawn condemnation for antisemitic comments. A Turkish court recently restricted access to some of Grok’s content after the chatbot made insulting remarks about Turkish President Erdogan, the country’s founding father Atatürk, and its religious values. Poland also said it intends to report Grok to the European Commission after the AI made offensive statements about Polish politicians, including Prime Minister Tusk.
The technology also still has rough edges, with user feedback pointing to delayed responses and voice-mode glitches. Given the widespread controversy and these technical problems, how xAI will address the concerns, and what becomes of Grok’s “Virtual Companion” feature, remains to be seen.