Gurugram: The imperative for safety and regulation in artificial intelligence (AI) must be integrated into platforms from the outset rather than treated as an afterthought, emphasized Karandeep Anand, CEO of Character.AI, during a recent interview. Speaking on the sidelines of the Synapse India Conclave on February 21, 2025, Anand highlighted the responsibility AI and tech companies bear in ensuring user safety, particularly in interactive environments like Character.AI. The two-day conference, founded by journalist Shoma Chaudhury in 2023, aimed to address various facets of technology and its societal impacts.
Character.AI allows users to engage with chatbots, participate in AI-driven games, and roleplay as fictional characters. Anand pointed out a fundamental distinction between AI and traditional social media: “The fundamental difference between AI and social media is that on social media platforms, you were sure of the fact that it was humans on the other side. On AI platforms, it’s just an infinite number of bots on the other end.” Before joining Character.AI, Anand was the vice president and head of business products at Meta.
Since Anand’s appointment as CEO less than a year ago, Character.AI has faced significant challenges, including a wrongful death lawsuit filed after an American teenager, who had engaged in sexual and romantic conversations with a Character.AI chatbot, died by suicide in 2024. The company has also drawn criticism over violent content, the use of deceased personalities as character options, and privacy issues.
In response to these challenges, Anand’s leadership has initiated several safety measures, such as prohibiting use of Character.AI by individuals under 18. “Someone needs to take the lead in AI safety,” he asserted. “Companies can’t just say that their competition isn’t doing it, so they won’t do it. We at Character.AI took the step of imposing restrictions for customer safety, and Meta followed suit. Someone needs to set the bold example.”
Character.AI aims to nurture communities and fandoms, reflecting a shift in user expectations. Anand noted, “The new generation is not happy just passively consuming content. They want to interact with it, create it, and engage with it. That’s what our platform serves.” However, he acknowledged the complexities surrounding emotional connections users can form with AI characters. “Even when we watch a TV show or movie, we could become attached and develop parasocial relationships with the characters in it. That’s the nature of our engagement with content,” he explained.
Yet Anand cautioned that the perceived reciprocity of interactions with AI can foster deeper attachments, underscoring the need for robust safety regulations. He stressed that while government oversight is crucial, it is equally important for companies to take proactive measures. “The issue with policy is that AI is evolving so fast that by the time you develop the policy, the AI landscape has changed beyond it,” Anand added. This dual responsibility of companies and governments presents a complex challenge as the technology evolves.
The conversation around AI safety and regulation is gaining traction as both industry leaders and policymakers grapple with the implications of increasingly sophisticated AI systems. As platforms like Character.AI redefine user interaction and emotional engagement, the call for integrated safety measures will likely become more pressing. As the technology landscape evolves, the stakes for user safety will only increase, prompting a broader discussion on how best to navigate the challenges posed by AI.