
India Orders X to Revise Grok AI Tool Following Obscene Content Complaints

India mandates Elon Musk’s X to revamp its Grok AI after users report obscene content, threatening legal immunity if compliance fails.

India has ordered Elon Musk’s X to implement immediate technical and procedural changes to its AI chatbot Grok following concerns from users and lawmakers over the generation of “obscene” content, including AI-altered images of women. On Friday, the Indian Ministry of Information Technology directed X to restrict the creation of content involving “nudity, sexualization, sexually explicit, or otherwise unlawful” material. The ministry provided a 72-hour window for the platform to submit a report detailing actions taken to prevent hosting or disseminating content classified as “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law.”

The order, which was reviewed by TechCrunch, warned that noncompliance could jeopardize X’s “safe harbor” protections, which provide legal immunity from liability for user-generated content under Indian law. This directive follows complaints from users who cited examples of Grok being used to alter images of individuals, predominantly women, to appear as if they were wearing bikinis. The issue gained further traction when Indian parliamentarian Priyanka Chaturvedi lodged a formal complaint. Separate reports also highlighted instances where the chatbot generated sexualized images involving minors, a problem that X acknowledged, attributing it to lapses in safeguards. Although the offending images were subsequently removed, those depicting women in altered bikini attire remained accessible on X at the time of publication.

This latest directive follows an advisory issued by the Indian IT ministry earlier in the week, which reminded social media platforms that adherence to local laws governing obscene and sexually explicit content is crucial for maintaining legal immunity from liability for user-generated material. The advisory urged companies to enhance internal safeguards and cautioned that failure to comply could lead to legal action under India’s IT and criminal laws. “It is reiterated that non-compliance with the above requirements shall be viewed seriously and may result in strict legal consequences against your platform, its responsible officers, and users who violate the law, without any further notice,” the order emphasized.

As one of the world’s largest digital markets, India presents a significant test case for how far governments are willing to hold platforms accountable for AI-generated content. Any tightening of enforcement could have implications for global technology companies operating across diverse jurisdictions.

The order comes amid ongoing legal challenges from Musk’s X to aspects of India’s content regulation rules, in which the platform argues that the federal government’s takedown powers amount to overreach, even as it complies with the majority of blocking directives. Meanwhile, Grok has gained traction among X users for real-time fact-checking and commentary on news events, making its outputs more visible and politically sensitive than those of standalone AI tools.

As of now, X and xAI have not responded to inquiries regarding the Indian government’s latest order. The developments underscore the growing scrutiny that AI technologies face globally, as regulatory frameworks evolve to address the complexities of content moderation in an increasingly digital world.


