Karnataka is set to introduce a comprehensive draft bill aimed at enhancing digital safety and fostering responsible social media use, with a notable emphasis on artificial intelligence (AI). As reported by the Times of India, the proposed Karnataka Responsible Social Media & Digital Safety Bill, 2026, has been submitted to Chief Minister Siddaramaiah and is expected to undergo scrutiny by the state legal department, with a potential introduction during the upcoming monsoon session of the legislature, likely to begin in June or July.
The draft bill, which remains unpublished, outlines a phased implementation strategy: initial steps will focus on awareness initiatives and establishing the necessary institutional framework, followed by technology integration, with full enforcement of the new regulations to follow. A central aspect of the bill is the use of AI technologies to enhance content moderation, enabling quicker responses to harmful online content.
Among the significant AI-driven proposals are mandatory content-labeling requirements, which aim to combat deepfakes and synthetic content by instituting clear legal definitions and penalties for violations. Social media platforms would be required to address harmful content within a strict timeframe of 24 to 48 hours. In addition, a Karnataka Digital Safety and Social Media Regulatory Authority would be established to oversee compliance and identify potential threats across platforms.
Users will also gain the ability to report harmful content and seek protection against harassment and misinformation, with a focus on ensuring that grievance redressal occurs within a clearly defined timeline. The bill has been designed with an awareness of the mental well-being of younger users, linking digital safety with mental health initiatives. It prioritizes digital literacy and media awareness, proposing programs that promote fact-checking, critical thinking, and responsible online behavior.
This initiative by Karnataka is not an isolated case; two other Indian states—Goa and Andhra Pradesh—are also contemplating similar regulations aimed at curbing online addiction and cyberbullying while enhancing the safety of minors using social media. This growing trend suggests that social media regulation is increasingly becoming a priority for state governments across India. During the state budget presentation in March, Chief Minister Siddaramaiah even proposed a ban on social media for users under 16 years old, a move echoed by the legislative discussions in Goa and Andhra Pradesh, where bills targeting minors between the ages of 13 and 16 are under consideration.
At the national level, the Indian government is also reviewing draft IT rules for 2026, which include provisions for a three-hour content takedown requirement, expanded definitions of stakeholders, and new regulations concerning synthetically generated information (SGI). These developments reflect an overarching trend of increasing scrutiny over social media platforms, as both state and central governments seek more localized oversight and accountability while existing laws, such as the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, continue to be enforced.
The convergence of these initiatives indicates a significant shift in how digital safety and social media governance are being approached in India. As states propose their own frameworks, they aim to balance the benefits of social media with the pressing need for user protection, particularly for vulnerable populations like children and young adults. This movement toward more stringent regulations reflects a growing recognition of the complexities of digital interactions and the imperative to create a safer online environment for all users.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health