A recent survey by Jiji Press has uncovered significant gaps in how transparently social media operators disclose their strategies for managing defamation and the challenges associated with generative artificial intelligence (AI). The survey, conducted by email with responses gathered by mid-March, came ahead of the enforcement of the information distribution platform law, which aims to curb the spread of illegal and harmful online content. The law came into effect on Wednesday, marking a pivotal moment in the regulatory landscape for digital platforms.
Of the nine companies surveyed that fall under the legislation, five major players (Google, LY, Meta Platforms, TikTok and CyberAgent) responded, affirming their compliance with existing legal frameworks. None of the responses, however, clarified the specific measures these companies have implemented to address defamation or the misuse of generative AI. This lack of detail could raise concerns among users and regulators alike, especially given the growing scrutiny of social media firms amid rising incidents of harmful content spreading online.
The information distribution platform law is designed to enhance accountability among social media companies and provide users with clearer pathways for reporting harmful content. However, the survey results suggest that despite regulatory efforts, companies may still be grappling with how best to communicate their compliance and operational practices to the public. As generative AI continues to evolve, the challenges posed by misinformation and defamation become more complex, necessitating a robust and transparent response from these platforms.
Industry experts have pointed out that the responses, or lack thereof, from these companies could indicate a larger issue: the readiness of social media platforms to confront the ramifications of advanced technologies. Generative AI, which can produce text, images and other forms of media, poses unique risks, particularly when used to create deceptive or misleading content. Clear guidelines and robust countermeasures are therefore imperative.
The findings of the Jiji Press survey resonate with broader debates about the role of social media companies in shaping public discourse and their responsibility to prevent the spread of harmful content. While some companies have indicated compliance, the vagueness of their responses may prompt further questions about their operational effectiveness and commitment to user safety.
Looking ahead, the enforcement of the information distribution platform law could prompt a shift in how social media companies approach transparency and accountability. As they navigate the complexities of generative AI and its implications, enhanced clarity in their operations may not only foster greater trust among users but also align them more closely with regulatory expectations. The ongoing evolution of digital communication underscores the necessity for a proactive and transparent approach to governance in the rapidly changing landscape of social media.