
Generative AI

Social Media Platforms Show Varied Transparency on Defamation and AI Compliance

Jiji Press survey reveals five major social media firms, including Google and Meta, lack transparency on defamation strategies and generative AI compliance as new regulations take effect.

A recent survey by Jiji Press has uncovered significant gaps in the transparency of social media operators regarding their strategies for managing defamation and the challenges posed by generative artificial intelligence (AI). The survey, conducted by email through mid-March, was timed to the impending first anniversary of the enforcement of the information distribution platform law, which aims to curb the spread of illegal and harmful online content. That anniversary falls on Wednesday, marking a pivotal moment in the regulatory landscape for digital platforms.

Among the nine companies surveyed that fall under this legislation, five major players—Google, LY, Meta Platforms, TikTok, and CyberAgent—responded, affirming their compliance with existing legal frameworks. However, their responses did not clarify the specific measures these companies have implemented to address defamation and the misuse of generative AI technologies. This lack of detail could raise concerns among users and regulators alike, especially given the increasing scrutiny of social media firms amid rising incidents of harmful content spreading online.

The information distribution platform law is designed to enhance accountability among social media companies and provide users with clearer pathways for reporting harmful content. However, the survey results suggest that despite regulatory efforts, companies may still be grappling with how best to communicate their compliance and operational practices to the public. As generative AI continues to evolve, the challenges posed by misinformation and defamation become more complex, necessitating a robust and transparent response from these platforms.

Industry experts have pointed out that the responses—or lack thereof—from these companies could indicate a larger issue regarding the readiness of social media platforms to confront the ramifications of advanced technologies. Generative AI, which can produce text, images, and other forms of media, poses unique risks, particularly in creating deceptive or misleading content. As such, clear guidelines and robust countermeasures are imperative.

The findings of the Jiji Press survey resonate with broader debates about the role of social media platforms in shaping public discourse and their responsibility to prevent the spread of harmful content. While some companies have indicated compliance, the vague nature of their responses may invite further questions about their operational effectiveness and commitment to user safety.

Looking ahead, the enforcement of the information distribution platform law could prompt a shift in how social media companies approach transparency and accountability. As they navigate the complexities of generative AI and its implications, enhanced clarity in their operations may not only foster greater trust among users but also align them more closely with regulatory expectations. The ongoing evolution of digital communication underscores the necessity for a proactive and transparent approach to governance in the rapidly changing landscape of social media.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.