The internet, once heralded as a tool for empowerment and activism, has become increasingly hostile for marginalized communities, particularly women and LGBTQI+ activists. As the digital landscape evolves, deepfake pornography, targeted harassment, and misinformation campaigns have rendered these platforms unsafe, fueling what is now termed technology-facilitated gender-based violence (TFGBV). Major tech companies stand accused of shadow-banning women’s health information while promoting male-centric content, signaling a troubling trend in digital safety.
According to the UN Women 2023 TFVAW Report, TFGBV encompasses acts of violence against individuals based on their gender, facilitated through digital technologies. Instances of this alarming trend are evident across major platforms. For example, LinkedIn has been criticized for censoring women’s voices, and Meta has discontinued its fact-checking program, eroding the credibility of information shared online. The exodus of organizations from X, formerly Twitter, due to increased hate speech under Elon Musk’s ownership further underscores the precarious state of digital safety.
A recent experiment using MidJourney, a Generative AI text-to-image platform, sought to examine how artificial intelligence envisions safe digital spaces for women activists. Instead of generating images representative of diverse online activism, the tool defaulted to stereotypical visuals of women protesting. Despite efforts to refine prompts with ChatGPT to depict inclusive environments, MidJourney consistently produced images featuring only women, failing to illustrate mixed-gender settings where women might feel safe.
The most striking instance arose when a prompt described a futuristic tech hub where women activists work on ethical AI to combat cyber misogyny. MidJourney flagged this scenario for violating community guidelines, revealing deep biases inherent in AI design and digital platforms. Such limitations raise critical questions about who influences the future of online spaces and whether technology is perpetuating exclusion.
This pattern is not confined to MidJourney; various AI systems across platforms reflect long-standing societal hierarchies, perpetuating biases that result in the suppression of women’s voices. A 2025 report by the Center for Intimacy Justice found that platforms like Meta, Google, Amazon, and TikTok systematically suppress women’s health content while allowing comparable men’s content to thrive. This not only restricts women’s access to vital information about their health but also highlights a broader public health concern.
Professional networking platforms are also implicated in this bias. Analysis of LinkedIn’s algorithm has shown that posts related to women, including topics on sexism and workplace culture, receive lower visibility compared to more traditionally masculine-coded professional content. This results in reduced reach and credibility for women, thereby limiting their opportunities. Campaigners argue this issue extends beyond mere moderation; it constitutes a form of TFGBV at a systemic level.
Addressing these issues requires immediate action from both platforms and governments. Platforms must reinstate independent fact-checking as a core aspect of their operations, ensuring credible verification bodies are involved in moderating content. Furthermore, safety protocols must be mandated across dating apps and AI chatbots, with transparent accountability measures to ensure timely responses to reports of abuse.
Governments also hold a critical role in fostering safer digital environments. They must enact and enforce stronger digital safety laws that criminalize acts like deepfake pornography and cyberstalking. For instance, Pakistan’s PECA Amendment Act 2025 introduces penalties for online harassment, demonstrating how legislative frameworks can hold perpetrators accountable. Additionally, regulators should demand transparency from tech companies regarding content moderation processes and the training of AI systems.
Men, too, have a vital role in cultivating safer digital spaces. Many experience digital safety as the default, because these systems were built without the constraints and risks that women and gender minorities routinely face. That privilege can breed complicity in harmful behaviors. Men must actively challenge misogyny and harassment online and support systemic reforms that advocate for ethical AI and transparent safety audits. Their involvement is crucial to fostering an inclusive digital environment where women and gender minorities can thrive.
Ultimately, digital safety must be embedded in the foundational design of technology rather than treated as an afterthought. The challenges highlighted by the limitations of platforms like MidJourney demonstrate the urgent need to rethink how we construct digital spaces. With algorithmic invisibility affecting public health and safety, the stakes have never been higher. To ensure justice and equity in the digital realm, it is imperative that all stakeholders confront the patriarchal structures inherent in our digital systems. Only then can we move towards a future where online spaces are genuinely inclusive and safe for everyone.