
AI Tool MidJourney Fails to Represent Women in Safe Digital Spaces, Highlighting Bias

MidJourney’s AI fails to depict inclusive digital spaces for women activists, highlighting systemic biases that threaten safety and visibility online.

The internet, once heralded as a tool for empowerment and activism, has become increasingly hostile to marginalized communities, particularly women and LGBTQI+ activists. As the digital landscape evolves, deepfake pornography, targeted harassment, and misinformation campaigns have rendered these platforms unsafe, fueling what is now termed technology-facilitated gender-based violence (TFGBV). Major tech companies stand accused of shadow-banning women’s health information while promoting male-centric content, signaling a troubling trend in digital safety.

According to the UN Women 2023 TFVAW Report, TFGBV encompasses acts of violence against individuals based on their gender, facilitated through digital technologies. Instances of this alarming trend are evident across major platforms. For example, LinkedIn has been criticized for censoring women’s voices, and Meta has discontinued its fact-checking program, eroding the credibility of information shared online. The exodus of organizations from X, formerly Twitter, due to increased hate speech under Elon Musk’s ownership further underscores the precarious state of digital safety.

A recent experiment using MidJourney, a generative AI text-to-image platform, sought to examine how artificial intelligence envisions safe digital spaces for women activists. Instead of generating images representative of diverse online activism, the tool defaulted to stereotypical visuals of women protesting. Despite efforts to refine prompts with ChatGPT to depict inclusive environments, MidJourney consistently produced images featuring only women, failing to illustrate mixed-gender settings where women might feel safe.

The most striking instance arose when a prompt described a futuristic tech hub where women activists work on ethical AI to combat cyber misogyny. MidJourney flagged this scenario for violating community guidelines, revealing deep biases inherent in AI design and digital platforms. Such limitations raise critical questions about who influences the future of online spaces and whether technology is perpetuating exclusion.

This pattern is not confined to MidJourney; AI systems across platforms reflect long-standing societal hierarchies, perpetuating biases that suppress women’s voices. A 2025 report by the Center for Intimacy Justice found that platforms like Meta, Google, Amazon, and TikTok systematically suppress women’s health content while allowing comparable men’s content to thrive. This not only restricts women’s access to vital information about their health but also constitutes a broader public health concern.

Professional networking platforms are also implicated in this bias. Analysis of LinkedIn’s algorithm has shown that posts related to women, including topics on sexism and workplace culture, receive lower visibility compared to more traditionally masculine-coded professional content. This results in reduced reach and credibility for women, thereby limiting their opportunities. Campaigners argue this issue extends beyond mere moderation; it constitutes a form of TFGBV at a systemic level.

Addressing these issues requires immediate action from both platforms and governments. Platforms must reinstate independent fact-checking as a core aspect of their operations, ensuring credible verification bodies are involved in moderating content. Furthermore, safety protocols must be mandated across dating apps and AI chatbots, with transparent accountability measures to ensure timely responses to reports of abuse.

Governments also hold a critical role in fostering safer digital environments. They must enact and enforce stronger digital safety laws that criminalize acts like deepfake pornography and cyberstalking. For instance, Pakistan’s PECA Amendment Act 2025 introduces penalties for online harassment, demonstrating how legislative frameworks can hold perpetrators accountable. Additionally, regulators should demand transparency from tech companies regarding content moderation processes and the training of AI systems.

Men, too, have a vital role in cultivating safer digital spaces. Many experience digital safety as the default because these systems were built without the constraints women face, and that privilege can shade into complicity in harmful behavior. Men must actively challenge misogyny and harassment online and support systemic reforms that advocate for ethical AI and transparent safety audits. Their involvement is crucial to fostering an inclusive digital environment where women and gender minorities can thrive.

Ultimately, digital safety must be embedded in the foundational design of technology rather than treated as an afterthought. The challenges highlighted by the limitations of platforms like MidJourney demonstrate the urgent need to rethink how we construct digital spaces. With algorithmic invisibility affecting public health and safety, the stakes have never been higher. To ensure justice and equity in the digital realm, it is imperative that all stakeholders confront the patriarchal structures inherent in our digital systems. Only then can we move towards a future where online spaces are genuinely inclusive and safe for everyone.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.