Artificial intelligence (AI) has increasingly woven itself into the fabric of daily life, and a recent global survey by the market research firm Ipsos indicates that public sentiment is shifting in its favor. Approximately 55 percent of respondents view AI-powered products and services as offering more benefits than drawbacks. This growing acceptance suggests that, despite lingering anxieties about AI, consumers are intrigued by its capabilities. In response, companies are positioning their products to highlight efficiency and usability, capitalizing on the surge of private investment in AI over the past decade.
However, not everyone shares this optimism. Members of the lesbian, gay, bisexual, transgender, and queer (LGBTQ+) community are voicing concerns about AI’s negative implications. Many of these issues stem from the data used to train AI models, which often reflects harmful stereotypes and misconceptions about LGBTQ+ individuals. AI’s “offline” impacts, particularly its integration into surveillance systems that target community members, raise further alarm. Together, these challenges suggest that AI-enhanced tools frequently do more harm than good for LGBTQ+ populations, and that without stringent regulation, the risks associated with AI could outweigh its benefits.
Reinforcing Harmful Stereotypes
The adverse effects of AI on LGBTQ+ individuals can be traced back to the training data. For instance, a report from Wired revealed that popular image generation tools, such as Midjourney, distort representations of the LGBTQ+ community. When tasked with depicting queer individuals, these models often produce reductive and offensive imagery, such as portraying lesbian women as stern figures covered in tattoos. This issue arises from data scraped from the internet, which is heavily influenced by stereotypes. Consequently, tools like Midjourney are likely to perpetuate these biases. Even improved data labeling may fall short due to the vast quantity of derogatory content available online.
This skewed portrayal is not an isolated incident. Research by the United Nations Educational, Scientific and Cultural Organization (UNESCO) shows that widely used large language models (LLMs), such as Meta’s Llama 2 and OpenAI’s GPT-2, exhibit heteronormative biases. UNESCO found that these models generated negative content about gay individuals more than half the time, underscoring how deeply homophobia is entrenched in the training data. This points not only to the scale of the challenge facing developers but also raises questions about their commitment to addressing it.
AI’s Role in Surveillance
The potential damage of AI extends beyond digital representations and into real-world implications. AI systems capable of “automatic gender recognition” (AGR) are gaining traction. These systems analyze audiovisual material, such as footage from security cameras, to infer a person’s gender based on facial features and vocal patterns. However, organizations like Forbidden Colours, a Belgian non-profit advocating for LGBTQ+ rights, caution that understanding a person’s gender identity cannot be boiled down to superficial characteristics. The very premise of these systems is flawed and can lead to serious privacy violations.
Notably, AGR systems have attracted supporters, including governments hostile to LGBTQ+ rights. Hungarian Prime Minister Viktor Orbán, for instance, has endorsed AI-enabled biometric monitoring at local Pride events, justifying it as a public-safety measure against the so-called “LGBTQ+ agenda.” In practice, such policies enable government surveillance of artists, activists, and everyday attendees. Although the policy is under review within the European Union, it stands as a stark reminder of how AI can be weaponized against marginalized communities.
Addressing the Challenges
For LGBTQ+ individuals, the trade-offs associated with AI are particularly pronounced. While the technology may be beneficial for the broader population, it poses unique challenges that could adversely affect queer users. Tools like image and text generators often recycle damaging stereotypes that are hard to eliminate entirely. Moreover, AI’s inclusion in surveillance operations presents significant risks, compromising individual privacy and safety. These factors collectively illustrate that many AI solutions lack inclusivity in their design.
To reverse this trend, collaborative efforts between developers and LGBTQ+ stakeholders are essential. Partnerships can help ensure that training data accurately reflects the lived experiences of queer individuals. Furthermore, implementing robust safeguards against the misuse of AI for surveillance is crucial. Strict prohibitions on systems equipped with gender detection capabilities must be enforced to protect individual privacy rights. Continuous input from LGBTQ+ individuals throughout the AI development process will not only mitigate potential harms but also help the community view AI technology as a valuable asset.