The recent UN Forum on Business and Human Rights raised pressing questions about the dual nature of Artificial Intelligence (AI) in today’s society. Held in Geneva, the forum featured UN High Commissioner for Human Rights Volker Türk, who characterized AI as a “Frankenstein’s monster” capable of both manipulation and distortion, while also acknowledging its vast potential. This year’s discussions sought not only to raise concerns about AI but also to cultivate tangible solutions for businesses striving to develop the technology safely and equitably.
Participants at the Forum highlighted the prominent role of governments as both regulators and customers of AI technology. As public authorities increasingly collaborate with technology firms to build their own AI systems, they bear a responsibility to set ethical standards by ensuring transparency in sensitive areas. However, research conducted by the AI and Equality project together with the University of Cambridge found little evidence that public authorities are actively pursuing this responsibility. Instead, AI adoption was described as occurring almost unconsciously across both the public and private sectors.
Several contributions to the Forum pointed out that AI systems are often integrated as part of routine IT updates, frequently without any special consideration or awareness on the part of purchasing organizations. Existing resources within the UN system could help bridge these gaps, such as the UN Working Group on Business and Human Rights report on the human rights impacts of AI, published in June. The report emphasizes the need for companies to be more cognizant of how AI is developed within and for their organizations, and warns of potential litigation risks linked to poor practices.
Luda Svystunova, Head of Social Research at investor Amundi, emphasized the necessity for direct dialogue between human rights experts and AI developers to mitigate risks associated with the opaque nature of AI systems. The discussions underscored concerns regarding discrimination resulting from decision-making driven by large language models trained on unrepresentative data. This issue is compounded by low levels of AI literacy among vulnerable groups, risking a widening of existing social divides.
The consensus among participants indicated a shift from focusing predominantly on AI companies to a broader responsibility encompassing all organizations involved in deploying the technology. Notably, John Morrison, a veteran of the business and human rights field, argued there is an urgent need for at least a basic multi-stakeholder initiative to address AI issues more effectively.
Emerging from this year’s Forum was the concept of “labour behind AI,” which parallels earlier campaigns for improved working conditions in the apparel supply chain. A side event organized by UNI Global Union drew attention to the harsh realities faced by data annotators and content moderators, who often endure severe psychological stress. One participant, Eliza from Portugal, described working with disturbing content on a daily basis, highlighting unacceptable working conditions and unrealistic targets set by employers.
Trade unions continue to advocate for platform workers to transition from precarious, casual positions to formal employment with appropriate protections, fair compensation, and the right to organize. Suggestions from consultations with workers included rotating staff between more and less harmful content, limiting working hours, ensuring adequate rest, and providing access to independent mental health support.
Christy Hoffman, General Secretary of UNI Global Union, stressed the importance of transparency in technology companies’ supply chains, arguing that discussions about trust in AI often overlook the human element involved in its creation. Her colleague Ben Parton put it bluntly: if those responsible for AI development are treated poorly, the outputs of these systems are bound to reflect that negligence.
Protecting the public from harmful content has become a societal priority, aligning with technology companies’ imperative to ensure that large language models are trained on accurate information. This year’s Forum underscored the necessity of acknowledging the human labor underpinning AI technologies, which serves both ethical and operational interests. As discussions continue, the focus on integrating human rights into AI development practices could pave the way for more responsible and equitable technological advancement.