Business Ethics in AI: UN Forum Urges Transparency Amid Rising Adoption Risks

UN Forum on Business and Human Rights emphasizes urgent need for transparency in AI development, with experts warning of significant risks from unregulated adoption.

The recent UN Forum on Business and Human Rights raised pressing questions about the dual nature of Artificial Intelligence (AI) in today's society. Held in Geneva, the forum featured UN High Commissioner for Human Rights Volker Türk, who characterized AI as a "Frankenstein's monster" capable of both manipulation and distortion, while also acknowledging its vast potential. This year's discussions sought not just to raise concerns about AI, but also to cultivate tangible solutions for businesses striving to develop the technology safely and equitably.

Participants at the Forum highlighted the prominent role of governments as both regulators and customers of AI technology. As public authorities increasingly collaborate with technology firms to create their own AI systems, they bear a responsibility to establish ethical standards by ensuring transparency in sensitive areas. However, research conducted by the AI and Equality project alongside the University of Cambridge indicated that there is a lack of evidence showing public authorities actively pursuing this responsibility. Instead, AI adoption has been described as occurring almost unconsciously across both public and private sectors.

Several contributions to the Forum pointed out that AI systems often arrive through routine IT updates, with little scrutiny or even awareness on the part of purchasing organizations. Existing resources within the UN system, such as the UN Working Group on Business and Human Rights report on the human rights impacts of AI, published in June, could help bridge these gaps. The report urges companies to be more cognizant of how AI is developed within and for their organizations, and warns of potential litigation risks linked to poor practices.

Luda Svystunova, Head of Social Research at investor Amundi, emphasized the necessity for direct dialogue between human rights experts and AI developers to mitigate risks associated with the opaque nature of AI systems. Discussions also underscored the risk of discriminatory decision-making by large language models trained on unrepresentative data. This problem is compounded by low levels of AI literacy among vulnerable groups, risking a widening of existing social divides.

The consensus among participants indicated a shift from focusing predominantly on AI companies to a broader responsibility encompassing all organizations that deploy the technology. Notably, John Morrison, a veteran of the business and human rights field, expressed the urgent need for a basic multi-stakeholder initiative to address AI issues more effectively.

Emerging from this year’s Forum was the concept of “labour behind AI,” which parallels previous campaigns advocating for improved working conditions in the apparel supply chain. A side event organized by UNI Global Union brought attention to the harsh realities faced by data annotators and content moderators, who often endure severe psychological stress. One participant, Eliza from Portugal, described her experience working with disturbing content on a daily basis, highlighting the unacceptable work conditions and unrealistic targets set by employers.

Trade unions continue to advocate for platform workers to transition from precarious, casual positions to formal employment with appropriate protections, fair compensation, and the right to organize. Suggestions from consultations with workers included rotating staff between more and less harmful content, limiting working hours, ensuring adequate rest, and providing access to independent mental health support.

Christy Hoffman, General Secretary of the global trade union federation UNI Global Union, stressed the importance of transparency in the supply chains of technology companies, arguing that discussions about trust in AI often overlook the human element involved in its creation. Her colleague, Ben Parton, bluntly stated that if those responsible for AI development are treated poorly, the outputs of these systems are bound to reflect that negligence.

Protecting the public from harmful content has become a societal priority, aligning with technology companies’ imperative to ensure that large language models are trained on accurate information. This year’s Forum underscored the necessity of acknowledging the human labor underpinning AI technologies, which serves both ethical and operational interests. As discussions continue, the focus on integrating human rights into AI development practices could pave the way for more responsible and equitable technological advancement.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.