
Business Ethics in AI: UN Forum Urges Transparency Amid Rising Adoption Risks

UN Forum on Business and Human Rights emphasizes urgent need for transparency in AI development, with experts warning of significant risks from unregulated adoption.

The recent UN Forum on Business and Human Rights raised pressing questions about the dual nature of Artificial Intelligence (AI) in today’s society. Held in Geneva, the forum featured UN High Commissioner for Human Rights Volker Türk, who characterized AI as a “Frankenstein’s monster” capable of both manipulation and distortion, while also acknowledging its vast potential. This year’s discussions sought not only to raise concerns about AI but also to cultivate tangible solutions for businesses striving to develop the technology safely and equitably.

Participants at the Forum highlighted the prominent role of governments as both regulators and customers of AI technology. As public authorities increasingly collaborate with technology firms to build their own AI systems, they bear a responsibility to set ethical standards by ensuring transparency in sensitive areas. However, research conducted by the AI and Equality project together with the University of Cambridge found little evidence that public authorities are actively fulfilling this responsibility. Instead, AI adoption was described as occurring almost unconsciously across both the public and private sectors.

Several contributions to the Forum pointed out that the rapid integration of AI systems often arrives through routine IT updates, frequently without any special consideration or awareness on the part of purchasing organizations. Existing resources within the UN system, such as the UN Working Group on Business and Human Rights report on the human rights impacts of AI, published in June, could help bridge these gaps. The report urges companies to be more cognizant of how AI is developed within and for their organizations, and warns of potential litigation risks linked to poor practices.

Luda Svystunova, Head of Social Research at investor Amundi, emphasized the necessity for direct dialogue between human rights experts and AI developers to mitigate risks associated with the opaque nature of AI systems. The discussions underscored concerns regarding discrimination resulting from decision-making driven by large language models trained on unrepresentative data. This issue is compounded by low levels of AI literacy among vulnerable groups, risking a widening of existing social divides.

The consensus among participants indicated a shift from focusing predominantly on AI companies to a broader responsibility encompassing all organizations involved in deploying the technology. Notably, John Morrison, a veteran of the business and human rights field, argued that a basic multi-stakeholder initiative is urgently needed to address AI issues more effectively.

Emerging from this year’s Forum was the concept of the “labour behind AI,” which parallels earlier campaigns for improved working conditions in the apparel supply chain. A side event organized by UNI Global Union drew attention to the harsh realities faced by data annotators and content moderators, who often endure severe psychological stress. One participant, Eliza from Portugal, described working with disturbing content on a daily basis, highlighting unacceptable working conditions and the unrealistic targets set by employers.

Trade unions continue to advocate for platform workers to transition from precarious, casual positions to formal employment with appropriate protections, fair compensation, and the right to organize. Suggestions from consultations with workers included rotating staff between more and less harmful content, limiting working hours, ensuring adequate rest, and providing access to independent mental health support.

Christy Hoffman, General Secretary of UNI Global Union, stressed the importance of transparency in technology companies’ supply chains, arguing that discussions about trust in AI often overlook the human element involved in its creation. Her colleague Ben Parton bluntly stated that if those responsible for AI development are treated poorly, the outputs of these systems are bound to reflect that negligence.

Protecting the public from harmful content has become a societal priority, aligning with technology companies’ imperative to ensure that large language models are trained on accurate information. This year’s Forum underscored the necessity of acknowledging the human labor underpinning AI technologies, which serves both ethical and operational interests. As discussions continue, the focus on integrating human rights into AI development practices could pave the way for more responsible and equitable technological advancement.


