
UNDP Reports AI Could Exacerbate Global Inequality, Urges Immediate Action from Governments

UNDP warns that unmanaged AI could reverse decades of development, with 3.7 billion people excluded from the digital economy and risks of increased global inequality.

Governments and technology companies are facing increasing scrutiny over the rapid development of artificial intelligence (AI), as two new reports issued this week highlight potential risks associated with its unchecked growth. A United Nations assessment and a separate safety index indicate that without appropriate regulatory measures, AI could exacerbate global inequality and pose significant safety threats.

The United Nations Development Programme (UNDP) warned that unmanaged AI could reverse decades of developmental progress. According to the report, the pace of AI adoption is outstripping many countries’ capacities to adapt. Philip Schellekens, UNDP’s chief economist for Asia and the Pacific, emphasized that the “central fault line” lies in capability, suggesting that nations with robust infrastructure and governance will benefit from AI, while others may lag behind.

The risks are particularly pronounced in the Asia-Pacific region, which, despite housing over half of the world’s population, sees only 14 percent of individuals using AI tools. Approximately 3.7 billion people remain excluded from the digital economy, with a quarter of the population offline. In South Asia, significant gender disparities persist, with women up to 40 percent less likely than men to own a smartphone.

Despite these challenges, the UNDP report posits that AI-driven growth is feasible, projecting that AI could contribute an additional 2 percentage points to annual GDP growth and enhance productivity in sectors such as health and finance. ASEAN economies, for example, could potentially accrue nearly $1 trillion over the next decade. However, substantial structural hurdles remain, including 1.3 billion workers in informal employment, 770 million women not participating in the labor force, and 200 million individuals living in extreme poverty.

The report also highlights severe digital and gender divides within Asia-Pacific, noting that women and young people are particularly vulnerable to job disruptions caused by AI. Jobs typically held by women are nearly twice as susceptible to automation compared to those held by men, while employment for youth aged 22 to 25 is already declining in high-exposure roles. Bias in AI models further compounds these issues, as algorithms trained predominantly on data from urban male borrowers misclassify women entrepreneurs and rural farmers as high-risk, effectively denying them access to financial support.

Moreover, rural and Indigenous communities face exclusion due to their absence in datasets used to train AI systems. The ongoing digital divide continues to impact health and education outcomes, with over 1.6 billion individuals unable to afford a healthy diet and 27 million youth remaining illiterate. Many countries depend on imported AI models that fail to account for local languages and cultural contexts, thereby diminishing the efficacy of AI in delivering essential services. Despite growing interest in AI, a shortage of digital skills impedes progress across the region.

In Europe, the UNDP report identifies uneven preparedness for AI, with only a limited number of countries having established comprehensive regulations. It warns that by 2027, over 40 percent of AI-related data breaches could arise from the misuse of generative AI. Countries like Denmark, Germany, and Switzerland have emerged as leaders in AI readiness, while Albania and Bosnia and Herzegovina lag behind. Kanni Wignaraja, U.N. Assistant Secretary-General and UNDP’s regional director for Asia and the Pacific, remarked that these widening gaps are not inevitable and noted that many countries remain “at the starting line.”

A separate report from the Future of Life Institute (FLI) reveals that major AI companies are failing to adhere to their safety commitments, raising further concerns. The 2025 Winter AI Safety Index evaluated eight prominent firms: Anthropic, OpenAI, Google DeepMind, xAI, Meta, DeepSeek, Alibaba Cloud, and Z.ai. Evaluators noted that none of these companies had developed a testable plan to ensure human control over advanced AI systems.

Stuart Russell, a computer science professor at the University of California, Berkeley, criticized the firms for claiming they can develop superhuman AI without demonstrating how to maintain control over such systems, with risks of losing control estimated as high as “one in three.” The index assessed companies across six categories, including risk assessment, current harms, governance, and information sharing, finding some progress but inconsistent implementation.

Anthropic, OpenAI, and Google DeepMind received the highest overall scores, though each faced specific criticisms. Anthropic was criticized for discontinuing human uplift trials, while Google DeepMind improved its safety framework but still relies on evaluators compensated by the company. OpenAI, on the other hand, was faulted for unclear safety thresholds and for lobbying against state-level AI safety regulations. An OpenAI spokesperson said that safety is central to the company’s operations, citing investments in frontier safety research and rigorous model testing.

Meanwhile, the remaining firms demonstrated mixed results. xAI introduced its first structured safety framework, though reviewers noted a lack of clear mitigation triggers. Z.ai allowed external evaluations but has yet to disclose its complete governance structure. Meta launched a frontier safety framework but faced calls for clearer methodologies, while DeepSeek has not documented basic safety measures. Alibaba Cloud, though contributing to national watermarking standards, requires stronger performance on fairness and safety metrics.

FLI President Max Tegmark expressed concern that AI is currently “less regulated than sandwiches” in the United States, citing ongoing lobbying against mandatory safety standards. He pointed out that public apprehension about superintelligence is rising, as evidenced by an FLI-organized petition that garnered signatures from thousands of influential figures, including politicians and scientists, urging companies to slow development. Tegmark warned that the unregulated advancement of these systems could bring economic and political instability, displacing workers and deepening dependence on government support across ideological lines.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.