
UNDP Reports AI Could Exacerbate Global Inequality, Urges Immediate Action from Governments

UNDP warns that unmanaged AI could reverse decades of development, with 3.7 billion people excluded from the digital economy and risks of increased global inequality.

Governments and technology companies are facing increasing scrutiny over the rapid development of artificial intelligence (AI), as two new reports issued this week highlight potential risks associated with its unchecked growth. A United Nations assessment and a separate safety index indicate that without appropriate regulatory measures, AI could exacerbate global inequality and pose significant safety threats.

The United Nations Development Programme (UNDP) warned that unmanaged AI could reverse decades of developmental progress. According to the report, the pace of AI adoption is outstripping many countries’ capacities to adapt. Philip Schellekens, UNDP’s chief economist for Asia and the Pacific, emphasized that the “central fault line” lies in capability, suggesting that nations with robust infrastructure and governance will benefit from AI, while others may lag behind.

The risks are particularly pronounced in the Asia-Pacific region, where, despite housing over half of the world’s population, only 14 percent of individuals use AI tools. Approximately 3.7 billion people remain excluded from the digital economy, with a quarter of the population offline. In South Asia, significant gender disparities persist: women are up to 40 percent less likely than men to own a smartphone.

Despite these challenges, the UNDP report posits that AI-driven growth is feasible, projecting that AI could contribute an additional 2 percentage points to annual GDP growth and enhance productivity in sectors such as health and finance. ASEAN economies, for example, could potentially accrue nearly $1 trillion over the next decade. However, substantial structural hurdles remain, including 1.3 billion workers in informal employment, 770 million women not participating in the labor force, and 200 million individuals living in extreme poverty.

The report also highlights severe digital and gender divides within Asia-Pacific, noting that women and young people are particularly vulnerable to job disruptions caused by AI. Jobs typically held by women are nearly twice as susceptible to automation compared to those held by men, while employment for youth aged 22 to 25 is already declining in high-exposure roles. Bias in AI models further compounds these issues, as algorithms trained predominantly on data from urban male borrowers misclassify women entrepreneurs and rural farmers as high-risk, effectively denying them access to financial support.

Moreover, rural and Indigenous communities face exclusion due to their absence in datasets used to train AI systems. The ongoing digital divide continues to impact health and education outcomes, with over 1.6 billion individuals unable to afford a healthy diet and 27 million youth remaining illiterate. Many countries depend on imported AI models that fail to account for local languages and cultural contexts, thereby diminishing the efficacy of AI in delivering essential services. Despite growing interest in AI, a shortage of digital skills impedes progress across the region.

In Europe, the UNDP report identifies uneven preparedness for AI, with only a limited number of countries having established comprehensive regulations. It warns that by 2027, over 40 percent of AI-related data breaches could arise from the misuse of generative AI. Countries like Denmark, Germany, and Switzerland have emerged as leaders in AI readiness, while Albania and Bosnia and Herzegovina lag behind. Kanni Wignaraja, U.N. Assistant Secretary-General and UNDP’s regional director for Asia and the Pacific, remarked that these widening gaps are not inevitable and noted that many countries remain “at the starting line.”

A separate report from the Future of Life Institute (FLI) reveals that major AI companies are failing to adhere to their safety commitments, raising further concerns. The 2025 Winter AI Safety Index evaluated eight prominent firms: Anthropic, OpenAI, Google DeepMind, xAI, Meta, DeepSeek, Alibaba Cloud, and Z.ai. Evaluators noted that none of these companies had developed a testable plan to ensure human control over advanced AI systems.

Stuart Russell, a computer science professor at the University of California, Berkeley, criticized the firms for claiming they can develop superhuman AI without demonstrating how to maintain control over such systems, with risks of losing control estimated as high as “one in three.” The index assessed companies across six categories, including risk assessment, current harms, governance, and information sharing, finding some progress yet emphasizing inconsistent implementation.

Anthropic, OpenAI, and Google DeepMind received the highest overall scores, though each faced specific criticisms. Anthropic was criticized for discontinuing human uplift trials, while Google DeepMind improved its safety framework but still relies on evaluators compensated by the company. OpenAI, for its part, was faulted for unclear safety thresholds and for lobbying against state-level AI safety regulations. An OpenAI spokesperson said that safety is central to the company's operations, citing investments in frontier safety research and rigorous model testing.

Meanwhile, the remaining firms demonstrated mixed results. xAI introduced its first structured safety framework, though reviewers noted a lack of clear mitigation triggers. Z.ai allowed external evaluations but has yet to disclose its complete governance structure. Meta launched a frontier safety framework but faced calls for clearer methodologies, while DeepSeek has not documented basic safety measures. Alibaba Cloud, though contributing to national watermarking standards, requires stronger performance on fairness and safety metrics.

FLI President Max Tegmark expressed concern that AI is currently “less regulated than sandwiches” in the United States, citing ongoing lobbying against mandatory safety standards. He pointed to rising public apprehension about superintelligence, evidenced by an FLI-organized petition urging companies to slow development that gathered signatures from thousands of influential figures, including politicians and scientists. Tegmark warned that the unregulated advancement of these systems could bring economic and political instability, workforce displacement, and growing dependence on government support across ideological lines.

Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.