Governments and technology companies are facing increasing scrutiny over the rapid development of artificial intelligence (AI), as two new reports issued this week highlight potential risks associated with its unchecked growth. A United Nations assessment and a separate safety index indicate that without appropriate regulatory measures, AI could exacerbate global inequality and pose significant safety threats.
The United Nations Development Programme (UNDP) warned that unmanaged AI could reverse decades of developmental progress. According to the report, the pace of AI adoption is outstripping many countries’ capacities to adapt. Philip Schellekens, UNDP’s chief economist for Asia and the Pacific, emphasized that the “central fault line” lies in capability, suggesting that nations with robust infrastructure and governance will benefit from AI, while others may lag behind.
The risks are particularly pronounced in the Asia-Pacific region: despite housing over half of the world’s population, only 14 percent of its people use AI tools. Roughly 3.7 billion people in the region remain excluded from the digital economy, and a quarter of its population is offline. In South Asia, gender disparities are stark, with women up to 40 percent less likely than men to own a smartphone.
Despite these challenges, the UNDP report argues that AI-driven growth is feasible, projecting that AI could add 2 percentage points to annual GDP growth and boost productivity in sectors such as health and finance. ASEAN economies, for example, could gain nearly $1 trillion over the next decade. Substantial structural hurdles remain, however, including 1.3 billion workers in informal employment, 770 million women outside the labor force, and 200 million people living in extreme poverty.
The report also highlights severe digital and gender divides within Asia-Pacific, noting that women and young people are particularly vulnerable to AI-driven job disruption. Jobs typically held by women are nearly twice as susceptible to automation as those held by men, while employment for youth aged 22 to 25 is already declining in high-exposure roles. Bias in AI models compounds these problems: algorithms trained predominantly on data from urban male borrowers misclassify women entrepreneurs and rural farmers as high-risk, effectively denying them access to credit.
Rural and Indigenous communities, meanwhile, face exclusion because they are largely absent from the datasets used to train AI systems. The persistent digital divide continues to harm health and education outcomes: more than 1.6 billion people cannot afford a healthy diet, and 27 million youth remain illiterate. Many countries depend on imported AI models that fail to account for local languages and cultural contexts, reducing AI’s effectiveness in delivering essential services. And despite growing interest in AI, a shortage of digital skills impedes progress across the region.
In Europe, the UNDP report finds uneven preparedness for AI, with only a handful of countries having established comprehensive regulations, and warns that by 2027 over 40 percent of AI-related data breaches could stem from the misuse of generative AI. Denmark, Germany, and Switzerland have emerged as leaders in AI readiness, while Albania and Bosnia and Herzegovina lag behind. Kanni Wignaraja, U.N. Assistant Secretary-General and UNDP regional director for Asia and the Pacific, said these widening gaps are not inevitable, noting that many countries remain “at the starting line.”
A separate report from the Future of Life Institute (FLI) adds to these concerns, finding that major AI companies are failing to meet their safety commitments. The 2025 Winter AI Safety Index evaluated eight prominent firms: Anthropic, OpenAI, Google DeepMind, xAI, Meta, DeepSeek, Alibaba Cloud, and Z.ai. Evaluators found that none of the eight had developed a testable plan for ensuring human control over advanced AI systems.
Stuart Russell, a computer science professor at the University of California, Berkeley, criticized the firms for claiming they can develop superhuman AI without demonstrating how they would maintain control over such systems, with the risk of losing control estimated as high as “one in three.” The index scored companies across six categories, including risk assessment, current harms, governance, and information sharing, and found some progress but inconsistent implementation.
Anthropic, OpenAI, and Google DeepMind received the highest overall scores, though each drew specific criticism. Anthropic was faulted for discontinuing human uplift trials; Google DeepMind improved its safety framework but still relies on evaluators compensated by the company; and OpenAI was cited for unclear safety thresholds and for lobbying against state-level AI safety regulations. An OpenAI spokesperson said safety is central to the company’s work, pointing to investments in frontier safety research and rigorous model testing.
The remaining firms showed mixed results. xAI introduced its first structured safety framework, though reviewers noted a lack of clear mitigation triggers. Z.ai allowed external evaluations but has yet to disclose its complete governance structure. Meta launched a frontier safety framework but faced calls for clearer methodologies, while DeepSeek has not documented basic safety measures. Alibaba Cloud contributed to national watermarking standards but was judged to need stronger performance on fairness and safety metrics.
FLI President Max Tegmark said AI is currently “less regulated than sandwiches” in the United States, citing ongoing lobbying against mandatory safety standards. He pointed to rising public apprehension about superintelligence, evidenced by an FLI-organized petition urging companies to slow development that gathered signatures from thousands of influential figures, including politicians and scientists. Tegmark warned that the unregulated advancement of these systems could bring economic and political instability, including workforce displacement and a growing dependence on government systems, concerns that cut across ideological lines.