The rise of generative artificial intelligence poses significant risks to human rights worldwide, warns Harold Hongju Koh, a Yale Law School professor and former U.S. State Department legal adviser. Speaking at Harvard Law School on November 13 during a session titled “Human Rights Under Stress in the Age of AI,” Koh highlighted the ethical dilemmas AI presents, particularly amid a global rise of authoritarian governments that could exploit the technology for misinformation, surveillance, and targeted attacks on dissenters.
Koh, who has held key positions in U.S. diplomacy and currently represents Ukraine in international law proceedings, stressed that while international law could offer frameworks for protecting rights affected by AI, consensus among nations is crucial. This consensus is increasingly elusive as many states challenge fundamental principles of human rights, including truth and accountability. “The concept of human rights is being attacked by a whole group of authoritarians,” said Koh, referencing the void left by the United States’ waning commitment to these principles.
Introduced by his former classmate Gerald L. Neuman, the J. Sinclair Armstrong Professor of International, Foreign, and Comparative Law at Harvard, Koh underscored the urgency of confronting the ethical implications of AI development. He noted that AI raises serious concerns about bias and about the technology’s heavy consumption of energy and water. “AI has the potential to serve as a ‘gigantic megaphone’ for misinformation campaigns,” he warned, referring to technologies like deepfakes that can distort reality.
Koh’s assertions come amid heightened fears that new technologies could exacerbate existing inequalities and human rights abuses. He emphasized the need for international legal frameworks to hold states accountable for abuses enabled by AI. However, achieving agreement on fundamental principles remains a challenge as authoritarian regimes exploit the gaps left by more liberal democracies.
“The concept of human rights is being attacked by a whole group of authoritarians … that is surrounding and filling the gaps created by America’s nonparticipation.”
In response to the complex intersection of AI and human rights, Koh has been involved in the Oxford Process on International Law Protections in Cyberspace, a collaborative effort that emerged during the COVID-19 pandemic. The group has produced guidelines clarifying how international law applies to cyber operations affecting sectors such as healthcare and elections. Koh also called for grassroots movements to push for new norms, stating, “When governments are faced with legal norms firmly declared, they tend to accept them.”
As the discussion turned to military applications of AI, Koh raised the alarm about the prospect of “video-game wars,” in which fighting becomes the least expensive way to resolve international disputes. “War will become the cheapest option,” he noted, cautioning that if diplomacy grows more difficult and more costly, the world may face perpetual conflict driven by technological advances.
“Where we are headed is video-game wars. … War will become the cheapest option. If war is the cheapest option, and diplomacy is more difficult and more expensive, then we will have perpetual war.”
Amid these concerns, Koh advocated for treaties modeled on the Anti-Personnel Mine Ban Treaty, emphasizing the need for accountability in the use of autonomous weapons. He pointed to the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, issued by the Biden administration in 2023, which aims to build international consensus on the responsible military use of AI.
Throughout his career, Koh has maintained that the principles of truth, consistency in addressing human rights violations, and the establishment of preventive measures are vital. He concluded his remarks with a poignant reflection from his time in Kosovo, where new judges opted to be sworn in on foundational human rights documents rather than on religious texts or national symbols. “In this troubled world, this is the only faith we share,” he recounted, underscoring the enduring importance of human rights in the age of AI.