

November 18, 2025


To Solve the Deepfake Problem, People Need the Rights to Their Own Image

When anyone can forge reality, society can’t self-govern. Borrowing Denmark’s approach could help the U.S. restore accountability around deepfakes.

As generative artificial intelligence technology advances, the ability to create deepfakes—deceptively realistic images, videos, and audio—is becoming increasingly sophisticated. These fabrications pose significant risks, not only to individuals but to societal norms and democratic processes. The ability to manipulate perceptions of reality undermines trust, complicating self-governance and critical discourse within the public sphere.

Denmark is pioneering a legislative response aimed at addressing the deepfake dilemma. In June, the Danish government put forth an amendment to its copyright law, proposing that individuals would gain rights over their own faces and voices. This amendment intends to make it unlawful to create deepfakes of individuals without their explicit consent and establishes penalties for violators. This legal framework asserts the principle that every person has ownership over their own image and likeness.

The Impact of Copyright Law on Deepfakes

The Danish approach has a notable advantage: it creates corporate accountability through copyright law. A 2024 study posted on arXiv.org examined the removal of deepfake content from social media platforms. Researchers found that when deepfakes were reported as copyright violations, platforms such as X removed them swiftly. Reports filed as non-consensual nudity did not yield the same results, highlighting the leverage of copyright claims in securing content removal.

This legal recognition of personal likeness matters given the stark realities of deepfake abuse. Victims are often exploited for financial gain or subjected to harassment, and deepfakes have been linked to severe psychological harm, including suicides among victims, particularly teenage boys targeted by scammers. Research indicates that 96 percent of deepfakes are non-consensual, and 99 percent of sexual deepfakes depict women.

The scale of the problem is stark. A survey of more than 16,000 people across ten countries found that 2.2 percent had experienced deepfake pornography. In addition, the Internet Watch Foundation reported a 400 percent increase in AI-generated child sexual abuse imagery from the first half of 2024 to 2025, underscoring the urgency of legislative intervention.

Deepfakes also threaten democratic integrity. A notable instance occurred shortly before the 2024 U.S. presidential election, when Elon Musk shared a deepfake video of Vice President Kamala Harris. Although the video violated the platform’s own guidelines, it remained on the site, illustrating how misinformation can persist and shape public opinion.

Legislative Movements in the U.S. and Abroad

In the face of these challenges, the U.S. has begun making strides. The bipartisan TAKE IT DOWN Act, enacted this year, criminalizes the publication or threat of non-consensual intimate images, including deepfakes. Individual states are also taking action, such as Texas’ law against deceptive AI videos aimed at influencing elections and California’s requirements for platforms to manage misleading AI-generated content.

Despite these developments, the current patchwork of laws remains fragmented. Advocates are urging a federal law protecting individuals’ rights to their own likeness, which would streamline the process for victims to seek removal and compensation. Proposed legislation such as the NO FAKES Act would extend protections to all individuals, not just public figures, while the Protect Elections from Deceptive AI Act seeks to bar deepfakes targeting federal candidates.

Internationally, the E.U. AI Act mandates that synthetic media must be identifiable, and the Digital Services Act requires major platforms to counteract manipulated media. The U.S. should adopt similar measures to ensure accountability and consumer protection.

Addressing the proliferation of explicit deepfake sites is also necessary. San Francisco’s city attorney has successfully shut down several such platforms, while California’s AB 621 aims to restrict services enabling deepfake creation. Companies like Meta are also taking legal action against those facilitating the production of exploitative content.

Ultimately, Denmark’s approach serves as a vital model for safeguarding personal rights against the misuse of emerging technologies. While no legal framework can eradicate the issue entirely, establishing accountability through legislation is essential to prevent societal upheaval and protect individual dignity.

Written by the AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.