
Government Launches Investigation into Sexual Deepfakes Under New AI Law, Says Kihara

Japan’s government will investigate sexual deepfakes under a new AI law, aiming to protect citizens’ rights as generative AI misuse surges.

The Japanese government is preparing to tackle the growing problem of sexual deepfakes: fake obscene images or videos of real individuals created with generative AI. Chief Cabinet Secretary Minoru Kihara announced at a press conference on January 7, 2026, that the government will assess the situation under the AI law enacted in May 2025, which mandates the investigation of cases in which AI technologies infringe on citizens’ rights and provides for guidance to the businesses involved.

Kihara emphasized the need for a coordinated response from the relevant ministries and agencies. “The relevant ministries and agencies must properly address this issue, coordinating their efforts and capitalizing on past investigative experience,” he stated. The government’s proactive stance reflects growing awareness of the risks posed by AI misuse, particularly the creation of non-consensual and harmful content.

The use of generative AI to create deepfakes has surged in recent years, raising serious concerns regarding privacy, consent, and potential harm to individuals. This technology allows users to produce realistic images or videos by manipulating existing media, often leading to the exploitation of victims, particularly women. Kihara’s comments come amid increasing scrutiny and calls for regulation of AI technologies globally, as various countries grapple with similar issues.

As part of this assessment, the Japanese government will analyze both domestic and international trends related to sexual deepfakes. The initiative aims to develop a comprehensive understanding of the problem, enabling the government to formulate effective policies and regulations to safeguard citizens’ rights. This approach aligns with broader global efforts to address the societal impacts of AI, particularly as the technology continues to evolve rapidly.

The implications of sexual deepfakes extend beyond individual privacy violations; they can also contribute to a larger societal discourse on consent, representation, and the ethical use of AI. The Japanese government’s recognition of these issues signals a commitment to addressing the intersection of technology and human rights, as well as the need for clearer legal frameworks in the digital age.

Looking ahead, the government’s actions will likely influence the ongoing debate over AI regulation and its role in society. Kihara’s announcement indicates a pivotal moment, as Japan joins other nations in confronting the challenges posed by emerging technologies. The effectiveness of the government’s strategy will be closely monitored, particularly regarding its ability to protect individuals from the harms associated with sexual deepfakes while fostering innovation in the AI sector.


