ASU’s Yang Reveals Digital Fingerprints to Identify AI-Generated Media with Precision

ASU’s Yezhou Yang develops digital fingerprints for AI-generated media, addressing $200M in deepfake losses and enhancing content authenticity.

As advancements in artificial intelligence (AI) continue to blur the lines between reality and fabrication, experts are sounding alarms about the implications of this technology. A study published last year in *Communications of the ACM* found that people can distinguish AI-generated images from real photos only 51% of the time, which is roughly chance performance. This growing difficulty in discerning authentic content has fueled fraud, from fraudulent returns in online retail to deepfake-related financial losses that exceeded $200 million in a single three-month span of 2025.

At the forefront of addressing these issues is Yezhou “YZ” Yang, a researcher at Arizona State University (ASU). Yang is leading initiatives to develop technical standards aimed at making AI-generated media identifiable, an essential step as the technology evolves. His work focuses on the concept of embedding detectable signals—akin to digital fingerprints—into the media created by generative AI systems. “It’s like a wireless protocol,” Yang explained. “If everyone agrees to the protocol, then every model generating images would embed something like a watermark that detectors can read later.”
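Yang's wireless-protocol analogy can be made concrete with a toy sketch. The example below is purely illustrative and is not Yang's actual scheme: a generator that follows the agreed protocol writes a shared bit signature into the least significant bits of its output pixels, and any detector that knows the protocol can later read the signature back.

```python
# Toy illustration of a shared watermarking protocol (illustrative only;
# not Yang's actual method). The signature and pixel values are invented.

SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # bit pattern all parties agree on

def embed_watermark(pixels, signature=SIGNATURE):
    """Write the signature into the least significant bits of the first pixels."""
    marked = list(pixels)
    for i, bit in enumerate(signature):
        marked[i] = (marked[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return marked

def detect_watermark(pixels, signature=SIGNATURE):
    """A detector that knows the protocol checks for the signature."""
    return [p & 1 for p in pixels[:len(signature)]] == signature

image = [200, 13, 77, 90, 151, 34, 66, 128, 10]  # stand-in for real pixel data
print(detect_watermark(embed_watermark(image)))  # True
print(detect_watermark(image))                   # False for unmarked content
```

Real watermarking schemes embed their signal redundantly and robustly, for example in frequency-domain statistics, so it survives compression and cropping; LSB embedding is used here only because it is easy to read.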

Yang’s research dates back to 2020 and is based on identifying subtle statistical patterns left behind by generative models—patterns that are not visible to humans but can be detected by machines. However, as AI models grow more sophisticated, these detectable traces are becoming increasingly elusive, prompting Yang to consider broader solutions beyond mere detection.

His latest research delves into a concept known as machine unlearning, which aims to teach AI systems to selectively forget problematic data or harmful concepts. Traditional retraining of AI models can take months and be financially burdensome, but machine unlearning offers a more efficient alternative by specifically targeting unwanted information. “Whatever data is learned—the good and the bad—it sticks. Unlearning gives us a way to go back and fix that,” Yang noted.
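The contrast Yang draws between months-long retraining and targeted unlearning can be sketched with a deliberately simple stand-in model (all names below are invented for illustration): rather than rebuilding the model from scratch, unlearning removes only the parameters tied to the unwanted concept and leaves the rest of what was learned intact.

```python
# Minimal illustration of targeted unlearning (not Yang's actual technique):
# a toy keyword-weight "model" forgets one concept by deleting only the
# weights tied to it, instead of retraining on the full dataset.

weights = {"cat": 0.9, "dog": 0.8, "unwanted_concept": 0.95}

def score(tokens, weights):
    """How strongly the model responds to a list of tokens."""
    return sum(weights.get(t, 0.0) for t in tokens)

def unlearn_concept(weights, concept):
    """Remove only the parameters associated with the unwanted concept."""
    return {k: v for k, v in weights.items() if k != concept}

cleaned = unlearn_concept(weights, "unwanted_concept")
print(score(["unwanted_concept"], cleaned))  # 0.0: the concept no longer activates
print(score(["cat"], cleaned))               # 0.9: unrelated knowledge is intact
```

In a real neural model the "parameters tied to a concept" are distributed rather than neatly keyed, which is exactly what makes unlearning a hard research problem; the sketch only conveys the goal of surgical removal without full retraining.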

Yang’s team is among the early groups to apply unlearning techniques to text-to-image models, an area that has received less attention than large language models. One notable project, Robust Adversarial Concept Erasure (RACE), removes sensitive concepts, such as explicit imagery, from generative models while making them difficult for users to recover through adversarial prompts. The method strengthens earlier approaches by anticipating and blocking potential recovery strategies.

In another initiative called EraseFlow, Yang’s team treats unlearning as an evolving process, reshaping how an AI model generates images over time. Instead of merely blocking certain outputs, this system redirects the model away from unwanted concepts while maintaining overall image quality. These innovative approaches suggest a future where AI systems are not only transparent but also adaptable post-deployment, a capability that holds significant implications for privacy and regulation.

Yang is also committed to ensuring that these technical advancements reach beyond academic circles. His team collaborates with organizations like the Coalition for Content Provenance and Authenticity and the World Privacy Forum. Their goal is to foster international discussions around AI transparency and data rights, aiming to establish common standards for the behavior of AI systems throughout their entire lifecycle.

“The technology starts with computer scientists. But the impact on society requires a much bigger conversation,” Yang remarked, underscoring the need for a comprehensive dialogue on these essential issues.

As AI-generated media becomes more realistic and widely disseminated, the challenge extends beyond identifying fakes. It’s crucial to maintain trust in an environment where content can be fabricated or altered. Yang’s vision includes developing systems that can not only identify synthetic media but also adapt and self-correct over time. “At some point, society will have to solve this,” he asserted. “We can’t have a world where anyone can generate convincing fake evidence.”

Ross Maciejewski, the director of ASU’s School of Computing and Augmented Intelligence, echoed Yang’s sentiments, emphasizing that addressing the risks associated with AI is not merely a technical challenge, but a societal one. “Our school is uniquely positioned to bring together the research, policy, and real-world partnerships needed to tackle these issues,” he stated, highlighting the importance of initiatives like Yang’s in steering critical discussions while developing scalable solutions.

Written by the AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.