
UMass Researchers Unveil Diffence: A New Defense Against Membership Inference Attacks

UMass researchers introduce Diffence, reducing membership inference attack accuracy by 15.8% while adding just 57 ms to inference time, enhancing AI privacy.

Researchers from the University of Massachusetts Amherst have introduced a novel defense against membership inference attacks (MIAs), a persistent privacy threat to deep learning models in which an adversary determines whether a specific data point was included in a model’s training set, potentially exposing sensitive information. The framework, named Diffence, was presented during Session 12C at the Network and Distributed System Security Symposium (NDSS) 2025.
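To illustrate the threat model, a minimal MIA variant exploits the tendency of models to be more confident on training members than on unseen data. The function and threshold below are an illustrative sketch of that intuition, not one of the attacks evaluated in the paper:

```python
import numpy as np

def confidence_mia(model_probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess 'member' when the model's top-class confidence exceeds a threshold.

    Overfit models are typically more confident on training (member) samples;
    this confidence gap is the signal many MIAs exploit. The 0.9 threshold is
    illustrative only.
    """
    top_confidence = model_probs.max(axis=1)
    return top_confidence > threshold

# Toy softmax outputs: the first row looks like a confident (member-like)
# prediction, the second looks uncertain (non-member-like).
probs = np.array([
    [0.98, 0.01, 0.01],
    [0.40, 0.35, 0.25],
])
print(confidence_mia(probs))  # [ True False]
```

In practice, stronger attacks train a separate classifier on the full prediction vector rather than thresholding a single confidence value, but the underlying member/non-member distinguishability is the same.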

Authored by Yuefeng Peng, Ali Naseh, and Amir Houmansadr, the research emphasizes the importance of addressing privacy without compromising model utility. Traditional defenses, which modify either the training process or the model’s output post-inference, struggle to balance privacy and performance. In contrast, Diffence operates pre-inference, modifying input samples before they reach the model without altering the model itself.

This innovative approach focuses on diminishing the distinctions between member and non-member inputs, effectively mitigating the vulnerabilities exploited by MIAs. By re-generating input samples, Diffence enhances membership privacy while preserving the accuracy and confidence of model predictions. The researchers conducted extensive experiments, demonstrating that Diffence decreases MIA attack accuracy against an undefended model by an average of 15.8% and reduces attack area under the curve (AUC) by 14.0% across three datasets.
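The pre-inference placement can be sketched as a wrapper that transforms each input before the unmodified model sees it. Diffence performs this regeneration with a diffusion model; the noise-and-average `regenerate` stand-in below is a hypothetical simplification used only to show where the defense sits in the pipeline, not the paper's actual method:

```python
import numpy as np

def regenerate(x: np.ndarray, noise_std: float = 0.1, n_samples: int = 4,
               rng=None) -> np.ndarray:
    """Stand-in for diffusion-based regeneration: perturb the input with
    Gaussian noise several times and average the variants. A real
    Diffence-style defense would instead noise the sample and run a
    pretrained diffusion model's reverse (denoising) process.
    """
    rng = rng or np.random.default_rng(0)
    variants = [x + rng.normal(0.0, noise_std, size=x.shape)
                for _ in range(n_samples)]
    return np.mean(variants, axis=0)

def defended_predict(model_fn, x: np.ndarray) -> np.ndarray:
    # The model itself is untouched; only the input is transformed
    # before inference, which is the key property of the defense.
    return model_fn(regenerate(x))

# Toy linear softmax "model" for demonstration.
def toy_model(x: np.ndarray) -> np.ndarray:
    logits = x @ np.array([[1.0, -1.0], [0.5, 0.5]])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[0.2, 0.8]])
print(defended_predict(toy_model, x))  # a valid probability vector
```

Because the defense wraps only the input, it can be layered on top of any trained model, which is what makes the combination with training-time defenses like SELENA possible.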

One of the unique advantages of Diffence lies in its compatibility with existing defense mechanisms. The researchers showed that when integrated with the state-of-the-art SELENA defense, Diffence further reduces MIA attack accuracy by 9.3% and AUC by 10.0%. This synergistic effect showcases Diffence’s potential as a robust plug-and-play solution for enhancing data privacy in machine learning applications.

Importantly, Diffence incurs a minimal computational overhead, adding only an average of 57 milliseconds to the inference time per sample. This efficiency makes it a viable option for real-world applications, where both speed and privacy are critical concerns.

The NDSS Symposium serves as a platform for the exchange of ideas among researchers and practitioners focused on the practical aspects of network and distributed system security. By fostering collaboration within the community, the symposium aims to advance the deployment of cutting-edge security technologies.

The introduction of Diffence could represent a significant step forward in addressing the pressing issue of privacy in AI. As the capabilities of deep learning models continue to grow, so too does the need for effective protections against data breaches and unauthorized access. The ongoing research in this domain will likely prompt further innovations and discussions on how to safeguard sensitive information while leveraging the full potential of artificial intelligence.

Written By AiPressa Staff

