
Faster R-CNN by He Kaiming and Team Wins NeurIPS 2025 Test of Time Award

NeurIPS 2025 honors He Kaiming’s “Faster R-CNN” with the Test of Time Award, showcasing its enduring impact on AI research and methodologies.

NeurIPS 2025 has announced its best papers, highlighting significant contributions in AI, particularly from Chinese researchers. The prestigious conference, now in its 39th year, is being held in two locations: the San Diego Convention Center from December 2 to 7 and Mexico City from November 30 to December 5. The organizing committee revealed the four best papers of the year and announced that “Faster R-CNN,” co-authored by Ren Shaoqing, He Kaiming, Ross Girshick, and Sun Jian, received the “Test of Time Award” for its lasting impact on the field.

This year’s winners represent a diverse range of topics, including diffusion models, self-supervised reinforcement learning, attention mechanisms, and the reasoning abilities of large language models (LLMs). Alongside the four best papers, three runner-up papers were recognized for their notable contributions.

The first of the winning papers, “Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond),” was authored by Liwei Jiang and colleagues from institutions including the University of Washington and Carnegie Mellon University. The research addresses the limited ability of large language models to generate diverse, human-like creative content. The authors introduced Infinity-Chat, a large-scale dataset of 26,000 open-ended user queries that each admit multiple reasonable answers, intended to shed light on the long-term risks posed by homogeneous LLM outputs, which the authors term the “Artificial Hivemind” effect.

Another notable work, “Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free,” authored by Zihan Qiu and collaborators, investigates the effects of gating mechanisms in attention layers. Exploring various configurations in a 15B mixture-of-experts model and a 1.7B dense model, the researchers show that introducing a head-specific sigmoid gate after scaled dot-product attention improves both model performance and training stability, helping to mitigate “activation explosion” and enhancing long-context performance.
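For readers who want a concrete picture of the mechanism, below is a minimal sketch of what a head-specific sigmoid gate applied after scaled dot-product attention could look like in PyTorch. It is not the authors’ code: the module name, parameter shapes, and the choice to condition the gate on the layer input are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of sigmoid gating applied
# after scaled dot-product attention. Names, shapes, and the decision to
# compute the gate from the layer input are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        # One gate value per channel, grouped by head.
        self.gate = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, heads, time, d_head).
        q, k, v = (
            z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
            for z in (q, k, v)
        )
        # Standard scaled dot-product attention.
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(b, t, d)
        # Head-specific sigmoid gate applied to the attention output before
        # the output projection: it adds non-linearity and can sparsely
        # suppress individual heads.
        g = torch.sigmoid(self.gate(x))
        return self.out(attn * g)


if __name__ == "__main__":
    layer = GatedMultiHeadAttention(d_model=64, n_heads=4)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```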

The third awarded paper, “1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities,” by Kevin Wang and a team from Princeton University and the Warsaw University of Technology, explores the potential of much deeper networks in self-supervised reinforcement learning. The study finds that scaling network depth up to 1024 layers yields significant performance gains, particularly in unsupervised goal-conditioned settings, indicating that deeper architectures can unlock qualitatively new goal-reaching capabilities for RL agents.

Lastly, the paper “Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training,” authored by Tony Bonnaire and colleagues, delves into the training dynamics of diffusion models. The authors examine the underlying regularization effects that prevent memorization, contributing to the understanding of how these models generalize in practice.

The mix of awarded papers emphasizes the growing influence of Chinese researchers in the AI domain and highlights the innovative strides being made in machine learning methodologies. As the NeurIPS 2025 conference unfolds, attendees can expect rich discussions centered around these groundbreaking findings, paving the way for future research in artificial intelligence.

Written by Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

