

Adaption Labs Secures $50M Seed Funding to Develop Adaptive AI Models

Sara Hooker’s Adaption Labs secures $50M seed funding to revolutionize AI with adaptive, cost-effective models that reduce reliance on large-scale training.

Sara Hooker, an AI researcher known for advocating cheaper systems that require less computing power, has launched her own startup, Adaption Labs, following her tenure as vice president of research at Cohere and a role at Google DeepMind. The San Francisco-based company has secured $50 million in seed funding, led by Emergence Capital Partners, with additional investments from Mozilla Ventures, Fifty Years, Threshold Ventures, Alpha Intelligence Capital, E14 Fund, and Neo.

Hooker, along with cofounder Sudip Roy, a former director of inference computing at Cohere, aims to develop AI systems that are not only cheaper to operate but also more adaptable than today’s leading models. Their goal is to design systems that learn continuously, without expensive retraining or extensive prompt engineering, a challenge Hooker identifies as central to the field.

This approach marks a significant departure from the prevailing industry belief that enhancing AI capabilities requires larger models trained on more data. While many tech giants continue to invest billions in scaling their models, Hooker argues that this strategy is yielding diminishing returns. “Most labs won’t quadruple the size of their model each year, mainly because we’re seeing saturation in the architecture,” she stated.

Hooker described the current moment in AI as a “reckoning point,” where innovations need to focus on creating systems that can adapt more readily and affordably to specific tasks. Adaption Labs is categorized as a “neolab,” a term used for new frontier AI labs that follow in the footsteps of established companies like OpenAI and Anthropic. Other notable figures from major AI companies are also launching similar ventures; for instance, Jerry Tworek from OpenAI has founded Core Automation, while David Silver from Google DeepMind has started Ineffable Intelligence.

The startup’s research is structured around three “pillars,” according to Hooker: adaptive data, where AI systems generate and manipulate necessary data dynamically; adaptive intelligence, which adjusts compute resources based on task complexity; and adaptive interfaces, enabling models to learn from user interactions. This framework reflects her longstanding critique of the “scale is all you need” mentality prevalent among AI researchers.

In her influential 2020 paper, “The Hardware Lottery,” Hooker argued that many AI advancements depend more on compatibility with existing hardware than on the ideas themselves. More recently, she published “On the Slow Death of Scaling,” contending that smaller models employing superior training techniques can outperform larger counterparts. Her work at Cohere, particularly through the Aya project, showcased how compact models could deliver advanced AI capabilities across multiple languages, emphasizing innovative data curation and training methods as alternatives to sheer scale.

One of Adaption Labs’ innovative avenues is exploring “gradient-free learning,” which diverges from traditional neural network training reliant on gradient descent. This conventional method requires immense computational power and time, as it adjusts billions of internal weights in search of optimal outputs. Once trained, these weights remain fixed, often necessitating expensive fine-tuning or prompt engineering to adapt the model to specific tasks. Hooker refers to this as “prompt acrobatics,” which can quickly become obsolete with new model iterations. Her aim is to eliminate the need for such complex prompt engineering.
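
For readers less familiar with the mechanics, the conventional approach she is pushing against looks roughly like the following sketch: a toy linear model trained by gradient descent in Python. It is purely illustrative and not Adaption Labs’ code; the variable names are my own.

```python
# Minimal sketch of conventional gradient-descent training (illustrative only).
# A toy linear model's weights are nudged toward lower loss, then frozen.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))              # toy training inputs
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=256)

w = np.zeros(8)                            # the model's learnable weights
lr = 0.05
for _ in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                         # the gradient-descent update
# Once training stops, w is fixed: adapting the model to a new task normally
# means repeating this loop (fine-tuning) or engineering prompts around it.
```

At frontier scale, that same loop runs over billions of weights and enormous datasets, which is where the cost Hooker criticizes comes from.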

Gradient-free learning proposes to modify the model’s behavior at the moment of response, known as “inference time,” without altering its core weights, allowing the system to adapt dynamically to the task at hand. “How do you update a model without touching the weights?” Hooker asked, pointing to architectural advances that could use computation far more efficiently.

Among the techniques Adaption Labs is investigating are “on-the-fly merging,” which allows models to select from a repertoire of small, separately trained adapters to shape the core model’s outputs, and “dynamic decoding,” which alters output probabilities based on the specific task without changing the model’s internal weights. Hooker envisions a future where AI is not just a static model but evolves in real-time based on user interaction.
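
To make those two ideas concrete, here is a minimal sketch in Python; the names (`decode_step`, `adapters`, the adapter shapes) are my own assumptions, not Adaption Labs’ actual design. The frozen base projection is never updated: a small per-task adapter is mixed in at decode time, and an optional bias reshapes the output probabilities.

```python
# Illustrative sketch of inference-time adaptation: the base weights stay frozen.
import numpy as np

VOCAB, HIDDEN = 1000, 64
rng = np.random.default_rng(0)
base_out = rng.normal(size=(HIDDEN, VOCAB))        # frozen output projection

# Hypothetical small adapters, each trained separately for one task.
adapters = {
    "summarize": rng.normal(scale=0.01, size=(HIDDEN, VOCAB)),
    "translate": rng.normal(scale=0.01, size=(HIDDEN, VOCAB)),
}

def decode_step(hidden_state, task, task_bias=None):
    """One decoding step with on-the-fly merging and a decode-time bias."""
    # "On-the-fly merging": add a task adapter to the frozen projection.
    merged = base_out + adapters.get(task, 0.0)
    logits = hidden_state @ merged
    # "Dynamic decoding": shift output probabilities for this task only.
    if task_bias is not None:
        logits = logits + task_bias
    probs = np.exp(logits - logits.max())          # softmax over the vocabulary
    return probs / probs.sum()

h = rng.normal(size=HIDDEN)                        # stand-in hidden state
probs = decode_step(h, task="summarize")
print(int(probs.argmax()))                         # id of the most likely next token
```

Nothing in the sketch retrains the base model; the adaptation lives entirely in the small adapter and the decode-time bias, which is the property Hooker is after.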

By shifting focus to these methods, Hooker believes the economic landscape of AI could be transformed. “The most costly compute is pretraining compute,” she explained, asserting that inference-based computation offers considerably better returns on each unit of computing power. Roy, serving as Adaption’s CTO, brings expertise in optimizing AI systems for real-time performance, which is crucial for the startup’s vision.

With the recent funding, Adaption Labs plans to hire additional AI researchers and engineers, as well as designers to explore innovative user interfaces beyond the standard chat-based models. As the AI landscape continues to evolve, Hooker’s commitment to building more adaptable and cost-effective systems positions Adaption Labs as a potential disruptor in the industry.


