Sara Hooker, an AI researcher known for advocating cheaper systems that require less computing power, has launched her own startup, Adaption Labs, following her tenure as vice president of research at Cohere and a role at Google DeepMind. The San Francisco-based company has secured $50 million in seed funding, led by Emergence Capital Partners, with additional investments from Mozilla Ventures, Fifty Years, Threshold Ventures, Alpha Intelligence Capital, E14 Fund, and Neo.
Hooker, along with cofounder Sudip Roy, formerly director of inference computing at Cohere, aims to build AI systems that are both cheaper to operate and more adaptable than today’s leading models: systems that learn continuously without expensive retraining or extensive prompt engineering, a challenge Hooker identifies as central to the field.
This approach marks a significant departure from the prevailing industry belief that enhancing AI capabilities requires larger models trained on more data. While many tech giants continue to invest billions in scaling their models, Hooker argues that this strategy is yielding diminishing returns. “Most labs won’t quadruple the size of their model each year, mainly because we’re seeing saturation in the architecture,” she stated.
Hooker described the current moment in AI as a “reckoning point,” where innovations need to focus on creating systems that can adapt more readily and affordably to specific tasks. Adaption Labs is categorized as a “neolab,” a term used for new frontier AI labs that follow in the footsteps of established companies like OpenAI and Anthropic. Other notable figures from major AI companies are also launching similar ventures; for instance, Jerry Tworek from OpenAI has founded Core Automation, while David Silver from Google DeepMind has started Ineffable Intelligence.
The startup’s research is structured around three “pillars,” according to Hooker: adaptive data, where AI systems generate and manipulate necessary data dynamically; adaptive intelligence, which adjusts compute resources based on task complexity; and adaptive interfaces, enabling models to learn from user interactions. This framework reflects her longstanding critique of the “scale is all you need” mentality prevalent among AI researchers.
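Adaption Labs has not published technical details behind these pillars, but the “adaptive intelligence” idea maps onto a familiar pattern: route easy requests to cheap compute and reserve heavy compute for hard tasks. Below is a minimal, hypothetical sketch of that pattern; the difficulty heuristic and model stubs are illustrative stand-ins, not the company’s design.

```python
# Hypothetical sketch of the "adaptive intelligence" pillar: match compute
# to task complexity. The heuristic and model stubs are stand-ins, not
# Adaption Labs' actual system.

def small_cheap_model(prompt: str) -> str:
    return f"[small model] quick answer to: {prompt[:40]!r}"

def large_careful_model(prompt: str) -> str:
    return f"[large model] deliberate answer to: {prompt[:40]!r}"

def estimate_difficulty(prompt: str) -> float:
    # Toy heuristic: longer prompts with more questions score as harder.
    return min(1.0, len(prompt) / 500 + 0.2 * prompt.count("?"))

def answer(prompt: str) -> str:
    # Spend extra compute only when the task looks hard.
    if estimate_difficulty(prompt) < 0.5:
        return small_cheap_model(prompt)
    return large_careful_model(prompt)

print(answer("What time is it?"))                 # cheap path
print(answer("Prove convergence of SGD? " * 30))  # expensive path
```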
In her influential 2020 paper, “The Hardware Lottery,” Hooker argued that many AI advancements depend more on compatibility with existing hardware than on the ideas themselves. More recently, she published “On the Slow Death of Scaling,” contending that smaller models employing superior training techniques can outperform larger counterparts. Her work at Cohere, particularly through the Aya project, showcased how compact models could deliver advanced AI capabilities across multiple languages, emphasizing innovative data curation and training methods as alternatives to sheer scale.
One avenue Adaption Labs is exploring is “gradient-free learning,” a departure from the gradient-descent training that conventional neural networks rely on. That conventional method demands immense computational power and time, adjusting billions of internal weights in search of optimal outputs. Once trained, those weights remain fixed, so adapting a model to specific tasks often requires expensive fine-tuning or prompt engineering. Hooker calls the latter “prompt acrobatics,” noting it can quickly become obsolete with each new model iteration; her aim is to eliminate the need for such complex prompt engineering.
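For context on that cost, here is a minimal sketch of a single gradient-descent update on a toy squared-error objective; conventional training repeats steps like this at vast scale across billions of weights, and once training stops, the weights are frozen.

```python
import numpy as np

# One gradient-descent step on a tiny stand-in "model" (a single weight
# matrix) with a squared-error loss. Real training repeats this loop at
# enormous scale, which is the cost gradient-free approaches aim to avoid.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # model weights
x = rng.normal(size=4)               # one training input
y_true = rng.normal(size=4)          # its target output
lr = 0.01                            # learning rate

y_pred = W @ x                       # forward pass
grad = np.outer(y_pred - y_true, x)  # gradient of 0.5 * ||Wx - y||^2 w.r.t. W
W -= lr * grad                       # the update: weights change during training

# After training, W stays fixed; adapting the model later means either more
# updates like this (fine-tuning) or working around the frozen weights.
```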
Gradient-free learning proposes to modify the model’s behavior at the moment of response, known as “inference time,” without altering the core weights of the model. This allows the system to adapt dynamically based on the task at hand. “How do you update a model without touching the weights?” Hooker asked, highlighting the potential for innovative architectural advancements that maximize efficiency in computation.
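Adaption Labs has not said how it implements this, but as a loose, hypothetical illustration of the general idea: a task-specific adjustment can be applied to the model’s output logits at inference time, shifting next-token probabilities while the weights that produced them stay frozen.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def adapted_probs(logits: np.ndarray, task_bias: np.ndarray) -> np.ndarray:
    # The frozen model produced `logits`; adaptation happens purely at
    # inference time by biasing them. No weight is ever updated.
    return softmax(logits + task_bias)

frozen_logits = np.array([2.0, 1.0, 0.5, -1.0])  # from the unchanged model
task_bias = np.array([0.0, 1.5, 0.0, 0.0])       # steers toward token 1

print(adapted_probs(frozen_logits, np.zeros(4)))  # default behavior
print(adapted_probs(frozen_logits, task_bias))    # adapted behavior
```

Because nothing inside the frozen model changes, an adjustment like this can be swapped per task or per user at a cost that is negligible next to retraining.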
Among the techniques Adaption Labs is investigating are “on-the-fly merging,” which allows models to select from a repertoire of small, separately trained adapters to shape the core model’s outputs, and “dynamic decoding,” which alters output probabilities based on the specific task without changing the model’s internal weights. Hooker envisions a future where AI is not just a static model but evolves in real-time based on user interaction.
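As a hypothetical sketch of what “on-the-fly merging” could look like mechanically, assuming LoRA-style low-rank adapters and a toy keyword router (both stand-ins, since Adaption Labs has not published details):

```python
import numpy as np

rng = np.random.default_rng(1)
W_base = rng.normal(size=(8, 8))  # frozen base-layer weights, never modified

# A repertoire of small, separately trained low-rank adapters (LoRA-style):
# each pair (A, B) defines a cheap correction A @ B to the base layer.
adapters = {
    name: (rng.normal(size=(8, 2)) * 0.1,   # A: projects rank-2 signal back up
           rng.normal(size=(2, 8)) * 0.1)   # B: projects input down to rank 2
    for name in ("legal", "code", "medical")
}

def route(task: str) -> str:
    # Stand-in router; a real system might score adapters against the task.
    return "code" if "function" in task else "legal"

def layer_forward(x: np.ndarray, task: str) -> np.ndarray:
    A, B = adapters[route(task)]
    # Base computation plus the selected adapter's low-rank correction,
    # merged at inference time; W_base itself stays untouched.
    return W_base @ x + A @ (B @ x)

x = rng.normal(size=8)
print(layer_forward(x, "draft a function to parse logs"))  # uses "code" adapter
```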
By shifting focus to these methods, Hooker believes the economic landscape of AI could be transformed. “The most costly compute is pretraining compute,” she explained, asserting that inference-based computation offers considerably better returns on each unit of computing power. Roy, serving as Adaption’s CTO, brings expertise in optimizing AI systems for real-time performance, which is crucial for the startup’s vision.
With the recent funding, Adaption Labs plans to hire additional AI researchers and engineers, as well as designers to explore user interfaces that go beyond the standard chat format. As the AI landscape continues to evolve, Hooker’s commitment to building more adaptable and cost-effective systems positions Adaption Labs as a potential disruptor in the industry.