Thinking Machines Lab Launches Tinker, Secures $2B Funding at $12B Valuation

Thinking Machines Lab secures $2B funding at a $12B valuation and launches Tinker, a groundbreaking tool for efficient AI model customization.

Thinking Machines Lab, an AI startup co-founded by former OpenAI CTO Mira Murati, has quickly emerged as a notable player in the competitive landscape of artificial intelligence. Established in February 2025, the company raised $2 billion within its first five months, reaching a valuation of $12 billion. The rapid fundraising underscores the significant interest in its mission to change how AI models are fine-tuned and customized for specific tasks.

Unlike AI labs focused on building ever-larger models that demand immense data and computing resources, Thinking Machines Lab is dedicated to creating smarter, more efficient models. Its flagship product, Tinker, lets developers tailor AI models to various applications without the costs and complexity typically associated with distributed computing.

Murati, who briefly served as interim CEO of OpenAI following the contentious firing and reinstatement of Sam Altman, helped assemble a team of over 30 researchers, drawing talent from Meta, Mistral, and other leading AI organizations. The company has gained attention not only for its innovative technology but also for its unconventional governance structure; Murati retains voting powers that surpass those of the board of directors, granting her significant influence over the company’s direction.

In an industry where speculation runs rampant, Thinking Machines Lab has maintained an air of mystery while gradually revealing its ambitious goals. The company aims to develop multimodal AI systems capable of collaboration across various fields, emphasizing the need for more accessible and customizable AI solutions. Its commitment to transparency is reflected in its research publications, which cover topics ranging from GPU optimization to fine-tuning methods such as Low-Rank Adaptation (LoRA), an efficient alternative to full-parameter retraining.

LoRA freezes the pretrained model's weights and trains small low-rank adapter matrices added to selected layers, so only a small fraction of parameters is updated and fine-tuning avoids the computational overhead of full retraining. The method is incorporated into Tinker, which is already being used by researchers at institutions such as Princeton, Stanford, and UC Berkeley.
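The article does not show any code, but a minimal sketch of the LoRA idea using the open-source Hugging Face PEFT library (an assumption for illustration; this is not Tinker's API, and the model name and hyperparameters are placeholders) conveys how little of the model actually gets trained:

```python
# Illustrative LoRA setup with Hugging Face Transformers + PEFT.
# Not Tinker's API; model name and hyperparameters are assumed placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # assumed example of an open-weight model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: keep the pretrained weights frozen and train small low-rank
# adapter matrices injected into the attention projections.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the adapter output
    target_modules=["q_proj", "v_proj"],   # layers that receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter weights are optimized, the memory and compute needed for fine-tuning drop sharply compared with updating every parameter, which is the efficiency argument the company makes for Tinker.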

Tinker officially launched in October 2025 and lets users customize a variety of open-source models, including those from Meta and OpenAI. By providing a user-friendly interface for fine-tuning, Tinker makes it easier to specialize models for niche domains such as healthcare and law, though users still need a foundational understanding of machine learning to use it effectively.

As Thinking Machines Lab approaches its one-year mark, the company is eyeing an additional $5 billion in funding, aspiring to reach a valuation of $50 billion. This ambition aligns with plans for 2026, which include the introduction of proprietary models and enhanced capabilities for Tinker to support users with limited technical expertise in fine-tuning processes.

John Schulman, the company’s chief scientist, hinted at the potential for Tinker to revolutionize AI accessibility, making it easier for a broader audience to adapt advanced AI technology to meet specific needs. The progress and innovations at Thinking Machines Lab are being closely monitored within the AI sector, as its unique approach to customization and efficiency may very well redefine industry standards.

Written By: AIPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

