Moonbounce has successfully closed a $12 million funding round to advance its AI control engine and expand its platform globally. This investment underscores the growing demand for systems capable of enforcing safety rules and policies, particularly as generative artificial intelligence becomes increasingly integrated into digital products. The funding round was co-led by Amplify Partners and StepStone Group, with participation from angel investors including PrimeSet and Josh Leslie, the former CEO of Cumulus Networks and Gremlin.
The leadership team at Moonbounce features CEO Brett Levenson, who previously headed Trust & Safety at Meta, and CTO Ash Bhardwaj, who led cloud and AI infrastructure at Apple. Their experience positions the company to address the growing challenges posed by rapid AI adoption across various sectors.
As AI technologies such as large language models, chatbots, and generative image models proliferate, many companies are grappling with the potential for unwanted outcomes. Issues such as toxic outputs, bias, misinformation, and regulatory violations have prompted organizations to seek effective moderation solutions. Moonbounce is tackling these challenges with an approach known as “policy as code,” which translates human-written content policies into structured logic, allowing software to enforce guidelines in real time.
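To make the “policy as code” idea concrete, here is a minimal, hypothetical sketch of what compiling a human-readable rule into machine-checkable logic might look like. The `PolicyRule` structure, rule IDs, and the regex check are illustrative assumptions, not Moonbounce's actual implementation.

```python
import re
from dataclasses import dataclass
from typing import Callable

# A human-written policy paired with a machine-checkable predicate,
# so software can enforce it deterministically rather than relying
# on a model's learned judgment. (Illustrative structure only.)
@dataclass
class PolicyRule:
    rule_id: str
    description: str                  # the original human-readable policy text
    violates: Callable[[str], bool]   # True if the content breaks the rule

# Example: a policy forbidding personal phone numbers, compiled into
# a simple regex check. Real policy compilers would be far richer.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

no_phone_numbers = PolicyRule(
    rule_id="PII-001",
    description="Content must not contain personal phone numbers.",
    violates=lambda text: bool(PHONE_RE.search(text)),
)

print(no_phone_numbers.violates("Call me at 555-867-5309"))  # → True
print(no_phone_numbers.violates("Call me anytime"))          # → False
```

Because each rule carries its original description alongside its executable check, the enforcement logic stays auditable against the policy document it came from.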
At the core of Moonbounce’s technology is its AI control engine, which integrates explicit policy directives to ensure that decision-making is predictable and consistent with a company’s objectives. Unlike traditional methods that rely on historical data to guide machine learning models, Moonbounce’s system embeds policies directly into its engine, thereby mitigating risks before they manifest.
The technology operates by collecting a company’s policy rules and converting them into a format that can be evaluated as content is produced. When a human user or an AI model generates content, the engine assesses it quickly (typically in under 300 milliseconds) to check compliance with the established rules. Depending on the outcome, the system can take immediate action: blocking potentially harmful content, routing it to a human reviewer, or adjusting the AI’s response to align with safety standards.
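The evaluation flow described above (check content against every rule, then block, escalate, or allow) can be sketched as a simple decision function. The severity levels, rule predicates, and action mapping below are illustrative assumptions about how such an engine might be organized, not details of Moonbounce's system.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REVIEW = "review"   # escalate to a human reviewer

# Hypothetical mapping from a rule's severity to the action taken;
# a real engine would load this from the compiled policy set.
SEVERITY_ACTION = {"high": Action.BLOCK, "medium": Action.REVIEW}

def evaluate(content: str, rules) -> Action:
    """Check content against (severity, predicate) rules; return the strictest action."""
    worst = Action.ALLOW
    for severity, violates in rules:
        if violates(content):
            action = SEVERITY_ACTION.get(severity, Action.REVIEW)
            if action is Action.BLOCK:
                return Action.BLOCK      # strictest outcome: stop early
            worst = Action.REVIEW
    return worst

# Illustrative rules only.
rules = [
    ("high",   lambda t: "credit card" in t.lower()),
    ("medium", lambda t: "medical advice" in t.lower()),
]

print(evaluate("Here is my credit card number", rules))  # → Action.BLOCK
print(evaluate("I cannot give medical advice", rules))   # → Action.REVIEW
print(evaluate("Hello there", rules))                    # → Action.ALLOW
```

Evaluating rules as plain predicates keeps the check fast and deterministic, which is what makes sub-second, pre-publication enforcement feasible.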
This proactive approach contrasts sharply with earlier moderation models, which often intervened only after content had been published, by which point harm or regulatory repercussions may already have occurred. By providing a preventive layer, Moonbounce aims to catch violations before they reach users.
The recent funding round highlights investor confidence in AI safety infrastructure as an essential component of future technological frameworks. Both Amplify Partners and StepStone Group view their investment as a means to help companies navigate operational, reputational, and regulatory challenges as AI continues to scale.
Levenson emphasized the critical need for systems that ensure AI operates “predictably, every time, without exception,” arguing that control and innovation need not be at odds. With this latest capital influx, Moonbounce aims to enhance its product offerings, expand its engineering team, and amplify its go-to-market strategies on a global scale.
The intersection of AI and safety remains a pressing concern for many industries. As companies increasingly deploy AI products, effective moderation will be critical to sustaining trust and compliance in a rapidly evolving digital landscape. Moonbounce’s innovative approach could play a pivotal role in shaping how organizations manage their AI systems responsibly.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health