In a rapid succession of developments in the AI landscape, Sycamore announced a $65 million seed round on March 30 to create what its founder describes as an operating system for autonomous enterprise AI. Shortly thereafter, on April 8, Anthropic launched its Managed Agents in public beta at a rate of eight cents per session hour. Just seven days later, OpenAI introduced an updated open-source Agents SDK that included a model-native harness, offering it without any additional first-party runtime fees beyond standard API and tool pricing.
Unveiled within a span of sixteen days, the three initiatives point to the same conclusion: the layer around the model is becoming a product in its own right. Where the labs disagree, publicly, is on how to monetize it. Anthropic bills a separately metered runtime on its own infrastructure; Google and Microsoft bundle the runtime into their platforms and charge for components such as sessions and tool usage; OpenAI has opted to release its runtime as open source, charging solely for model and tool calls.
The term “harness” gained traction in February when OpenAI detailed a production system engineered without any human-written code. This terminology resonates because it encapsulates a practice that had existed unlabelled until then. Martin Fowler further defined harness engineering as encompassing all elements surrounding an AI model, aside from the model itself. A harness serves as the control mechanism that ensures agents operate reliably in production, covering aspects like model invocation, context management, tool orchestration, and error recovery.
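To make the definition concrete, the responsibilities listed above can be sketched as a minimal agent loop. Everything here (`call_model`, `TOOLS`, the message format) is a hypothetical stand-in, not any vendor's API; the point is the division of labor a harness provides around the model.

```python
import json

# Hypothetical stand-in for a real model API: asks for one tool call,
# then produces a final answer once it sees the tool result.
def call_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup", "args": {"key": "status"}}
    return {"answer": "service is healthy"}

# Hypothetical tool registry.
TOOLS = {"lookup": lambda key: {"status": "healthy"}.get(key, "unknown")}

def run_harness(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]  # context management
    for _ in range(max_steps):           # bounded loop: one form of error recovery
        reply = call_model(messages)     # model invocation
        if "tool" in reply:              # tool orchestration
            try:
                result = TOOLS[reply["tool"]](**reply["args"])
            except Exception as exc:     # feed failures back instead of crashing
                result = f"tool error: {exc}"
            messages.append({"role": "tool",
                             "content": json.dumps({"result": result})})
        else:
            return reply["answer"]
    raise RuntimeError("agent exceeded step budget")

print(run_harness("How is the service?"))
```

Production harnesses add much more (tracing, sandboxing, persistence), but they are elaborations of this same control loop.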
For the past 18 months, cloud and framework providers have offered fragmented elements of this control layer, but many teams deploying production agents were forced to create their own solutions. Startups began to raise funds aiming to deliver a comprehensive version, while internal teams assembled their own harnesses from open-source components. The emergence of the harness as a viable market reflects the lack of cohesive solutions available to developers.
Anthropic’s Managed Agents offering, now in beta on the Claude Platform, aims to fill this gap. Developers define agents, tools, and guardrails while Anthropic manages long-running sessions, sandboxed code execution, and end-to-end tracing. Notable early adopters include Notion, Rakuten, Sentry, Asana, and Atlassian. Pricing is straightforward: standard token rates apply for model inference, plus an eight-cent fee per hour while a session is active. Some advanced features are gated behind a separate access request.
OpenAI’s approach, introduced a week later, includes the updated open-source Agents SDK featuring a model-native harness. This solution also targets long-running agents but allows developers to utilize their own infrastructure, offering flexibility with support for various sandbox providers. While the SDK itself is free, costs arise from the developer’s choice of compute and storage providers. OpenAI’s model refrains from imposing a separate runtime fee, marking a deliberate departure from Anthropic’s pricing structure.
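The practical difference between the two pricing models comes down to arithmetic. The eight-cent session-hour rate is the figure reported above; the self-hosted compute rate below is purely an illustrative assumption, and token costs are omitted since they apply equally in both cases.

```python
# Back-of-envelope comparison of the two runtime pricing models.
ANTHROPIC_RUNTIME_PER_HOUR = 0.08  # reported Managed Agents session-hour rate
SELF_HOSTED_PER_HOUR = 0.05        # assumed cost of your own sandbox/compute

def monthly_runtime_cost(rate_per_hour, sessions, hours_per_session):
    """Runtime-only cost for a month of agent sessions (tokens excluded)."""
    return rate_per_hour * sessions * hours_per_session

sessions, hours = 10_000, 2  # hypothetical monthly workload
managed = monthly_runtime_cost(ANTHROPIC_RUNTIME_PER_HOUR, sessions, hours)
self_hosted = monthly_runtime_cost(SELF_HOSTED_PER_HOUR, sessions, hours)
print(f"managed: ${managed:,.2f}/mo, self-hosted: ${self_hosted:,.2f}/mo")
# managed: $1,600.00/mo, self-hosted: $1,000.00/mo
```

The comparison only favors self-hosting once a team's internal compute and the engineering time to operate the harness cost less than the managed premium, which is precisely the build-versus-buy question the rest of this piece turns on.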
The differing strategies underscore a broader contest among tech giants for dominance in this burgeoning market. Google’s Vertex AI Agent Engine and Microsoft’s Foundry Agent Service both adopt consumption-based billing that meters usage across sessions and components. Meanwhile, AWS has announced plans for a Stateful Runtime Environment in partnership with OpenAI, further diversifying the landscape.
The debate over pricing structures indicates a strategic divergence among the leading labs, with each vying to define how this critical layer is monetized. The coexistence of various pricing models may ultimately cater to distinct customer preferences, as seen in previous cloud infrastructure trends. The introduction of a free, open-source harness from OpenAI presents new challenges for independent startups that had filled the gaps prior to these developments.
For companies like Sycamore, which emphasize trust and governance in enterprise AI, the new landscape could actually enhance their value proposition against both Anthropic’s managed solution and OpenAI’s open-source approach. Conversely, horizontal orchestration frameworks such as LangChain and CrewAI might find themselves more vulnerable now that a robust, open-source option exists, complicating their pitch for flexibility against vendor lock-in.
As teams reassess their build-versus-buy strategies, the landscape has shifted significantly. Organizations that favor bundled infrastructure can now compare their internal solutions against Anthropic’s offering, while those with existing systems can evaluate against OpenAI’s SDK. This evolution raises the stakes for teams still in the prototype phase as the barriers to entry for robust infrastructure have diminished.
The harness was initially treated as a proprietary edge rather than a product. With four leading labs now actively shaping the category, the open question is which business model prevails: OpenAI’s free, open-source harness and Anthropic’s paid managed service stake out opposite ends of a market still being defined.