Employees at companies like Meta and OpenAI are reportedly competing on internal leaderboards that track how many “tokens” they consume while using AI tools, according to a New York Times column by Kevin Roose. At Meta, AI usage has become a key metric in performance reviews, with managers rewarding employees who lean heavily on AI tools and reprimanding those who do not. The approach raises an obvious concern: it measures productivity by the sheer quantity of AI engagement rather than by what that engagement actually produces.
Roose likens the practice to evaluating painters by the amount of paint they use, or soldiers by the number of bullets they fire in combat. A closer analogy might be judging NBA mascots by how many t-shirts they launch from their cannons: the quantity says nothing about the quality. This “tokenmaxxing” trend reflects a growing industry fixation on token usage as a yardstick of success, and the consumption figures being reported are staggering.
One engineer at OpenAI reportedly burned through 210 billion tokens, which Roose equates to 33 Wikipedias’ worth of text. A software engineer in Sweden claimed that his company spends more on Claude Code tokens than on his annual salary. This spending reflects a broader industry trend, fueled in part by the rise of agentic AI platforms known as “claws,” such as OpenClaw, which have become a focal point of innovation this year. OpenClaw’s virality marked a shift in preference among AI enthusiasts from OpenAI’s GPT models to Claude, prompting OpenAI to hire OpenClaw’s creator in a bid to protect its leading position in the market.
Even without an external claw platform, Claude Code is coming to resemble OpenClaw. A recently introduced feature lets users interact with Claude Code from their mobile devices, communicating with the AI through platforms like Telegram and Discord and expanding the potential for on-the-go coding.
We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord.
Use this to message Claude Code directly from your phone. pic.twitter.com/sl3BP2BEzS
— Thariq (@trq212) March 19, 2026
The promotional material even features a playful graphic of a red crustacean, possibly a lobster or crab, serving as a new emblem of LLM token profligacy. But the emphasis on consumption points to an underlying issue: companies are treating the sheer volume of tokens processed as a marker of success. OpenAI president Greg Brockman recently touted that the coding-oriented GPT-5.4 processes 5 trillion tokens daily, a figure meant to impress investors given the financial stakes of token usage.
gpt-5.4 has ramped faster than any other model we’ve launched in the API: within a week of launch, 5T tokens per day, handling more volume than our entire API one year ago, and reaching an annualized run rate of $1B in net-new revenue.
it’s a good model, try it out!
— Greg Brockman (@gdb) March 16, 2026
The 5 trillion token figure is impressive, but it also invites scrutiny as companies navigate the complex relationship between AI spending and actual impact. As the industry evolves, reliance on metrics like token consumption could redefine how success is measured in the tech sector.