Perplexity has launched the Deep Research Accuracy, Completeness, and Objectivity (DRACO) Benchmark, positioning it as an open standard for assessing how well AI agents execute complex research tasks. The benchmark is now publicly available, so AI developers, researchers, and organizations worldwide can evaluate their own systems against it. DRACO is grounded in real-world usage: its tasks are sourced from millions of actual production queries submitted to Perplexity Deep Research. It spans ten domains, including law, medicine, finance, and academic research, and features detailed evaluation rubrics refined through expert review.
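To make that setup concrete, here is a minimal, hypothetical sketch in Python of what a production-derived task paired with an expert-reviewed rubric could look like. The field names, domain label, and example task below are illustrative assumptions, not Perplexity's actual schema or data.

```python
from dataclasses import dataclass, field

@dataclass
class RubricCriterion:
    """One expert-reviewed check a research report should satisfy."""
    description: str       # e.g. "Cites the controlling statute by name"
    weight: float = 1.0    # relative importance within the task

@dataclass
class BenchmarkTask:
    """A DRACO-style task: a real-world research query plus its grading rubric."""
    task_id: str
    domain: str            # one of the benchmark's domains, e.g. "Law" or "Finance"
    query: str             # anonymized production-style research question
    rubric: list[RubricCriterion] = field(default_factory=list)

# Hypothetical example task (illustrative only; not drawn from the real dataset).
task = BenchmarkTask(
    task_id="law-0001",
    domain="Law",
    query="Summarize the current state of non-compete enforceability in California.",
    rubric=[
        RubricCriterion("Identifies the relevant statutory provision", weight=2.0),
        RubricCriterion("Notes recent legislative or case-law changes"),
        RubricCriterion("Cites primary sources rather than commentary"),
    ],
)
```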
In a recent announcement via social media, Perplexity stated, “We’ve upgraded Deep Research in Perplexity. Perplexity Deep Research achieves state-of-the-art performance on leading external benchmarks, outperforming other deep research tools on accuracy and reliability.” The upgraded features are available for Max users now and will be rolled out to Pro users in the coming days.
The DRACO Benchmark evaluates AI agents along four dimensions: factual accuracy, analytical breadth and depth, presentation quality, and source citation. Grading uses an LLM-as-judge protocol in which responses are fact-checked against real data, which reduces the subjectivity of scoring. Unlike earlier benchmarks that often relied on synthetic or academic tasks, DRACO focuses on genuine user needs while remaining model-agnostic, so any AI system with research capabilities can be assessed. Early results suggest that Perplexity Deep Research leads on both accuracy and speed, particularly in challenging areas such as legal inquiries and personalized queries.
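The announcement does not spell out the grading pipeline, but a generic LLM-as-judge scorer over the four stated dimensions could be sketched as follows. The dimension keys, the 0 to 10 scale, and the `call_judge_model` placeholder are assumptions for illustration, not DRACO's actual protocol.

```python
import json

DIMENSIONS = ["factual_accuracy", "analysis_depth", "presentation", "citations"]

def call_judge_model(prompt: str) -> str:
    """Placeholder for a call to whatever judge LLM the evaluator uses.
    Expected to return a JSON object with one 0-10 score per dimension."""
    raise NotImplementedError("wire this to your judge model of choice")

def judge_report(query: str, report: str, reference_facts: list[str]) -> dict[str, float]:
    """Score a research report on four DRACO-style dimensions with an LLM judge."""
    prompt = (
        "You are grading a research report.\n"
        f"Question: {query}\n"
        f"Report:\n{report}\n"
        "Known facts to check against:\n- " + "\n- ".join(reference_facts) + "\n"
        "Return JSON with integer scores 0-10 for: " + ", ".join(DIMENSIONS)
    )
    raw = call_judge_model(prompt)
    scores = json.loads(raw)
    # Keep only the expected dimensions and normalize scores to the 0-1 range.
    return {dim: float(scores[dim]) / 10.0 for dim in DIMENSIONS}
```

In a real harness, the judge call would be wired to a specific model API and the per-dimension scores aggregated across tasks and domains.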
Perplexity, the firm behind the DRACO initiative, is well-regarded for its AI-driven search and research tools. By open-sourcing DRACO, the company seeks to elevate the standards for deep research agents and foster broader adoption of rigorous, production-grounded evaluation methods within the AI industry. This move reflects an ongoing trend among AI developers and researchers to establish more robust metrics for evaluating AI capabilities, particularly as these technologies become increasingly integrated into various fields.
As AI systems gain traction in handling complex research tasks, the need for standardized evaluation metrics becomes ever more pressing. The DRACO Benchmark’s focus on real-world scenarios is poised to provide valuable insights into how effectively AI agents can meet user demands. This approach could significantly enhance the way AI performance is assessed and foster advancements in the technology, ensuring that AI tools are not only innovative but also reliable in practical applications.
The launch of the DRACO Benchmark represents a significant step toward improving the accountability and transparency of AI systems. By inviting participation from a global audience of developers and researchers, Perplexity is encouraging a collaborative environment in which best practices can be shared and elevated. As the AI landscape continues to evolve, initiatives like DRACO will play a crucial role in shaping the future of AI research and application, ultimately benefiting users across various sectors.