
Litera Enhances Legal AI with Hybrid Approach, Boosting Document Accuracy Beyond 90%

Litera’s hybrid approach pushes legal document accuracy beyond 90% by pairing advanced AI with decades of rules-based precision.

Greg Ingino, chief technology officer at Litera, has underscored the limitations of artificial intelligence (AI) in the legal sector, arguing for a hybrid approach that combines traditional methods with AI advancements. His remarks come amid rising expectations that large language models (LLMs) will revolutionize legal tech, and amid suggestions that they are universally superior to established systems.

Ingino, speaking from his experience at Litera, emphasized that the belief in AI’s capability to address all legal tasks is misguided. “The question legal technology companies need to be asking is not ‘Can AI do this?’ It is ‘Should AI do this and, if so, how much of it?’” he stated, stressing that a blend of AI and traditional technology often yields the best results.

Document comparison is a critical aspect of legal work, involving tasks such as contract review and compliance tracking. Many believe that LLMs can perform these tasks as effectively as traditional methods, but Ingino’s team at Litera put this assumption to the test and found it lacking. Their rules-based comparison engines, honed over two to three decades, provide the precision necessary for lawyers to rely on during contract reviews.
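To illustrate what a rules-based comparison engine does at its simplest, the sketch below produces a word-level redline using Python's standard `difflib`. This is purely illustrative and bears no relation to Litera's actual engines, which track formatting, tables, numbering, and other non-text structure that a plain text diff ignores.

```python
import difflib

def redline(original: str, revised: str) -> str:
    """Produce a simple word-level redline: deletions marked [-...-],
    insertions marked {+...+}. Illustrative only -- production engines
    must also handle formatting, tables, and numbering changes."""
    orig_words = original.split()
    rev_words = revised.split()
    out = []
    matcher = difflib.SequenceMatcher(None, orig_words, rev_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(orig_words[i1:i2])
        if op in ("replace", "delete"):
            out.append("[-" + " ".join(orig_words[i1:i2]) + "-]")
        if op in ("replace", "insert"):
            out.append("{+" + " ".join(rev_words[j1:j2]) + "+}")
    return " ".join(out)

print(redline("The fee is 5 percent of revenue",
              "The fee is 7 percent of net revenue"))
```

Because the algorithm is deterministic, the same pair of inputs always yields the same redline, which is the consistency property the article says LLMs failed to deliver.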

The testing revealed that LLMs could not match the consistency and reliability of Litera’s established systems. Notably, AI struggled with non-text elements such as images and tables, which are vital in legal documents. Litera’s internal benchmark research presented at Legalweek 2026 demonstrated that general-purpose models, including Gemini, Claude, and ChatGPT, failed to produce usable redlines for documents with complex layouts. Even on shorter documents, LLM text accuracy peaked at approximately 90%, which, while seemingly adequate, poses risks in legal contexts where even minor errors can have significant consequences.

In fact, accuracy for one model on a 200-page document plummeted to around 40%. Ingino concluded that while LLMs can outline changes within documents, they fall short of delivering the precise legal artifacts required by attorneys.

“In legal tech, you cannot afford to be ‘mostly right’ or ‘directionally accurate’. Lawyers need certainty,” Ingino remarked, highlighting the high stakes associated with compliance and client expectations. This standard is not yet met by general-purpose AI models, he pointed out.

Rather than discarding their successful rules-based engines, Litera enhanced them by integrating AI in areas where it excels, such as natural language understanding and intelligent orchestration. This hybrid approach led to the creation of platforms like Litera One and Lito, an AI legal agent, in which AI and traditional technologies work in tandem. “The AI orchestrates. The rules-based engines execute where precision is required,” Ingino explained. The result is a seamless workflow for legal professionals.
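The orchestration pattern Ingino describes can be sketched in a few lines: a language model interprets the user's request, and precision-critical work is dispatched to a deterministic engine rather than answered by the model itself. Everything here is a stand-in; the function names are hypothetical and the keyword classifier merely mimics where an LLM call would sit.

```python
def classify_intent(request: str) -> str:
    """Stub standing in for an LLM call that interprets a request."""
    if "compare" in request.lower() or "redline" in request.lower():
        return "document_comparison"
    return "general_question"

def rules_based_compare(doc_a: str, doc_b: str) -> str:
    """Stand-in for a deterministic comparison engine."""
    return "exact redline" if doc_a != doc_b else "no changes"

def llm_answer(request: str) -> str:
    """Stand-in for a free-form LLM response."""
    return f"LLM summary for: {request}"

def handle(request: str, doc_a: str = "", doc_b: str = "") -> str:
    # The AI orchestrates; precision work goes to the rules engine,
    # never to the probabilistic model.
    if classify_intent(request) == "document_comparison":
        return rules_based_compare(doc_a, doc_b)
    return llm_answer(request)

print(handle("Please redline these two drafts", "v1 text", "v2 text"))
```

The design choice is the one the article attributes to Litera: the model decides *what* to do, but the output lawyers rely on is produced by deterministic code.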

This hybrid model has proven effective in other areas of Litera’s operations as well. In quality engineering, for instance, AI now writes close to 70% of the company’s test cases, improving product quality and freeing engineers to focus on higher-value tasks.

However, Ingino cautioned against a one-size-fits-all approach: “The lesson from document comparison is equally important: evaluating where the technology can be best utilised is not optional – it is the work.” Firms that can make well-reasoned decisions about AI deployment, based on genuine needs, will outperform those that chase the latest technology without understanding its implications.

Looking ahead, Ingino emphasized the importance of human oversight in AI applications within legal tech. He stated that AI outputs should never enter production without expert review, noting that both engineers and legal technology experts scrutinize AI-generated workflows to mitigate risks. “Speed without expertise creates risk, and in legal technology, that risk is unacceptable,” he asserted.

As legal teams increasingly explore AI tools, Ingino advises focusing on transparency. He suggests that the key question to ask vendors is not simply whether they utilize AI, but rather how and where AI integrates into their workflows. Companies unable to provide clear answers may be operating a “black box,” which is particularly concerning when handling sensitive client information.

In conclusion, Ingino advocates that the hybrid approach is not merely a compromise but a necessity dictated by the complexities of legal work. He foresees that the tools earning enduring trust in the legal industry will be those that effectively combine decades of legal-specific precision with the smart integration of AI, without sacrificing the certainty that lawyers require.

Written by the AiPressa Staff
