AI Regulation

Banks Demand Vendors Disable AI Features to Ensure QA Compliance Amid Regulatory Uncertainty

Banks are demanding vendors disable AI features in QA tools to avoid regulatory scrutiny, risking outdated software and missed cybersecurity updates.

Banks are increasingly asking technology vendors to disable or remove artificial intelligence features from software testing and quality assurance (QA) tools amid regulatory uncertainty and the rapid adoption of generative AI. The trend was underscored in a recent submission from the Bank Policy Institute to U.S. regulators, which highlighted a growing disconnect between innovation in testing tools and the supervisory frameworks used to evaluate them.

According to the report, some banks have opted to ask vendors to turn off AI features in third-party products to avoid a lengthy and unpredictable model risk management review. This approach risks leaving institutions reliant on outdated software versions, which could lead to missed critical updates, including cybersecurity fixes.

The implications for QA and testing teams within banks are immediate. AI-powered capabilities that promise faster test generation, enhanced defect detection, and improved test coverage are being sidelined—not due to technical failures, but because they cannot yet be justified within existing compliance structures.

At the core of the issue lies the continued reliance on supervisory guidance such as SR 11-7, originally designed for traditional, deterministic models. Paulo Cavallo, Quantitative Risk & Modeling Lead at Comerica Bank, emphasized that the problem does not stem from the technology itself, but rather from the frameworks used to assess it. “Banks are asking vendors to remove AI features from their products. Not because the AI failed. Not because it’s dangerous. Because the examination framework wasn’t built for it,” Cavallo noted on LinkedIn.

Comerica Bank, a Texas-based regional financial institution, is grappling with the mismatch between modern AI systems and legacy regulatory expectations. Cavallo pointed out that SR 11-7 lacks references to critical AI aspects such as hallucination rates and retrieval precision, and was created in 2011 with logistic regressions in mind. Despite this, examiners continue to apply the framework to generative AI systems that have a material impact.
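To make concrete what SR 11-7 leaves unaddressed, here is a minimal sketch of one of the metrics Cavallo names. This is purely illustrative and not drawn from any regulatory text or bank framework: retrieval precision for a retrieval-augmented system is simply the fraction of retrieved passages that a labeled evaluation set marks as relevant.

```python
def retrieval_precision(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of retrieved passages judged relevant (precision@k).

    `retrieved` holds document IDs returned for a query; `relevant`
    holds the IDs a human-labeled evaluation set marks as relevant.
    """
    if not retrieved:
        return 0.0
    hits = sum(1 for doc_id in retrieved if doc_id in relevant)
    return hits / len(retrieved)


# Hypothetical query: 4 passages retrieved, 2 of them labeled relevant.
retrieved = ["doc_3", "doc_7", "doc_1", "doc_9"]
relevant = {"doc_1", "doc_3", "doc_4"}
print(retrieval_precision(retrieved, relevant))  # 0.5
```

A 2011-era framework written around logistic regressions has no threshold, test procedure, or monitoring expectation for a number like this, which is the gap Cavallo describes.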

When examiners evaluate financial institutions utilizing generative AI, they rely on SR 11-7, not because it specifically addresses AI, but because its three pillars—conceptual soundness, outcomes analysis, and ongoing monitoring—are technology-agnostic.

This evolving regulatory landscape has shifted the QA burden onto banks, compelling them to develop their own validation frameworks. “The burden is on the institution. You define the framework. You build the testing infrastructure,” Cavallo stated, underscoring a demanding reality for AI validation in regulated environments where governance is scrutinized more than technical specifics.

Examiners tend to focus on governance discipline rather than technical particulars, often posing four fundamental questions: What are your standards? What are your tests? What are your results? Can you convince me it's reasonable? This dynamic pushes banks to prioritize defensibility over innovation, frequently leading to conservative decisions such as disabling AI features outright.
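Those four questions map naturally onto a simple evidence record. The sketch below is a hypothetical illustration of how a QA team might structure validation checks so each one carries its standard, its test, and its result; none of these names or thresholds come from the article or any examiner guidance.

```python
from dataclasses import dataclass


@dataclass
class ValidationCheck:
    standard: str     # "What are your standards?" -- the documented threshold
    test: str         # "What are your tests?" -- how the metric was measured
    result: float     # "What are your results?" -- the observed value
    threshold: float  # minimum acceptable value under the standard

    def passed(self) -> bool:
        return self.result >= self.threshold


def evidence_summary(checks: list[ValidationCheck]) -> dict:
    """Roll individual checks up into the kind of defensible summary
    an examiner review could start from ("convince me it's reasonable")."""
    return {
        "total": len(checks),
        "passed": sum(c.passed() for c in checks),
        "failed": [c.standard for c in checks if not c.passed()],
    }


# Hypothetical checks with made-up thresholds and results.
checks = [
    ValidationCheck("Retrieval precision >= 0.80",
                    "precision@5 on a labeled evaluation set", 0.84, 0.80),
    ValidationCheck("Answer groundedness >= 0.95",
                    "manual audit of 200 generated answers", 0.92, 0.95),
]
print(evidence_summary(checks))
```

The point is not the specific numbers but the shape: each check is self-documenting, so the institution, not the examiner, defines what "reasonable" means and can show its work.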

The ramifications of this trend are increasingly visible in vendor relationships and procurement decisions, especially for QA and testing platforms incorporating AI capabilities. Joshua Hunter, Head of Financial Crimes Compliance at Foundever, noted that the trend is becoming more widespread in contracting. “Some institutions have governance frameworks around third-party AI, others aren’t allowing it at all,” he said in a LinkedIn comment.

Even where AI is technically permitted, internal approval processes often dominate decision-making, overshadowing risk assessments. Hunter explained that this tendency places legacy institutions at a disadvantage relative to more agile fintechs and newer banks, widening the gap in areas like automated testing, fraud detection, and operational resilience, where AI is increasingly essential.

For QA teams, this shift implies that testing is evolving beyond a purely technical function—it is becoming the primary means through which compliance is defined and demonstrated. Cavallo framed this moment as both a challenge and an opportunity for those constructing validation frameworks within banks. “Nobody is handing you a playbook. You’re writing it. And whatever you write, examiners across the industry could eventually use as the benchmark,” he stated, emphasizing the potential to define compliance standards for the next decade.

Written By AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.