Freddie Mac is set to implement updated guidelines requiring mortgage companies to establish a clear framework governing their internal use of artificial intelligence (AI). The new rules take effect on March 3, 2026, and apply to businesses that use AI and machine-learning tools in connection with loans sold to, or serviced on behalf of, the government-sponsored enterprise.
The move is fueled by the rapid growth of AI platforms in the public sphere, including those from OpenAI and emerging competitors. This trend has prompted some mortgage industry leaders to stress the need for lenders to develop clear internal policies for AI adoption, anticipating the introduction of broader industry regulations.
Freddie Mac’s update comes at a critical juncture, as the potential for intentional and unintentional AI misuse looms large. As various tech giants and startups release similar open-source platforms, AI is increasingly viewed as a daily tool across industries. “Everybody’s starting to consume that and use it in their day-to-day,” noted Joe Sorbello, product owner at mortgage software firm Lender Toolkit, who manages the company’s underwriting platform.
Despite the growing use of public AI tools, concerns about data quality remain prevalent, particularly when these tools are employed for business purposes. “AI can do analysis, AI can feed back results,” Sorbello explained. “But you run the risk of AI learning something wrong.”
What’s included in new Freddie Mac guidelines
In early December, the government-sponsored enterprise (GSE) introduced a new section specifically focused on AI governance. Starting in March, sellers and servicers working with Freddie Mac will be required to demonstrate effective processes for “mapping, measuring and managing AI risks,” ensuring that these practices are transparent and properly implemented.
The guidelines also stipulate that mortgage companies must deploy "trustworthy" AI features and establish procedures that keep risk management aligned with their risk tolerance. To prioritize safety and compliance, Freddie Mac will require continuous monitoring of AI and machine-learning tools for threats to data integrity, along with regular audits to identify weaknesses that could undermine adherence to established policies.
Further, the latest guidance emphasizes the necessity of separating responsibilities within organizations to minimize conflicts of interest and ensure accountability. All internal rules and associated roles connected to identifying AI risks must be clearly documented and communicated to all employees.
These updated guidelines reinforce that the ultimate responsibility for risk oversight—and any potential penalties arising from failures—rests with the originators, despite their tendency to shift this burden onto technology partners. Additionally, the update highlights the importance of thorough vetting and vigilance by lenders and servicers when selecting AI partners, as well as clarifying the boundaries where automation transitions to human intervention, according to Sorbello.
“A lot of the lenders really rely on us, as the vendors, having those security protocols in place as it relates to security of data, use of data,” he said.
As Freddie Mac unveils these updates, mortgage businesses face the dual challenge of embracing technological advancements while figuring out how to effectively incorporate AI into their operational frameworks. A study conducted by Arizent, the parent company of National Mortgage News, indicated that in 2025, 38% of companies were opting for a gradual introduction of AI, while 2% imposed further restrictions on the adoption of such tools. The fear of noncompliance, amid a lack of clear guidelines, has been a primary deterrent for various business leaders hesitant to accelerate their AI initiatives.
Specific regulations and procedures, such as those outlined in Freddie Mac’s guidance, are expected to alleviate some of the current apprehension within the mortgage sector regarding AI adoption. “It’s going to cause people to ask the questions, which is a good thing,” Sorbello concluded. “From a lender’s adoption perspective, from a lender’s risk perspective, it’s just going to give them some of that fuel to make sure that they continue to ask the right questions of their vendors.”