Cybersecurity has become a critical concern for employee benefit plan fiduciaries, particularly as trillions of dollars in retirement assets and large quantities of sensitive participant data are increasingly targeted by cybercriminals. The rising use of artificial intelligence (AI) in benefits administration further complicates these security issues, introducing new vulnerabilities that fiduciaries must navigate. This article highlights the Department of Labor’s (DOL) cybersecurity guidance, discusses the risks posed by AI tools, and outlines practical measures fiduciaries can take to mitigate these risks.
In April 2021, the DOL’s Employee Benefits Security Administration (EBSA) released its inaugural guidance on cybersecurity for employee benefit plans. This guidance was updated in September 2024 to clarify that all employee benefit plans, including both retirement and health and welfare plans, fall within its scope. The DOL has emphasized that cybersecurity is an ERISA fiduciary responsibility, requiring plan fiduciaries to actively mitigate cybersecurity risks. This includes the prudent selection and monitoring of service providers who handle participant data and plan assets, underscoring that fiduciaries cannot rely solely on service providers to manage these risks.
Although the guidance is now more than four years old, cybersecurity remains a top priority for the DOL. EBSA has announced its 2026 enforcement priorities, with cybersecurity at the forefront. The agency has integrated cybersecurity inquiries into its standard plan audit protocols, prompting investigators to request documentation related to cybersecurity policies, service provider agreements, and incident response strategies.
The adoption of AI tools in benefits administration, such as chatbots and algorithms that process claims or generate investment recommendations, enhances operational efficiency but also raises significant cybersecurity concerns. These tools require access to substantial amounts of sensitive data, creating appealing targets for cyberattacks. A breach in an AI system could compromise not only current participant data but also historical information used for developing AI models.
Moreover, AI systems can be susceptible to “adversarial attacks,” which are cyberattacks aimed at manipulating AI outputs. Malicious actors could exploit these vulnerabilities to authorize fraudulent transactions, disseminate incorrect benefit information, or bypass security protocols. The intricate nature of some AI systems may make these attacks challenging to detect, heightening the stakes for fiduciaries.
The interconnected nature of AI with other plan systems further complicates the security landscape. AI tools often link to various databases, communication platforms, and third-party services, with each connection representing a potential entry point for cyber threats.
To address these multifaceted cybersecurity challenges, fiduciaries are advised to implement several best practices based on the DOL’s guidance and emerging industry standards. First, vendor due diligence is crucial. During the selection of service providers, fiduciaries should scrutinize their cybersecurity practices as part of a prudent selection process. This includes reviewing written cybersecurity policies, verifying security certifications, and inquiring about incident history and AI usage.
Second, strong contractual protections are essential in service agreements. Key elements to include are clear allocations of responsibility for data security, requirements for maintaining security controls, notification obligations for security incidents, and provisions to address AI-specific risks, such as data usage limitations and security testing requirements.
Another critical area is participant education. Informed participants serve as a valuable first line of defense against social engineering and account takeover attempts. Fiduciaries should communicate cybersecurity best practices to participants, encouraging the use of strong passwords, multi-factor authentication, and regular monitoring of account activity.
Additionally, educating employees who access plan data is vital, as human error remains a significant factor in data breaches. Regular training sessions should cover recognizing phishing attempts and handling sensitive information, including guidelines on the appropriate and inappropriate use of AI. Including a cybersecurity statement in the Summary Plan Description may also direct participants to the DOL’s online security tips.
Finally, thorough documentation of all cybersecurity-related decisions is paramount. This includes maintaining records of service provider evaluations, vendor reviews, training activities, and incident response actions. Comprehensive documentation not only demonstrates a fiduciary’s prudence but also proves crucial in the event of a DOL audit or participant complaint.
As cybersecurity risks continue to evolve with advancements in technology, especially with AI, fiduciaries face an ongoing challenge to safeguard sensitive participant data and assets. The integration of proactive measures will not only protect these vital resources but also ensure compliance with regulatory expectations, reinforcing the need for vigilant oversight in the realm of employee benefit plans.