New employment laws governing artificial intelligence (AI) in hiring are being implemented across several states, including Illinois, Texas, and Colorado, aiming to regulate bias in algorithmic decision-making. These laws, which impose requirements such as bias audits, notice mandates, appeal rights, and impact assessments, come amidst federal efforts to establish a unified national framework for AI policy. The White House’s Executive Order 14365, issued in December 2025, has set the stage for potential legal conflicts by instructing a new federal AI Litigation Task Force to challenge what it deems “burdensome” state regulations that may clash with a more streamlined national policy.
The regulatory landscape is far from uniform, as each state approaches AI employment law differently. Colorado’s Artificial Intelligence Act (CAIA, SB 24-205), for instance, imposes extensive responsibilities on both “developers” and “deployers” of high-risk AI systems used for employment purposes. Scheduled to take effect on June 30, 2026, the law mandates risk management programs and annual impact assessments and requires employers to notify workers when AI is a factor in consequential employment decisions. It also requires reporting any discovered algorithmic discrimination to the state Attorney General within 90 days of discovery.
Illinois, through HB 3773, will amend its Human Rights Act effective January 1, 2026, to explicitly address AI-mediated discrimination. The amendment prohibits employers from using AI in a way that has the effect of discriminating against employees or job applicants at any stage of the employment process, from recruitment to termination. It defines “artificial intelligence” broadly, covering essentially any machine-based system that influences employment decisions, which sets a high bar for compliance.
Meanwhile, Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), also effective January 1, 2026, takes the most conservative stance of the three states. It prohibits only AI systems developed or deployed with the intent to unlawfully discriminate, and it clarifies that disparate impact alone does not constitute a violation. Under TRAIGA, enforcement rests exclusively with the Texas Attorney General; there is no private right of action, and companies receive a 60-day cure period to address alleged violations.
This emerging patchwork of regulations presents a complex compliance landscape: states like Colorado and Illinois demand proactive governance, while Texas focuses narrowly on intentional discrimination. These differences are compounded by additional rules in states such as California, adding to the complexity of nationwide compliance.
In response to this regulatory complexity, the federal government has positioned itself under Executive Order 14365 as a counterforce, seeking to override state laws it deems inconsistent with federal policy. The order directs the Attorney General to establish the AI Litigation Task Force, which will challenge state laws viewed as obstacles to federal competitiveness objectives. The Secretary of Commerce is tasked with identifying state laws suitable for federal challenge by March 2026.
The legal arguments anticipated in upcoming conflicts include preemption claims, particularly that state laws obstruct federal regulatory goals, as well as challenges under the Dormant Commerce Clause and First Amendment objections to disclosure mandates. Proponents of state-level governance, however, argue that local regulation is necessary to ensure fairness and transparency in AI-driven employment practices.
Federal employment discrimination law already addresses algorithmic discrimination through existing frameworks such as Title VII, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA), along with guidance from the Equal Employment Opportunity Commission (EEOC). The EEOC’s emphasis on disparate impact analysis, reasonable accommodations for workers with disabilities, and transparency in automated decisions establishes a federal compliance baseline that will remain relevant regardless of state-specific laws.
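To make the disparate impact concept concrete, here is a minimal sketch of the EEOC’s four-fifths (80%) rule of thumb applied to hypothetical selection data. The group labels, applicant counts, and the simple ratio test are illustrative assumptions; a real bias audit would rely on validated statistical methods and legal review, not this heuristic alone.

```python
# Minimal sketch: the EEOC "four-fifths rule" screen for adverse impact.
# Applicant counts below are hypothetical and purely illustrative.

def selection_rates(results):
    """Compute the selection rate (hired / applicants) for each group."""
    return {group: hired / applicants
            for group, (applicants, hired) in results.items()}

def four_fifths_flags(results, threshold=0.8):
    """Flag groups whose rate falls below 80% of the highest group's rate."""
    rates = selection_rates(results)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: group -> (applicants, hired)
    outcomes = {
        "group_a": (200, 60),   # 30% selection rate
        "group_b": (150, 30),   # 20% selection rate
    }
    print(selection_rates(outcomes))
    print(four_fifths_flags(outcomes))  # group_b flagged: 0.20 / 0.30 ≈ 0.67 < 0.8
```

A flag from a screen like this is only a starting point for further analysis, which is precisely the kind of bias-audit work the state laws above contemplate.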
For employers navigating this turbulent landscape, the immediate challenge is complying with potentially conflicting state regulations. Organizations that build Colorado-style governance programs may later see those obligations challenged or narrowed by federal litigation, while organizations that delay compliance risk state enforcement actions and private lawsuits. The recommended strategy is a “highest common denominator” compliance framework that meets the most stringent state requirements while remaining adaptable to narrower standards, such as those in Texas.
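One way to operationalize that strategy is to keep a machine-readable matrix of per-state obligations and derive the combined baseline from it. The sketch below paraphrases the requirements described above; the identifiers and structure are hypothetical illustrations, not a legal checklist.

```python
# Illustrative "highest common denominator" compliance matrix.
# Requirement labels paraphrase the state laws discussed above; the
# names and structure are assumptions for illustration only.

STATE_REQUIREMENTS = {
    "colorado": {
        "risk_management_program",
        "annual_impact_assessment",
        "worker_notice_of_ai_decision",
        "ag_report_within_90_days_of_discovery",
    },
    "illinois": {
        "no_discriminatory_ai_use_recruitment_to_termination",
    },
    "texas": {
        "no_intentionally_discriminatory_ai",
    },
}

def baseline_obligations(states=STATE_REQUIREMENTS):
    """Union of all state obligations: the strictest combined baseline."""
    combined = set()
    for requirements in states.values():
        combined |= requirements
    return sorted(combined)

if __name__ == "__main__":
    for obligation in baseline_obligations():
        print(obligation)
```

Keeping the matrix explicit makes it easier to relax or tighten individual obligations as federal litigation reshapes what each state can enforce.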
Equally important is re-evaluating contracts with AI vendors, as vendors may have direct obligations of their own under laws like Colorado’s CAIA. Employers should ensure contracts clearly spell out bias-testing responsibilities, access to the data needed for compliance, and incident-reporting duties. As the landscape evolves, employers will also need to monitor the Department of Justice Task Force’s activities and the Secretary of Commerce’s forthcoming evaluation, preparing scenario plans for the range of possible legal outcomes.
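A lightweight way to track those contract points during vendor review is a simple checklist structure. The field names below are illustrative assumptions drawn from the points just listed; actual contract terms should be settled with counsel.

```python
# Hypothetical vendor-contract review checklist; fields are illustrative.

from dataclasses import dataclass, fields

@dataclass
class VendorAIContract:
    bias_testing_defined: bool        # vendor's bias-testing duties are spelled out
    compliance_data_access: bool      # employer can obtain data needed for audits/assessments
    incident_reporting_duties: bool   # who reports suspected algorithmic discrimination, and when

def open_items(contract: VendorAIContract) -> list[str]:
    """Return checklist items still unresolved in the draft contract."""
    return [f.name for f in fields(contract) if not getattr(contract, f.name)]

if __name__ == "__main__":
    draft = VendorAIContract(bias_testing_defined=True,
                             compliance_data_access=False,
                             incident_reporting_duties=False)
    print(open_items(draft))  # ['compliance_data_access', 'incident_reporting_duties']
```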
As states and the federal government grapple with the complexities of AI regulations, employers must act swiftly to fortify their compliance strategies. The interplay of state and federal laws will have lasting implications for future hiring practices, making immediate attention to these developments essential for organizations aiming to navigate this uncertain terrain.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health