As artificial intelligence (AI) continues to permeate various sectors, a patchwork of state and local regulations is emerging in the United States to govern its use in hiring. By 2026, numerous jurisdictions, including California, Illinois, and New York City, will have enacted laws aimed at safeguarding employee rights and ensuring fairness in AI-driven employment decisions. These regulations respond to growing concerns about discrimination and bias in automated decision-making tools.
New York City has been at the forefront, enforcing its automated employment decision tool law (Local Law 144) since July 2023. The legislation requires employers to disclose their use of automated employment decision tools (AEDTs) when screening candidates for hiring or evaluating employees for promotion. Companies must notify New York City candidates before an AEDT is used and allow them to request an alternative selection process or accommodation. This approach aims to enhance transparency and accountability in the hiring landscape.
California has also taken significant strides in regulating AI. In June 2025, the state approved regulations under the Fair Employment and Housing Act (FEHA) governing the use of automated-decision systems (ADS) in hiring and promotion. Effective October 1, 2025, these regulations prohibit employers from deploying any ADS that discriminates against job applicants or employees on the basis of protected characteristics. Employers must also retain records related to their ADS-driven hiring practices for four years, ensuring compliance and facilitating any future review.
In Illinois, the AI in Employment law is set to take effect on January 1, 2026. This law extends to all employers and covers decision-making related to hiring and promotions. Similar to the regulations in New York and California, Illinois requires employers to use AI responsibly and transparently, informing employees about how AI is utilized in their evaluations and hiring processes.
Other states are moving toward AI regulations of their own. Colorado's AI Act has faced delays but is now expected to take effect by June 2026. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), also effective January 1, 2026, reaches beyond employment to address issues such as behavioral manipulation, discrimination, and unlawful content generation. Meanwhile, Maryland has introduced a Responsible AI Policy outlining how AI systems should be managed across state government.
The federal government’s stance on state-level AI regulation adds another layer of complexity. The White House has signaled a preference for minimal AI regulation, allowing technology firms to innovate without stringent oversight. In November 2025, reports emerged of a draft executive order that could seek to preempt state regulations, including by directing legal challenges to state AI laws as unconstitutional. This federal approach raises questions about the future of state-level rules and their enforceability.
As these new laws take shape, businesses must navigate a rapidly evolving regulatory landscape. Employers are advised to proactively assess their use of AI tools, ensuring that they comply with existing and forthcoming regulations. Careful documentation of AI usage will be vital, not only to demonstrate adherence to the law but also to address any employee concerns or government inquiries.
With the potential for federal intervention looming, companies that take a proactive stance on ethical AI usage will likely find themselves better positioned to adapt to regulatory changes. As AI technology continues to evolve and integrate into various sectors, the balance between innovation and regulation will remain a critical focal point in the employment sphere.