For businesses in the United States, AI governance will soon become a compliance imperative, not just a best practice
As the European Union begins the phased implementation of the EU Artificial Intelligence Act, which entered into force in 2024, it sets a global benchmark for AI regulation, treating governance as a compliance requirement rather than a mere best practice. The law bans AI practices deemed to pose unacceptable risk and imposes comprehensive governance obligations on high-risk systems, with most provisions taking effect by 2026. It signals that U.S. businesses may soon need to adapt their operations to meet similar governance standards.
In May 2024, Colorado became a trailblazer by enacting the Colorado Artificial Intelligence Act, the first state-level law to address AI comprehensively. Drawing inspiration from the EU framework, Colorado’s legislation places significant obligations on developers and deployers of high-risk AI systems, particularly in areas such as employment, housing, healthcare, and lending. These include mandates for impact assessments, risk management programs, transparency, and human oversight.
Although enforcement of the Colorado law, originally set for February 2026, has been postponed to June 2026 amid industry pushback and legislative adjustments, its introduction has reverberated across the nation. States like California and Illinois are already exploring similar measures, and New Hampshire may follow suit in its current legislative session.
California has enacted several AI-related laws, including the Transparency in Frontier Artificial Intelligence Act, which requires developers of advanced AI models to disclose safety protocols and maintain operational transparency. Additional regulations address chatbot safety and bolster consumer protection, with enforcement commencing in 2026. In Illinois and New York City, new rules require employers to notify applicants, or obtain their consent, before using AI tools in hiring, and some mandate audits of automated employment decisions. Broader privacy laws in New Hampshire likewise impose restrictions on automated decision-making in various contexts, including employment.
New Hampshire’s approach, however, is more segmented, targeting specific risks rather than enacting broad legislation. Current laws prohibit state agencies from using AI for real-time biometric surveillance without a warrant or for discriminatory profiling, and restrict certain applications of generative AI, such as deepfakes and interactions with minors.
On the federal level, comprehensive AI legislation remains elusive, with the landscape increasingly shaped by executive actions. In early 2025, President Trump signed Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which rolled back previous safety-focused mandates in favor of fostering innovation. A leaked draft executive order from November 2025 suggests an intention to preempt state AI laws, citing concerns over a fragmented regulatory landscape that could hinder competitiveness. This draft proposes the establishment of a federal AI task force and indicates that federal funding may be contingent on state adherence to national regulations. The ongoing tension between federal uniformity and states’ rights will likely impact AI governance discussions leading into 2026.
Given the rapidly evolving regulatory landscape, businesses are advised to proactively prepare for compliance, regardless of whether new state or federal regulations are forthcoming. Three fundamental steps are recommended: first, conduct a comprehensive AI use assessment to inventory current and potential AI tools; second, establish an AI governance framework by forming a cross-functional team that includes leadership and technology and legal advisors; and third, integrate AI into operational practices through testing, prototyping, and vendor engagement. With hundreds of AI-related bills already introduced in the U.S. and global frameworks such as the EU AI Act setting high compliance expectations, businesses must act decisively to maintain their competitive edge.
Cam Shilling, founding chair of McLane Middleton’s Cybersecurity and Privacy Group, emphasizes the importance of these steps to ensure not only compliance but also the security and privacy of AI operations. As AI becomes integral to business strategies, those who wait for regulatory mandates may find themselves at a disadvantage.