Healthcare providers are increasingly integrating artificial intelligence (AI) tools into their operations to enhance diagnostics, documentation, and overall efficiency. As large AI platforms begin directly marketing these AI-enabled tools to healthcare providers, the need for compliance with a rapidly evolving regulatory landscape becomes paramount. The U.S. Department of Health and Human Services (HHS) has proposed updates to the HIPAA Privacy and Security rules; however, the absence of nationwide AI standards means existing HIPAA obligations remain applicable. Providers must also navigate state-specific regulations, such as those emerging in Texas, raising a critical question: how can they ensure compliance amid ongoing changes in federal and state laws?
The HIPAA Privacy Rule (45 C.F.R. Part 164, Subpart E) governs the use and disclosure of protected health information (PHI), while the HIPAA Security Rule (45 C.F.R. Part 164, Subpart C) imposes safeguards for electronic PHI (ePHI). Covered entities, which include healthcare providers, health plans, and healthcare clearinghouses, must be especially vigilant, as breaches can stem from routine errors, external threats, or unauthorized access through vendor platforms. When a provider suspects a breach, it must investigate, mitigate any harm, and, where a breach is confirmed, provide the notifications required under the Breach Notification Rule (45 C.F.R. Part 164, Subpart D).
Liabilities for violations can be severe, with potential penalties from the HHS Office for Civil Rights (OCR) and state attorneys general. Federal civil penalties can exceed US$2 million per year for repeated violations of the same provision, while criminal penalties can reach 10 years of imprisonment for the most serious offenses, such as obtaining PHI with intent to sell it or use it for malicious harm. At the state level, the Texas attorney general can pursue HIPAA-related violations through consumer protection and health privacy statutes, with penalties reaching US$250,000 per violation and harsher consequences still for identity theft or computer crimes.
Where HIPAA does not apply, as with many direct-to-consumer wellness apps, the Federal Trade Commission (FTC) oversees health data security under the Health Breach Notification Rule (16 C.F.R. Part 318), which addresses both security breaches and unauthorized disclosures. At the federal level, HHS has not yet finalized updates to the HIPAA rules that account for AI, but momentum is building toward a unified national framework: an Executive Order issued on December 11, 2025 encourages federal agencies to challenge state-level AI laws in order to prevent a fragmented regulatory environment.
Despite these federal efforts, at least thirty-eight states have enacted AI-related legislation, with approximately 400 related bills currently pending. Because there is no federal preemption of AI regulation, multi-state providers must monitor these varying obligations. In Texas, two new laws clarify the use of AI in healthcare: SB 1188 and HB 149 both require clear disclosure to patients about AI's role in their treatment.
Effective September 1, 2025, SB 1188 allows the use of AI for diagnostic purposes, provided practitioners stay within their licensed scope, disclose AI usage, and review AI-generated records according to Texas Medical Board standards. Similarly, HB 149, effective January 1, 2026, mandates that healthcare providers disclose AI use in relation to healthcare services at the time of treatment, treating any failure to disclose as a deceptive trade practice.
Because HIPAA's requirements are technology-neutral, the Security Rule applies to AI tools just as it does to any other system touching ePHI, so providers should conduct a comprehensive risk analysis to identify PHI-related vulnerabilities before adopting them. Sharing PHI with an AI vendor generally requires a Business Associate Agreement (BAA), which remains the contractual foundation for any AI system handling PHI. To stay compliant, covered entities and business associates must maintain appropriate safeguards and repeat risk assessments as federal and state regulations evolve.
A proactive governance approach is essential for healthcare organizations to keep pace with the shifting regulatory landscape. Key recommendations include communicating honestly with patients about AI, implementing technical and administrative safeguards, ensuring that any PHI sharing adheres to HIPAA standards, auditing AI use through thorough risk assessments, and maintaining written inventories of technology assets that interact with ePHI. Law firms such as Norton Rose Fulbright emphasize the importance of monitoring regulatory developments to help organizations remain compliant.
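To make the inventory-and-audit recommendation concrete, the sketch below models a hypothetical written inventory of AI assets in Python. The class, field names, and checks are illustrative assumptions for this article, not a prescribed HIPAA format; an actual inventory would follow the organization's own risk-analysis documentation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for a written inventory of technology assets that
# touch ePHI. Field names are illustrative, not drawn from any regulation.
@dataclass
class AIAssetRecord:
    name: str
    vendor: str
    handles_ephi: bool
    baa_signed: bool
    last_risk_assessment: Optional[date] = None

    def compliance_gaps(self) -> list[str]:
        """Flag the basic checks discussed above: a BAA in place before
        PHI is shared, and a documented risk assessment for ePHI systems."""
        gaps = []
        if self.handles_ephi and not self.baa_signed:
            gaps.append("BAA required before sharing PHI with this vendor")
        if self.handles_ephi and self.last_risk_assessment is None:
            gaps.append("No documented Security Rule risk assessment")
        return gaps

# Example inventory: one AI scribe that touches ePHI without a BAA,
# and one scheduling tool that never sees PHI.
inventory = [
    AIAssetRecord("ai-scribe", "ExampleVendor", handles_ephi=True, baa_signed=False),
    AIAssetRecord("scheduling-bot", "OtherVendor", handles_ephi=False, baa_signed=False),
]

for asset in inventory:
    for gap in asset.compliance_gaps():
        print(f"{asset.name}: {gap}")
```

Running an audit like this periodically, and keeping the output with the organization's written risk-assessment records, is one lightweight way to operationalize the documentation practices described above.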
As the landscape for AI in healthcare becomes increasingly complex, a well-structured governance strategy is crucial. Organizations that embed their AI strategies within HIPAA fundamentals and commit to transparent communication with patients will be better equipped to navigate the regulatory challenges ahead. By prioritizing early investment in governance and documentation, providers can not only stay compliant but also enhance patient trust and bolster operational resilience.