Orrick partner Shannon Yavorsky, who leads the firm's Global Cyber, Privacy & Data Innovation group and co-heads its AI practice, recently discussed significant developments in the California Consumer Privacy Act (CCPA) regulations. In a conversation with RegFi co-hosts Jerry Buckley and Sherry Safchuk, Yavorsky examined the implications of the newly approved rules, focusing on automated decision-making technology (ADMT), risk assessments, and cybersecurity audits. The discussion comes as businesses work to keep pace with fast-changing requirements around data privacy and artificial intelligence.
The discussion highlighted the key definitions and requirements for ADMT, an area drawing growing regulatory scrutiny. Yavorsky emphasized that organizations need to understand their obligations under the revised CCPA regulations, which spell out how automated systems may be used in decision-making. The regulations also require covered businesses to conduct risk assessments and cybersecurity audits to demonstrate compliance and mitigate risks associated with their data handling.
California’s updated approach marks a notable shift away from attempting to define “AI” in the abstract and toward regulating practical uses of ADMT, particularly in regulated sectors such as financial services. Yavorsky noted that one of the more contentious aspects of the regulations is the treatment of the Gramm-Leach-Bliley Act (GLBA) data carve-outs for financial institutions, which raises questions about how existing federal law intersects with emerging state privacy rules.
Yavorsky’s insights underscore the broader implications for companies operating in California and beyond. With regulators and the public paying closer attention to data privacy, demands for compliance and transparency are likely to intensify. Companies that do not align their operations with the new standards risk significant penalties.
The conversation made clear that the regulatory landscape for AI and data privacy is evolving rapidly. Yavorsky stressed that organizations should take a forward-looking approach that addresses both compliance and ethical considerations in their use of the technology, a perspective that carries more weight as stakeholders demand accountability in how AI systems are deployed.
In summary, the newly approved CCPA regulations refine the framework for automated decision-making technology while reflecting a broader shift toward stronger data privacy and security expectations. For companies adjusting to the changes, proactive engagement with the new requirements will be essential to maintaining consumer trust. The continued development of privacy regulation will likely serve as a bellwether for how data-driven technologies are governed, shaping both industry practice and consumer expectations in the years to come.