During Ward and Smith’s annual In-House Counsel seminar, Mayukh Sircar, a cybersecurity, data privacy, and technology attorney, provided critical insights into the role of Artificial Intelligence (AI) in contemporary business practices, the associated risks, evolving regulations, and governance strategies. The event aimed to equip attendees with actionable strategies for effective AI governance.
Laura Hudson, Chief Marketing Officer at Ward and Smith, reflected on Sircar’s journey from aspiring PhD candidate in Physiology and Biophysics at Georgetown to a passionate advocate for intellectual property law. “Luckily for us, he discovered a passion for intellectual property and decided to become an attorney,” she said.
Sircar began his presentation by categorizing various AI technologies, noting that the most basic form is automation, which performs predefined tasks to enhance efficiency. He used the analogy of a thermostat that activates heating when a certain temperature is reached, highlighting examples like workflow approvals and chatbots. “The legal risks with automation are relatively low, unlike with Generative AI, which can lead to potential IP infringement and factual inaccuracies,” he explained. “We’ve all heard about the hallucinations, data privacy violations, and breaches of confidentiality.”
Generative AI, as Sircar described it, encompasses models that generate new content based on existing data patterns. “This is a reactive tool that needs human prompting at each step…it can’t independently verify its output,” he remarked, referring to systems such as ChatGPT, Gemini, and Claude. He further introduced Agentic AI, an emerging category capable of autonomously pursuing goals and making decisions with limited human input. This technology can develop strategies and adapt its actions based on outcomes, with examples including self-driving cars and Virtuoso QA, an autonomous software quality assurance tool.
“Agentic AI magnifies the risks associated with Generative AI, introducing new layers of legal agency and accountability for the actions of the tool,” noted Sircar. “These actions can have binding legal effects.” As businesses increasingly integrate AI technologies, he emphasized the importance of legal departments evolving from reactive gatekeepers to proactive strategic advisors. “Understanding existing regulations is essential for building guardrails that allow businesses to innovate responsibly,” he stated.
The regulatory landscape surrounding AI is complex and constantly changing, paralleling the dynamic nature of data privacy rules. According to Sircar, the European Union’s AI Act serves as a pivotal jurisdictional framework, categorizing AI systems based on risk levels—ranging from Unacceptable to Minimal Risk. “If you’re doing business in the EU with an AI tool, the EU AI Act applies, much like GDPR,” he explained, noting recent proposals aimed at simplifying compliance for small and mid-sized businesses.
In contrast, the United Kingdom is adopting a more flexible, pro-innovation regulatory approach, empowering existing regulators to manage AI technologies. Meanwhile, China’s regulatory environment emphasizes state control, focusing on algorithmic transparency and user consent. “The Cyberspace Administration of China leads enforcement actions with a stated intent of ensuring social and political stability,” Sircar commented.
In the United States, the regulatory framework varies significantly by state, with states like California, Colorado, and Illinois advancing their own privacy and automated decision-making laws. Sircar pointed out the challenges posed by this patchwork of regulations, advising organizations to benchmark their AI governance against the strictest standards, likely the EU AI Act. “Federal agencies are issuing guidance under existing statutes, with the FTC addressing unfair practices associated with AI,” he added.
Several core principles are emerging within the context of AI regulation. Transparency is paramount; the EU AI Act, for example, mandates labeling for deepfakes. “The FTC has made it clear that the deceptive use of AI for advertising is a violation,” Sircar noted, reinforcing the idea that consumers have the right to know when they are interacting with AI tools. Similarly, principles of fairness and non-discrimination are gaining traction, as illustrated by the EU’s requirement for bias detection in high-risk AI systems.
Accountability is also a crucial theme; the EU mandates formal risk management systems for high-risk AI, while the U.S. National Institute of Standards and Technology has developed a voluntary risk management framework for AI, which is rapidly becoming a de facto standard for responsible governance. Sircar humorously added, “So, if you’re using AI, that’s something to either look forward to or not look forward to.”
Human oversight is another significant concern shared by both the EU and China, as regulations require human intervention in high-risk AI systems. Even with established regulations, Sircar anticipates that data privacy laws will remain a backdrop in AI governance, echoing the principles outlined in the GDPR and California Privacy Rights Act.
Organizations considering or currently using AI must also navigate several legal risks, particularly in compliance, given the diverse regulatory landscape. Sircar emphasized the importance of benchmarking internal governance against the most stringent standards, highlighting intellectual property concerns in particular. “The U.S. Copyright Office has made it clear that works generated solely by AI lack the human authorship necessary for copyright protection,” he explained, addressing the complexities around AI-generated content.
Data privacy and confidentiality risks are also prevalent, as organizations must guard against potential breaches when sensitive information is input into AI systems. Algorithmic biases, as evidenced by Amazon’s need to scrap a biased AI recruitment tool, further complicate the landscape. Contractual liabilities pose yet another challenge, as standard vendor agreements often fail to allocate risks associated with AI.
Finally, attorneys must grasp the intricacies of AI-related risks, as reflected in the American Bar Association’s Model Rules of Professional Conduct. “Our professional responsibility as attorneys requires us to review the information and verify the research,” Sircar concluded, underscoring the need for vigilance in an evolving technological landscape.