
Ward and Smith’s Mayukh Sircar Reveals Key Strategies for AI Governance at In-House Counsel Seminar

Ward and Smith’s Mayukh Sircar highlights the urgent need for robust AI governance strategies amid evolving regulations to mitigate risks like IP infringement and data privacy violations.

During Ward and Smith’s annual In-House Counsel seminar, Mayukh Sircar, a cybersecurity, data privacy, and technology attorney, provided critical insights into the role of Artificial Intelligence (AI) in contemporary business practices, the associated risks, evolving regulations, and governance strategies. The event aimed to equip attendees with actionable strategies for effective AI governance.

Laura Hudson, Chief Marketing Officer at Ward and Smith, reflected on Sircar’s journey from aspiring PhD candidate in Physiology and Biophysics at Georgetown to a passionate advocate for intellectual property law. “Luckily for us, he discovered a passion for intellectual property and decided to become an attorney,” she said.

Sircar began his presentation by categorizing various AI technologies, noting that the most basic form is automation, which performs predefined tasks to enhance efficiency. He used the analogy of a thermostat that activates heating when a certain temperature is reached, highlighting examples like workflow approvals and chatbots. “The legal risks with automation are relatively low, unlike with Generative AI, which can lead to potential IP infringement and factual inaccuracies,” he explained. “We’ve all heard about the hallucinations, data privacy violations, and breaches of confidentiality.”

Generative AI, as Sircar described it, encompasses models that generate new content based on existing data patterns. “This is a reactive tool that needs human prompting at each step…it can’t independently verify its output,” he remarked, referring to systems such as ChatGPT, Gemini, and Claude. He further introduced Agentic AI, an emerging category capable of autonomously pursuing goals and making decisions with limited human input. This technology can develop strategies and adapt its actions based on outcomes, with examples including self-driving cars and Virtuoso QA, an autonomous software quality assurance tool.

“Agentic AI magnifies the risks associated with Generative AI, introducing new layers of legal agency and accountability for the actions of the tool,” noted Sircar. “These actions can have binding legal effects.” As businesses increasingly integrate AI technologies, he emphasized the importance of legal departments evolving from reactive gatekeepers to proactive strategic advisors. “Understanding existing regulations is essential for building guardrails that allow businesses to innovate responsibly,” he stated.

The regulatory landscape surrounding AI is complex and constantly changing, paralleling the dynamic nature of data privacy rules. According to Sircar, the European Union’s AI Act serves as a pivotal jurisdictional framework, categorizing AI systems based on risk levels—ranging from Unacceptable to Minimal Risk. “If you’re doing business in the EU with an AI tool, the EU AI Act applies, much like GDPR,” he explained, noting recent proposals aimed at simplifying compliance for small and mid-sized businesses.

In contrast, the United Kingdom is adopting a more flexible, pro-innovation regulatory approach, empowering existing regulators to manage AI technologies. Meanwhile, China’s regulatory environment emphasizes state control, focusing on algorithmic transparency and user consent. “The Cyberspace Administration of China leads enforcement actions with a stated intent of ensuring social and political stability,” Sircar commented.

In the United States, the regulatory framework varies significantly by state, with states like California, Colorado, and Illinois advancing their own privacy and automated decision-making laws. Sircar pointed out the challenges posed by this patchwork of regulations, advising organizations to benchmark their AI governance against the strictest standards, likely the EU AI Act. “Federal agencies are issuing guidance under existing statutes, with the FTC addressing unfair practices associated with AI,” he added.

Several core principles are emerging within the context of AI regulation. Transparency is paramount; the EU AI Act, for example, mandates labeling for deepfakes. “The FTC has made it clear that the deceptive use of AI for advertising is a violation,” Sircar noted, reinforcing the idea that consumers have the right to know when they are interacting with AI tools. Similarly, principles of fairness and non-discrimination are gaining traction, as illustrated by the EU’s requirement for bias detection in high-risk AI systems.

Accountability is also a crucial theme; the EU mandates formal risk management systems for high-risk AI, while the U.S. National Institute of Standards and Technology has developed a voluntary risk management framework for AI, which is rapidly becoming a de facto standard for responsible governance. Sircar humorously added, “So, if you’re using AI, that’s something to either look forward to or not look forward to.”

Human oversight is another significant concern shared by both the EU and China, as regulations require human intervention in high-risk AI systems. Even with established regulations, Sircar anticipates that data privacy laws will remain a backdrop in AI governance, echoing the principles outlined in the GDPR and California Privacy Rights Act.

Organizations considering or currently using AI must also navigate several legal risks, particularly around compliance given the diverse regulatory landscape. Sircar emphasized the importance of benchmarking internal governance against the most stringent applicable standards, and highlighted intellectual property as a particular concern. “The U.S. Copyright Office has made it clear that works generated solely by AI lack the human authorship necessary for copyright protection,” he explained, addressing the complexities around AI-generated content.

Data privacy and confidentiality risks are also prevalent, as organizations must guard against potential breaches when sensitive information is input into AI systems. Algorithmic biases, as evidenced by Amazon’s need to scrap a biased AI recruitment tool, further complicate the landscape. Contractual liabilities pose yet another challenge, as standard vendor agreements often fail to allocate risks associated with AI.

Finally, attorneys must grasp the intricacies of AI-related risks, as reflected in the American Bar Association’s Model Rules of Professional Conduct. “Our professional responsibility as attorneys requires us to review the information and verify the research,” Sircar concluded, underscoring the need for vigilance in an evolving technological landscape.

Written by the AiPressa Staff.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.