Artificial intelligence (AI) has transformed the landscape of reverse engineering, allowing a broader range of users to dissect and analyze public-facing products with unprecedented ease. Once a domain limited to highly skilled professionals, extensive resources, and significant time commitments, reverse engineering now demands little more than a curious mindset and access to powerful AI tools. For companies that rely on proprietary information and trade secrets, this shift poses serious challenges, particularly for in-house counsel tasked with protecting sensitive data.
Reverse engineering involves analyzing publicly available information—such as software code or user interfaces—to uncover nonpublic details about a product or process. Historically, this laborious process required expert knowledge and substantial information. However, recent advancements in AI, including sophisticated code analysis tools and automated data scrapers, have streamlined the process. Reverse engineering can now be conducted with minimal data and at an astonishing scale and speed.
Machine learning and predictive modeling empower AI to unveil hidden information, deducing complex algorithms from behavioral patterns and reconstructing proprietary logic from software outputs. This evolution means that no company—whether a software-as-a-service (SaaS) provider, a traditional tech firm, or even industries that employ digital processes—can consider itself immune from the risk of having its trade secrets exposed.
In the United States, trade secrets are safeguarded under the Uniform Trade Secrets Act (UTSA) and the Defend Trade Secrets Act (DTSA). These laws define a trade secret as information that derives economic value from being confidential and is not readily ascertainable by the public. A key focus of these statutes is on misappropriation—specifically, wrongful acquisition or use through “improper means.” Nevertheless, both the UTSA and DTSA carve out an exception: information acquired through reverse engineering a publicly available product is not deemed “improper,” thereby exempting it from the definition of misappropriation.
However, the rise of AI is complicating traditional definitions of “proper” and “improper.” For instance, is it “proper” to utilize bots for scraping massive datasets? Does manipulating a generative AI model through “prompt injection” constitute fair play, or does it veer into the realm of cyberattacks? Recent legal cases indicate that courts are struggling to navigate these complex questions.
In a notable case in 2024, a company alleged that competitors employed prompt injection techniques to extract sensitive outputs, potentially acquiring valuable trade secrets. The situation was further complicated by claims of credential impersonation, which brought the concept of “improper means” into sharper focus. Similarly, the Eleventh Circuit recently ruled that the manner in which data is accessed can render even publicly available information “improper” if acquired through automated scraping methods. Such a ruling raises concerns for companies that assume “technical public availability” alone suffices as a protective measure.
As courts grapple with the implications of AI on trade secret laws, a significant risk emerges: AI tools may redefine what constitutes “readily ascertainable” information. As AI capabilities improve, courts may determine that certain previously secure data could lose its protected status, not due to a lapse in security, but because technological advancements enable easier deductions of confidential information.
The pressing question for in-house counsel and business leaders is how to adapt to this rapidly evolving landscape. Traditional strategies for securing trade secrets may no longer suffice. Businesses must consider several proactive steps to safeguard their confidential information. For example, technical barriers such as rate limiting, CAPTCHA challenges, and advanced bot detection should be implemented, particularly within SaaS platforms. AI-powered monitoring tools can also identify unusual patterns indicative of scraping or prompt injection attempts.
Furthermore, companies should revise their legal protections. This includes updating terms of service to explicitly prohibit automated access and reverse engineering, alongside enforcing these clauses consistently. It is advisable to include explicit provisions related to AI-specific attack vectors within contracts and non-disclosure agreements (NDAs). Documenting incidents and responses will also demonstrate adherence to “reasonable measures” in case of future litigation.
Revisiting best practices remains essential. Limiting access to sensitive information on a strict “need-to-know” basis and maintaining a robust trade secret management program are critical. Regular audits of access controls and employment agreements should also be undertaken to ensure they are suitable for the realities of the AI era.
As the legal landscape continues to evolve amid the rise of AI, the need for vigilance has never been greater. Companies cannot afford to remain passive while the law struggles to catch up with technological advancements. A layered approach, encompassing legal, technical, and procedural measures, will better equip businesses to protect their intellectual assets as they navigate the uncharted waters of an AI-driven future. The time to review and enhance trade secret protocols is now.
See also
Notre Dame’s $50.8M Grant Fuels Groundbreaking AI Ethics Initiative with DELTA Framework
Nvidia Licenses Groq’s Inference Tech, Attracts Key Leadership Amid Competitive AI Landscape
Mother Sues Character.AI After Allegations of ‘Sexting’ with 11-Year-Old Son
Western Digital Surges 8% Post-Nasdaq-100 Entry; AI Storage Strategy Reshapes Outlook



















































