Artificial intelligence is reshaping client expectations around work speed, prompting professionals to explore how code generated by large language model (LLM) tools such as Claude Code and ChatGPT can be put to work. The aim is to build applications that boost productivity without sacrificing quality. Polished user interfaces may not be necessary, but LLMs are proving effective at producing specialized applications that streamline essential compliance tasks. Key challenges remain, however, particularly around secure coding practices and regulatory requirements.
Millions of software engineers already use AI-powered coding tools such as Claude Code and Cursor in their daily workflows. According to an industry leader in governance, risk, and compliance automation, these tools democratize coding, allowing individuals with limited programming knowledge to quickly write Python applications that automate tedious processes, and to iterate on and improve those applications rapidly.
The shift towards AI is prompting professionals to adapt their mindsets, breaking down traditional role boundaries. Client expectations regarding work speed have been fundamentally altered, necessitating a proactive response from industry players. As the landscape evolves, the demand for more efficient tools increases, allowing professionals to build custom solutions tailored to their specific needs.
Many tasks within privacy, AI, and cybersecurity governance—traditionally performed with costly governance, risk management, and compliance (GRC) software or inefficient tools like spreadsheets—are ripe for automation. Professionals can now harness LLMs to tackle compliance challenges in real time, such as identifying newly added cookies on client websites or checking for changes in vendor processing locations. Being able to build these tools independently lets practitioners solve pressing issues directly.
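A cookie-inventory check of the kind described can start as a few lines of Python. The sketch below assumes the practitioner already has a documented baseline and a fresh scan of cookie names (how the scan is gathered is left out); the cookie names shown are illustrative, not drawn from any real consent management platform.

```python
# Hypothetical sketch: flag cookies seen in the latest site scan that are
# missing from the documented baseline. Names below are invented examples.

def new_cookies(baseline: set[str], scanned: set[str]) -> set[str]:
    """Return cookie names present in the latest scan but not yet documented."""
    return scanned - baseline

# Cookies already documented in the compliance inventory (illustrative).
documented = {"_ga", "session_id", "csrf_token"}

# Cookies observed on the client site today (illustrative).
latest_scan = {"_ga", "session_id", "csrf_token", "_fbp"}

undocumented = new_cookies(documented, latest_scan)
print(sorted(undocumented))  # ['_fbp'] — a newly added cookie worth reviewing
```

Even a diff this simple, run on a schedule, replaces a manual review step and can be iterated on as needs grow.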
Even if an application's user base is small, the focus should remain on functionality. The shared challenges faced by professionals in the IAPP community lend themselves to collaborative problem-solving. Legal frameworks such as the Apache 2.0 open-source license and a variety of model contracts offer safeguards against legal risk, and a suite of free open-source software (OSS) tools, including VS Code, Pytest, and Docker, provides further support, reducing the burden of repetitive development tasks.
Despite these advancements, secure coding remains a significant hurdle for many in the GRC community. To mitigate risks, professionals can focus on limited use cases that involve publicly available information, such as data from consent management platforms or publicly accessible documents like privacy policies. Collaborating with engineering teams can also enhance security, ensuring that applications adhere to best practices while protecting sensitive data.
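Monitoring a public document such as a privacy policy for changes is one such limited, low-risk use case. A minimal sketch, assuming the page text has already been retrieved (fetching and HTML cleanup are omitted to keep the example self-contained), is to store a hash of the document and compare it on each check; the policy snippets below are invented for illustration.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable SHA-256 fingerprint of a public document's text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def has_changed(stored_hash: str, current_text: str) -> bool:
    """True if the document no longer matches the stored fingerprint."""
    return fingerprint(current_text) != stored_hash

# Illustrative policy text, standing in for a fetched public page.
policy_v1 = "We collect email addresses for account creation."
policy_v2 = "We collect email addresses and device identifiers."

baseline = fingerprint(policy_v1)
print(has_changed(baseline, policy_v1))  # False — unchanged
print(has_changed(baseline, policy_v2))  # True — flag for review
```

Because the tool touches only publicly available text and stores only a hash, it sidesteps most of the data-handling risks that make secure coding daunting for non-engineers.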
As professionals refine their coding skills, they can move from manual processes to automated solutions, eliminating the need for outdated compliance questionnaires. Fostering a collaborative environment is crucial, however; imposing engineering tasks on individuals can create resistance. A transparent partnership with engineers allows for ongoing adjustments to the code, ensuring applications meet users' needs even if they are not aesthetically polished.
While there remains a role for established GRC vendors, many organizations lack the capacity to build and maintain compliance automation in-house, and many stakeholders prefer applications with graphical user interfaces over command-line tools. As an application's user base grows, these considerations become increasingly important.
Moreover, a growing number of open-source vendors are emerging in the GRC sector, giving organizations the opportunity to customize and extend their tools without waiting for vendor support or incurring additional costs. Notable examples include OpenGRC, OpenVAS, and GLPI, which enable companies to implement specialized features and analytics tailored to their unique requirements.
Some professionals remain hesitant to embrace these changes, expressing concerns about overstepping traditional boundaries within their roles. However, the reality is clear: AI is already dismantling the barriers that once defined the division of labor within GRC teams. Privacy UX designers are increasingly taking on product management tasks, while cybersecurity engineers are delving into product design, leading to a more integrated approach to problem-solving.
As these transformations unfold, legal counsel is also encouraged to engage in prototyping and lightweight application development. This collaborative approach fosters innovation across the GRC landscape, ultimately benefiting organizations and the public alike.
In summary, the convergence of AI coding, open-source GRC tools, and evolving professional mindsets is set to revolutionize how tools are created, maintained, and extended in this space. Application development is no longer the sole domain of vendors or engineering teams; it rests in the hands of the entire community. With available resources, the future of GRC application building is poised for significant change, inviting practitioners to explore new possibilities.