
AI Cybersecurity

New Analysis Reveals AI Tools’ Security Risks: Indefinite Data Storage and Poor Access Controls

A new analysis warns that companies risk severe data breaches from indefinite AI data storage and inadequate access controls, and urges immediate action to build robust governance.

As companies increasingly adopt AI chatbots for workplace queries, a significant yet often overlooked data security concern arises. Employees’ inquiries are retained, with many organizations lacking clear protocols on data deletion. This situation, amplified across thousands of employees and organizations, presents a considerable security risk, according to a new analysis from Brooks Kushman, a law firm specializing in technology and intellectual property.

The firm identifies two key issues that pose urgent security threats in corporate AI: the indefinite storage of data and inadequate controls over access to AI systems. The data retention issue is more pervasive than many executives might assume. Employees frequently upload sensitive materials—such as client records, financial data, and trade secrets—without realizing that these files may be stored indefinitely. Some AI platforms even utilize these interactions to improve their models unless companies actively opt out.

This trend contributes to a growing attack surface; the more data a company retains, the greater the risk of data breaches. Regulators are increasingly scrutinizing how organizations manage and limit this exposure, adding further pressure to corporate governance. As Brooks Kushman notes, the accumulation of retained data without proper oversight can lead to severe security vulnerabilities.
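The retention risk described above is usually addressed with an explicit deletion policy rather than open-ended storage. A minimal sketch of such a policy check in Python (the 90-day window and the record layout are assumptions for illustration, not details from the analysis):

```python
from datetime import datetime, timedelta, timezone

# Assumed policy window; actual retention periods depend on the
# organization's legal and regulatory obligations.
RETENTION = timedelta(days=90)

def purge_expired(records, now=None):
    """Return only records still inside the retention window.

    Each record is a (timestamp, payload) pair; entries older than the
    window are dropped instead of being stored indefinitely.
    """
    now = now or datetime.now(timezone.utc)
    return [(ts, payload) for ts, payload in records if now - ts <= RETENTION]
```

In practice the same check would run as a scheduled job against the AI platform's stored conversations, so that the attack surface stops growing with every employee query.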

The second significant issue is access control: who, or what, can use AI systems, and what those systems are permitted to do. Traditional software limits each user to specific tools and data sets. In contrast, AI technology allows a single user with extensive permissions to extract information from across an organization, generate new content, and disseminate that output without oversight.

The complexities multiply when the “user” is an AI agent capable of independent operations and decision-making. Brooks Kushman argues that these AI agents should be managed similarly to privileged human employees, given their potential to access and manipulate sensitive information.

“AI security is no longer just about protecting models. It is about controlling data, defining access, preserving evidence, and ensuring accountability across complex, evolving systems,” the firm states. To mitigate these risks, Brooks Kushman advocates for implementing a framework known as Role-Based Access Control (RBAC). This system clearly delineates what each individual and AI agent is authorized to do within an organization’s systems, establishing distinct permissions for roles such as developers and managers.
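The RBAC approach the firm advocates can be sketched briefly: each role, human or AI agent, carries an explicit set of permitted actions, and every action is checked against that set. The roles and actions below are hypothetical examples, not a framework from the analysis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    name: str
    permissions: frozenset  # actions this role may perform

# Distinct permissions per role; an AI agent gets its own narrowly
# scoped role and is managed like a privileged employee.
ROLES = {
    "developer": Role("developer", frozenset({"read_code", "run_model"})),
    "manager":   Role("manager",   frozenset({"read_reports", "approve_output"})),
    "ai_agent":  Role("ai_agent",  frozenset({"read_docs", "draft_summary"})),
}

def is_allowed(role_name: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    role = ROLES.get(role_name)
    return role is not None and action in role.permissions
```

The key design choice is deny-by-default: an AI agent can draft a summary but cannot approve output or read code unless that permission is deliberately granted to its role.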

Legal risks are also significant. A recent federal court ruling in United States v. Heppner determined that conversations conducted with publicly available AI tools lack attorney-client privilege. This leaves lawyers and executives who use consumer-grade AI products for sensitive legal analyses exposed, since those conversations may be discoverable in court. The decision underscores the need for companies to use enterprise-grade AI platforms with formal security commitments rather than free consumer applications.

As regulatory pressures mount, the urgency for companies to enhance their AI governance structures is becoming increasingly clear. The EU AI Act, new U.S. state privacy laws, and intensified federal scrutiny are all compelling organizations to demonstrate robust governance frameworks around their AI systems. Brooks Kushman stresses that firms that proactively tighten data retention policies, develop formal access protocols, and educate employees on responsible AI usage will be better equipped to navigate these challenges than those that delay action.

The future of AI in the corporate landscape relies heavily on how companies address these security vulnerabilities. With data privacy concerns escalating and regulatory frameworks evolving, organizations that prioritize comprehensive management of AI systems will not only protect sensitive information but also secure their positions in an increasingly competitive marketplace.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.