The FBI confirmed this week that it is actively purchasing commercially available data on American citizens, highlighting a significant point of contention in its ongoing standoff with the artificial intelligence firm Anthropic over mass surveillance practices. During a Senate Intelligence Committee hearing on Wednesday, FBI Director Kash Patel addressed a question from Senator Ron Wyden regarding the agency's acquisition of location data, an issue the Bureau had previously acknowledged in 2023.
This admission brings renewed scrutiny to the broader surveillance capabilities of the federal government, which are already extensive even without AI. Despite assurances that it upholds Fourth Amendment protections against unreasonable searches, Patel's comments reveal that the FBI can, and does, conduct surveillance operations at scale by leveraging commercial data.
Federal law typically mandates that law enforcement agencies obtain a warrant to gather historical or real-time cellphone location data, a process that necessitates demonstrating probable cause to a judge. Although the Supreme Court ruled in 2018 that authorities could not compel companies to disclose sensitive information like cellphone location records, it did not explicitly prohibit the purchase of such data. This regulatory gap has allowed agencies to contract with data brokers who compile vast amounts of information from various sources, including apps and web browsers, enabling them to acquire what would otherwise require a warrant.
This practice has drawn ire from privacy advocates, who argue that it represents a circumvention of constitutional protections. The data broker industry, valued at hundreds of billions of dollars globally, serves as a critical resource for modern marketing and targeted advertising, but its potential for misuse raises significant ethical concerns.
Critics, including researchers and journalists, have long documented instances where information obtained from data brokers has been used to uncover private details about citizens without their consent. In 2019, the New York Times illustrated the ease with which smartphone location data could pinpoint individuals, revealing the identity of a senior defense official through analysis of daily movements.
As fears over surveillance have grown, the advancements in AI technology have exacerbated concerns about the potential for mass tracking and data exploitation. Reports have surfaced regarding the Department of Homeland Security’s efforts, in conjunction with private entities, to create comprehensive datasets that could be employed for various government functions, including immigration enforcement.
The implications of these practices are not merely theoretical. During previous mass deportation efforts, reports indicated that agencies like ICE utilized commercially available data to surveil neighborhoods and track individuals to their homes or workplaces. In a more recent case from 2024, a company allegedly tracked nearly 600 visits to Planned Parenthood locations, providing data for an extensive anti-abortion advertising campaign.
During Anthropic’s conflict with the Pentagon, CEO Dario Amodei articulated concerns regarding the role of data brokers in facilitating mass surveillance through AI. He noted, “Under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” emphasizing the ease with which AI could assemble disparate data into a comprehensive profile of an individual’s life.
Amodei pointed out the vague nature of the Pentagon’s demands for AI firms to permit “any lawful use” of their technologies, a stipulation that he argued could potentially encompass mass surveillance. Senator Wyden described this loophole as an “outrageous end run around the Fourth Amendment,” highlighting the risks of unchecked government access to personal data.
In contrast, OpenAI, which secured a contract with the Department of Defense following Anthropic's refusal, initially left the terms governing its use of commercial data ambiguous. Following public backlash, the company added a stipulation to its agreement that its AI systems "shall not be intentionally used for domestic surveillance of U.S. persons and nationals." This caveat was intended to clarify restrictions against deliberate tracking, but experts have expressed skepticism about its robustness.
Privacy advocates have raised concerns that the language—specifically terms like “intentionally” and “deliberate”—provides a loophole for the government to argue that any personal data obtained is merely incidental, thereby allowing them to persist in surveillance operations without violating existing laws. This ongoing debate underscores a pressing need for clearer regulations that address the intersection of technology and civil liberties.
As discussions around the ethical use of AI and data privacy continue to evolve, the implications of mass surveillance by federal agencies remain a critical topic, reinforcing the urgency for comprehensive reforms to protect individual rights in an increasingly interconnected digital landscape.






















































