The New York City Council unanimously passed the GUARD Act on Tuesday, a comprehensive legislative package aimed at increasing transparency and accountability in the city’s use of artificial intelligence (AI) tools. The legislation, spearheaded by Council member Jennifer Gutierrez, who chairs the council’s technology committee, establishes oversight mechanisms that Gutierrez believes are urgently needed.
“We just got through a campaign cycle here, and [AI] was a big topic about how opponents were utilizing AI tools for the purpose of campaigning, so it’s still very fresh in everyone’s minds,” Gutierrez said in a recent interview. “I think everyone wants to get behind more transparency and better functioning government that’s going to protect people.”
Central to the GUARD Act is the creation of an Office of Algorithmic Data Accountability, an independent body tasked with reviewing, auditing, and monitoring AI tools used by city agencies before and after deployment. The office will also investigate public complaints and publish a directory of every AI system it evaluates, increasing transparency around how automated tools are used in city operations.
Gutierrez emphasized that the office’s role is critical for ensuring the safety of New Yorkers. “This office should function in this way to keep New Yorkers safe, to ensure that we’re being transparent, that we’re disclosing with the public tools that are being used and that we’re working really hard to check biases when they’re reported,” she explained.
For years, various city departments have employed AI and automation tools to make decisions on critical issues such as housing access, policing, and benefit distribution, often with little oversight. This lack of regulation has left residents vulnerable to potentially biased or erroneous algorithms. A report from The Markup in May revealed that the city’s Administration for Children’s Services was using AI to flag families for increased scrutiny, predicting which children might experience harm without informing those affected.
In 2023, New York City began enforcing Local Law 144, which regulates the use of automated employment decision tools by employers in the city to combat bias in hiring and mandates annual bias audits of these tools. Gutierrez pointed out that the previous administration had piloted AI services without sufficient public disclosure. “They were not very forthcoming with contracts, with whether or not the tools were being checked for biases, especially for agencies that were using these tools for public services,” she noted.
Further complicating the landscape was the city’s AI Action Plan, released under Mayor Eric Adams in 2023. While the plan provided valuable recommendations, Gutierrez argued that it left city agencies to navigate the complexities of AI independently, leading to inconsistent applications of the technology. “This administration, I think, very smartly put together a paper which had a really good set of pillars and recommendations,” she said. “But they were just recommendations. None of it was going to be enforced.”
The GUARD Act also introduces mandatory citywide standards that require agencies to protect residents’ data privacy, test AI systems for fairness, and undergo independent evaluations before launching any new tools. These measures aim to ensure that the city’s use of AI aligns with ethical standards and the public interest.
As cities nationwide grapple with the implications of AI technology, New York’s legislative approach may serve as a model for other municipalities seeking to implement accountability measures in the face of rapid technological advancement. The GUARD Act marks a significant step toward establishing a framework that prioritizes the rights and safety of residents in an increasingly automated world.