New York City Council Passes GUARD Act for AI Transparency and Accountability

New York City Council unanimously passes the GUARD Act, establishing an independent oversight office for AI tools to enhance transparency and protect residents’ rights.

The New York City Council unanimously passed the GUARD Act on Tuesday, a comprehensive legislative package aimed at enhancing transparency and accountability in the city’s use of artificial intelligence (AI) tools. The legislation, spearheaded by Council member Jennifer Gutierrez, who chairs the council’s technology committee, establishes oversight mechanisms that Gutierrez believes are urgently needed.

“We just got through a campaign cycle here, and [AI] was a big topic about how opponents were utilizing AI tools for the purpose of campaigning, so it’s still very fresh in everyone’s minds,” Gutierrez said in a recent interview. “I think everyone wants to get behind more transparency and better functioning government that’s going to protect people.”

Central to the GUARD Act is the creation of an Office of Algorithmic Data Accountability, an independent body tasked with reviewing, auditing, and monitoring AI tools utilized by city agencies both prior to and after their deployment. The office will also investigate public complaints and publish a directory of every AI system it evaluates, thereby increasing transparency regarding how automated tools are employed in city operations.

Gutierrez emphasized that the office’s role is critical for ensuring the safety of New Yorkers. “This office should function in this way to keep New Yorkers safe, to ensure that we’re being transparent, that we’re disclosing with the public tools that are being used and that we’re working really hard to check biases when they’re reported,” she explained.

For years, various city departments have employed AI and automation tools to make decisions on critical issues such as housing access, policing, and benefit distribution, often with little oversight. This lack of regulation has left residents vulnerable to potentially biased or erroneous algorithms. A report from The Markup in May revealed that the city’s Administration for Children’s Services was using AI to flag families for increased scrutiny, predicting which children might experience harm without informing those affected.

In 2023, New York City began enforcing Local Law 144, which regulates the use of automated employment decision tools to combat bias in hiring and mandates annual audits of those tools. Gutierrez pointed out that the previous administration had piloted AI services without sufficient public disclosure. “They were not very forthcoming with contracts, with whether or not the tools were being checked for biases, especially for agencies that were using these tools for public services,” she noted.

Further complicating the landscape was the city’s AI Action Plan, released under Mayor Eric Adams in 2023. While the plan provided valuable recommendations, Gutierrez argued that it left city agencies to navigate the complexities of AI independently, leading to inconsistent applications of the technology. “This administration, I think, very smartly put together a paper which had a really good set of pillars and recommendations,” she said. “But they were just recommendations. None of it was going to be enforced.”

The GUARD Act also introduces mandatory citywide standards that require agencies to protect residents’ data privacy, test AI systems for fairness, and undergo independent evaluations before launching any new tools. These measures aim to ensure that the city’s use of AI aligns with ethical standards and public interest.

As cities nationwide grapple with the implications of AI technology, New York’s legislative approach may serve as a model for other municipalities seeking to implement accountability measures in the face of rapid technological advancement. The GUARD Act marks a significant step toward establishing a framework that prioritizes the rights and safety of residents in an increasingly automated world.


