
Canada’s National Security Review Agency to Examine AI Use and Potential Risks

Canada’s NSIRA launches a comprehensive review of AI governance in national security, aiming to uncover potential risks and enhance transparency across agencies.

OTTAWA — Canada’s National Security and Intelligence Review Agency (NSIRA) is launching a review of the use and governance of artificial intelligence (AI) in national security operations. The initiative aims to examine how the security community defines, uses, and oversees AI technologies.

The review agency has notified key federal ministers and organizations about the study, which encompasses the various applications of AI by Canadian security agencies, including tasks such as document translation and malware threat detection. In her letter to ministers and heads of relevant organizations, NSIRA chair Marie Deschamps emphasized that the findings will deliver insights into the adoption of new tools, guide future assessments, and identify potential gaps or risks requiring attention.

NSIRA has a statutory right to access all information held by departments and agencies under its review, including classified material, with the exception of cabinet confidences. The letter details that requests for information may include documents, written explanations, briefings, interviews, surveys, and system access. “This review may also involve independent inspections of some technical systems,” Deschamps noted.

The correspondence was sent to multiple cabinet members, including Prime Minister Mark Carney, Artificial Intelligence and Digital Innovation Minister Evan Solomon, Public Safety Minister Gary Anandasangaree, Defence Minister David McGuinty, Foreign Affairs Minister Anita Anand, and Industry Minister Mélanie Joly. It also reached the heads of agencies with significant security responsibilities, such as the Canadian Security Intelligence Service (CSIS), the Royal Canadian Mounted Police (RCMP), and the Communications Security Establishment (CSE), Canada’s signals intelligence and cyber defence agency.

In response to inquiries regarding the review, the RCMP expressed its support for an independent examination of national security and intelligence activities. “The RCMP believes that establishing transparent and accountable external review processes is critical to maintaining public confidence and trust,” the agency stated in a media release.

In 2024, a report from the National Security Transparency Advisory Group urged Canada’s security agencies to publish detailed descriptions of their current and future applications of AI systems and software. The advisory group predicted an increasing reliance on AI technology to analyze vast amounts of text and images, recognize patterns, and interpret trends and behaviors.

Both CSIS and CSE acknowledged the importance of transparency regarding AI, although they highlighted limitations on public disclosures due to their security mandates. The federal government’s principles for AI use call for openness about how, why, and when AI is employed, and for early assessment and management of the risks AI poses to legal rights and democratic norms.

The principles also advocate for training public officials involved in developing or using AI to ensure they understand legal, ethical, and operational issues, including privacy and security. In its most recent annual report, CSIS noted the implementation of AI pilot programs across the agency in alignment with the government’s guiding principles.

On its website, the RCMP listed several factors essential for ensuring that AI is employed legally, ethically, and responsibly. These aspects include careful system design to prevent bias and discrimination, respect for privacy during information analysis, transparency regarding AI decision-making processes, and accountability measures to ensure proper functioning.

The CSE’s AI strategy underscores its commitment to developing innovative solutions to critical issues through the effective use of AI and machine learning technologies. The agency aims to champion responsible and secure AI while countering threats posed by adversaries using AI. In a message included in the strategy, CSE chief Caroline Xavier said that, if deployed safely and effectively, these capabilities would allow the agency to analyze larger data sets more rapidly and precisely, improving the quality and speed of decision-making.

“We will always be thoughtful and rule-bound in our adoption of AI, keeping responsibility and accountability at the core of how we will achieve our goals,” Xavier said. She emphasized the importance of recognizing the fallibility of these technologies, advocating for incremental experimentation and scaling, along with rigorous testing and evaluation.

This review of AI governance within Canada’s national security framework could shape how these technologies are integrated into intelligence and security operations, reflecting a growing acknowledgment of the complexities and ethical considerations surrounding the use of AI in government.

Written by the AiPressa Staff

