Policymakers and industry leaders in Canada are grappling with the implications of artificial intelligence (AI) and digital technology for the economy and daily life. The federal government has earmarked more than $1 billion over the next five years to bolster the nation’s AI and quantum computing ecosystems while integrating AI technology into its operations. Amidst a turbulent relationship with the United States, Prime Minister Mark Carney advocates for an AI strategy that emphasizes data sovereignty. However, AI Minister Evan Solomon argues for a shift away from excessive regulation, suggesting that Canada must prioritize economic benefits from AI.
Despite these ambitious initiatives, Canada’s regulatory framework for AI remains underdeveloped, raising concerns about privacy and human rights. Although various non-binding frameworks have been introduced, there is no binding legislation to protect Canadians from potential harms associated with AI technologies. In September 2025, the government launched an AI Strategy Task Force and a 30-day “national sprint” for public input, yet critics argue that the initiative fails to address significant issues. An open letter from human rights organizations and academics highlighted that true sovereignty over technology requires robust protections against its risks.
Previous attempts to regulate AI include the 2022 Artificial Intelligence and Data Act (AIDA), Canada’s inaugural legislative effort to tackle AI-related privacy and human rights concerns. The act aimed to assess AI harms and bias, but it applied only to high-impact systems. In contrast, the European Union’s AI Act employs a tiered, risk-based approach that categorizes AI systems into four levels of risk and assigns obligations accordingly. Critics contend that the AIDA’s public consultation process was exclusionary and its provisions inadequate, particularly in protecting marginalized communities.
Experts have identified additional problems with the AIDA’s definitions of risk and harm, arguing that it overlooks community-level and environmental impacts that are harder to quantify. Critics also assert that the legislation did not sufficiently empower individuals to lodge complaints about AI systems, leaving vulnerable populations without recourse. The legislative effort ultimately stalled when Bill C-27, which contained the AIDA, died with the prorogation of Parliament, leaving Canada without legally binding AI regulations.
On February 3, 2026, Innovation, Science and Economic Development Canada published the findings of its national sprint, echoing many concerns raised by human rights groups. These include worries about privacy, systemic bias, and job displacement, stressing the urgency of implementing effective legislative measures to harness AI’s potential while mitigating its risks. However, the reliance on generative AI tools from major U.S. companies to analyze public submissions raises doubts about the government’s commitment to an unbiased approach.
To ensure that Canada can effectively manage AI’s rapid evolution, experts call for the establishment of a complaint mechanism for AI-related harms. This could be achieved through the appointment of a federal AI ombudsperson or through collaboration with the Canadian Human Rights Commission. Such a mechanism, paired with an investigative and enforcement body such as an AI and Data Commissioner, would allow AI-related issues to be addressed before harms materialize.
The government claims that fostering public trust in AI is a priority. To achieve this, there needs to be a balance between regulatory measures and the economic advantages offered by AI technologies. Introducing legally binding instruments designed to protect Canadians and promote the safe use of AI in both public and private sectors is essential. Adopting the EU’s tiered risk-based approach could further enhance regulatory effectiveness by enabling a more nuanced assessment of AI systems beyond just high-impact categories.
Moreover, the AIDA’s definition of harm, which limits it to physical or economic damage, should be re-evaluated. A broader definition that encompasses the impact on dignity, privacy, human rights, and environmental sustainability is necessary. As the pace of AI advancements accelerates, so too must the diligence of policymakers in addressing these challenges to ensure that the benefits of AI do not come at the expense of Canadians’ rights and welfare. The path forward demands a collaborative effort to create comprehensive AI regulations that prioritize human rights and societal well-being.
The author would like to thank Katherine Scott and Hadrian Mertins-Kirkwood for their invaluable guidance, as well as the interdisciplinary scholars who generously contributed their insights into the complex intersection of AI, policy, and human rights.