OTTAWA — The Canadian federal government is poised to unveil its new national artificial intelligence strategy, but critics argue it has not adequately engaged the public. Polls point to a skeptical public that wants regulatory guardrails to ensure AI technologies are used responsibly.
“We see more of an inclination to want government to be a regulator of AI, to create the guardrails by which it operates,” said David Coletto, CEO of Abacus Data. “Because I think there is, if I’m going to describe public opinion on it right now, there’s more concern than there is optimism.”
Artificial Intelligence Minister Evan Solomon has pushed a rapid timeline for the strategy, launching a consultation process last fall that was termed a “sprint.” Solomon appointed an expert group to deliver recommendations within 30 days, with a public consultation running over a similar period.
Critics, however, contend that the time allotted was insufficient and that the expert group largely consisted of industry advocates. Some of these critics have launched an online “people’s consultation” on AI, asserting the need for a more comprehensive public dialogue. “This seems like upping the ante on moving fast and breaking things,” said tech lawyer Cynthia Khoo, who signed an open letter expressing dissatisfaction with the government’s approach. “The Canadian public deserves better.”
In that open letter published last October, over 160 signatories, including lawyers, activists, and human rights organizations, voiced concerns that the government displayed “serious disregard” for public apprehensions regarding AI. The letter addressed a range of issues, including environmental impacts, threats to labor rights, potential mental health effects such as AI-triggered psychosis, inaccuracies in generative AI outputs, privacy risks, and the rise of non-consensual intimate deepfakes.
The online consultation, which began on Wednesday, invites submissions until March 15 and aims to incorporate public feedback on how AI affects individual lives, while allowing for broader commentary. In contrast, the government’s own 26-question consultation document primarily focuses on themes like economic benefits, research, and scaling Canada’s AI industry, with only three questions addressing safety and public trust.
Under Prime Minister Mark Carney, the Liberal government has shifted its AI policy emphasis from concerns about AI harms to prioritizing economic advantages. Yet, public sentiment appears misaligned with this approach. A Leger poll conducted in August revealed that 85 percent of respondents believe the government should regulate AI tools to ensure their ethical and safe use. This suggests that Canadians are more cautious than the government regarding the risks associated with AI.
Further research by Alex Kohut of North Poll Strategies in November found that 60 percent of Canadians prefer a government approach that treats AI technology with skepticism to prevent harm or deception, while 40 percent favor supportive measures to boost the economy through AI. Kohut acknowledged that the public’s wariness is notable given how highly economic growth ranks as a priority. “It does seem that in this case, there are enough concerns about some of the other aspects of this,” he noted.
When asked what Solomon should prioritize in AI policy, 60 percent of respondents indicated that ethical and safe use legislation should be the top focus, while 34 percent preferred enhancing government efficiency. Attracting investment for AI research and reducing regulatory barriers were lower on the priority list at 28 percent and 24 percent, respectively. The online survey, conducted from November 1 to 7 among 1,687 Canadians, lacks a margin of error due to its non-random sampling method.
Concerns were also raised regarding the government’s lengthy public consultation questionnaire, which included mandatory open-ended questions that may have deterred some respondents from completing it. Kohut pointed out that the government’s use of AI to analyze submissions could result in “a robot telling a robot what to do for the policy,” distancing policymakers from genuine public input.
In response, Solomon’s communications director asserted that public trust is central to the government’s strategy and that a multifaceted engagement process was designed to capture diverse perspectives. “The engagement process for the updated national AI strategy was designed to be broad and multi-channel,” Peter Wall stated in an email, highlighting the government’s consultations with various stakeholders, including civil society organizations and industry representatives.
Coletto noted that Canadians hold conflicting feelings about AI, optimistic about its capabilities while harboring significant anxieties about its implications. “The risks, I think, still outweigh in their minds the positives,” he said, suggesting that the government’s bullish stance on AI may not resonate with the general public. He added that this tension could offer an opening for parties like the NDP to advocate for labor protections amid AI adoption.
This ongoing tension between the government’s pro-AI agenda and public sentiment underscores the challenges ahead as Canada navigates the complexities of integrating AI into society. As consultations continue, the outcomes will likely shape not only the regulatory landscape but also the broader discourse on the ethical use of technology in the country.

















































