In light of recent tragedies, including the Tumbler Ridge shooting, industry leaders are calling for stronger, enforceable safety standards around artificial intelligence (AI) technologies. Sayan Navaratnam, founder and CEO of The Malar Group of Companies, emphasized that vague safety frameworks are insufficient and that tech companies must take the initiative to establish robust guidelines to prevent future incidents.
Navaratnam’s remarks, directed at fellow executives, board members, and industry associations, highlight a pressing need for accountability within AI platforms. He argued that families affected by the Tumbler Ridge incident deserve more than governmental reassurances; they require a commitment from companies to uphold standards that can avert similar failures. “The era of vague safety frameworks has passed; the era of enforceable standards must begin,” he asserted.
While acknowledging that concerns about AI’s negative impacts may sometimes be exaggerated, Navaratnam maintained that the technology’s benefits vastly outweigh its risks, provided those risks are properly managed. “Thanks to its power and reach, this technology will bring value that many cannot even imagine,” he explained, underscoring the importance of regulatory frameworks that prioritize safety while still allowing innovation to thrive.
He commended the Canadian government, particularly Minister Evan Solomon, for advocating for changes to enhance safety for citizens. However, Navaratnam argued that such regulations should not be solely dictated by government bodies. “The companies designing these systems understand them better than any regulator,” he stated, calling for the industry itself to take the lead in formulating comprehensive safety measures.
Key components of these measures should include defining thresholds for credible threats, establishing escalation protocols that respect privacy, and determining when automated detection should trigger human review. By investing in such systemic safeguards, companies can integrate safety as a core feature rather than treating it as an afterthought.
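What those components might look like in practice can be sketched briefly. The Python below is a minimal, hypothetical illustration, not any company’s actual pipeline: the threshold value, the Detection fields, the redaction policy, and the enqueue_for_human_review stub are all assumptions invented for this example.

```python
from dataclasses import dataclass

# Hypothetical threshold: scores at or above this value are treated as
# credible threats and must be routed to a human reviewer.
CREDIBLE_THREAT_THRESHOLD = 0.85

@dataclass
class Detection:
    content_id: str       # opaque identifier, not the content itself
    threat_score: float   # output of an automated classifier
    user_metadata: dict   # may contain personal identifiers

def redact_for_escalation(detection: Detection) -> dict:
    """Escalate only what reviewers need; drop personal identifiers."""
    allowed_keys = {"region", "platform"}  # hypothetical privacy policy
    return {
        "content_id": detection.content_id,
        "threat_score": detection.threat_score,
        "context": {k: v for k, v in detection.user_metadata.items()
                    if k in allowed_keys},
    }

def route(detection: Detection) -> str:
    """Decide whether automated detection triggers human review."""
    if detection.threat_score >= CREDIBLE_THREAT_THRESHOLD:
        case = redact_for_escalation(detection)
        enqueue_for_human_review(case)  # stand-in for a review queue
        return "escalated"
    return "logged"  # below threshold: record, but do not escalate

def enqueue_for_human_review(case: dict) -> None:
    # Placeholder for a real review queue (e.g., a ticketing system).
    print(f"Review requested for {case['content_id']} "
          f"(score={case['threat_score']:.2f})")

if __name__ == "__main__":
    d = Detection("post-4821", 0.91,
                  {"region": "BC", "platform": "web", "email": "x@y.z"})
    print(route(d))  # escalated; the email never leaves the system
```

The design point of the sketch is that the credibility threshold, the privacy redaction, and the hand-off to a human are explicit, testable code paths rather than informal practice.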
Navaratnam also cautioned that operating in regulatory vacuums can lead to reactive legislation following crises, which can harm not just specific companies but the entire industry. The scrutiny faced by OpenAI regarding its handling of the Tumbler Ridge incident exemplifies this risk, illustrating how reputational damage can arise from inadequate safety protocols.
He warned against “blunt, performative legislation” that may prioritize a quick response over effective problem-solving, which could ultimately deepen vulnerabilities within AI technologies. “Rash decisions that disregard technical nuance don’t just stifle our most transformative sector; they create a false sense of security while leaving the actual, complex loopholes wide open,” he cautioned.
A meaningful industry-designed code of conduct for AI safety should address fundamental issues such as clear reporting structures and accountability measures. If an automated system flags content, it is imperative that humans review the flagged material based on consistent criteria. Failure to follow these protocols should lead to serious investigations and transparent outcomes.
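As an illustration, the sketch below shows one hedged way such a protocol could be encoded in Python. The checklist items, the record format, and the append-only log file are hypothetical choices for this example, not an established standard: every review must answer the same criteria, and every decision leaves a timestamped record that an investigator can audit later.

```python
import json
from datetime import datetime, timezone

# Hypothetical review criteria: every human reviewer answers the same
# checklist, so decisions are made on consistent grounds.
REVIEW_CRITERIA = (
    "explicit_threat_of_violence",
    "identifiable_target",
    "stated_means_or_timing",
)

def record_review(content_id: str, reviewer_id: str,
                  findings: dict, action: str) -> str:
    """Append an immutable, timestamped review record for later audit."""
    missing = set(REVIEW_CRITERIA) - findings.keys()
    if missing:
        # Skipping criteria is itself a protocol violation worth flagging.
        raise ValueError(f"review incomplete, missing: {sorted(missing)}")
    record = {
        "content_id": content_id,
        "reviewer_id": reviewer_id,
        "findings": findings,
        "action": action,  # e.g. "escalate_to_authorities", "dismiss"
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    line = json.dumps(record, sort_keys=True)
    with open("review_audit.log", "a") as log:  # append-only audit trail
        log.write(line + "\n")
    return line

if __name__ == "__main__":
    print(record_review(
        "post-4821", "reviewer-07",
        {c: (c == "explicit_threat_of_violence") for c in REVIEW_CRITERIA},
        "escalate_to_authorities",
    ))
```

A record like this makes the “transparent outcomes” Navaratnam calls for concrete: an incomplete review fails loudly, and a completed one is preserved in a form that can be examined after the fact.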
Cross-Border Cooperation Needed
Navaratnam stressed the necessity for cross-border coordination in establishing any safety framework, noting that the global nature of the internet means that a solely Canadian approach would be insufficient. “This might be the most difficult step, but Canada has signaled repeatedly… its hunger to lead,” he stated, recalling the aspirations articulated by prominent figures like Mark Carney.
The opportunity now lies with the private sector to demonstrate that technological innovation and public safety can coexist. Companies must act decisively to govern themselves responsibly, or they risk facing stringent government regulations that would limit the potential benefits of AI for Canadians. “Lead now or be led,” Navaratnam concluded, highlighting the imperative for immediate action in the wake of recent events.