The increasing integration of artificial intelligence (AI) into various sectors is elevating discussions around its governance and ethical use. This urgency was underscored at the "Ethical AI, Building Trust" panel during Ontario Tech University's inaugural AI Forum, held March 27. Featuring experts from academia and industry, the panel addressed the evolution of governance frameworks, regulatory measures, and the human-centred design necessary for fostering public trust in AI systems.
Under the forum's theme, "Building Trust: The Strategic Advantage of Human-Centred AI," panelists emphasized that trust is pivotal for innovation rather than an obstacle to it. "Trust is the decision to put yourself in a situation where the outcome that matters to you depends on the actions of somebody else," noted Dr. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence and Director of the Mindful AI Research Institute at Ontario Tech. He explained that in the context of AI, this uncertainty is heightened because the systems are complex and often unpredictable.
Dr. Lewis argued against blind confidence in AI, advocating instead for trustworthiness based on evidence. “The aim is not to just have more trust,” he stated, emphasizing the need for AI systems to demonstrate their trustworthiness in specific contexts.
Dr. Hossein Rahnama, Founder and CEO of Flybits, questioned the very concept of trusted AI. "Trust is something that is defined between people," he asserted, advocating a zero-trust approach toward machines and stressing the necessity of algorithmic transparency and auditability. He noted that while AI systems are often seen as decision-makers, they primarily support human decision-making processes.
The discussion also covered the current landscape of AI regulation in Canada. Dr. Steven Murphy, President and Vice-Chancellor of Ontario Tech University, described the country’s regulatory approach as a middle ground between the conservative stance of Europe and the rapid innovation focus of the U.S. He characterized the perceived divide between regulation and innovation as a false dichotomy. “What we need to start thinking about is, how do we innovate in a trustworthy environment?” he said, highlighting the opportunity for Canadians to lead in trust-based innovation.
Amber MacArthur, President of AmberMac Media and an award-winning podcaster, expressed urgency for meaningful regulation, asserting that the conversation has shifted from whether to regulate to how quickly it should be done. “We have a role as Canadians right now to actually introduce regulation that is sensible, that helps our companies thrive,” she emphasized.
Dr. Murphy pointed out a common misconception among leaders regarding AI risks, specifically about trust and social licence. He compared AI to Canada’s nuclear sector, where trust is cultivated over time through transparency, education, and robust safeguards. “We need to be thinking about building social licence,” he stated, acknowledging that public trust in AI is often undermined by a lack of understanding.
He reiterated that the responsibility lies with institutions to build that trust: “Why should I trust you and what you have to say? Well, you shouldn’t until I have won your trust.” This perspective highlights the need for ongoing dialogue and education around AI as it becomes increasingly integrated into daily life, suggesting that building an informed public is essential for the technology’s acceptance and successful implementation.




















































