As artificial intelligence (AI) rapidly evolves, a palpable tension emerges between our enthusiasm for digital tools and our deep-seated distrust of them. Valeria Adani, a partner at Projects by IF, encapsulated this paradox succinctly: “We love our digital tools. We just don’t trust them.” This sentiment resonates particularly as AI ventures into uncharted territory, leaving users, businesses, and regulators scrambling to keep pace.
The lag in regulatory frameworks has far-reaching implications. If AI remains untrusted due to inconsistent regulations, its adoption may stagnate. Businesses are grappling with how to implement these technologies while the rulebook is still being written, even as regulators face the daunting task of overseeing a field that evolves almost daily.
On March 10, 2026, the Digital Regulation Cooperation Forum (DRCF) convened its second Responsible AI Forum in London. The event brought together regulators, technology firms, academics, and civil society representatives to confront these pressing challenges and explore actionable insights for senior leadership teams and general counsel navigating this fluid landscape.
The DRCF, established in 2020, aims to enhance cooperation among regulators to address the unique challenges posed by the regulation of online platforms. The forum includes four UK regulators responsible for digital oversight: the Competition and Markets Authority, the Financial Conduct Authority, the Information Commissioner’s Office, and Ofcom.
Key Insights for Businesses
Keynote speaker Kenneth Cukier, a journalist at The Economist and co-author of *Big Data*, emphasized a transformative approach to regulation, suggesting a shift from a ‘do no harm’ standard to a positive duty of care. He acknowledged the complexities of regulation but insisted that AI necessitates a fundamental rethinking of our responsibilities. For businesses, this means proactively engaging with regulators and being willing to adapt business models to meet evolving expectations.
The forum highlighted a notable shift in regulatory focus: many regulators are increasingly concerned with achieving favorable outcomes rather than merely ensuring technical compliance with detailed rules. While businesses await clearer guidance, emerging standards are beginning to fill the gaps. Sheldon Mills, currently consulting on how AI might reshape retail financial services, noted that “voluntary standards will probably take on a role” in this landscape. Given the rapid pace of AI advancements, businesses should brace themselves for scrutiny that examines broader societal impacts.
In the UK, a single AI regulator has yet to materialize, resulting in existing bodies like the FCA and Ofcom being tasked with overseeing AI within their jurisdictions. This approach rests on five core principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Each regulator interprets these principles according to its specific mandate, a strategy aimed at fostering AI growth while safeguarding human rights and upholding democratic norms.
However, potential pitfalls remain. Dame Melanie Dawes, CEO of Ofcom and chair of the DRCF, acknowledged the risk of regulatory gaps as different bodies implement the framework uniquely. The future of this cooperative arrangement, whether it will eventually require a statutory basis, is still undecided. For now, businesses must recognize that inter-regulatory coordination is still in its infancy.
As companies consider adopting AI technologies, the timing of their efforts is critical. Businesses are advised not to delay their initiatives, as waiting for a clearer regulatory environment may lead to missed opportunities. Conversely, a hurried approach without adequate strategy could result in missteps. The challenge lies in striking a balance: advancing with caution and deliberation rather than succumbing to fear of missing out.
Throughout the forum, the word “trust” resonated prominently. Without it, AI risks becoming underutilized and relegated to the status of expensive shelfware. The *Future @ Work 2026* report from Lewis Silkin noted that cultural resistance, including fears of job loss and skepticism towards AI outputs, remains a significant barrier to adoption. Ensuring adequate human oversight of AI products is essential for fostering this trust.
Tim Gordon, a partner at Best Practice AI, underscored that AI will become central to every business model, necessitating ongoing discussions beyond quarterly meetings. Leadership must confront challenging questions about AI behavior and its alignment with company values. Accountability for AI oversight should extend beyond IT departments to encompass all levels of management.
The Responsible AI Forum provided a wealth of insights for navigating the complexities surrounding AI regulation. As the landscape continues to evolve, organizations must engage in the broader dialogues shaping the future of AI. Stakeholders are encouraged to share their concerns and aspirations regarding AI as these discussions unfold.