Corporate entities in China, Germany, and the United States are increasingly shaping the global governance of artificial intelligence (AI), according to a study published in Big Data & Society. The research, titled “Strategising imaginaries: How corporate actors in China, Germany and the US shape AI governance,” details efforts by 30 major corporations and industry associations since 2017 to influence narratives surrounding AI ethics, responsibility, and regulation.
The analysis, which examines 102 corporate documents published over six years, indicates that these companies are not merely reacting to emerging AI regulations; rather, they are actively defining the standards and ideals that policymakers and international institutions will adopt. The study identifies a coordinated effort among global tech firms to promote competing narratives, or “socio-technical imaginaries,” designed to underscore their expertise, expand their operational autonomy, and ensure that industry-led solutions dominate future AI governance frameworks.
In mapping these divergent narratives, the study reveals that while the three regions present distinct visions for AI integration and governance, they share a common portrayal of AI as a transformative force, positioning corporations as moral and technical authorities. In China, major players like Tencent, Huawei, SenseTime, and Baidu advance two dominant imaginaries. One depicts AI as a catalyst for societal development aligned with national ambitions for an “intelligent society,” aiming for economic modernization and social improvement. The other, termed “Trustworthy AI,” emphasizes safety, explainability, privacy, and accountability, developed in conjunction with government bodies and industry alliances.
German corporations, including Siemens, SAP, Bosch, Deutsche Telekom, and Volkswagen, adopt two interconnected imaginaries. One frames these firms as leaders in applying European AI principles, particularly those related to the EU’s ethics guidelines and the upcoming AI Act. This narrative prioritizes compliance and human-centered design. In contrast, Germany’s startup ecosystem pushes an “AI made in Europe” narrative, advocating for economic sovereignty and reduced regulatory burdens on small and medium-sized enterprises.
In the United States, the study notes a striking consistency among giants like Google, Microsoft, Amazon, Facebook/Meta, IBM, Intel, Palantir, and OpenAI. The prevailing imaginary, “Responsible AI,” promotes innovation-led governance, positioning American firms as leaders in voluntary standards while resisting government intervention. Companies advocate for technical solutions to risks, emphasizing fairness tools and transparency techniques as alternatives to binding regulations.
The researchers also highlight what they term “hedging imaginaries,” a strategy that allows corporations to promote multiple, sometimes contradictory visions of AI governance, preserving their flexibility in navigating regulatory pressures and public trust. For instance, Chinese firms advance optimistic narratives about AI-driven prosperity while shaping national standards. In Germany, companies publicly support EU regulatory ambitions but lobby for reduced restrictions behind the scenes. American companies often use terms such as “Trustworthy AI” and “Responsible AI” interchangeably, signaling alignment with global ethical debates while promoting their own model of governance.
Hedging imaginaries serve as strategic tools for managing stakeholder relationships and deflecting criticism, ultimately reinforcing corporate power in the evolving governance ecosystem. The authors argue that this influence extends beyond traditional lobbying; by defining what responsible AI entails and embedding these definitions into technical infrastructures, companies can create durable governance mechanisms. The study warns that this dynamic risks sidelining diverse perspectives and constraining democratic debate.
Furthermore, corporate imaginaries are not limited to narrative framing; they directly shape the technical and institutional infrastructure that will govern AI for years to come. Companies actively craft standards, toolkits, and certification systems that, once adopted, can become de facto governance mechanisms. In China, the integration of corporate-designed risk-assessment tools and data governance practices into national laws exemplifies this influence. In Germany, firms play a critical role in EU policy formation, building standardization networks that institutionalize their interpretations of governance.
In the United States, major firms create open-source toolkits and internal audit structures that shape global development practices, illustrating how corporate actors lay the groundwork for AI oversight before formal regulatory responses are established. This corporate dominance over AI governance calls into question the inclusivity of the regulatory landscape and underscores the need for a broader discourse in defining the future of AI.