In a thought-provoking exploration of artificial intelligence, John Donovan’s article “Consulting the Machines,” published on March 22, 2026, examines the role of AI platforms in addressing a long-standing dispute involving Royal Dutch Shell. By querying multiple AI models—including Grok, Copilot, and Perplexity—about the same 30-year issue, Donovan sought to highlight both convergence and divergence in their responses, thereby creating an informal advisory panel for collective insights.
The experiment drew on a diverse range of perspectives, enabling critical engagement without falling prey to single-source bias. As noted in the article, the AIs collectively recognized the dispute as significant, unusual, and potentially resolvable, though not existential for Shell. Such consensus among the platforms offers a stronger signal than any single AI’s opinion, while also underscoring the need for human judgment in interpreting these outputs.
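The panel idea described above can be sketched in a few lines of standard-library Python. This is a minimal illustration, not Donovan’s actual procedure: `panel_verdict` and the sample verdicts are invented for this sketch, and real use would replace the hard-coded dictionary with calls to each platform’s API.

```python
from collections import Counter

def panel_verdict(responses, min_agreement=0.6):
    """Aggregate short verdicts from several AI models into one signal.

    responses: dict mapping model name -> verdict string.
    Returns (majority_verdict, agreement_ratio, is_consensus).
    """
    if not responses:
        raise ValueError("need at least one model response")
    # Normalize wording so "Yes" and "yes" count as the same verdict.
    counts = Counter(v.strip().lower() for v in responses.values())
    verdict, votes = counts.most_common(1)[0]
    ratio = votes / len(responses)
    # Below the threshold the "panel" diverges, which is itself useful:
    # a human should weigh the individual answers rather than the tally.
    return verdict, ratio, ratio >= min_agreement

# Illustrative verdicts (invented, not quoted from the article):
panel = {
    "Grok": "resolvable",
    "Copilot": "resolvable",
    "Perplexity": "not resolvable",
}
print(panel_verdict(panel))
```

Note that this tallies agreement, not truth: as the article warns, models trained on overlapping data can converge on the same error, so a high ratio is a signal to investigate, not a verdict to act on.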
This approach to AI utilization is seen as a positive development within the tech community. By treating AI not as an oracle but rather as a panel of advisors, users can engage more critically with the information presented. Donovan’s methodology aligns with the ethos of many AI developers, who believe that genuine intelligence emerges through the interplay between machines and users.
However, the experiment is not without its downsides. One major risk is the potential for hallucination propagation and false consensus. Shared errors among AI platforms can amplify misinformation, particularly if their collective output is treated as authoritative. The convergence of opinions may not necessarily reflect objective truth, given that many models are trained on overlapping datasets, which can lead to groupthink.
Another concern is the lack of accountability inherent in AI tools. These platforms operate without a real-time understanding of legal or business contexts, making it dangerous for organizations to treat AI-generated outputs as formal advice. Companies may risk poor decision-making or legal complications if they fail to apply due diligence in interpreting AI insights.
Moreover, privacy issues arise from feeding sensitive corporate information into multiple public AIs, as even anonymized data can create a permanent digital footprint. This opens avenues for strategic manipulation of the outputs, where parties might engineer prompts to generate favorable “panel” opinions. The experiment demonstrates both the promise and peril of using AI collaboratively in decision-making.
Despite these challenges, Donovan’s experiment represents a welcome evolution in the interaction between humans and AI. By fostering a more sophisticated and collaborative approach, the exercise redefines how users engage with these technologies. It remains to be seen whether decision-makers, such as Shell’s leadership, will treat AI outputs as one data point among many rather than relying on them for final judgments.
Ultimately, this meta-experiment underlines the value of treating AI as a thoughtful, albeit fallible, ally in complex situations. As the landscape of artificial intelligence continues to evolve, the lessons drawn from Donovan’s approach could enhance the way organizations navigate intricate disputes and decision-making processes.