Strategy’s Future Amid AI Disruption
At the recent South by Southwest (SXSW) Sydney, a panel featuring behavioural scientist Ganna Pogrebna and strategist Graham Kenny, moderated by Dan Krigstein of The Growth Distillery, explored the increasingly pressing question: Why does strategy continue to fail? Their conclusion centered not on the intelligence of executives or the availability of data, but rather on a fundamental misunderstanding of how decisions are made within organizations.
Kenny emphasized that the disconnect arises from a misalignment between strategy formulation and execution. “Strategy is conceived at the organisational level, but execution happens at the individual level, and many organizations miss that distinction,” he stated. He noted that when strategy is merely seen as another plan, it loses its significance. “Positioning is an organisational issue. Action is an individual one,” he added.
Pogrebna highlighted that many organizations falter long before reaching the execution phase, primarily because they fail to anticipate potential failures. “People generally don’t plan for failure at all,” she remarked, advocating for a proactive approach that accepts the possibility of setbacks and prepares for them. She referred to this approach as the “Samurai method,” suggesting that acknowledging the likelihood of failure can significantly enhance readiness.
The discussion also tackled the entrenched belief that executives possess all the necessary answers. Kenny pointed out that the advent of artificial intelligence (AI) is challenging this notion. “AI is pushing against that,” he observed, noting that it is transforming the role of leadership and flattening organizational structures. “Leadership is moving from dictation to co-creation,” he explained. With AI tools increasingly available to employees, those in leadership positions who resist this shift risk disengagement among their teams.
Pogrebna challenged the assumption that executives are purely rational decision-makers, asserting that most strategic choices are made intuitively. “Data is often used to justify decisions already taken,” she said. Rather than replacing intuition, she posited that AI serves to mitigate the risks associated with it, making uncertainty more manageable.
The overarching theme of the discussion was the value inherent in uncertainty. Pogrebna declared, “Organizations don’t have data problems. They have decision problems. Uncertainty is where value is created. If you can make better decisions than competitors, you win.” Kenny added that many organizations mistakenly prioritize comfortable metrics over those that truly reflect stakeholder outcomes. “Efficiency alone doesn’t matter to customers. They care about service, quality, and value,” he asserted. Pogrebna concurred, noting that excessive measurement leads to confusion, as many organizations track hundreds of variables while only a few drive revenue.
Despite the excitement surrounding AI, Kenny and Pogrebna emphasized that it will not absolve leaders of accountability. “AI is useful but overhyped,” Kenny stated. “It won’t replace boards. Responsibility must remain human.” Pogrebna cautioned that organizations often rush to implement AI without first comprehending the issues they aim to resolve. “Start with the problem, then decide whether AI is the right tool,” she advised, underscoring the importance of co-designing solutions with stakeholders.
Kenny further stressed the significance of language in discussions about technology: “Stop talking about ‘AI strategy.’ You have a business strategy first. AI is a plan that supports it, not the other way around.” This perspective highlights the importance of maintaining clarity of purpose in strategic planning.
As executives deliberate on risk and governance, employees are increasingly adopting AI technologies to save time. “Executives worry about risk, privacy, and control,” Kenny observed, noting that the bottom-up nature of this adoption poses new governance challenges, particularly concerning third-party AI tools. Pogrebna pointed out that many boards lack insight into how algorithms are trained or where their data originates, making this a pressing governance issue.
In their concluding remarks, both experts expressed concerns over the future of decision-making autonomy and the potential for a bubble in AI funding models. Pogrebna noted that “decision independence is disappearing,” while Kenny raised alarms about the sustainability of current AI financing trends. Their insights served as a stark reminder that the future of strategy lies not in the illusion of certainty or control, but in fostering judgment, humility, and adaptability.
As organizations navigate the complexities of strategy in an AI-enhanced environment, understanding these dynamics will be crucial for effective leadership and sustained success.