A leading artificial intelligence researcher has warned that Australia’s lack of regulatory measures on the technology risks another generation of young people being “sacrificed for the profits of big tech.” Professor Toby Walsh, the chief scientist at the University of New South Wales AI Institute, raised the alarm during an address to the National Press Club in Canberra today.
“Social media should have been a wake-up call about the harms of unregulated AI,” he stated. “We’re about to supercharge the sort of harms we saw with social media with an even more powerful and persuasive technology.”
Walsh’s comments came a day after the federal government revealed it had scrapped plans for a permanent AI advisory body promised by former industry minister Ed Husic in 2024. Walsh, who has consulted with governments and the United Nations on AI challenges, had been appointed to an interim expert group. He criticized both the Australian government and major technology firms for their insufficient approaches to AI regulation.
“There’s fresh harms that AI is bringing into our lives that will need fresh laws,” Walsh said, emphasizing the significant financial incentives for tech companies to prioritize rapid development over safety. “They are breaking things like the mental health of our youth,” he added.
Walsh recounted the tragic story of 16-year-old Adam Raine from the United States, who took his own life in April 2025 after months of distressing conversations with ChatGPT about self-harm. The AI reportedly discouraged Raine from seeking help from his family and even assisted him in writing a suicide note shortly before his death. His parents have filed a lawsuit in California, marking the first legal action accusing OpenAI of wrongful death.
Walsh highlighted alarming data from OpenAI, revealing that among 800 million weekly ChatGPT users, 1.2 million indicated they had plans to harm themselves. “Before Adam’s suicide, OpenAI knew that lots of people contemplating suicide were talking to ChatGPT,” he said, questioning the absence of stronger regulatory guardrails in response.
He also pointed out the increasing use of AI for generating fraudulent advertisements on social media and the rise of harmful deepfake images. “My anger with the tech companies bringing AI into our lives irresponsibly has turned into outrage,” he remarked, citing issues like AI companions undermining human connections and AI health advisors offering dangerous medical advice.
A report released in September by researchers from OpenAI, Duke University, and Harvard University found that 10 percent of the world’s adult population was using ChatGPT. In Australia, a recent investigation uncovered that young users were being sexually harassed and even encouraged to take their own lives by AI chatbots. About 45 percent of Australians reported having used a generative AI tool in 2024, according to the Australian Digital Inclusion Index.
Walsh urged the Australian government to invest “in the upsides” of AI, criticizing the current political narrative dominated by tech firms. He pointed out that in the last election, the big tech sector donated more to political parties than the mining sector, a reality he believes should raise concerns.
“What makes Australia so special that we’ll see the benefits of AI without making the sort of investments other nations are?” he challenged. He noted that countries like Canada and Singapore have significantly outpaced Australia in AI investment, with Canada investing six times more over the past five years and Singapore investing 15 times more despite its smaller population.
In January, South Korea introduced what it claimed was the world’s first comprehensive set of laws regulating AI, aimed at enhancing trust and safety in the sector, following narrower AI regulations enacted by countries such as Japan, China, Taiwan, and Sweden. By contrast, Australia’s long-awaited National AI Plan will mainly leverage existing laws to manage risks.
Walsh asserted that Australia’s future lies in technology and innovation, not merely in traditional resource exports. “Our future isn’t in shipping red dirt and coal to China. It will be in bits and bytes — increasingly AI-generated bits and bytes,” he concluded.
The federal government had dedicated 15 months and nearly $200,000 to establishing a permanent AI advisory body, which has now been abandoned. An interim panel that aimed to implement “AI guardrails” to ensure safety has also seen its role diminished, as the government opted for a roadmap relying on existing laws. In December, Industry Minister Tim Ayres and Assistant Technology Minister Andrew Charlton announced plans for an AI safety institute that would test for and monitor regulatory gaps as they arise, rather than relying solely on external expertise.
Walsh, who served on the temporary expert group, expressed his commitment to providing independent advice to the government, stating, “I can assure you that I and my colleagues will continue to offer advice fearlessly, whether they want it or not.” He indicated that future advice would be delivered publicly, rather than privately, suggesting that the government may find this approach more uncomfortable.