GitLab’s latest research highlights rapid adoption of artificial intelligence among Australian software teams, accompanied by a notable decline in overall efficiency. The findings, drawn from a global survey of 3,266 professionals—including over 250 based in Australia—underscore the challenges developers face with fragmented tools, compliance pressures, and skills gaps.
The report introduces the concept of an “AI paradox” within software development, indicating that while AI accelerates coding tasks, broader delivery processes are experiencing significant slowdowns. Australian DevSecOps professionals, according to the data, lose an average of seven hours weekly due to inefficient practices, with 35% attributing inefficiency to the overwhelming number of tools in use.
A considerable portion of respondents, 31%, cited insufficient knowledge sharing as a contributing factor, while another 31% noted that the use of disparate tools across teams exacerbates the problem. The survey revealed that 67% of participants use more than five development tools, and 63% employ over five AI tools—14% higher than the global average.
As attitudes towards future software architectures evolve, 82% of respondents believe that agentic AI can only thrive within a platform engineering framework. This sentiment reflects a growing awareness of the need for cohesive strategies in an increasingly fragmented environment.
The integration of AI is reshaping roles within DevSecOps teams, leading to a rising demand for skilled professionals. A striking 82% of those surveyed agree that as AI simplifies coding, the industry will require more engineers rather than fewer. Additionally, 89% of respondents assert that adopting AI is essential for ensuring the longevity of their software careers.
The desire for greater investment in training is evident, with 87% advocating for increased upskilling opportunities and 84% anticipating significant changes to their roles within the next five years. This reflects a broader trend in which AI has become a standard component of software work in Australia, prompting professionals to adapt their skills accordingly.
Despite the widespread use of AI, teams are exercising caution regarding its deployment. An overwhelming 99% of respondents reported using or planning to use AI across development, security, and operations tasks. However, trust in AI remains limited; on average, respondents believe AI can handle only 39% of daily tasks without human oversight. This skepticism is further underscored by the 78% who have encountered issues stemming from what they characterize as “vibe coding,” where AI-generated code looks plausible but contains significant errors or omissions.
The survey also highlights increasing compliance pressures associated with AI in software pipelines. A significant 79% of respondents feel that AI complicates compliance efforts, and 85% have noted a rise in compliance issues detected post-deployment. This evolving landscape is prompting a shift in skill priorities, with 37% viewing AI-driven security and compliance as the most critical skill for career advancement.
Looking ahead, 85% of those surveyed predict that by 2027, compliance checks will be integrated directly into code, signaling a transformation in how compliance will function in the future. The research paints a picture of an industry in transition, where toolchain complexity and governance requirements create friction between the speed of coding and the pace of overall software delivery.
GitLab emphasizes that fragmented toolchains are undermining the advantages gained from AI-enhanced development. Manav Khurana, the company’s chief product and marketing officer, articulated this challenge, stating, “This survey illustrates what we call the ‘AI Paradox,’ where coding is faster than ever, yet the lack of quality, security, and speed across the software lifecycle is causing friction on the road to innovation.” He noted that disjointed use of AI agents can exacerbate integration challenges across teams.
As Australian organizations navigate these complexities, they are expected to explore new platform models and governance frameworks in the coming years, striving for better alignment between AI adoption, security, and compliance. The findings suggest that while the integration of AI is progressing rapidly, the path to streamlined, efficient processes remains fraught with challenges.