
Generative AI Poses Heightened Cybersecurity Risks, Warns Heriot-Watt Expert

Heriot-Watt’s Michael Lones warns that incorporating generative AI into machine-learning workflows could introduce critical vulnerabilities, complicating system transparency and regulatory compliance.

Machine-learning systems are increasingly integrated into everyday life, influencing spam filters, product recommendations, and social media algorithms. A new trend is the incorporation of **generative AI** into these workflows to enhance tasks such as coding, data labeling, and decision-making. However, **Michael Lones**, a computer scientist at Heriot-Watt University, raises concerns about this growing reliance on generative AI, arguing that it may complicate machine-learning systems, making them harder to understand and audit while introducing new vulnerabilities.

In his paper published in the Cell Press journal **Patterns**, Lones highlights that the integration of **large language models** (LLMs) into machine-learning practices may result in unforeseen complications. He stresses that while generative AI can improve efficiency, the associated risks are often underestimated. “Machine-learning developers need to be aware of the risks of using GenAI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that,” Lones states. His work serves as both a warning and a practical guide for those developing machine-learning systems.

Lones identifies four critical roles for generative AI in machine learning: taking part in the decision-making process, designing pipelines and writing code, generating synthetic training data or preprocessing existing data, and analyzing results or producing reports. The combination of these roles can compound the risks, making systems increasingly opaque. “If you have GenAI working in a number of different ways within your machine-learning workflows, they can interact in unpredictable and hard to understand ways,” he notes. He advises against adding excessive complexity, especially in high-stakes sectors like healthcare and finance, where errors can have serious repercussions.

To illustrate his concerns, Lones presents two examples: a hospital triage system using a language model to assess case severity and a loan approval system leveraging a commercial generative AI service. Both systems are attractive due to their potential for speed and cost reduction but pose significant risks if they fail.

One major issue is that large language models can make errors that are not easily detectable. They may hallucinate facts, produce flawed code, or provide inconsistent answers to the same queries. Lones argues that such mistakes are particularly perilous in machine learning, as developers might unwittingly rely on AI-generated recommendations that could skew foundational elements like training and evaluation processes. He warns that newer or larger models do not automatically outperform simpler ones, urging developers to carefully consider the necessity of generative AI in their projects.

As Lones points out, the integration of LLMs complicates the explainability of systems, a critical requirement in regulated fields like medicine and finance. “There are laws about being able to show that the machine-learning system is reliable, and that you can explain how it reaches decisions,” he explains. The opacity of generative AI can hinder compliance with such regulations, leaving developers struggling to interpret system outputs. The false confidence that some AI explanations provide can lead stakeholders to trust systems that are not genuinely reliable.

Security and governance risks are another focal point of Lones’ research. Many generative models require data to be processed on remote servers, increasing the risk of data leaks, especially when sensitive information is involved. He also cautions that generative AI can exacerbate existing problems in machine learning, such as bias. Synthetic data derived from flawed original datasets may carry over hidden biases, affecting feature engineering and model decisions.

“It’s important for people in the general public to be aware of the limitations of GenAI systems,” Lones asserts. Companies may implement these technologies to cut costs, potentially enhancing user experiences, but the hidden consequences of bias and unfairness must also be considered. He advises developers to manually review AI-generated outputs, document their use of generative AI, and weigh the efficiency gains against potential risks.

Finally, Lones emphasizes that the dangers of generative AI extend beyond the development phase and can continue to affect systems post-deployment. Changes in remote models or user interactions that exploit system weaknesses could result in unforeseen issues. In conclusion, Lones calls for a balanced approach toward the use of generative AI in machine learning, especially in sectors that directly impact health, finance, and access to services. He warns that while automation can enhance machine learning capabilities, it can also introduce fragility, complicating accountability and system comprehension.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.