
ISACA Reveals Key AI Governance Lessons from 2025 to Enhance Safety and Trust in 2026

ISACA’s Mary Carmichael urges organizations to implement robust AI governance in 2026, citing predictable incidents in 2025 that compromised privacy, security, and trust.

In a detailed examination of lessons learned from artificial intelligence incidents in 2025, ISACA’s Mary Carmichael emphasizes the need for organizations to strengthen their AI governance as they approach 2026. Drawing on data from MIT’s AI Incident Database, her analysis finds that many of last year’s failures were predictable and preventable, with consequences for privacy, security, reliability, and the people affected.

Carmichael’s blog post, titled “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” outlines key patterns observed in 2025 and argues for strategic changes to improve AI usage in the coming year. She notes that organizations must begin treating AI systems as core infrastructure, enforcing protocols such as multi-factor authentication, unique administrative accounts, and regular security assessments, particularly when personal data is involved.

On discrimination and harmful bias, Carmichael acknowledges the value of facial recognition technology in investigations while stressing that it should never serve as the sole basis for a decision. She advocates requiring corroborative evidence and transparency about error rates across different demographics. She also warns that the rise of deepfakes means organizations must closely monitor potential misuse of their brands and public figures, and urges them to develop comprehensive response strategies, including training for employees and the public on verification practices.

Another pressing issue raised is the use of AI models in cyber-espionage. Carmichael advises organizations to assume that attackers may use AI as a sophisticated assistant, which demands stringent governance: treating certain AI models as high-risk identities, implementing least-privilege access, and ensuring rigorous logging and monitoring. Any AI capable of executing code should be managed like a privileged engineering account, not a benign tool.
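The "powerful engineer account" principle above can be illustrated with a minimal sketch: a wrapper that enforces a default-deny allowlist on what an AI agent may execute and writes an audit record for every attempt. The function name, agent identifier, and allowlist here are illustrative assumptions, not anything from Carmichael's post.

```python
import logging
import shlex
import subprocess

# Hypothetical allowlist: the only commands this AI agent's identity may run.
# Everything else is denied by default (least privilege).
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_agent_audit")


def run_agent_command(agent_id: str, command_line: str) -> str:
    """Execute a command on behalf of an AI agent, enforcing the
    allowlist and logging both allowed and denied attempts."""
    args = shlex.split(command_line)
    if not args or args[0] not in ALLOWED_COMMANDS:
        audit_log.warning("DENIED agent=%s cmd=%r", agent_id, command_line)
        raise PermissionError(f"command not permitted for agent {agent_id}")
    audit_log.info("ALLOWED agent=%s cmd=%r", agent_id, command_line)
    result = subprocess.run(args, capture_output=True, text=True, timeout=10)
    return result.stdout
```

In a real deployment the allowlist, identity, and audit trail would live in the organization's IAM and SIEM tooling rather than in application code; the point is that the agent's privileges are explicit, minimal, and observable.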

In the realm of user interaction, Carmichael warns that chatbots and AI companion applications have been involved in damaging conversations. She urges developers to embed safety features from the outset, including clinical expertise in design, appropriate escalation paths, and hard limits that trigger human intervention. If a product cannot provide these safeguards, it should not be marketed as an emotional support tool for vulnerable populations, such as young people.

Environmental considerations also feature prominently in Carmichael’s recommendations. She notes that some AI providers are linked to increased air pollution and noise in nearby communities, and calls for enhanced due diligence during procurement. Organizations should gather data on energy consumption, emissions, and water use to ensure that their AI initiatives align with broader climate and sustainability objectives.

One of the most critical issues raised in the post is the phenomenon of AI hallucinations, where systems confidently make incorrect assertions. Carmichael stresses the need for robust governance frameworks around high-impact AI systems, incorporating logging, version control, and validation checks to enable accountability and oversight.
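The logging, version control, and validation checks described above can be sketched as a thin wrapper that attaches governance metadata to every model answer and flags unsupported claims. The field names, the version string, and the "require at least one source" rule are assumptions made for illustration, not controls specified in the post.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_audit")

MODEL_VERSION = "demo-model-0.1"  # hypothetical version identifier under version control


def validated_answer(question: str, answer: str, sources: list) -> dict:
    """Wrap a model answer with the metadata oversight needs:
    model version, timestamp, and a simple validation flag that
    marks answers lacking any supporting source."""
    record = {
        "question": question,
        "answer": answer,
        "model_version": MODEL_VERSION,
        "timestamp": time.time(),
        "validated": len(sources) > 0,  # naive check: require at least one source
        "sources": sources,
    }
    log.info("answer_record %s", json.dumps(record))
    return record
```

A record like this gives auditors what a bare chat transcript cannot: which model version produced the claim, when, and whether it passed the organization's validation check.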

As organizations look ahead to 2026, Carmichael underscores the strategic advantage of implementing a comprehensive AI governance program. She believes that maintaining visibility, establishing clear ownership, and facilitating rapid intervention will not only mitigate harm but also build trust with users. “With the right oversight, AI can create value without compromising safety, trust or integrity,” she concludes. For businesses yet to develop an AI governance strategy, the beginning of 2026 presents an opportune moment to take action.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.