In a detailed examination of lessons learned from artificial intelligence incidents in 2025, ISACA’s Mary Carmichael emphasizes the need for organizations to strengthen their AI governance as they approach 2026. Drawing on data from MIT’s AI Incident Database, her analysis finds that many of the challenges encountered last year were predictable and preventable, with impacts spanning privacy, security, reliability, and human consequences.
Carmichael’s blog post, titled “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” outlines key patterns observed in 2025 and argues for strategic changes to improve AI usage in the coming year. She notes that organizations must begin treating AI systems as core infrastructure, enforcing protocols such as multi-factor authentication, unique administrative accounts, and regular security assessments, particularly when personal data is involved.
To address issues of discrimination and harmful bias, Carmichael acknowledges the potential of facial recognition technology in investigations while stressing that it should not serve as the sole basis for decisions. She advocates requiring corroborative evidence and transparency regarding error rates across different demographics. Furthermore, she warns that the rise of deepfakes requires organizations to closely monitor potential misuse of their brands and public figures, urging them to develop comprehensive response strategies that include training for employees and the public on verification practices.
Another pressing issue raised is the use of AI models in cyber-espionage. Carmichael advises organizations to assume that attackers may utilize AI as a sophisticated assistant, which requires stringent governance measures. This includes treating certain AI models as high-risk identities, implementing least-privilege access, and ensuring rigorous logging and monitoring. Any AI capable of executing code should be managed similarly to a powerful engineer account, rather than a benign tool.
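The controls described above can be illustrated with a minimal sketch. This is not code from Carmichael’s post; it is a hypothetical example of gating an AI agent’s tool calls behind an explicit allowlist (least privilege) while writing an audit entry for every attempt, with code execution excluded by default. The names (`ALLOWED_TOOLS`, `invoke_tool`, `agent_id`) are illustrative assumptions.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-agent-audit")

# Least privilege: only explicitly approved, low-risk tools are allowed.
# Anything like `execute_code` stays off this list until it is reviewed
# and monitored like a privileged engineer account.
ALLOWED_TOOLS = {"search_docs", "summarize_text"}

def invoke_tool(agent_id: str, tool: str, payload: str) -> str:
    """Run `tool` for an AI agent only if it is explicitly allowlisted,
    logging every attempt (granted or denied) for later review."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if tool not in ALLOWED_TOOLS:
        audit_log.warning("%s DENIED agent=%s tool=%s", timestamp, agent_id, tool)
        raise PermissionError(f"tool '{tool}' is not permitted for AI agents")
    audit_log.info("%s GRANTED agent=%s tool=%s", timestamp, agent_id, tool)
    return f"{tool} executed for {agent_id}"
```

In this pattern, expanding an agent’s capabilities means consciously editing the allowlist, which creates the review point and audit trail the post calls for.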
In the realm of user interaction, Carmichael warns that chatbots and AI companion applications have been involved in damaging conversations. She urges developers to embed safety features from the outset, including clinical input, appropriate escalation paths, and strong limits that allow for human intervention. If a product cannot provide these safeguards, it should not be marketed as an emotional support tool for vulnerable populations, such as young people.
Environmental considerations also feature prominently in Carmichael’s recommendations. She notes that some AI providers are linked to increased air pollution and noise in communities, thus calling for enhanced due diligence during the procurement process. Organizations should gather data on energy consumption, emissions, and water use to ensure that their AI initiatives align with broader climate and sustainability objectives.
One of the most critical issues raised in the post is the phenomenon of AI hallucinations, where systems confidently make incorrect assertions. Carmichael stresses the need for robust governance frameworks around high-impact AI systems, incorporating logging, version control, and validation checks to enable accountability and oversight.
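One way to picture the logging and validation controls mentioned above is a thin wrapper around a high-impact model call. The sketch below is hypothetical (the version tag, source names, and field layout are assumptions, not from the post): every answer is recorded together with the model version, and an answer whose cited source cannot be verified is routed to human review rather than accepted as fact.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-audit")

MODEL_VERSION = "summarizer-v1.3"                 # hypothetical version tag
KNOWN_SOURCES = {"policy-handbook", "incident-db"}  # illustrative source registry

def validated_answer(question: str, answer: str, cited_source: str) -> dict:
    """Log an audit record for a model answer and flag it for human
    review when its cited source cannot be verified."""
    valid = cited_source in KNOWN_SOURCES
    record = {
        "model_version": MODEL_VERSION,  # version control for accountability
        "question": question,
        "answer": answer,
        "cited_source": cited_source,
        "validated": valid,
        "status": "accepted" if valid else "needs_human_review",
    }
    log.info(json.dumps(record))  # durable audit trail for oversight
    return record
```

The point of the pattern is that a confident but unverifiable assertion never reaches a user unreviewed, and every output can be traced back to a specific model version.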
As organizations look ahead to 2026, Carmichael underscores the strategic advantage of implementing a comprehensive AI governance program. She believes that maintaining visibility, establishing clear ownership, and facilitating rapid intervention will not only mitigate harm but also build trust with users. “With the right oversight, AI can create value without compromising safety, trust or integrity,” she concludes. For businesses yet to develop an AI governance strategy, the beginning of 2026 presents an opportune moment to take action.
See also
Trump’s AI Executive Order Sparks State Backlash, Challenges Local Regulations
TCS Launches Intelligent Urban Exchange™ for Enhanced ESG and CSRD Compliance Management
Vietnam Advances AI Governance Framework with New Artificial Intelligence Law
AI Compliance Challenges Rise as Misuse Cases Surge: Key Tactics for Advertisers
Senator Marsha Blackburn Reveals TRUMP AMERICA AI Act to Establish Federal AI Standards