
AI Regulation

ISACA Reveals Key AI Governance Lessons from 2025 to Enhance Safety and Trust in 2026

ISACA’s Mary Carmichael urges organizations to implement robust AI governance in 2026, citing predictable incidents in 2025 that compromised privacy, security, and trust.

In a detailed examination of lessons learned from artificial intelligence incidents in 2025, ISACA’s Mary Carmichael emphasizes the need for organizations to strengthen their AI governance as they approach 2026. Drawing on data from MIT’s AI Incident Database, her analysis shows that many of the challenges encountered last year were predictable and preventable, with impacts on privacy, security, reliability, and human wellbeing.

Carmichael’s blog post, titled “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” outlines key patterns observed in 2025 and argues for strategic changes to improve AI usage in the coming year. She notes that organizations must begin treating AI systems as core infrastructure, enforcing protocols such as multi-factor authentication, unique administrative accounts, and regular security assessments, particularly when personal data is involved.

To address discrimination and harmful bias, Carmichael acknowledges the value of facial recognition technology in investigations while stressing that it should never serve as the sole basis for a decision. She advocates requiring corroborating evidence and transparency about error rates across demographic groups. She further warns that the rise of deepfakes requires organizations to closely monitor potential misuse of their brands and public figures, and urges them to develop comprehensive response strategies, including training employees and the public in verification practices.

Another pressing issue raised is the use of AI models in cyber-espionage. Carmichael advises organizations to assume that attackers may utilize AI as a sophisticated assistant, which requires stringent governance measures. This includes treating certain AI models as high-risk identities, implementing least-privilege access, and ensuring rigorous logging and monitoring. Any AI capable of executing code should be managed similarly to a powerful engineer account, rather than a benign tool.
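As one way to picture those controls, here is a minimal sketch in Python (all names and tools are illustrative assumptions, not drawn from ISACA’s guidance): an AI agent’s tool calls pass through a gate that enforces a least-privilege allowlist and logs every invocation, so a code-executing capability is denied by default rather than granted like a benign tool.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-agent-audit")

# Least-privilege allowlist: this agent identity may only call these tools.
ALLOWED_TOOLS = {"read_file", "search_docs"}

def gated_call(agent_id: str, tool: str, *args):
    """Permit a tool call only if it is on the allowlist; log every attempt."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if tool not in ALLOWED_TOOLS:
        log.warning("%s DENIED %s -> %s args=%r", timestamp, agent_id, tool, args)
        raise PermissionError(f"{agent_id} is not permitted to call {tool}")
    log.info("%s ALLOWED %s -> %s args=%r", timestamp, agent_id, tool, args)
    return TOOLS[tool](*args)

# Illustrative tool registry; "execute_code" exists but is deliberately
# absent from the allowlist, mirroring the "powerful engineer account" caution.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "search_docs": lambda query: [f"result for {query}"],
    "execute_code": lambda src: exec(src),
}
```

In this sketch, a call such as `gated_call("agent-7", "execute_code", "print(1)")` raises `PermissionError` and leaves an audit record, while allowlisted reads succeed and are logged.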

In the realm of user interaction, Carmichael warns that chatbots and AI companion applications have been involved in damaging conversations. She urges developers to embed safety features from the outset, including clinical input, appropriate escalation paths, and strong limits that allow for human intervention. If a product cannot provide these safeguards, it should not be marketed as an emotional support tool for vulnerable populations, such as young people.

Environmental considerations also feature prominently in Carmichael’s recommendations. Noting that some AI providers have been linked to increased air pollution and noise in nearby communities, she calls for enhanced due diligence during procurement: organizations should gather data on energy consumption, emissions, and water use to ensure their AI initiatives align with broader climate and sustainability objectives.

One of the most critical issues raised in the post is the phenomenon of AI hallucinations, where systems confidently make incorrect assertions. Carmichael stresses the need for robust governance frameworks around high-impact AI systems, incorporating logging, version control, and validation checks to enable accountability and oversight.
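A minimal sketch of what such logging and validation checks might look like (the model name, validator, and record format are illustrative assumptions, not from the post): every answer is stamped with the model version and run through a validation check, so incorrect assertions are both traceable and catchable rather than passed through silently.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai-output-audit")

MODEL_VERSION = "demo-model-1.2.0"  # illustrative version pin for traceability

def validated_answer(question: str, raw_answer: str, validator) -> dict:
    """Attach the model version to an answer and run a validation check.

    Returns an audit record; answers that fail validation are flagged
    instead of being accepted silently.
    """
    ok = bool(validator(raw_answer))
    record = {
        "model_version": MODEL_VERSION,
        "question": question,
        "answer": raw_answer,
        "validated": ok,
    }
    audit.info("version=%s validated=%s question=%r", MODEL_VERSION, ok, question)
    return record

# Example validator: an answer about a date must actually contain a 4-digit year.
def has_year(text: str) -> bool:
    return re.search(r"\b\d{4}\b", text) is not None
```

A confident but unverifiable reply like "long ago" would be flagged (`validated: False`) with the model version recorded, giving reviewers the accountability trail Carmichael describes.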

As organizations look ahead to 2026, Carmichael underscores the strategic advantage of implementing a comprehensive AI governance program. She believes that maintaining visibility, establishing clear ownership, and facilitating rapid intervention will not only mitigate harm but also build trust with users. “With the right oversight, AI can create value without compromising safety, trust or integrity,” she concludes. For businesses yet to develop an AI governance strategy, the beginning of 2026 presents an opportune moment to take action.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.