The U.S. Air Force (USAF) faces significant challenges in adopting artificial intelligence (AI) technologies, as advancements in the field produce increasingly powerful but less interpretable systems. Joe Chapa, the service’s director of innovation, highlighted these issues during his remarks at SAP’s Public Sector Summit in Washington, D.C., on December 16.
Chapa pointed out that the rapid evolution of modern AI systems, particularly those utilizing deep learning techniques, has resulted in a level of complexity that can hinder understanding. These systems, which depend on vast datasets and extensive computing power, process information through numerous layers of artificial neurons. While this complexity enhances their performance, it also creates “black box” systems, where the decision-making processes are opaque to human users.
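The opacity Chapa describes can be seen even at toy scale. The sketch below (purely illustrative, not related to any USAF system; all sizes and names are assumptions for demonstration) builds a two-layer network with random weights: every output is a joint function of all 144 weights, and there is no single human-readable rule that explains why a given input yields a given result.

```python
import numpy as np

# Illustrative sketch only: a tiny two-layer neural network with
# random weights, to show why layered models resist explanation.
rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers.
    return np.maximum(0.0, x)

# 8 inputs -> 16 hidden units -> 1 output (sizes chosen arbitrarily).
W1 = rng.normal(size=(8, 16))   # 128 weights
W2 = rng.normal(size=(16, 1))   # 16 weights

def forward(x):
    """Forward pass: each layer mixes every input with every weight."""
    hidden = relu(x @ W1)        # all 128 first-layer weights contribute
    return (hidden @ W2).item()  # all 16 second-layer weights contribute

x = rng.normal(size=(1, 8))
y = forward(x)
# The output y depends jointly on all 144 weights; tracing "why" this
# particular value emerged is already nontrivial. Production models
# have billions of weights and many more layers.
```

Even here, attributing the output to specific inputs requires dedicated explainability techniques; at modern scale, those techniques only approximate the model's reasoning.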
“When those models started to become deeper … the math behind the way that those models arrived at an output became harder to do,” Chapa explained. He noted that as the complexity of these systems has increased, the explainability tools developed for earlier, simpler models have struggled to keep pace. The rapid rise of generative AI has compounded the issue, making full explainability a near-impossible requirement for many of today’s advanced AI tools.
“We wouldn’t be able to use any of the generative AI tools of the last five years or so,” he said, emphasizing the difficult balancing act organizations face in striving for trustworthy AI. Chapa stressed that instead of pursuing complete transparency, institutions should focus on governance, guardrails, and risk management to foster trust in AI technologies.
Chapa underscored the importance of aligning incentives within organizations, noting a persistent tension between innovation leaders, who are rewarded for rapid progress, and cybersecurity leaders, who prioritize risk prevention and may slow the adoption of new technologies. This dynamic has resulted in delays in deploying generative AI tools within the USAF.
“The solution to that problem is for leaders to accept more risk,” Chapa stated, advocating for a proactive approach to risk that involves acknowledgment and mitigation rather than avoidance. Senior leaders, he argued, must ultimately take ownership of these decisions to ensure responsible AI use.
Chapa further contended that policies and oversight alone are insufficient for fostering responsible AI deployment. Organizations must also cultivate observable changes in behavior that reflect a commitment to both innovation and necessary safeguards. He highlighted that the successful adoption of AI is primarily a challenge of people and culture, rather than a purely technical issue.
He observed that the covert use of generative AI within organizations can erode trust, creating a culture of fear around its application. “We have a little bit of a fear around being found out that you use generative AI,” Chapa noted, contrasting this with the open and accountable culture that he believes is essential for success.
According to Chapa, an organization is truly “winning at AI” when it fosters an environment where employees can openly discuss the use of AI tools and are held accountable for the results. “It’s not trust in the systems,” he said. “It’s trust between the people.”
As the USAF navigates these challenges, the broader implications for the military and other sectors seeking to integrate AI underscore the complexity of balancing innovation with responsible use. With rapid advancements continuing to shape the AI landscape, establishing a culture of transparency and trust will be critical for harnessing these technologies effectively.