As discussions about the ethics of artificial intelligence (AI) intensify, the focus often shifts to regulatory frameworks. However, many critical choices affecting the fairness, transparency, security, and sustainability of AI technologies are made much earlier in the design and development phases. Decisions regarding data selection for model training, the validity of performance metrics, and the safeguards integrated into systems are fundamentally scientific and technical in nature.
“From this perspective, ethics is no longer something added at the end of the process. It becomes a practical discipline embedded in technological development,” said Clara Higuera during her recent presentation. She emphasized that the conversation should extend beyond what is permissible to include what should be built, how it should be built, and under what conditions. Her argument echoes Langdon Winner’s classic essay “Do Artifacts Have Politics?”, which holds that the choices made during development are baked into a technology and go on to shape how people experience it. In essence, technology is not value-neutral: the decisions embedded within AI systems carry significant ethical implications.
AI is currently navigating a phase reminiscent of the historical trajectories of other technologies, such as aviation and electricity. Both industries experienced rapid initial growth, followed by the gradual establishment of safety standards and shared frameworks that enabled broader adoption. Electricity illustrates this maturation from experimental technology to reliable infrastructure: early electrical systems caused fires and fatal accidents, but over time technical advances, wiring codes, and safety standards produced a robust power grid and wove electricity into daily life.
From the ethical foundations of AI to decisions in technical development
If ethics, transparency, and security are recognized as fundamental to AI, the essential question becomes how to operationalize these principles. The first step is to adopt a mindset similar to reliability engineering: building systems with their entire lifecycle in mind, from initial design through implementation and continuous monitoring. “Bias can appear at many points in the process: in historical data, in how the population is represented, in the way variables are measured, or in monitoring once the system is already in production. Assessing fairness therefore requires a continuous, end-to-end perspective,” Higuera noted.
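One of the checkpoints Higuera lists, population representation, can be made concrete with a simple check on whether the subgroup mix in live traffic still matches the training data. The sketch below is a minimal illustration in plain pandas, not any organization's actual procedure; the "gender" column and the 10-point alert threshold are assumptions invented for the example.

```python
import pandas as pd

def subgroup_shares(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of each subgroup in a dataset."""
    return df[group_col].value_counts(normalize=True)

def representation_gap(train: pd.DataFrame, live: pd.DataFrame, group_col: str) -> pd.Series:
    """Absolute change in subgroup shares between training data and live traffic."""
    return (subgroup_shares(live, group_col)
            .sub(subgroup_shares(train, group_col), fill_value=0.0)
            .abs()
            .sort_values(ascending=False))

# Toy data: the hypothetical "gender" column stands in for whatever
# attribute a fairness review tracks; the values are invented.
train = pd.DataFrame({"gender": ["F", "M", "M", "M"]})
live = pd.DataFrame({"gender": ["F", "F", "M", "M"]})

gaps = representation_gap(train, live, "gender")
print(gaps)
print("alert:", gaps[gaps > 0.10].index.tolist())  # arbitrary 10-point cutoff
```

The same comparison can be rerun on a schedule so that representation drift surfaces during monitoring rather than after harm is done.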
In this framework, metrics evaluation and explainability become vital tools for ensuring ethical AI practices. At BBVA, for instance, quality reviews and evaluation methods are used to confirm that AI solutions uphold standards for security, privacy, and transparency. Teams rely on practical guides on explainability and fairness, along with tools such as mercury-explainability and mercury-monitoring, which help clarify the decisions AI models make and keep them accurate and reliable on real-world data. This commitment to responsible AI development is further supported by applied research, including a stress test for bias in generative AI that assesses how large language models perform in real-time user interactions.
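The snippet below illustrates the kind of check such monitoring tools perform: a two-sample Kolmogorov–Smirnov test comparing a feature's distribution in training data against recent production traffic. It is a sketch in plain NumPy and SciPy, not the mercury-monitoring API; the simulated income feature, the drift, and the alert threshold are all invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Reference distribution seen at training time vs. slightly drifted live data.
train_income = rng.lognormal(mean=10.0, sigma=0.5, size=5_000)
prod_income = rng.lognormal(mean=10.2, sigma=0.5, size=1_000)

stat, p_value = ks_2samp(train_income, prod_income)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
if p_value < 0.01:  # arbitrary alert threshold for the example
    print("Distribution shift detected: review the model before trusting its outputs.")
```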
A key takeaway from BBVA’s work is the complexity of defining fairness in machine learning. There is no one-size-fits-all definition; it varies with the context, the specific use case, the groups involved, and the potential harm. Consequently, for high-impact models, teams are required to select the fairness criteria or metrics most appropriate to the use case and to justify their choice.
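To see why that choice matters, the sketch below computes two common definitions on made-up data: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates). The groups, labels, and predictions are invented; the point is that a model can look acceptable under one criterion and poor under another.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates between groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy example: two groups, eight cases.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))        # 0.25
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))  # ~0.33
```

The two gaps differ on the same data, which is exactly why high-impact teams must name the criterion they optimize and defend it for the use case at hand.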
Ultimately, AI systems inherently reflect our values, whether intentionally or not. Acknowledging this reality serves as a critical first step toward designing AI technologies that are safer, more transparent, and more accountable. The ongoing evolution of AI presents a unique opportunity to embed ethical considerations deeply into its foundational layers, shaping future innovations that align closely with societal needs and values.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health