
University Welcomes AI Ethics Leader David Danks to Address Ethical Risks in AI Development

University appoints David Danks as a distinguished professor to lead AI ethics education, addressing critical challenges in algorithm bias and community impact.

David Danks has joined the University as the William L. Polk Jr. and Carolyn K. Polk Jefferson Scholars Foundation distinguished University professor of philosophy, artificial intelligence, and data science. In his new role, Danks will teach courses focused on the ethics and philosophy of AI, including a spring course titled “Data Ethics” in the School of Data Science. His research emphasizes the ethical choices involved in the design and use of AI systems, aiming to integrate ethical considerations into the technology sector.

Prior to his appointment, Danks was a professor at the University of California, San Diego, where he led the Data Intelligence Values Ethics Research Lab. The lab studied the intersection of data, cognition, and values, employing methods from philosophy, psychology, machine learning, and public policy. Danks plans to continue this research initiative at the University while also teaching one course each semester in both the School of Data Science and the philosophy department, starting in Fall 2026.

A significant aspect of Danks’ research involves examining data and model bias, particularly whether algorithms should be designed to correct for human biases. He cites real-world examples such as the use of AI in legal systems for bail determinations and the ethical dilemmas involved in developing technologies like self-driving cars. Danks points to ongoing debates about the safety of self-driving vehicles: a car programmed to strictly adhere to traffic laws could inadvertently create hazards on roads where human drivers routinely exceed speed limits.

“Who wants to be the engineer who has to go tell the legal department, oh, yeah, we designed a self-driving car to break the law?” Danks stated. He highlighted an ethical dilemma in designing technology that may compromise safety while adhering to regulations, illustrating the conflicting priorities engineers often face.

Danks also critiques the technology sector for neglecting the ethical implications of products on local communities. He argues that many engineers are trained primarily in technical skills, leaving a gap in ethical training. He hopes to bring these ethical considerations to the forefront in his teaching, emphasizing the importance of understanding the broader social implications of AI and technology.

“If you talk to one of the engineers at a place like Waymo, they’ll understand that there are these ethical components to it,” Danks noted. “They just say, but that’s not my job. Nobody’s ever taught me how to [consider the ethical impact].” He believes that successful AI development requires not only legal oversight but also a dedicated focus on ethics within technology teams, as a disconnect often exists between ethics teams and software engineers.

In his Data Ethics course, Danks aims to educate students about the numerous decisions involved in creating and deploying AI and data science models. He encourages students to consider the origins of their data, potential biases, and the implications of their technology on various communities. Danks’ arrival at the University represents a broader initiative to advance responsible AI governance, alongside several other University projects, including the LaCross Institute for Ethical AI in Business and an AI Accountability Framework developed by the School of Law.

Danks was attracted to the University because of its emphasis on humanistic and social dimensions alongside technical skills. He believes that to address the challenges posed by AI, it is essential to integrate humanistic considerations from the start. “The only way that we’re going to make real progress on these kinds of challenges of AI and the benefits of AI is by having the humanistic and social just embedded from the very beginning,” he said.

In addition to his academic roles, Danks serves on the National AI Advisory Committee, where his work focuses on AI’s role in mental health. He aims to ensure that technology aids rather than harms users, asking, “How do we bend the technology a bit more towards the good?” He emphasizes the importance of guiding companies in developing better algorithms and preparing future generations to avoid past mistakes in AI development.

Looking to the future, Danks envisions AI reshaping education, acknowledging that instructors are still figuring out how to effectively integrate AI into their teaching. However, he foresees opportunities for more intentional learning experiences alongside AI, emphasizing that both educators and students will need to critically assess the role of technology in their learning processes. “We have to think about, okay, why am I using this? How is this changing how I think? And those skills take time to develop,” he explained.

As Danks embarks on this new chapter, he remains focused on the intersection of AI, policy, and ethics, particularly from a location closer to Washington, D.C. His guiding questions—what role should government play in shaping AI’s future and how can society build responsible systems that align with human values—underscore a commitment to a technology ecosystem that benefits all. “If I could change one thing about the AI industry, it would be to change the AI industry so that it produces products that fit us, rather than making us fit the products,” Danks concluded.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.