David Danks has joined the University as the William L. Polk Jr. and Carolyn K. Polk Jefferson Scholars Foundation Distinguished University Professor of philosophy, artificial intelligence, and data science. In his new role, Danks will teach courses on the ethics and philosophy of AI, including a spring course titled “Data Ethics” in the School of Data Science. His research centers on the ethical choices involved in designing and using AI systems, with the aim of embedding ethical considerations in the technology sector.
Prior to his appointment, Danks was a professor at the University of California, San Diego, where he led the Data Intelligence Values Ethics Research Lab. The lab studied the intersection of data, cognition, and values, drawing on methods from philosophy, psychology, machine learning, and public policy. Danks plans to continue this research initiative at the University while teaching one course each semester in both the School of Data Science and the philosophy department, starting in Fall 2026.
A significant strand of Danks’ research examines data and model bias, particularly whether algorithms should be designed to correct for human biases. He cites real-world examples such as the use of AI in legal systems for bail determinations and the ethical dilemmas raised by technologies like self-driving cars. Danks points to ongoing debates over the safety of self-driving vehicles: a car programmed to strictly obey traffic laws can inadvertently endanger the human drivers around it, who routinely exceed speed limits.
“Who wants to be the engineer who has to go tell the legal department, oh, yeah, we designed a self-driving car to break the law?” Danks said. The remark captures a core dilemma of the field: a design that complies with regulations may compromise safety in practice, leaving engineers caught between conflicting priorities.
Danks also critiques the technology sector for neglecting the ethical effects its products have on local communities. Many engineers, he argues, are trained primarily in technical skills, leaving a gap in ethical training. He hopes his teaching will bring these considerations to the forefront, emphasizing the broader social implications of AI and technology.
“If you talk to one of the engineers at a place like Waymo, they’ll understand that there are these ethical components to it,” Danks noted. “They just say, but that’s not my job. Nobody’s ever taught me how to [consider the ethical impact].” He believes that successful AI development requires not only legal oversight but also a dedicated focus on ethics within technology teams, as a disconnect often exists between ethics teams and software engineers.
In his Data Ethics course, Danks aims to teach students about the many decisions involved in creating and deploying AI and data science models, encouraging them to consider the origins of their data, its potential biases, and the implications of their technology for different communities. His arrival is part of a broader University push toward responsible AI governance, alongside projects including the LaCross Institute for Ethical AI in Business and an AI Accountability Framework developed by the School of Law.
Danks was attracted to the University because of its emphasis on humanistic and social dimensions alongside technical skills. He believes that to address the challenges posed by AI, it is essential to integrate humanistic considerations from the start. “The only way that we’re going to make real progress on these kinds of challenges of AI and the benefits of AI is by having the humanistic and social just embedded from the very beginning,” he said.
In addition to his academic roles, Danks serves on a national advisory committee focused on AI’s role in mental health. His work aims to ensure that technology helps rather than harms users, asking, “How do we bend the technology a bit more towards the good?” He emphasizes guiding companies toward better algorithms and preparing future generations to avoid past mistakes in AI development.
Looking to the future, Danks envisions AI reshaping education, acknowledging that instructors are still figuring out how to effectively integrate AI into their teaching. However, he foresees opportunities for more intentional learning experiences alongside AI, emphasizing that both educators and students will need to critically assess the role of technology in their learning processes. “We have to think about, okay, why am I using this? How is this changing how I think? And those skills take time to develop,” he explained.
As Danks embarks on this new chapter, he remains focused on the intersection of AI, policy, and ethics, particularly from a location closer to Washington, D.C. His guiding questions—what role should government play in shaping AI’s future and how can society build responsible systems that align with human values—underscore a commitment to a technology ecosystem that benefits all. “If I could change one thing about the AI industry, it would be to change the AI industry so that it produces products that fit us, rather than making us fit the products,” Danks concluded.