
UMich AI Lab Hosts Deepfake Panel: Impacts on Trust and Ethics Explored

University of Michigan’s AI Lab offers critical insights into deepfake technology’s societal risks, highlighting urgent ethical challenges at a public symposium attended by over 40 participants.

The University of Michigan Artificial Intelligence Laboratory, in partnership with U-M Flint’s College of Innovation and Technology and the School of Information, hosted an event titled “Friday Night AI: Deepfakes, AI, and the Future of Trust” on Jan. 23. The event, part of the eighth annual AI Symposium series, took place at the Ann Arbor District Library’s downtown branch and attracted over 40 students and community members. Its aim was to educate the public on the evolving landscape of artificial intelligence, particularly the potential dangers posed by deepfakes: artificial images or videos generated by machine learning algorithms.

Yara El-Tawil, a Rackham student and one of the speakers, kicked off the event with an interactive session that involved audience participation. Attendees were presented with pairs of nearly identical photos and videos depicting real individuals and were tasked with discerning which were authentic and which were deepfakes. El-Tawil emphasized that the societal implications of deepfakes are far-reaching and complex. “A lot of researchers in the field are trying to come up with the most ethical ways to go about the development and use of AI,” she stated. “We still don’t know the effects this technology will have on education at Michigan, let alone everything else.”

Cliff Lampe, a professor at the School of Information and associate dean for academic affairs, highlighted the importance of AI education for students navigating an increasingly automated job market. “Students might see AI taking jobs that existed five years ago, or last year even, but new jobs will arise,” he noted. “Most students will want to learn how to use AI tools to supplement their careers in one way or another.”

Panelist Khalid Malik, a professor of Computer Science and director of Cybersecurity at the College of Innovation and Technology at U-M Flint, discussed the rapid advancements in deepfake technology. “Deepfakes are definitely becoming more and more realistic,” Malik remarked, adding that even with partial obstructions, such as a covered face, the technology can still produce convincing imitations. He pointed to ongoing research aimed at developing methods to keep the technology human-controlled and to make AI-generated fabrications easier to detect. “Many of us are working to solve this problem, but whoever is saying that we have solved this problem is not speaking the truth,” he asserted.

Malik elaborated on two distinct approaches being explored: one involves using AI to detect deepfakes, while the other focuses on watermarking and other problem-based techniques to identify manipulated media. Lampe further discussed the criminal uses of deepfake technology, particularly the troubling rise of AI-fabricated nude images, notably involving minors. He expressed concern over the societal ramifications when deepfakes gain traction on social media, complicating the search for truth. “As a person who studies information quality, when people come forward about an image that has been debated and say that the fact that the image is fake doesn’t matter, to me that’s such an interesting statement,” Lampe said.

Highlighting the pervasive nature of deepfakes, Lampe noted that fraud and scams resulting from this technology present serious challenges. “Deepfakes are the most sophisticated of the technologies that we see right now,” he stated, emphasizing the ongoing “arms race” between those creating deepfakes and those developing detection methods. “How we are exploring these is super important because it is like an arms race between the detectors and the new models that get past detection.”

The discussions at the symposium underscore the pressing need for heightened awareness and understanding of deepfake technology as it continues to evolve. As researchers and educators grapple with its implications, the conversation around ethics, detection, and societal impact remains critical in shaping the future landscape of artificial intelligence.

Written By: Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.