
UMich AI Lab Hosts Deepfake Panel: Impacts on Trust and Ethics Explored

University of Michigan’s AI Lab reveals critical insights on deepfake technology’s societal risks, highlighting urgent ethical challenges at a public symposium attended by over 40 participants.

The University of Michigan Artificial Intelligence Laboratory, in partnership with U-M Flint’s College of Innovation and Technology and the School of Information, hosted an event titled “Friday Night AI: Deepfakes, AI, and the Future of Trust” on Jan. 23. This initiative is part of the eighth annual AI Symposium series and took place at the Ann Arbor District Library’s downtown branch, attracting over 40 students and community members. The aim of the event was to educate the public on the evolving landscape of artificial intelligence, particularly the potential dangers posed by deepfakes—artificial images or videos generated by machine learning algorithms.

Yara El-Tawil, a Rackham student and one of the speakers, kicked off the event with an interactive session that involved audience participation. Attendees were presented with pairs of nearly identical photos and videos depicting real individuals and were tasked with discerning which were authentic and which were deepfakes. El-Tawil emphasized that the societal implications of deepfakes are far-reaching and complex. “A lot of researchers in the field are trying to come up with the most ethical ways to go about the development and use of AI,” she stated. “We still don’t know the effects this technology will have on education at Michigan, let alone everything else.”

Cliff Lampe, a professor at the School of Information and associate dean for academic affairs, highlighted the importance of AI education for students navigating an increasingly automated job market. “Students might see AI taking jobs that existed five years ago, or last year even, but new jobs will arise,” he noted. “Most students will want to learn how to use AI tools to supplement their careers in one way or another.”

Panelist Khalid Malik, a professor of Computer Science and director of Cybersecurity at the College of Innovation and Technology at U-M Flint, discussed the rapid advancements in deepfake technology. “Deepfakes are definitely becoming more and more realistic,” Malik remarked, adding that even with partial obstructions, such as a covered face, the technology can still produce convincing imitations. He pointed to ongoing research aimed at keeping the technology human-controlled and at making AI-generated fabrications easier to detect. “Many of us are working to solve this problem, but whoever is saying that we have solved this problem is not speaking the truth,” he asserted.

Malik elaborated on two distinct approaches being explored: one involves using AI to detect deepfakes after the fact, while the other focuses on proactive techniques, such as watermarking, to mark and identify manipulated media. Lampe further discussed the criminal uses of deepfake technology, particularly the troubling rise of AI-fabricated nude images, notably involving minors. He expressed concern over the societal ramifications when deepfakes gain traction on social media, complicating the search for truth. “As a person who studies information quality, when people come forward about an image that has been debated and say that the fact that the image is fake doesn’t matter, to me that’s such an interesting statement,” Lampe said.

Highlighting the pervasive nature of deepfakes, Lampe noted that fraud and scams resulting from this technology present serious challenges. “Deepfakes are the most sophisticated of the technologies that we see right now,” he stated, emphasizing the ongoing “arms race” between those creating deepfakes and those developing detection methods. “How we are exploring these is super important because it is like an arms race between the detectors and the new models that get past detection.”

The discussions at the symposium underscore the pressing need for heightened awareness and understanding of deepfake technology as it continues to evolve. As researchers and educators grapple with its implications, the conversation around ethics, detection, and societal impact remains critical in shaping the future landscape of artificial intelligence.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.