University of Delaware Launches AI Model to Identify High-Risk Social Media Videos Before Virality

University of Delaware’s Jiaheng Xie unveils an AI model that predicts high-risk social media videos, enhancing user safety before they go viral.

Certain short-form videos on major social media platforms can trigger suicidal thoughts among vulnerable viewers, according to new research led by the University of Delaware’s Jiaheng Xie. The study highlights the potential dangers posed by specific viral content, especially to young and impressionable audiences.

Xie’s team developed an AI model capable of predicting and flagging videos that may pose a risk to viewers. Their findings, published in the journal Information Systems Research, show how the tool can strengthen safety protocols by identifying high-risk videos before they spread widely. The model evaluates both the content of the videos and the sentiments expressed in viewer comments, a dual-layered approach to risk assessment.
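The study does not publish its implementation, but a minimal sketch in Python can illustrate the dual-layered idea the researchers describe: one signal computed from what the video itself contains and another from how viewers react in the comments, fused into a single pre-virality flag. Everything in the sketch is an assumption for illustration; the keyword lists, names, and fusion weights are crude stand-ins for the paper’s learned model.

```python
from dataclasses import dataclass


@dataclass
class Video:
    transcript: str      # what the creator chose to post
    comments: list[str]  # what viewers wrote after watching


# Hypothetical keyword lists standing in for the model's learned features.
CONTENT_RISK_TERMS = {"hopeless", "self-harm", "goodbye note"}
COMMENT_DISTRESS_TERMS = {"i feel the same", "triggered", "can't stop crying"}


def content_risk(video: Video) -> float:
    """Crude proxy for the content-side signal (the creator's post)."""
    text = video.transcript.lower()
    hits = sum(term in text for term in CONTENT_RISK_TERMS)
    return min(hits / len(CONTENT_RISK_TERMS), 1.0)


def comment_risk(video: Video) -> float:
    """Crude proxy for the viewer-side signal (comment sentiment)."""
    if not video.comments:
        return 0.0
    distressed = sum(
        any(term in c.lower() for term in COMMENT_DISTRESS_TERMS)
        for c in video.comments
    )
    return distressed / len(video.comments)


def flag_for_review(video: Video, threshold: float = 0.5) -> bool:
    """Fuse both signals to flag a video before it spreads widely."""
    score = 0.5 * content_risk(video) + 0.5 * comment_risk(video)
    return score >= threshold


clip = Video(
    transcript="Feeling hopeless tonight, this is my goodbye note.",
    comments=["i feel the same", "sending love", "can't stop crying"],
)
print(flag_for_review(clip))  # True: both the content and comment signals are high
```

The point of the sketch is the fusion step: neither signal alone captures both what creators post and what viewers feel, which is the distinction the researchers emphasize.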

Xie, an assistant professor of accounting and management information systems, emphasized the model’s ability to differentiate between the creator’s intentions and the audience’s perceptions. “Our tool can distinguish what creators choose to post from what viewers think or feel after watching,” he stated. This distinction is crucial because it captures the emotional impact of content beyond what appears on the surface.

The AI system further separates known medical risk factors from emerging social media trends, such as viral heartbreak clips or challenges that may negatively influence teenagers. This ability to discern established risks from novel social phenomena is particularly important in a digital landscape that evolves rapidly.
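Again purely as a hypothetical sketch, that separation can be pictured as two feature groups: clinically established risk factors on one side and fast-moving platform trends on the other. The group names and features below are invented for illustration; the study’s model learns this distinction from data rather than from fixed lists.

```python
# Hypothetical, hand-labeled feature groups; the published model learns this
# separation from data rather than from fixed lists like these.
ESTABLISHED_RISK_FACTORS = {"expressed_hopelessness", "mention_of_prior_self_harm"}
EMERGING_TREND_SIGNALS = {"heartbreak_clip_audio", "viral_challenge_hashtag"}


def categorize(detected: set[str]) -> dict[str, set[str]]:
    """Split a video's detected features into established vs. emerging risk."""
    return {
        "established": detected & ESTABLISHED_RISK_FACTORS,
        "emerging": detected & EMERGING_TREND_SIGNALS,
    }


print(categorize({"expressed_hopelessness", "viral_challenge_hashtag"}))
# {'established': {'expressed_hopelessness'}, 'emerging': {'viral_challenge_hashtag'}}
```

Keeping the two categories separate matters for moderation: a video can score low on clinical risk factors yet still ride a harmful trend, and vice versa.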

One of the most significant aspects of Xie’s research is the model’s proactive nature; it aims to predict high-risk videos before they gain traction among larger audiences. Such foresight could revolutionize how platforms like TikTok and others approach content moderation, potentially preventing harmful content from reaching susceptible viewers.

As social media continues to play a pivotal role in shaping public discourse and personal well-being, the implications of this research are profound. Platforms face increasing scrutiny over their content moderation practices, particularly as mental health concerns escalate amid rising social media use. Xie’s work provides a pathway toward more responsible platform governance, where preemptive measures can be taken to safeguard vulnerable users.

Xie is open to discussions about how the model was developed and its potential implications for content moderation across various platforms. Reporters interested in delving deeper into this topic can reach out to [email protected] for interview opportunities.

As the digital landscape continues to evolve, so too must the approaches to managing its impact on mental health. The emergence of tools like Xie’s AI model signals a shift toward more nuanced and effective moderation strategies, marking a significant step in protecting users from harmful content online.
