As concerns grow about the reliability and harmfulness of content generated by artificial intelligence (AI), a Korean startup has staked out a distinctive position focused on content safety. Filer, founded in 2021, showcased its approach at Nvidia's annual developer conference, GTC 2026, held from March 16 to 19 in San Jose, California. The company aims to mitigate the risks posed by low-quality AI content, often referred to as "AI slop."
In an interview at the conference, Filer CEO Oh Jae-ho highlighted the dual nature of AI technology, acknowledging both its potential benefits and its side effects. "There are of course many positive aspects of AI, but it can also bring many side effects. Filer is a company that prepares to stop them," he said. The startup specializes in brand safety solutions for businesses, particularly in video advertising.
Filer's technology is designed to prevent advertisements from appearing alongside controversial or low-quality videos on platforms such as YouTube. For companies, having ads displayed next to content related to terrorism or fake news can cause significant reputational damage. Traditionally, advertisers had to screen videos manually to determine appropriate placements; Filer instead uses its own multimodal AI model to analyze video content and automatically suggest suitable ad placements.
Oh emphasized that not all AI-generated videos are detrimental, noting that many human-created videos can also pose risks. The challenge lies in establishing criteria for identifying harmful content, a task Filer is tackling through advanced technology. “How to set and distinguish the criteria for harmfulness is a technical task,” he explained.
Filer has collaborated with Nvidia since its inception, and at GTC the company presented its video-understanding AI technology in a session titled "Expanding Reliable Multimodal Image Intelligence." Oh expressed enthusiasm about potential future collaboration with Nvidia and its NeMo team, which works on detecting content hazards. "When I came to GTC, I learned that Nvidia and the NeMo team are also developing technology to detect content hazards, together with an organization that studies content safety," he said.
Beyond advertising, Filer envisions broader applications of its video content analysis technology for ensuring safety across various sectors. Oh drew a comparison to automobile safety: "There were no seat belts in cars until there were car accidents," underscoring the pressing need for safety measures in the evolving digital landscape. He stressed that while there is plenty of discussion about the risks of video content, there is far less about concrete responses.
Following its participation in GTC, Filer is poised to accelerate its global expansion efforts this year, aiming to address the pressing issues surrounding content safety in an increasingly AI-driven world. As the demand for reliable content escalates, Filer’s commitment to safety technology could play a pivotal role in shaping how businesses navigate the challenges posed by AI-generated media.


















































