As generative AI spreads rapidly across sectors, the pressing concern has shifted from building intelligence to establishing trust in its use. Korean startup PYLER is emerging as a pivotal player in this shift, showing how video understanding AI can protect the integrity of digital environments. Its recent win at NVIDIA’s Inception Startup Grand Challenge 2025 highlights both the company’s accomplishments and a broader national commitment to building the Trust Layer, envisioned as a safety infrastructure for the coming AI era.
PYLER secured first place at the NVIDIA Inception Startup Grand Challenge 2025 in November, outperforming over 80 competitors from around the globe. This accolade acknowledges its technological innovation and commercial viability in tackling a critical industry challenge: ensuring that AI-generated and video-based content remains safe, accurate, and ethical.
Founded in 2021 by Oh Jae-ho at the age of 19, PYLER has developed Antares, a multimodal video understanding AI that combines visual, audio, and text analysis to identify and categorize harmful, misleading, or dangerous content. Its flagship products—AiD (Ad Intelligence Defense) and AiM (Ad Intelligence Match)—use contextual analysis to keep ads away from unsafe videos while steering them toward relevant, brand-aligned content.
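PYLER has not published Antares’s internals, but the pattern the article describes—scoring visual, audio, and text signals separately and combining them into one safety verdict—is a standard late-fusion design. The sketch below is a toy illustration of that general pattern only, with made-up labels, stand-in random weights, and hypothetical fusion ratios; it is not PYLER’s actual system.

```python
import numpy as np

# Illustrative labels and fusion weights; both are assumptions, not
# anything PYLER has disclosed.
LABELS = ["safe", "misleading", "harmful"]

def modality_scores(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy per-modality scorer: a linear layer followed by softmax."""
    logits = features @ weights
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

def fuse(visual: np.ndarray, audio: np.ndarray, text: np.ndarray,
         mix=(0.5, 0.2, 0.3)) -> np.ndarray:
    """Late fusion: weighted average of per-modality class probabilities.

    The mix ratios sum to 1, so the fused vector is still a distribution.
    """
    return mix[0] * visual + mix[1] * audio + mix[2] * text

def classify(v_feat: np.ndarray, a_feat: np.ndarray, t_feat: np.ndarray):
    """Return (label, class probabilities) for one video's features."""
    rng = np.random.default_rng(0)
    # Stand-in random weights; a real system would use trained encoders.
    w = {m: rng.normal(size=(4, len(LABELS))) for m in ("v", "a", "t")}
    probs = fuse(modality_scores(v_feat, w["v"]),
                 modality_scores(a_feat, w["a"]),
                 modality_scores(t_feat, w["t"]))
    return LABELS[int(probs.argmax())], probs

label, probs = classify(np.ones(4), np.zeros(4), np.ones(4))
print(label, probs.round(3))
```

The design choice worth noting is late fusion: each modality is scored independently, so a video with innocuous frames but hateful audio can still be flagged by the audio branch before the signals are averaged.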
In recognition of its contributions to building trustworthy and scalable video intelligence, PYLER also received the Minister’s Award from Korea’s Ministry of SMEs and Startups (MSS). Such honors from both a leading national innovation agency and a global tech giant underscore PYLER’s pivotal role in advancing the Trust & Safety layer of the AI ecosystem.
Currently, PYLER’s system analyzes three million videos per day, screening for sensitive or manipulative content. The company boasts partnerships with major clients, including Samsung Electronics, KT, Hyundai Marine & Fire Insurance, Nongshim, Kenvue, and Lotte Wellfood.
The explosive growth of AI-generated content (AIGC) has introduced new risks—ranging from deepfakes and hate speech to misinformation and child exploitation material—that undermine trust in digital media. PYLER’s Trust & Safety (T&S) approach serves as a new defense mechanism in this evolving landscape, where traditional human moderation can no longer keep pace.
After accepting the NVIDIA award, CEO Oh articulated the urgency of their mission, stating, “With the surge in AI-generated content, deepfakes and harmful materials are becoming serious problems. We need a verification layer to ensure AI remains safe and trustworthy.” This concept of a “verification layer” signifies a broader structural shift within the AI ecosystem, emphasizing accountability alongside efficiency. Through the integration of automated detection, contextual targeting, and ethical safeguards, PYLER lays the groundwork for brand safety and AI governance.
Korea’s government is actively establishing legal and ethical frameworks for AI, with the AI Basic Act set to take effect in 2026. This legislation prioritizes safety, transparency, and reliability, aligning Korea’s innovation strategies with global AI governance trends. PYLER’s focus on Trust and Safety AI places it at the intersection of this national agenda and the demands of the global market.
Key collaborations, such as PYLER’s partnership with NVIDIA and its membership in international frameworks like the IAB Tech Lab—where it became the first Korean member—highlight Korea’s growing influence in establishing global standards for ethical AI and digital advertising transparency. Participation in events such as CVPR, ICCV, and NVIDIA AI Day Seoul further affirms PYLER’s competitive standing alongside leading AI labs and corporations, including Intel, Tencent, and ByteDance.
CEO Oh Jae-ho underlined the societal significance of PYLER’s mission, asserting, “2026 will be the first year of Korea’s AI Basic Act. As safety and reliability become central to AI development, PYLER aims to become the company that ensures safety in this new AI era.” This perspective marks a significant evolution in startup leadership in Korea, emphasizing AI not just as a commercial opportunity, but as a social infrastructure challenge that demands ongoing validation and ethical oversight.
PYLER’s ascent exemplifies the maturation of Korea’s deep tech ecosystem, which is evolving from commercial AI applications to infrastructure-level innovation. Trust & Safety AI, while not consumer-facing, forms the essential foundation supporting global digital commerce, governance, and human communication.
By automating brand safety, verifying AI-generated content, and contributing to international AI standards, PYLER is effectively constructing the “Trust Layer” that future AI systems will rely upon. This development signals Korea’s strategic pivot from merely manufacturing intelligence to institutionalizing integrity, positioning the nation as a core node in the global AI safety infrastructure.
The trajectory of PYLER underscores that the coming era of AI will be defined not only by the sophistication of models but also by who builds the most trusted systems. As generative AI transforms industries, ensuring the credibility of digital content becomes a shared responsibility among startups, regulators, and corporations. PYLER’s initiatives show that Trust and Safety is not a compliance burden but a competitive advantage, strengthening brand integrity and investor confidence while opening cross-border market access.
By embedding AI safety into the architecture of digital systems, Korea and startups like PYLER are laying the groundwork for a future where innovation and responsibility grow together, making the Trust Layer the most vital infrastructure of the AI era.