Samsung made a significant announcement at its Galaxy Unpacked 2026 event, introducing what it terms “The Beginning of Truly Agentic AI.” The tech giant aims to integrate this advanced artificial intelligence seamlessly into everyday mobile experiences, emphasizing that agentic AI should function as a reliable component of smartphone use rather than merely a separate feature.
The unveiling of the Galaxy S26 and Buds4 lines highlighted Samsung’s vision for a next-generation mobile AI that comprehensively understands context, offers actionable suggestions, and aids users in completing tasks without requiring them to switch between applications. Among the notable features presented were call screening summaries, “Now Nudge” recommendations based on ongoing conversations, and an enhanced Bixby capable of web searches while maintaining context within discussions.
Samsung also emphasized its commitment to user privacy and on-device protection, unveiling a “Personal Data Engine” designed to learn user preferences locally and bolster security. This system is complemented by multiple security layers, including Knox Enhanced Encrypted Protection and Knox Vault, along with new privacy controls such as “Privacy Display” aimed at minimizing shoulder-surfing threats for sensitive applications.
In collaboration with Google, Samsung showcased an early preview of a more “agentic” Android platform powered by Google’s Gemini 3, along with expanded Circle to Search functionality. Co-CEO TM Roh articulated the company’s mission to make AI dependable and broadly applicable, stating, “Infrastructure is responsibility. It must work for everyone, everywhere.”
Perplexity’s Agentic Shift
In a parallel development, Perplexity announced the launch of its new cloud-based agent product, Perplexity Computer, available to subscribers on its $200/month Perplexity Max plan. The company describes this offering as a “computer user agent” capable of autonomously managing complex workflows, including the creation of subagents for various tasks.
According to reports from TechCrunch, Perplexity claims its system can manage workflows across 19 different AI models, showcasing its ability to handle assignments that involve gathering statistical, financial, or legal information and generating outputs such as websites or visualizations. This launch represents a broader strategic shift for Perplexity, which initially gained traction by delivering a search-like experience with top AI models, and is now targeting a more specialized audience focused on high-value use cases.
Executives highlighted that the goal is not to maximize user counts but to create products for individuals making critical decisions. They believe access to multiple specialized models is essential, asserting that software should automatically select the most suitable model for a given task. As articulated by one executive, “Multimodel is the future.”
Governing Agents in the Enterprise
As the conversation around AI agents evolves, questions arise regarding their governance within enterprises. A report from Cybersecurity Insiders contends that while AI agents are already operational in many businesses, they are not being treated as legitimate “users” from a security perspective. Paul Walker, field CTO at Omada, pointed out that agent frameworks facilitate actions across various systems, blurring the distinction between human and nonhuman actors.
This rapid adoption of AI agents introduces governance challenges, as management structures have yet to catch up. Walker noted that current identity management practices designed for employees do not apply easily to agents, which can generate “authorization without oversight.” Agents can autonomously request or assume access, leading to a gradual accumulation of permissions that outstrip their initial purpose.
To address these concerns, the report advocates for treating agents as first-class identities. This includes formal provisioning, clearly defined entitlements, least-privilege access, lifecycle controls, and activity monitoring across systems. Walker emphasized that agent identity involves more than just login capabilities; it encompasses accountability for actions taken, data accessed, and instructions given to systems. The report calls for enhanced audit trails and stricter controls to ensure security teams can monitor agent activities comprehensively, framing this as a fundamental requirement for the safe scaling of agentic AI. “AI agents are already inside your enterprise. The question is: Who’s governing them?” Walker asked, underscoring the need for robust oversight in an evolving technological landscape.
As these advances in agentic AI unfold, the implications for both user experience and enterprise security are profound, highlighting the necessity for a balanced approach that prioritizes innovation alongside responsibility.
See also
Bitcoin Rises to $68,600 Amid Market Resilience and Geopolitical Tensions
Germany’s National Team Prepares for World Cup Qualifiers with Disco Atmosphere
95% of AI Projects Fail in Companies According to MIT
AI in Food & Beverages Market to Surge from $11.08B to $263.80B by 2032
Satya Nadella Supports OpenAI’s $100B Revenue Goal, Highlights AI Funding Needs