In a recent appearance on the Access Podcast, Google Vice President of Search Liz Reid discussed the transformative impact of large language models (LLMs) on Google’s search capabilities, highlighting how these advances are changing what Google can index and how it tailors search results to individual users.
Reid emphasized that multimodal AI models have significantly improved Google’s ability to understand audio and video content, a marked departure from earlier limitations. “The great thing about LLM is they’re multimodal. So we can actually understand audio content and video content actually at a level we couldn’t years ago,” she stated. This goes beyond mere transcription: Google can now interpret not just the words spoken in a video but also its themes and style.
This capability is particularly relevant for non-English speakers, such as users in India who need information in their native languages. Reid noted that earlier efforts to translate web content did not scale, and that LLMs change this. “Now with an LLM, you can take information in one language, understand it, and then output in another language. Like that opens up information,” she explained.
Google’s search algorithms continue to evolve, with adjustments in October 2025 that prioritize short-form video, forums, and user-generated content. Reid’s remarks also put Google’s recent Audio Overviews experiment, which generates spoken AI summaries of search results, in context. Such a feature would not have been feasible just a few years ago, when Google’s speech-to-text systems struggled with accuracy, particularly around proper nouns and regional references; Reid’s comments suggest those barriers have now been significantly lowered.
Beyond content indexing, Reid outlined a future in which search results are personalized around users’ subscriptions. She described a shift from Google’s existing Preferred Sources feature toward a model that prioritizes content from outlets a user pays for. “If you love this source and you do have a relationship with it then that content should surface more easily for you on Google,” she stated, offering a practical example: if several interviews on a topic sit behind paywalls, Google should point users to the one their subscriptions unlock. A rough sketch of that re-ranking idea appears below.
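Reid did not describe any implementation, but the behavior she outlines amounts to a simple re-ranking rule: boost results from outlets the user subscribes to, and demote paywalled results the user cannot open. The following Python sketch illustrates that reading; the Result structure, the subscriptions set, and the boost and penalty factors are all hypothetical assumptions for illustration, not anything Google has disclosed.

```python
# Minimal sketch of subscription-aware re-ranking, as described in the
# interview. Purely illustrative: the data shapes and weighting factors
# are assumptions, not Google's actual implementation.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    source: str        # publisher domain, e.g. "b.example"
    paywalled: bool
    score: float       # base relevance score from the ranker

def rerank(results: list[Result], subscriptions: set[str],
           boost: float = 1.5, penalty: float = 0.8) -> list[Result]:
    """Favor results the user can actually read: boost subscribed
    outlets, demote paywalled results from unsubscribed ones."""
    def adjusted(r: Result) -> float:
        if r.source in subscriptions:
            return r.score * boost      # user pays for this outlet
        if r.paywalled:
            return r.score * penalty    # likely blocked for this user
        return r.score
    return sorted(results, key=adjusted, reverse=True)

# Example: three interviews on the same topic, two behind paywalls.
results = [
    Result("https://a.example/interview", "a.example", True, 0.90),
    Result("https://b.example/interview", "b.example", True, 0.88),
    Result("https://c.example/interview", "c.example", False, 0.70),
]
for r in rerank(results, subscriptions={"b.example"}):
    print(r.url)
```

Under these toy numbers, the paywalled interview from the subscribed outlet outranks a nominally more relevant one the user cannot read, which is exactly the behavior Reid describes.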
Reid acknowledged that Google has made only incremental progress in this area, but said she wants search to better connect audiences with the sources they trust. She also raised the possibility of micropayments for individual articles, while conceding that the model has historically struggled to gain traction. Google is already moving in this direction: Preferred Sources expanded globally for English-language users in December, alongside a new feature that highlights links from users’ paid subscriptions. Users who choose a preferred source click through to that site twice as often, suggesting strong demand.
The implications of these multimodal capabilities are significant because they expand the range of formats that search can surface. Podcasts, video series, and other audio-first formats have historically been difficult for Google to evaluate beyond basic metadata and transcripts. The ability to assess relevance within audio and video changes how brands and creators can reach audiences, giving such work a far better chance of being discovered.
The shift toward subscription-aware personalization matters most for publishers with paywalls or membership models. Results that adapt to an individual user’s subscriptions could close the gap between audience retention and search visibility, letting paywalled content perform better with exactly the readers a publisher wants to reach.
While Reid did not provide specific timelines, her remarks suggest the multimodal indexing capabilities are already in use, while subscription-aware personalization is a stated direction built on features that already exist. With Google I/O approaching on May 19-20, Reid said the company is “actively building,” and the rapid pace of AI development could see new features arrive as soon as April.
Overall, these advancements not only reflect Google’s commitment to improving the user experience but also signal a broader evolution in how search engines handle diverse content formats and adapt to user preferences.