Google has begun inviting users of its Gemini chatbot to allow the tool to access their Gmail, Photos, search history, and YouTube data, promising potentially more personalized responses in return. The announcement was made on Wednesday by Josh Woodward, VP of Google Labs, who revealed that the beta version of the new feature, termed Personal Intelligence, will gradually roll out to US-based subscribers of the Gemini AI Pro and AI Ultra tiers over the coming week.
Woodward’s framing leans on the word “Intelligence,” which some critics argue is more aspirational than accurate, since machine learning models work primarily by predicting tokens based on patterns in their training data. A more fitting label might be “Personalized Predictions,” though it lacks the allure of the current name. Even so, access to personal data does hold out the promise of a more tailored AI experience.
According to Woodward, Personal Intelligence can draw on information from various Google apps so that the Gemini model’s responses incorporate personal or app-specific details. “Personal Intelligence has two core strengths: reasoning across complex sources and retrieving specific details from, say, an email or photo to answer your question,” Woodward explained. He noted that the model often combines these capabilities, pulling from text, photos, and videos to deliver uniquely customized answers.
As an illustration, Woodward recounted a recent incident in which he needed his car’s tire size at a service center but could not remember his license plate number. When he asked Gemini, the model scanned his photo library, identified a picture of his car, and converted the license plate in the image to text. The convenience also raises a familiar question about the balance between automation and personal autonomy: offloading such tasks to an AI may reduce how much we engage with them ourselves.
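Google has not published how this retrieval step works under the hood, but the general technique Woodward describes, scanning images and pulling text out of them, resembles a conventional OCR pass. The following Python sketch is purely illustrative and assumes the open-source Tesseract engine and a local folder of photos; it is not Gemini’s implementation.

```python
# Illustrative sketch only: a local OCR pass over a photo folder.
# Not Gemini's implementation; assumes Tesseract is installed.
import re
from pathlib import Path

from PIL import Image       # pip install pillow
import pytesseract          # pip install pytesseract

# Rough pattern for US-style plates; real systems use far more robust detection.
PLATE_PATTERN = re.compile(r"\b[A-Z0-9]{2,3}[- ]?[A-Z0-9]{3,4}\b")


def find_plate_candidates(photo_dir: str) -> dict[str, list[str]]:
    """OCR each image and collect strings that look like license plates."""
    results: dict[str, list[str]] = {}
    for path in sorted(Path(photo_dir).glob("*.jpg")):
        text = pytesseract.image_to_string(Image.open(path))
        matches = PLATE_PATTERN.findall(text.upper())
        if matches:
            results[path.name] = matches
    return results


if __name__ == "__main__":
    for name, plates in find_plate_candidates("photos").items():
        print(f"{name}: candidate plate text {plates}")
```

In practice, a multimodal model like Gemini would likely answer directly from the image rather than running a separate OCR tool, but the sketch conveys the kind of retrieval being described.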
In a move that may ease user concerns, Google has made Personal Intelligence opt-in, requiring users to enable it for each individual app. Woodward acknowledged that Google may nudge people toward the feature with notifications and in-app prompts, much as it does for existing functionality in Google Workspace.
Woodward further asserted that Google’s approach to user data differs from that of rival AI services, emphasizing that this data “already lives at Google securely.” He maintained that privacy is not compromised because the information is used within Google’s own ecosystem. Gemini will also attempt to cite the sources of its personalized outputs, allowing users to verify or correct recommendations. In addition, safeguards are designed to prevent Gemini from referencing particularly sensitive data, such as health-related details.
The context of data sharing at Google has evolved since the controversial 2012 privacy policy changes that allowed the company to combine user data across its services. Today, the trend is toward encouraging users to willingly share their data in exchange for better experiences. Woodward emphasized that the aim of this initiative is to improve the Gemini experience while maintaining user data security and control. “Built with privacy in mind, Gemini doesn’t train directly on your Gmail inbox or Google Photos library,” he stated. “We train on limited info, like specific prompts in Gemini and the model’s responses, to improve functionality over time.”
He elaborated that personal data, such as the license plate or road trip photos, will not be used for the model’s training. Instead, only filtered prompts and responses devoid of personal details will contribute to the training process. “In short, we don’t train our systems to learn your license plate number; we train them to understand that when you ask for one, we can locate it,” Woodward concluded.
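Woodward’s description implies a filtering step that strips personal details from prompts and responses before they are used for training. Google has not described that pipeline; the sketch below is a generic, hypothetical redaction pass of the sort such a system might include, with made-up patterns and function names.

```python
# Hypothetical redaction sketch; not Google's actual training pipeline.
# Replaces a few obvious categories of personal detail with placeholders.
import re

REDACTIONS = {
    "PLATE": re.compile(r"\b[A-Z0-9]{2,3}[- ]?[A-Z0-9]{3,4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def redact(text: str) -> str:
    """Substitute matched personal details with category placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(redact("My plate is ABC-1234 and my email is jane@example.com"))
# -> My plate is [PLATE] and my email is [EMAIL]
```

Real systems typically rely on dedicated PII classifiers rather than regular expressions, but the principle matches Woodward’s claim: what reaches the training set is the shape of the request, not the personal detail itself.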
Google’s Gemini Apps Privacy Hub provides a detailed overview of how the company intends to use the information available to its AI model. It also states that some user data may be reviewed by human reviewers, including those from partner service providers, for purposes such as service improvement and safety. Users are cautioned against entering confidential information they would not want a reviewer to see or Google to use to improve its services.
Google’s support documentation warns that Gemini might produce inaccurate or inappropriate responses that do not reflect the company’s views, and it cautions users against relying on the AI for medical, legal, financial, or other critical advice. For less consequential inquiries, however, Personal Intelligence may well offer genuinely enhanced assistance.