Meta Platforms Inc. has announced a series of controversial updates to its Ray-Ban smart glasses, introducing features that could impact users’ mental health and privacy. Slated for release this summer, the new functionality will let wearers log their food intake using voice commands or photos, with the company’s AI providing nutritional insights based on their dietary habits.
In a move that has elicited concern among mental health advocates, Meta’s AI will allow users to ask questions like “What should I eat to increase my energy?” The response will be tailored to the user’s food log and personal health objectives, aiming to offer increasingly personalized advice over time. However, critics are alarmed that such features may exacerbate issues like eating disorders, particularly among vulnerable individuals.
The most alarming aspect of the upcoming update is the promise of automatic food logging. Meta claims that the Ray-Bans will “understand what you’re eating,” thus providing users with detailed nutritional insights without the need for manual entry. This raises significant privacy questions, as the glasses appear to require continuous recording to recognize food items accurately, potentially invading the personal space of users in public settings.
The complexity of accurately logging food intake poses another challenge. Caloric and nutritional tracking typically involves a mix of guesswork and research, and it remains unclear how AI will reliably interpret portion sizes and food types. Critics argue that the technology could easily misinform users, offering bad advice that may reinforce unhealthy behaviors or existing food anxieties.
Concerns about AI’s influence on mental health are not unfounded. Reports of so-called “AI psychosis” highlight the risks when artificial intelligence systems provide erroneous or harmful suggestions. In one recent case, a user reportedly engaged in bizarre behaviors after following advice from Meta’s AI, illustrating the potential dangers inherent in the technology.
Moreover, the possibility of users asking harmful questions adds another layer of risk. Those struggling with body image issues might query whether skipping meals could help them lose weight, while others might misuse the technology to justify unhealthy eating patterns. This scenario echoes previous incidents where chatbots offered misguided guidance about restrictive diets, raising alarms over the implications of AI in sensitive areas like nutrition.
The new features will be restricted to users aged 18 and older in the U.S., which suggests that Meta is aware of the potential risks associated with its technology. As the company navigates the murky waters of privacy and mental health, it faces scrutiny not just from consumers but also from regulatory bodies concerned about the implications of such innovations.
Meta’s foray into AI-driven health advice marks a significant leap in the integration of technology into daily life, but it also underscores the ethical dilemmas companies face in balancing innovation with the well-being of their users. As these smart glasses roll out, the broader implications for individual health and societal standards regarding nutrition will likely spark ongoing discussion and debate.