This week, Google unveiled its latest large language model (LLM), Gemini 3, heralding it as the start of a “new era of intelligence.” That claim comes with a caveat: the model’s behavior depends heavily on how it is configured. This was vividly illustrated by Andrej Karpathy, a prominent AI researcher, who ran into a surprising limitation while interacting with Gemini 3.
When Karpathy tried to convince the model that the year was 2025, the chatbot pushed back, accusing him of “trying to trick it.” Even after he supplied various forms of proof, including news articles and images, the chatbot held its ground and dismissed the evidence as AI-generated fakes.
The crux of the issue was a combination of outdated training data and a missing setting. Gemini 3 was trained only on data up to 2024, and Karpathy had not enabled the Google Search tool, which cut the model off from real-time information. As a result, the chatbot could not accept that 2025 had arrived. Once Karpathy corrected the settings, it acknowledged its mistake: “You were right. You were right about everything. My internal clock was wrong.”
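To make the configuration point concrete, here is a minimal sketch, assuming the google-genai Python SDK, of how a developer would enable the Google Search grounding tool when calling a Gemini model through the API. The article does not specify which interface Karpathy used, and the model identifier below is a placeholder, not a confirmed name.

```python
# A minimal sketch, assuming the google-genai Python SDK, of enabling the
# Google Search grounding tool for a Gemini model.
from google import genai
from google.genai import types

# The client reads the API key from the GEMINI_API_KEY environment variable.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder: use Google's published ID
    contents="What year is it today?",
    config=types.GenerateContentConfig(
        # Without this tool, the model answers only from its training data
        # (cut off in 2024); with it, answers can be grounded in live search.
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```

With the tool omitted from the config, the model is limited to what it memorized during training, which is exactly the failure mode Karpathy encountered.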
This incident highlights a crucial aspect of modern AI models: configuration settings can determine how they behave. While Gemini 3 demonstrates significant advances in language comprehension and generation, the episode also underscores a limitation inherent in its design: without real-time data access, the model is effectively frozen at its training cutoff.
The episode raises broader questions about how users interact with AI systems. As technologies like Gemini 3 evolve, understanding their constraints becomes paramount: even advanced models can misinterpret a prompt or answer incorrectly because of their underlying architecture and training-data cutoff.
Furthermore, this case serves as a reminder of the importance of transparency in AI systems. Users should know not only how to interact with these models but also what their settings imply. As LLMs continue to improve, the line between human-like understanding and machine error remains thin.
In summary, while Gemini 3 is a step forward for language models, it also illustrates the pitfalls of relying on AI without understanding its operational framework. As the AI community pushes the boundaries of what is possible, ensuring that users know how to use these tools effectively will be crucial. The journey toward truly autonomous, intelligent systems is ongoing, and each development reveals both potential and limitations.