Google is injecting a dose of transparency into its AI with the rollout of Gemini 2.0 Flash Thinking Experimental to its Gemini app. This new model, available starting today, isn't just spitting out answers; it's designed to show its work, revealing the step-by-step thought process behind its responses.
Gemini 2.0 Flash Thinking, previously available in Google AI Studio, is now accessible directly within the Gemini app on both desktop and mobile. It’s positioned as a “reasoning” model, meaning it’s built to tackle more complex queries by breaking them down into smaller, more manageable steps. Think of it as the AI equivalent of showing your math homework – you can actually see how Gemini arrives at its conclusions.

According to Google, this "thinking" model is capable of "working through more complex problems and showing how it reasons through them." This is a departure from standard large language models that typically present answers as a final output without revealing the internal logic. Gemini 2.0 Flash Thinking aims to offer users a clearer understanding of the AI's decision-making, making it easier to trust and evaluate the responses, especially for intricate questions.
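For readers who want to poke at the model outside the Gemini app, the same model has been available to developers through Google AI Studio and the Gemini API. The snippet below is a rough sketch of what a call might look like using the google-generativeai Python SDK; the experimental model identifier ("gemini-2.0-flash-thinking-exp") and the sample prompt are illustrative assumptions, not Google's official example.

```python
# Sketch: querying the thinking model through the Gemini API
# (google-generativeai Python SDK). The model name below is an
# assumed experimental identifier and may differ from the current one.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # substitute a real API key

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

response = model.generate_content(
    "A bat and a ball cost $1.10 together, and the bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Prints the model's final answer; in AI Studio, the step-by-step
# reasoning is surfaced alongside the response as well.
print(response.text)
```

In the Gemini app itself, none of this setup is needed; the reasoning trace simply appears above the answer.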
But there’s more to it than just seeing the AI’s thought process. Google is also rolling out a version of Gemini 2.0 Flash Thinking that integrates directly with Google apps like YouTube, Maps, and Search. This connected version allows the AI to leverage the vast resources of Google’s ecosystem to enhance its reasoning. For example, ask it a question that requires real-world data, and it might tap into Google Maps for location-based information or YouTube for relevant video content, rather than relying solely on its training data.
Google offered an example: ask the model, "How long would it take to walk to China?" Without app integration, Gemini 2.0 Flash Thinking relies on its internal knowledge base and grapples with the vagueness of the query. With app integration, however, it turns to Google Maps to produce a more grounded, practical answer, showing its reasoning steps along the way.
This move to make reasoning models more accessible in the Gemini app comes alongside other Gemini family updates. Google also announced the experimental release of Gemini 2.0 Pro, touted as its "most capable model" particularly for coding and complex tasks. And for developers looking for cost-effective AI, there's Gemini 2.0 Flash-Lite, a new model designed to offer strong performance at a lower price point.
While Gemini 2.0 Pro and Flash-Lite cater to developers and advanced users, Gemini 2.0 Flash Thinking in the Gemini app puts this "reasoning" capability directly into the hands of everyday users. It’s an intriguing step towards making AI more transparent and understandable, letting users not just get answers, but also understand how those answers are generated. Whether this peek behind the curtain will build more trust in AI remains to be seen, but for now, you can watch Gemini think for itself in the Gemini app.