Google, AI and Gemini
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. The release is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Google has launched a new AI Ultra subscription that costs $250 per month. Here's what you get from the most expensive tier.
Gemini and other AI models can now scour the video footage we keep in our apps. Here's why, what they're learning, and how they may be able to help you.
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.
On Tuesday at Google I/O 2025, Google announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to a question before responding, boosting its performance on certain benchmarks.
Google is bringing Gemini intelligence to its Home APIs, allowing smart home developers and manufacturers to tap into Gemini’s AI-powered features and potentially making your smart home a lot, well, smarter. The company announced the news in a blog post during the Google I/O developers conference this week.