
Now millions of developers are building with Gemini. And it’s helping us reimagine all of our products, including all seven that serve 2 billion users each, and create new ones. NotebookLM is a great example of what multimodality and long context can enable for people, and why it’s loved by so many.

Over the last year, we have been investing in more agentic models: models that can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.

Today we’re excited to introduce Gemini 2.0, our most capable model yet, built for this new agentic era. With new advances in multimodality, like native image and audio output, and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.
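To make "native tool use" concrete, here is a minimal sketch of function calling against the Gemini API using the google-genai Python SDK. The model name (gemini-2.0-flash), the get_weather helper, and its canned return value are illustrative assumptions, not details from this announcement.

```python
# Minimal sketch of tool use via the google-genai SDK.
# The model name and the get_weather helper below are
# illustrative assumptions, not part of the announcement.
from google import genai
from google.genai import types


def get_weather(city: str) -> str:
    """Hypothetical tool: return a canned weather report for a city."""
    return f"Sunny and 22°C in {city}"


client = genai.Client(api_key="YOUR_API_KEY")

# Python callables passed as tools are invoked automatically:
# the model decides when to call get_weather and folds the
# result into its final text answer.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather like in Paris right now?",
    config=types.GenerateContentConfig(tools=[get_weather]),
)
print(response.text)
```

The point of the agentic framing is that the model, not the caller, decides when a tool is needed; the same pattern extends to multiple tools, with the model chaining calls across steps.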


