Google I/O 2024: Gemini 1.5 Pro Gets Big Upgrade as New Flash and Gemma AI Models Unveiled

Google held a keynote session at its annual developer-focused Google I/O event on Tuesday. During the session, the tech giant focused heavily on new developments on the artificial intelligence (AI) front, introducing several new AI models as well as new features for its existing offerings. One of the highlights was a two-million-token context window for Gemini 1.5 Pro, currently available to developers via a waitlist. A faster variant of Gemini was also introduced, alongside the next generation of Google's small language model (SLM), Gemma 2.

The event was kicked off by CEO Sundar Pichai, who made one of the biggest announcements of the night: a two-million-token context window for Gemini 1.5 Pro. The company introduced a one-million-token context window earlier this year, but until now it was available only to developers. Google has now made it generally available in public preview, accessible via Google AI Studio and Vertex AI. The two-million-token context window, by contrast, is available exclusively through a waitlist to developers using the API and to Google Cloud customers.
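For developers with access, the model is reached through the Gemini API. As a rough illustration (not an official Google sample), a request to the public v1beta REST endpoint might be assembled like this; the endpoint path and model name are taken from Google's published API documentation, the `GOOGLE_API_KEY` environment variable is an assumption, and `build_request` is a hypothetical helper:

```python
# Hedged sketch: calling Gemini 1.5 Pro through the Google AI Studio REST API.
# No request is sent unless an API key is configured in the environment.
import json
import os
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta/models"
MODEL = "gemini-1.5-pro"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble a generateContent request without sending it."""
    url = f"{API_BASE}/{MODEL}:generateContent?key={api_key}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    key = os.environ.get("GOOGLE_API_KEY")
    if key:  # only hit the network when a key is actually configured
        req = build_request("Summarize this transcript.", key)
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp))
```

The same request body shape is used whether the prompt is a short question or a near-context-limit document, which is what makes the larger window a drop-in upgrade for API users.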

With a context window of two million tokens, according to Google, the AI model can process two hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words at once. In addition to improving contextual understanding, the tech giant has also improved Gemini 1.5 Pro's code generation, logical reasoning, planning, multi-turn conversation, and image and audio understanding. Google is also integrating the AI model into Gemini Advanced and Workspace apps.
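Google's figures imply some rough per-unit token budgets. As a back-of-envelope sketch using only the capacities quoted above (the implied ratios are illustrative, not official tokenizer figures):

```python
# Back-of-envelope sketch of what a 2,000,000-token context window buys,
# derived solely from the capacities Google quoted.
CONTEXT_TOKENS = 2_000_000

tokens_per_word = CONTEXT_TOKENS / 1_400_000            # ~1.4M words fit
tokens_per_video_second = CONTEXT_TOKENS / (2 * 3600)   # ~2 hours of video
tokens_per_audio_second = CONTEXT_TOKENS / (22 * 3600)  # ~22 hours of audio

print(round(tokens_per_word, 2))       # ≈ 1.43 tokens per word
print(round(tokens_per_video_second))  # ≈ 278 tokens per second of video
print(round(tokens_per_audio_second))  # ≈ 25 tokens per second of audio
```

The word ratio of roughly 1.4 tokens per word is in line with common subword tokenizers, which suggests the quoted capacities are internally consistent.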

Google has also introduced a new addition to the Gemini family of AI models. The new model, called Gemini 1.5 Flash, is a lightweight model designed to be faster, more responsive, and more cost-effective. The tech giant said it has worked to reduce the model's latency. Although solving complex tasks is not its forte, it can handle tasks such as summarization, chat applications, image and video captioning, extracting data from documents and long tables, and more.

Finally, the tech giant announced the next generation of its smallest AI models, Gemma 2. The model has 27 billion parameters but can run efficiently on GPUs or a single TPU. Google claims that Gemma 2 outperforms models twice its size, although the company has not yet released benchmark scores.


