At the end of Google’s annual developer conference I/O at Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company said “AI” 121 times. That was basically the gist of Google’s two-hour keynote – introducing AI to all of Google’s apps and services used by more than two billion people around the world. Here you’ll find all the important updates from Google’s big event, as well as some additional announcements made after the keynote.
Gemini 1.5 Flash and Gemini 1.5 Pro Update
Google has announced an all-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash slots in between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs natively on devices. Google says it created Flash because developers wanted a lighter, more affordable model than Gemini Pro for building AI-powered apps and services, while retaining some of the things that set Gemini Pro apart from competing models, like its one-million-token context window. Later this year, Google will double Gemini’s context window to 2 million tokens, meaning it will be able to process 2 hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words in one go.
Project Astra
Google unveiled Project Astra, an early version of a universal AI-powered assistant that Google DeepMind CEO Demis Hassabis said is Google’s version of an AI agent “that can be useful in everyday life.”
In a video that Google says was shot in a single take, an Astra user walks around Google’s London office, holding up their phone and pointing the camera at various things – a speaker, some code on a whiteboard, the view out a window – and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, Astra correctly tells the user where they previously left their glasses without ever being asked to keep track of them.
The video ends with a surprise: when the user finds and puts on the lost glasses, we learn that they have a built-in camera system and can carry on a seamless conversation with the user via Project Astra, which could indicate that Google is working on a competitor to Meta’s Ray-Ban smart glasses.
Ask Photos
Google Photos was already smart about searching for specific photos or videos, but with AI, Google is taking it a step further. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a question like “Show me the best photo from every national park I’ve visited” when the feature rolls out in the next few months. Google Photos uses GPS data, as well as its own judgment about what counts as “best,” to surface options for you. You can also ask Google Photos to generate captions for posting photos to social media.
Veo and Imagen 3
Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. Google says it can create “high-quality” 1080p videos that can last “beyond a minute” and understand cinematic concepts like time-lapses.
Meanwhile, Imagen 3 is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company’s highest-quality text-to-image model, with “amazing levels of realism,” vivid imagery, and fewer artifacts – essentially rivaling OpenAI’s DALL-E 3.
Big updates to Google Search
Google is making big changes to the way Search works. Most of the updates announced today – like the ability to ask really complex questions (“Find the best yoga or Pilates studios in Boston and show details on Beacon Hill’s offerings and walking times.”) and to use Search to plan meals and vacations – are only available if you sign up for Search Labs, the company’s platform for trying out beta features.
But the big new feature, called AI Overviews, which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now display AI-generated answers at the top of search results by default, and the company says it will bring the feature to more than a billion users worldwide by the end of the year.
Gemini on Android
Google is integrating Gemini directly into Android. When Android 15 is released later this year, Gemini will be aware of the app, photo, or video you’re looking at, and you’ll be able to pull it up as an overlay and ask context-specific questions. Where does this leave Google Assistant, which already does some of this? Who knows! Google didn’t bring it up at all in today’s keynote.
Wear OS 5 battery life improvements
Google isn’t quite ready to launch the latest version of its smartwatch operating system just yet, but it’s promising some significant battery life improvements. The company says Wear OS 5 uses 20 percent less power than Wear OS 4 when a user runs a marathon. Wear OS 4 already brought battery life improvements to the smartwatches that support it, but there’s still plenty of room to better manage device performance. Google has also given developers new guidelines on power and battery conservation so they can build more efficient apps.
Android 15’s anti-theft feature
The developer preview for Android 15 has been out for months, but more features are still on the way. Theft Detection Lock is a new Android 15 feature that uses AI (there it is again) to detect phone theft and lock things down accordingly. Google says its algorithms can detect motions associated with theft, such as someone grabbing a phone and running, biking, or driving away. When a phone running Android 15 detects one of these scenarios, the screen locks quickly, making it harder for a phone thief to access your data.
There were a number of other updates, too. Google said it will add digital watermarks to AI-generated video and text, make Gemini accessible in the side panels of Gmail and Docs, power AI-driven virtual teammates in Workspace, listen in on phone calls to detect in real time whether you’re being scammed, and more.
Updated May 15, 2:45 pm ET: This story was updated after publication to include details on the Android 15 and Wear OS 5 announcements made after the I/O 2024 keynote.