Google’s AI Revolution: 10 Game-Changing Announcements from I/O 2024

Google I/O 2024 wrapped up, leaving a trail of announcements that solidify the company’s dedication to pushing the boundaries of Artificial Intelligence (AI).

I/O 2024 reiterates Google’s effort to weave AI into the very fabric of its core products.

From revolutionizing the way we search for information to empowering users with creative tools and even enhancing the music experience, Google’s AI advancements promise to significantly impact our daily interactions with technology.

Here’s a breakdown of the key announcements:

Gemini 1.5 Pro and Gemini 1.5 Flash

Google announced the general availability of Gemini 1.5 Pro, a powerful AI model with a 1 million token context window, enabling it to process vast amounts of information – such as an hour of video or a 1,500-page PDF – and respond to complex queries about that source material.
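As a rough sanity check on that claim, a back-of-envelope calculation shows how both examples land near the 1-million-token mark. The per-unit rates below are illustrative assumptions for this sketch, not official Google figures:

```python
# Back-of-envelope check of Gemini 1.5 Pro's 1M-token context window.
# Assumed rates (illustrative, not official): ~258 tokens per sampled
# video frame at one frame per second, and ~650 tokens per PDF page.

TOKENS_PER_VIDEO_SECOND = 258   # assumption: one sampled frame per second
TOKENS_PER_PDF_PAGE = 650       # assumption: average page text density

video_tokens = 60 * 60 * TOKENS_PER_VIDEO_SECOND   # one hour of video
pdf_tokens = 1_500 * TOKENS_PER_PDF_PAGE           # a 1,500-page PDF

print(f"1 hour of video ~ {video_tokens:,} tokens")   # ~ 928,800
print(f"1,500-page PDF  ~ {pdf_tokens:,} tokens")     # ~ 975,000
```

Under those assumptions, both workloads sit just under the 1,000,000-token window, which is why an hour of video or a book-length PDF fits in a single prompt.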

Gemini 1.5 Pro will also be available in more than 35 languages starting today, providing access to the latest technical advances, including deep analysis of data files such as spreadsheets, enhanced image understanding, and a greatly expanded context window starting at 1 million tokens.

Google also introduced Gemini 1.5 Flash, a more cost-efficient model built in response to user feedback, with lower latency; and previewed Project Astra, its vision for the next generation of AI assistants: a responsive agent that can understand and react to the context of a conversation.


Generative AI in Search

Google is integrating Gemini into Search, enhancing its ability to understand and respond to complex queries.

This includes features like:

  • AI Overviews: designed for advanced multi-step reasoning, planning, and multimodal input, so people can ask intricate, multi-step questions, tailor their search results, and even search using video. AI Overviews are launching soon, starting in the US before expanding globally.
  • Multi-step reasoning: breaks a complex question into smaller parts, synthesising the most relevant information and stitching it together into a comprehensive AI answer.
  • Search with video: lets users ask questions about what they see by taking a quick video and getting an AI-powered answer in response. This feature will be available first in the US, rolling out to other regions over time.




Gemini for Android

Gemini is being integrated into Android to power new features like:

  • “Circle to Search,” which allows users to search for anything they see on their screen. This feature is expanding to more surfaces like Chrome desktop and tablets.
  • TalkBack: Gemini Nano will enhance TalkBack, Android’s screen reader, with new features that make it easier for people with visual impairments to navigate their devices and access information. This feature will come first to Pixel devices later in the year.
  • Live scam detection: Gemini Nano will be used to detect scam phone calls in real-time, providing users with warnings and helping them avoid falling victim to fraud. This feature will come first to Pixel devices later in the year.
  • Gemini as an Assistant on Android: An AI assistant providing contextual suggestions and anticipating user actions based on the current app screen. This feature will be available where the Gemini app is already available and requires Android 10+ and 2GB+ RAM. The new overlay features announced at I/O will roll out over the coming months.


Gemini for Workspace

Gemini for Workspace helps businesses and everyday users get more out of their Google apps, from drafting emails in Gmail to organising project plans in Sheets.

Over the last year, more than a million people and tens of thousands of companies have used generative AI in Workspace when they need an extra hand or dose of inspiration. Today, Google announced more updates across Gemini for Workspace, including new Gemini in Gmail features on mobile and additional language support.

  • Gemini in the Workspace side panel: Now using the Gemini 1.5 Pro model, it is available for Workspace Labs and Gemini for Workspace Alpha users and will be generally available next month to Gemini for Workspace customers and Google One AI Premium subscribers. It has a longer context window and more advanced reasoning to give you more insightful responses. Plus, it’s easy to get started with proactive summaries, suggested prompts and more.
  • Gemini Advanced subscribers will also soon get access to Live, a new mobile conversational experience. With Live, you can talk to Gemini and choose from different natural-sounding voices. You can speak at your own pace and even interrupt with questions, making conversations more intuitive.

New Gemini features in the Gmail mobile app:

  1. See more detailed suggested replies: Gemini will automatically provide draft email responses that you can edit, or simply send. This will be available to Workspace Labs users on mobile and web in the coming months, and to Gemini for Workspace customers and Google One AI Premium subscribers later this year.
  2. Get summaries: Gemini can analyse email threads and provide a summarised view directly in the Gmail app. Summaries will be available to Workspace Labs users this week, and to all Gemini for Workspace customers and Google One AI Premium subscribers next month.
  3. Chat with Gemini: Soon, Gemini in Gmail will offer helpful features, like “summarise this email,” “list the next steps,” or “suggest a reply,” when you click the Gemini icon in the mobile app. These features will be available to Workspace Labs users in the coming weeks, and to Gemini for Workspace customers and Google One AI Premium subscribers in the coming months.
  4. Use Help me write in more languages: Starting today, Help me write in Gmail and Docs is supported in Spanish and Portuguese.




Ask Photos

Google Photos is getting a new feature called “Ask Photos,” which uses Gemini to answer questions about photos and videos, such as finding specific images or recalling past events. This feature will be available beginning with the US and rolling out to other countries soon.


Imagen 3

Imagen 3, Google’s latest text-to-image model, is now available to select creators in private preview. It generates images with incredible detail, high-quality lighting, fewer distracting artefacts, and significantly improved text rendering.

It will be available in three model variants: one optimised for speed, one balancing speed and quality, and one prioritising the highest quality images with the best text alignment.

Upgrades to image generation in Workspace and in the Gemini app and web experience are coming soon. Imagen 3 will be available wherever Vertex AI and ImageFX are available, via a waitlist.



Veo

Veo is Google’s most capable video generation model, capable of creating high-quality 1080p videos that can run a minute or longer.

Veo closely follows user prompts and offers unprecedented creative control, accurately following directions like quick zooming or slow-motion crane shots. It captures the nuance and emotional tone of prompts in various visual styles, from photorealism to animation, and maintains consistency across complex details.

Veo builds upon years of generative video model work and combines architecture, scaling laws, and novel techniques to improve latency and output resolution.

Starting today, Veo is available to select creators in a private preview in VideoFX via a waitlist. In the future, Veo’s capabilities will come to YouTube Shorts and other products.


Music AI tools

Google is collaborating with musicians, songwriters, and producers, in partnership with YouTube, to better understand the role of AI in music creation.

Together, they are developing a suite of music AI tools that can create instrumental sections, transfer styles between tracks, and more.

These collaborations inform the development of generative music technologies like Lyria, Google’s most advanced family of models for AI music generation.

New experimental music created with these tools by Grammy winner Wyclef Jean, electronic musician Marc Rebillet, songwriter Justin Tranter, and others was released on their respective YouTube channels at I/O.

SynthID for text and video

Google is extending SynthID, its AI watermarking technology, to text and video. For video, SynthID embeds a digital watermark directly into the pixels of each frame, imperceptible to the human eye but detectable for identification; for text, the watermark is woven into the model’s output as it is generated. This technology will be integrated into Gemini and Search’s creative queries.
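SynthID’s actual scheme is proprietary and far more robust, but the general idea of pixel-level watermarking – hiding a machine-readable signal in image data without visibly changing it – can be illustrated with a classic least-significant-bit (LSB) watermark. This sketch is an analogy only, not how SynthID works internally:

```python
# Illustrative LSB watermark: hides bits in the lowest bit of each pixel.
# This is NOT SynthID's algorithm -- just a toy analogy for the concept of
# an imperceptible-but-detectable pixel watermark.

def embed(pixels, bits):
    """Hide a bit string in the least-significant bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
    return out

def extract(pixels, n):
    """Recover the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

original = [200, 201, 202, 203, 204, 205, 206, 207]  # toy 8-pixel "image"
mark = [1, 0, 1, 1, 0, 0, 1, 0]                      # toy watermark bits

stamped = embed(original, mark)
assert extract(stamped, 8) == mark
# Each pixel changes by at most 1 out of 255 levels, so the watermark is
# invisible to the eye yet trivially recoverable by a detector.
assert all(abs(a - b) <= 1 for a, b in zip(original, stamped))
```

An LSB mark is easily destroyed by compression or resizing, which is precisely why production systems like SynthID use much more robust embedding; the sketch only conveys the imperceptible-vs-detectable trade-off.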

AI Test Kitchen

AI Test Kitchen is expanding its reach, now available in over 100 countries and territories, including several in Sub-Saharan Africa like Kenya, Nigeria, South Africa, and more. Users can now experience and provide feedback on Google’s latest AI technologies, like ImageFX and MusicFX, in 37 languages, including Arabic, Chinese, English, French, German, Hindi, Japanese, Korean, Portuguese, and Spanish.

