Google I/O 2025: Smarter search, Beam, and Gemini upgrades

At its annual I/O 2025 conference, Google revealed a powerful lineup of AI-driven innovations aimed at transforming the way users search, communicate, and interact with technology. From more intelligent search capabilities and real-time translation in meetings to lifelike video avatars and advanced research tools, the tech giant is embedding artificial intelligence deeper across its platforms. With over 400 million active Gemini users and billions interacting with AI in Search, Google is signaling a bold new chapter in consumer and enterprise AI.

Image credit: Freepik

At Google I/O 2025, the tech giant unveiled several major AI-powered tools and updates that are set to reshape user experience across its ecosystem. One of the biggest announcements was the introduction of AI Mode for Search. This new feature allows users to type much longer and more detailed queries—up to three times the usual length—making searches deeper and more context-aware. It will first be rolled out in the U.S. and is part of Google’s push to make Search a more intelligent and interactive experience.

Another key highlight was Google Beam, a new communication platform designed to create 3D avatars of people during video calls. Using cameras and AI, Beam builds real-time, lifelike representations of participants, offering a more immersive version of video chat. Positioned as a more accessible alternative to the earlier Project Starline, Beam aims to bring high-end virtual communication to a wider audience.

On the advanced research front, DeepMind CEO and Nobel laureate Demis Hassabis introduced Gemini 2.5 Pro Deep Think, an AI system designed for high-level reasoning and problem-solving using parallel thinking techniques. Though still in early testing and limited to select users, this tool is expected to play a major role in the future of research and analytical applications.

Google Meet is also getting smarter. A new live translation feature is being rolled out that translates speech in real time while preserving the speaker's tone and delivery. The tool currently supports translation between English and Spanish, with more languages to be added soon. The feature is available in beta for paid subscribers and aims to make cross-language communication smoother during virtual meetings.

The event also showcased updates to Google’s real-time AI camera tools. With Search Live, users will be able to point their phone’s camera at anything and ask questions about it directly within Search. This functionality is also coming to the Gemini app on iOS, expanding from its earlier availability on Android.

Additionally, Gemini Live, which was initially exclusive to Pixel devices, is now expanding to all supported Android and iOS phones through the Gemini app. This feature allows users to ask questions about what they see through their phone cameras or in screenshots, further blending AI with everyday tasks.

Google CEO Sundar Pichai also shared new user stats: the Gemini chatbot app now serves over 400 million monthly users, and AI Overviews in Search have reached 1.5 billion monthly users, highlighting the massive scale and adoption of Google’s AI advancements.

