As the company’s flagship developer conference, Google I/O always has plenty to show each year, and 2025 is no exception. Like clockwork, a slew of announcements were made during the keynote held on 20 May, with artificial intelligence (AI) taking centre stage.
The technological advancements span Google’s entire portfolio, from Search and Gemini to Android, alongside efforts with external partners and collaborators. For easier reference, here’s the lowdown on the biggest news from Google I/O 2025 across productivity, content creation, and everything in between.
1) Google Beam
Formerly known as Project Starline, Google Beam is touted as the tech giant’s “AI-first 3D video communication platform” that transforms standard 2D video streams into realistic 3D experiences. It runs on a combination of software and hardware, such as a six-camera array positioned at different angles, to create a sense of depth and dimensionality, delivering 60fps streaming and millimetre-level head tracking.
Beam will also offer near-real-time speech translation when used with Google Meet, while preserving the voice, tone, and expression of the original speaker. Google is working with Zoom and HP to bring the technology to enterprises, and the first devices are set to launch for select customers later this year.
2) Real-time speech translations in Meet

Powered by a large language audio model from Google DeepMind, speech translation in Meet translates spoken words into the preferred language of the user’s conversation partner, with the dubbed version overlaid on top of the original audio. It will first be available in English and Spanish, followed by Italian, German, and Portuguese in the coming weeks, and is now live in beta for subscribers of Google’s AI Pro plan or the newly announced AI Ultra tier.
3) AI Mode
AI Mode leverages Google’s proprietary query fan-out technique for more advanced reasoning and multimodality, along with the ability to dig deeper through follow-up questions and helpful web links. Deep Search can produce a fully cited report in a few minutes by gathering hundreds of results and reasoning across scattered pieces of information, while new agentic capabilities help users with tasks like purchasing tickets or making restaurant reservations.
The most interesting addition to the list is Shop on AI Mode, where the worlds of e-commerce and AI collide. Alongside over 50 million product listings, it introduces virtual try-on technology that allows shoppers to preview outfits on themselves just by uploading a photo – the first of its kind to work at this scale.
AI Mode is only available in the U.S. for now, with more countries coming soon.
4) Gemini Live And Search Live

Gemini Live boasts camera and screen-sharing capabilities, letting users stream video from their smartphone’s camera or screen to the AI model and hold near-real-time verbal conversations with it. Search Live brings similar smarts to Search: simply tap the “Live” icon in AI Mode or in Lens, point the camera, and ask about the subject to get explanations of tricky concepts, suggestions, and links to different resources, including websites, videos, forums, and more.
Gemini Live will be integrated more deeply with other Google apps in the coming weeks, bringing day-to-day conveniences like offering directions from Google Maps, creating events in Calendar, and making to-do lists with Tasks.
5) Flow
A new video tool tailored for filmmaking, Flow is powered by Google’s trio of advanced generative AI models: Veo for video production, Imagen for image generation, and Gemini for text prompts. It lets users import their own characters or scenes, or create those assets within the platform, and grants access to key features that include camera controls, a scene builder for editing or extending existing shots, asset management, and Flow TV – a showcase of clips, channels, and content generated with Veo.
Several upgrades are also coming to the underlying models. Veo 3, for instance, can generate videos with audio for the first time, while Imagen 4 creates images in various aspect ratios and at up to 2K resolution. Additionally, Google is set to launch SynthID Detector, a verification portal that identifies AI-generated content.
6) Android XR

Android XR is the first Android platform built in the Gemini era, designed to power an ecosystem of headsets, glasses, and more. Equipped with a camera, microphones, and speakers, smart glasses running on it work alongside smartphones to display turn-by-turn directions, take photos, message friends, perform live language translation, and more, without users needing to reach into their pockets. Gentle Monster and Warby Parker will be the first two eyewear brands to bring it to life, with more collaborations expected in due time.
7) Stitch
Running on the Gemini 2.5 Pro model, Stitch turns rough user interface (UI) designs into app-ready ones, giving developers a better idea of how their conceptualisations will take shape. It works with wireframes, initial sketches, and screenshots of other UI templates, and touts the ability to turn text prompts and reference images into “complex UI designs and frontend code in minutes”.
The Stitch experiment is now available in Labs.