Google I/O 2025: Everything announced at this year's developer conference

Google I/O 2025, Google's biggest developer conference of the year, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We are on the ground, bringing you the latest updates from the event.

I/O showcases products from across Google's portfolio. We expect plenty of news about Android, Chrome, Google Search, YouTube and, of course, Google's AI chatbot, Gemini.

Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection program, safety tools to protect against scams and theft, and a new design language called Material 3 Expressive.

Here is everything announced at Google I/O 2025.

Gemini Ultra

According to Google, Gemini Ultra (US-only for now) delivers the "highest level of access" to Google's AI-powered apps and services. It's priced at $249.99 per month and includes Google's Veo 3 video generator, the company's new Flow video app, and a powerful AI capability called Gemini 2.5 Pro Deep Think, which hasn't launched yet.

AI Ultra comes with higher limits in Google's NotebookLM platform and Whisk, the company's image remixing app. AI Ultra subscribers also get access to Google's Gemini chatbot in Chrome; some "agentic" tools powered by the company's Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.

Deep Think in Gemini 2.5 Pro

Deep Think is an "enhanced" reasoning mode for Google's flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to a question before responding, boosting its performance on certain benchmarks.

Google didn't explain in detail how Deep Think works, but it could be similar to OpenAI's o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem.

Deep Think is available to "trusted testers" via the Gemini API. Google said it's taking extra time to conduct safety evaluations before rolling Deep Think out more widely.
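Since Deep Think is surfaced through the Gemini API, a request would presumably look like any other Gemini API call, just pointed at a Deep Think-enabled model once Google makes one public. Here is a minimal sketch using the google-generativeai Python SDK; the model identifier is a placeholder, since Google hasn't published one for Deep Think.

```python
# Minimal sketch: calling the Gemini API with the google-generativeai SDK.
# The model name below is a placeholder; Google has not published an
# identifier for the Deep Think reasoning mode, which remains limited to
# trusted testers for now.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Swap in a Deep Think-enabled model ID if and when Google releases one.
model = genai.GenerativeModel("gemini-2.5-pro")

response = model.generate_content(
    "A farmer needs to cross a river with a wolf, a goat, and a cabbage. "
    "How does he get everything across without anything being eaten?"
)
print(response.text)
```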

Veo 3 AI video-generating model

Google claims that Veo 3 can generate sound effects, background noise, and even dialogue to accompany the videos it creates. Veo 3 also improves on its predecessor, Veo 2, in the quality of the footage it can generate, Google says.

Veo 3 is available starting Tuesday in Google's Gemini chatbot app for subscribers to the company's $249.99-per-month AI Ultra plan, where it can be prompted with text or an image.

Imagen 4 AI image generator

According to Google, Imagen 4 is fast, faster than Imagen 3, and it's about to get faster still. In the near future, Google plans to release a variant of Imagen 4 that's up to 10x quicker than Imagen 3.

According to Google, Imagen 4 can render "fine details" like fabrics, water droplets, and animal fur. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and at up to 2K resolution.

Both Veo 3 and Imagen 4 will be used to power Flow, the company's AI-powered video tool geared toward filmmaking.

Gemini app updates

Google announced that the Gemini apps now have more than 400 million monthly active users.

Gemini Live's camera and screen-sharing capabilities are rolling out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real-time verbal conversations with Gemini while streaming video from their smartphone's camera or screen to the AI model.

Google says Gemini Live will also integrate more deeply with its other apps in the coming weeks: it will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks.

Google says it's updating Deep Research, Gemini's AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images.

Stitch

Stitch is an AI-powered tool that helps people design the front ends of web and mobile apps by generating the necessary UI elements and code. Stitch can be prompted to create app designs with a few words, or even an image, and provides HTML and CSS markup for the designs it generates.

Stitch is a bit more limited in what it can do compared with other vibe coding products, but it offers a fair amount of customization.

Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.

Project Mariner

Project Mariner is Google's experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and it is now rolling the agent out to users.

Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website, for example. People simply chat with Google's AI agent, and it visits websites and takes actions on their behalf.

Project Astra

Project Astra, Google's low-latency, multimodal AI experience, will power a range of new experiences in Search, the Gemini AI app, and products from third-party developers.

Project Astra was born out of Google DeepMind as a way to showcase nearly real-time multimodal AI capabilities. The company says it's now building Project Astra glasses with partners including Samsung and Warby Parker, though it doesn't have a set launch date yet.

AI Mode

Google is rolling out AI Mode, an experimental Google Search feature that lets people ask complex, multi-part questions through an AI interface, to users in the US this week.

AI Mode will support the use of complex data in sports and finance queries and offer "try it on" options for apparel. Search Live, coming later this summer, will let you ask questions based on what your phone's camera is seeing in real time.

Gmail is the first app to be supported with personalized context.

Beam 3D teleconferencing

Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and a custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts the video from the cameras, which are positioned at different angles and pointed at the user, into a 3D rendering.

Google claims Beam offers "near-perfect" millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides a real-time speech translation feature that preserves the voice, tone, and expressions of the original speaker.

Speaking of Google Meet, Google announced that Meet is getting real-time speech translation.

More AI updates

Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that helps them quickly understand the context of a page and get tasks done.

Gemma 3n is a model designed to run "smoothly" on phones, laptops, and tablets. It's available in preview starting Tuesday; according to Google, it can handle audio, text, images, and video.
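Because Gemma 3n is an open model meant to run on-device, it should be usable with standard local inference tooling once the preview weights are available. Below is a rough sketch using the Hugging Face transformers library for plain text generation; the checkpoint name is an assumption rather than a confirmed identifier, and the preview may require an updated transformers release.

```python
# Rough sketch: running an open Gemma checkpoint locally with Hugging Face
# transformers. "google/gemma-3n-E2B-it" is an assumed preview checkpoint
# name; check Google's release notes for the actual identifier and license.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # assumed checkpoint name
    device_map="auto",               # use a GPU if one is available
)

result = generator(
    "Explain in one sentence why small on-device models are useful.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```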

The company also announced a host of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content.

Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google's SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers an experimental music production app, is now available via an API.

Wear OS 6

Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches get dynamic theming that syncs app colors with watch faces.

The core promise of the new design reference platform is to let developers build better customization into apps, along with seamless transitions. The company is releasing design guidelines for developers, along with Figma design files.

Google Play

Google is beefing up the Play Store for Android developers with fresh tools for handling subscriptions, topic pages that let users dive into specific interests, audio samples that give people a preview of app content, and a new checkout experience for selling add-ons.

Topic browse pages for movies and shows (US-only for now) will connect users with apps related to tons of shows and movies. In addition, developers are getting dedicated pages for testing and releases, as well as tools to monitor and improve their app rollouts. Developers will also now be able to halt live app releases if a critical problem arises.

Subscription management tools are also getting an update with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside their main subscriptions, all under a single payment.

Android Studio

Android Studio is integrating new AI capabilities, including "Journeys," an "agentic AI" feature that coincides with the release of the Gemini 2.5 Pro model. An "Agent Mode" will also be able to handle more intricate development processes.

Android Studio will also get new AI features, including an enhanced "crash insights" capability in the App Quality Insights panel. Powered by Gemini, this improvement will analyze an app's source code to identify potential causes of crashes and suggest fixes.
