
Google AI Mode Rolls Out to US, Bringing Deep Search, AI Shopping, and More to Transform Search

6:24 AM | 21 May 2025


At its annual developer conference, Google I/O 2025, Google unveiled a significant expansion of its artificial intelligence capabilities within its core search product. The experimental Google Search feature known as AI Mode, which has been available for testing in Google’s Search Labs, is now rolling out to all users in the United States starting this week. This move represents a major step in Google’s strategy to integrate generative AI more deeply into the search experience, transforming how users find information and interact with the web.

The rollout of AI Mode builds upon Google’s existing AI-powered search feature, AI Overviews. Launched last year, AI Overviews display AI-generated summaries at the top of search results pages, aiming to provide quick, synthesized answers to user queries. While AI Overviews have seen rapid adoption, reaching over 1.5 billion monthly users according to Google, they have also faced scrutiny for occasional inaccuracies and questionable suggestions, famously including a recommendation to use non-toxic glue to keep cheese on pizza. Despite these early challenges, Google is pushing forward, announcing that AI Overviews will expand globally to over 200 countries and territories, supporting more than 40 languages.

AI Mode, however, represents a more advanced and interactive evolution of AI in search. Designed to handle complex, multi-part questions and support follow-up queries in a conversational interface, it is a direct answer to emerging AI companies such as Perplexity and OpenAI, whose AI-powered web search features have begun to encroach on Google’s traditional search territory. By making AI Mode broadly available, Google is signaling its vision for the future of search: one that is less about navigating lists of links and more about engaging in a dialogue with an intelligent agent that can understand nuance and context.

Beyond Summaries: The Power of AI Mode’s New Capabilities

As AI Mode becomes more widely accessible, Google is highlighting several powerful new capabilities designed to tackle increasingly complex user needs. These features go far beyond simply summarizing web pages, aiming to perform deeper analysis, assist with tasks, and provide more comprehensive, actionable results.

Deep Search: Unlocking Comprehensive Research

One of the flagship features within AI Mode is “Deep Search.” While AI Mode itself can break down a complex question into subtopics, Deep Search takes this concept to an entirely new level. Google explains that Deep Search can issue dozens, or even hundreds, of individual queries behind the scenes to gather information relevant to a user’s complex request. The result is not just a simple answer, but a fully cited report generated in minutes, complete with links to the sources used. Google suggests this feature could save users hours of manual research.

Imagine you’re planning a major purchase, like a new home appliance, or trying to find the perfect summer camp for your children. Traditionally, this would involve visiting numerous websites, comparing specifications, reading reviews, checking prices, and compiling information manually. Deep Search aims to automate this laborious process. By asking a complex, multi-faceted question – for example, “Compare the energy efficiency, user reviews, and warranty options for top-rated smart refrigerators under $2000” or “Find summer camps in the Bay Area for 10-year-olds that offer coding and outdoor activities, comparing costs and session dates” – Deep Search can delve into the web, synthesize the relevant data points from various sources, and present a structured, comparative report. This capability has the potential to significantly streamline decision-making processes that require extensive information gathering.
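To make the pattern concrete, here is a minimal Python sketch of the query fan-out approach Deep Search is described as using: decompose one complex request into many narrow sub-queries, fetch sources for each, and synthesize a cited report. Google has not published Deep Search’s internals, so every function below (decompose, web_search, synthesize_report) is a hypothetical stand-in rather than a Google API.

```python
# Minimal sketch of a "query fan-out" research pattern, per the Deep Search
# description. All functions are hypothetical stand-ins, not Google APIs.
from concurrent.futures import ThreadPoolExecutor

def decompose(request: str) -> list[str]:
    # A real system would use an LLM to split the request into focused
    # sub-queries; this stub fakes a handful for illustration.
    return [f"{request} - aspect {i}" for i in range(1, 4)]

def web_search(query: str) -> list[dict]:
    # Stand-in for an actual search backend.
    return [{"url": f"https://example.com/{abs(hash(query)) % 1000}",
             "snippet": f"result for {query!r}"}]

def synthesize_report(request: str, sources: list[dict]) -> str:
    # Stand-in for an LLM pass that writes a report citing each source.
    citations = "\n".join(f"- {s['url']}" for s in sources)
    return f"Report: {request}\n\nSources:\n{citations}"

def deep_search(request: str) -> str:
    sub_queries = decompose(request)  # a real system issues dozens or hundreds
    with ThreadPoolExecutor(max_workers=16) as pool:
        result_sets = list(pool.map(web_search, sub_queries))
    sources = [r for results in result_sets for r in results]
    return synthesize_report(request, sources)

print(deep_search("top-rated smart refrigerators under $2000"))
```

The fan-out step is embarrassingly parallel, which is how a system like this can issue hundreds of queries behind the scenes and still return a cited report in minutes rather than hours.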

AI-Powered Shopping: From Virtual Try-On to Automated Purchase

Google is also integrating advanced AI into the shopping experience within AI Mode. A notable upcoming feature is a virtual “try it on” option for apparel. Users will be able to upload a picture of themselves, and the AI will generate an image of them wearing the item in question. This isn’t just a simple overlay; Google states the feature will have an understanding of 3D shapes and fabric types, allowing it to realistically render how the garment would drape and stretch on the user’s body. This feature is beginning its rollout in Search Labs, indicating Google’s commitment to making online shopping more interactive and personalized.

Looking ahead, Google plans to introduce an even more ambitious shopping tool for U.S. users. This feature will act as an agent that can purchase items on your behalf once a specific price target is met. While the user will still need to initiate the action by clicking a “buy for me” button, this capability hints at a future where AI agents can execute tasks based on user preferences and market conditions, moving beyond information retrieval to active participation in online transactions. These shopping features demonstrate Google’s intent to keep users within its ecosystem for more than just searching, extending its reach into the transactional aspects of online activity.
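Google has not described the mechanics of this agent, but the behavior outlined (watch a price, then hand the purchase back to the user to confirm) maps onto a simple watch-and-confirm loop. A rough Python sketch follows; every helper is a hypothetical stand-in, and the simulated price exists only to make the example run.

```python
# Illustrative sketch of a price-watch agent like the announced "buy for me"
# flow: poll a listing until it hits the user's target, then require the
# user's confirmation before any purchase. All helpers are hypothetical.
import random
import time

def get_current_price(product_url: str) -> float:
    # Stand-in for a merchant API or page scrape; simulates price movement.
    return round(random.uniform(180.0, 260.0), 2)

def request_user_confirmation(product_url: str, price: float) -> bool:
    # The announced flow still requires the user to click "buy for me";
    # this stub models that confirmation step.
    print(f"Target hit: {product_url} at ${price} - confirm purchase?")
    return True

def place_order(product_url: str) -> None:
    # Hypothetical checkout step, executed only after user confirmation.
    print(f"Order placed for {product_url}")

def watch_and_buy(product_url: str, target_price: float,
                  poll_seconds: float = 0.1) -> None:
    # Poll until the price meets the target, then hand control back to
    # the user before anything is purchased.
    while True:
        price = get_current_price(product_url)
        if price <= target_price and request_user_confirmation(product_url, price):
            place_order(product_url)
            return
        time.sleep(poll_seconds)

watch_and_buy("https://example.com/tickets/indie-show", target_price=200.0)
```

The key design point is that the agent automates the watching, not the deciding: the human stays in the loop at the moment money changes hands.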

Handling Complex Data: Sports, Finance, and Beyond

AI Mode is also being enhanced to support the analysis of complex data sets, initially focusing on sports and finance queries. This feature, available through Labs “soon,” will allow users to ask intricate questions that require comparing and synthesizing data from multiple sources. For example, a user could ask, “Compare the Phillies and White Sox home game win percentages by year for the past five seasons.”

The AI will be able to search across various sports statistics databases and news sources, aggregate the relevant data points, and present them in a single, coherent answer. Crucially, it can also create visualizations on the fly, such as charts or graphs, to help users better understand the trends and comparisons within the data. This capability is particularly valuable for researchers, analysts, or simply curious individuals who need to quickly make sense of structured data without manually collecting and organizing it from disparate sources. The potential applications extend beyond sports and finance to any domain where comparative or trend analysis of data is required.
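Under the hood, the sports example is a familiar group-aggregate-plot pipeline. Here is a small Python sketch of that kind of analysis using pandas and matplotlib; the numbers are fabricated purely for illustration and are not real statistics.

```python
# Sketch of the compare-and-visualize answer described for sports queries:
# compute home-game win percentages per team per season, then chart them.
# The data below is made up for illustration only.
import pandas as pd
import matplotlib.pyplot as plt

games = pd.DataFrame({
    "team":       ["Phillies"] * 3 + ["White Sox"] * 3,
    "season":     [2022, 2023, 2024] * 2,
    "home_wins":  [47, 44, 50, 40, 25, 27],   # fabricated values
    "home_games": [81] * 6,
})

# Aggregate: home win percentage per team per season.
games["win_pct"] = 100 * games["home_wins"] / games["home_games"]
pivot = games.pivot(index="season", columns="team", values="win_pct")

# Visualize the comparison, as AI Mode is said to do on the fly.
pivot.plot(kind="bar", ylabel="Home win %", title="Home win % by season")
plt.tight_layout()
plt.show()
```

In AI Mode, the decomposition, aggregation, and choice of chart would all be driven by the model rather than hand-written, but the shape of the work is the same.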

Project Mariner: Enabling Agent-Like Actions

Underpinning some of these new capabilities is Project Mariner, Google’s agent technology designed to interact with the web and take actions on a user’s behalf. Initially, this agent functionality will be available for queries involving restaurants, events, and other local services. Instead of simply providing a list of restaurants or event listings, AI Mode, leveraging Project Mariner, can save users time by researching prices, checking availability, and potentially even assisting with booking or purchasing across multiple sites. For instance, if you’re looking for affordable concert tickets, the AI could scour various ticketing platforms, compare prices and seating options, present the best deals, and even guide you through the purchase process.
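Conceptually, this kind of agentic research is a fan-out across providers followed by normalization and ranking. A simplified Python sketch, with hypothetical per-site fetchers standing in for the actual cross-site browsing Mariner would perform:

```python
# Conceptual sketch of multi-site research as described for Project
# Mariner: query several ticketing platforms, normalize the offers, and
# surface the best deals. The fetchers and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Offer:
    site: str
    section: str
    price: float

def fetch_site_a(event: str) -> list[Offer]:
    return [Offer("site-a.example", "Balcony", 89.0)]   # stub data

def fetch_site_b(event: str) -> list[Offer]:
    return [Offer("site-b.example", "Floor", 149.0),
            Offer("site-b.example", "Balcony", 79.0)]   # stub data

def best_deals(event: str, max_price: float) -> list[Offer]:
    # Normalize across providers, filter to budget, rank by price.
    offers = fetch_site_a(event) + fetch_site_b(event)
    affordable = [o for o in offers if o.price <= max_price]
    return sorted(affordable, key=lambda o: o.price)

for offer in best_deals("indie rock show", max_price=100.0):
    print(f"{offer.site}: {offer.section} ${offer.price:.2f}")
```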

This move towards agent-like capabilities signifies a shift in Google Search from being a passive information provider to an active assistant that can help users complete tasks. It positions Google to compete with dedicated service platforms and booking sites by integrating these functionalities directly into the search experience.

Search Live: Real-Time Visual and Audio Interaction

Perhaps one of the most futuristic features announced is Search Live, set to roll out later this summer. This capability will allow users to ask questions based on what their phone’s camera is seeing in real time. This goes significantly beyond the existing visual search capabilities of Google Lens, which primarily identifies objects or provides information based on a single image. Search Live enables an interactive, back-and-forth conversation with the AI using both video and audio input.

This feature is reminiscent of Google’s multimodal AI system, Project Astra, which demonstrated the ability to understand and discuss the user’s environment in real time. With Search Live, you could point your camera at a complex piece of machinery and ask, “What is this part and how do I fix it?” or walk through a foreign city and ask about landmarks or directions based on what you see. The AI could process the visual stream, understand your spoken questions, and respond verbally, creating a truly interactive and context-aware search experience tied to the physical world. This represents a profound evolution in how users can query information, moving from text-based input to a dynamic, multimodal interaction.
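No public API for Search Live exists, but the interaction loop it describes (capture camera frames, pair them with a spoken question, return an answer) can be sketched. In the Python below, only the OpenCV camera capture is real; MultimodalClient is a hypothetical stand-in for the model call.

```python
# Conceptual sketch of a Search Live-style loop: stream camera frames plus
# a question to a multimodal model. MultimodalClient is a hypothetical
# stand-in; Google has not published an API for this feature.
import cv2  # pip install opencv-python

class MultimodalClient:
    def ask(self, frames: list, question: str) -> str:
        # Stand-in for a real multimodal call (video + text in, text out).
        return f"(model answer about {len(frames)} frames: {question!r})"

def search_live(question: str, n_frames: int = 8) -> str:
    cam = cv2.VideoCapture(0)            # default camera
    frames = []
    try:
        while len(frames) < n_frames:
            ok, frame = cam.read()       # grab a short burst of frames
            if not ok:
                break
            frames.append(frame)
    finally:
        cam.release()
    return MultimodalClient().ask(frames, question)

print(search_live("What is this part and how do I fix it?"))
```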

Personalization Through Google Apps

Adding another layer of utility, Google announced that search results will become more personalized based on users’ past searches and, optionally, by connecting their Google Apps. This feature, also rolling out this summer, allows users to grant AI Mode access to data within their Google services, starting with Gmail.

For example, if you connect your Gmail account, AI Mode could access booking confirmation emails to understand your upcoming travel dates and destination. Armed with this context, it could then proactively recommend events, restaurants, or activities taking place in that city during your visit. This level of personalization, while potentially very helpful, also raises privacy considerations. Google acknowledges this, emphasizing that connecting apps is optional and that users can connect or disconnect them at any time. The feature highlights the potential for AI to act as a proactive assistant, anticipating needs based on a user’s digital footprint, but it also underscores the importance of consent and data control.
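The flow Google describes (with consent, read confirmation emails, extract trip details, recommend relevant local events) can be illustrated with a short sketch. Both the email fetch and the event lookup below are hypothetical stand-ins, not the Gmail or Search APIs, and a real implementation would run only after an explicit user opt-in.

```python
# Sketch of the described personalization flow: parse a booking-confirmation
# email for trip details, then look up events in that window. Both helpers
# are hypothetical stand-ins, not Google APIs.
import re

def fetch_booking_emails() -> list[str]:
    # Real flow: the Gmail API, only after the user opts in and grants scope.
    return ["Your flight to Austin is confirmed for 2025-07-12 to 2025-07-16."]

def extract_trip(email: str) -> dict | None:
    m = re.search(
        r"flight to ([\w ]+?) is confirmed for (\d{4}-\d{2}-\d{2}) to (\d{4}-\d{2}-\d{2})",
        email,
    )
    if not m:
        return None
    return {"city": m.group(1), "start": m.group(2), "end": m.group(3)}

def recommend_events(trip: dict) -> list[str]:
    # Stand-in for an events lookup scoped to the trip's city and dates.
    return [f"Live music in {trip['city']} during {trip['start']} - {trip['end']}"]

for email in fetch_booking_emails():
    trip = extract_trip(email)
    if trip:
        print(recommend_events(trip))
```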

The Engine Behind the Evolution: Gemini 2.5

Powering both the expanded AI Overviews and the new AI Mode capabilities is a custom version of Google’s Gemini 2.5 large language model. Google states that the advanced capabilities initially debuting in AI Mode will gradually roll out and become integrated into the standard AI Overviews experience over time. This suggests a phased approach, where AI Mode serves as a testing ground for cutting-edge features before they are broadly deployed across the main search interface.

The reliance on Gemini 2.5 underscores Google’s commitment to using its most advanced AI models to redefine search. The capabilities demonstrated – complex query understanding, multi-source data synthesis, agentic actions, and multimodal processing – require a highly sophisticated model capable of handling diverse data types and executing multi-step reasoning.

The Evolving Landscape of Search

Google’s aggressive rollout of these AI features is a clear response to the rapidly evolving landscape of information access. The rise of generative AI chatbots and specialized AI search engines has challenged Google’s long-held dominance in the search market. Companies like Perplexity AI offer conversational interfaces and cited answers, directly competing with the core function of AI Overviews and AI Mode.

Google’s strategy appears to be twofold: first, to enhance its existing search product with AI summaries (AI Overviews) and second, to push the boundaries of what search can do with a more interactive, task-oriented interface (AI Mode) and its advanced features like Deep Search, Search Live, and agent capabilities. This isn’t just about providing answers; it’s about facilitating complex research, assisting with real-world tasks, and offering personalized experiences, all within the Google Search environment.

The shift from keyword-based queries to natural language conversations and task execution represents a fundamental change in user interaction with search engines. Users are increasingly expecting AI to understand their intent, handle ambiguity, and perform actions on their behalf, rather than simply returning a list of links to external websites. Google is clearly positioning AI Mode and its associated features as its answer to these evolving user expectations and the competitive pressures from the AI startup ecosystem.

Challenges and the Road Ahead

While the potential of these new features is significant, Google still faces challenges. The early stumbles with AI Overviews, such as the infamous “glue on pizza” suggestion, highlight the ongoing need for accuracy, reliability, and safety in AI-generated content. As AI takes on more complex tasks, including synthesizing data for Deep Search reports or performing actions via Project Mariner, the potential for errors or unintended consequences increases. Maintaining user trust will be paramount.

Transparency is also key. Google’s commitment to providing citations in features like Deep Search is a positive step, allowing users to verify information and delve deeper into the sources. However, the complexity of the underlying processes – issuing dozens or hundreds of queries, synthesizing vast amounts of data – makes it challenging for users to fully understand how an answer was derived. Ensuring users understand the nature of the AI’s output and its limitations will be an ongoing effort.

Furthermore, the integration of personalization features based on connected Google Apps raises important questions about data privacy and security. While Google emphasizes user control, the potential for a more comprehensive profile of user activity being used to tailor search results requires clear communication and robust privacy safeguards.

Conclusion

The rollout of Google’s AI Mode to all U.S. users and the global expansion of AI Overviews, coupled with the introduction of powerful new features like Deep Search, AI-powered shopping, complex data analysis, Project Mariner’s agent capabilities, and Search Live, mark a pivotal moment in the evolution of Google Search. These developments, powered by Gemini 2.5, signal Google’s determination to remain at the forefront of information access in the age of generative AI.

By moving beyond traditional keyword search to embrace conversational interfaces, deep research capabilities, task execution, and real-time multimodal interaction, Google is attempting to redefine what a search engine can be. The vision is clear: a more intelligent, personalized, and actionable search experience that can handle the complexity of human queries and assist users in navigating both the digital and physical worlds. While challenges related to accuracy, transparency, and privacy remain, the features unveiled at Google I/O 2025 demonstrate a bold step towards a future where AI is not just augmenting search, but fundamentally transforming it.