Google’s AI Mode Now Opens Links Without Leaving the Page: Here’s What Changes

Google just made the most annoying thing about AI-powered search a lot less annoying. A new update to AI Mode in Chrome means you no longer have to choose between your search conversation and the page you want to read: both now live side by side, right in your browser.

If you’ve used Google’s AI Mode at all since it launched last year, you’ve probably run into the same frustration. You ask a question, you get a thoughtful AI-generated response, you click a source link to verify a claim, and suddenly you’re gone. New tab. New context. You hit back, lose your thread, and start over. It’s the digital equivalent of writing a note on your hand, washing it off, and trying to remember what you wrote.

That workflow is now changing. On April 16, 2026, Google rolled out a significant upgrade to AI Mode in Chrome: clicking any source link now opens the page directly alongside the AI chat in a split-screen view. No new tab, no broken flow, no context loss. It’s live now for users in the US, with a global rollout coming soon.

This isn’t just a cosmetic tweak. It signals something bigger about where Google sees the browser going and what it means for how tens of millions of Americans research, shop, learn, and decide online every day.

The Tab-Hopping Problem Google Is Finally Solving

To understand why this update matters, think about how you actually use search for anything non-trivial. You type a question, read an AI summary, open a source to verify, go back to get more context, open another source, and before you know it you’ve got 11 tabs open, your memory of the original question is fuzzy, and you’re clicking through browser history trying to remember where you started.

Google calls this “tab hopping,” and the company has been quietly obsessing over it as AI Mode evolved throughout late 2025. The solution they landed on is deceptively clean: when you click any link inside AI Mode, the destination page opens in a panel beside the AI chat instead of replacing it or spawning a new tab. The AI conversation stays fully active. You can read the article, watch a video, or check a product page, and ask follow-up questions that pull context from both the web and what’s on the screen right now.

What’s notable here is that Google is essentially turning the browser itself into the AI interface. AI Mode isn’t a pop-up or a widget; it’s becoming the persistent layer that follows you wherever you browse. That’s a meaningful philosophical shift from search-as-destination to search-as-companion.

The Second Big Feature: Search Across Your Open Tabs

The split-screen view gets the headlines, but there’s a second feature in this update that might actually be more powerful for heavy users: the ability to search across multiple open Chrome tabs at once.

A new “plus” menu has been added to the search box on Chrome’s New Tab page and within AI Mode itself. Tap it, and you’ll see a list of your recently opened tabs. Select any combination of tabs, images, and PDF files, and AI Mode pulls them all into a single synthesized search context. You can then ask questions that require the AI to reason across all that material simultaneously.

The use cases here go well beyond casual browsing. As TechCrunch reported, a student studying for an exam can pull in class notes, lecture slides, and academic papers simultaneously and ask the AI to generate additional examples for a tricky concept. Someone shopping for a laptop can open multiple review tabs and ask AI Mode to compare them head-to-head without copying and pasting anything. A freelancer doing competitive research can gather half a dozen competitor pages and ask for a structured comparison in seconds.

This is the kind of multi-document reasoning that power users have been doing with tools like Claude AI and ChatGPT by manually copy-pasting content into prompts. Google is now making it a native browser feature: zero friction, no copy-paste required.

What This Means for How You Actually Search

Let’s be concrete about the day-to-day impact, because it’s easy for feature announcements like this to stay abstract until you’re actually using them.

Shopping scenario: You’re looking for a coffee maker that fits your small apartment and can handle lattes. You type the request into AI Mode, get a curated list with explanations, and click on a model you like; the product page opens beside the chat. You ask AI Mode directly: “How easy is this to clean?” The AI reads the product page in context and answers immediately. No new tab. No search restart. The whole experience stays inside one view.

Research scenario: You’re trying to understand recent US AI policy developments. You have three articles open from CNBC, The Verge, and a government page. You select all three via the plus menu, ask AI Mode to summarize the key points and identify the disagreements. It synthesizes across all three and points you to additional sources.

The pattern is the same in both cases: AI Mode stays present as a reasoning layer, not just an answer machine at the start of a session. That’s genuinely new behavior for a browser.

How This Fits Google’s Bigger AI Search Strategy

This update doesn’t exist in isolation. Over the past year, as TechCrunch noted, Google has been steadily expanding AI Mode’s capabilities: adding outfit and decor generation from descriptions, visualized travel planning, restaurant reservation lookups, and increasingly prominent source attribution after sustained criticism that AI search was hurting publisher traffic.

That last point is worth keeping in mind. The decision to show sources more prominently was partially a response to media organizations complaining that AI overviews were reducing click-through rates to their sites. The side-by-side feature can be read as Google’s answer to that problem: instead of replacing web visits, it’s trying to make web visits better and more contextual. Whether that actually helps publishers in a meaningful way is still an open question: if users can get their answers summarized without really reading the page, the traffic “visit” may not translate to meaningful engagement.

The competitive context matters too. Microsoft’s Copilot in Edge has offered AI-assisted browsing for over a year now. Perplexity has built its entire product around the idea of AI-augmented research with citations. Google is moving to make that same experience the default for everyone using Chrome, which as of early 2026 still holds roughly 65% of the global desktop browser market. When Google makes something the default Chrome experience for the entire US user base, that’s a different scale of deployment than anything a standalone AI app can achieve.

The browser is becoming the AI interface. That’s what this update is really announcing, even if Google isn’t saying it quite that directly. If you want to see where Chrome is headed, check out how the broader Gemini task automation roadmap is taking shape; it connects directly to what AI Mode is building toward.

The Publisher Traffic Question Nobody Is Fully Answering

There is a legitimate concern buried inside this announcement that deserves more attention than it’s getting.

If users can click a source link and have AI Mode immediately summarize the page for them, answering their follow-up questions without the user actually reading the article, what does that mean for publishers’ revenue models? A “visit” in analytics terms might register, but if nobody reads the content, the engagement metrics that determine ad value collapse.

Google’s framing is that the side-by-side view makes it easier to “explore relevant websites.” But exploration and deep reading are different things. The risk is that publishers get the visit count without the real engagement: the worst of both worlds. Expect this debate to intensify when the rollout goes global and traffic data starts rolling in.

Who Gets It and How to Enable It

As of April 16, 2026, the side-by-side link feature and the multi-tab search functionality are live for US English users. To access them, you need Chrome version 146.0.7680.174 or later; open “Help → About Google Chrome” in Chrome’s settings to confirm your version and trigger an update if needed.
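
For readers who script their setup checks, the minimum-build comparison above can be sketched in a few lines. This is a hypothetical helper, not anything Google ships; only the minimum version string (146.0.7680.174) comes from this article:

```python
# Hypothetical helper: compare dotted Chrome version strings numerically.
# Plain string comparison would be wrong ("9" sorts after "146" as text).
def meets_minimum(installed: str, minimum: str = "146.0.7680.174") -> bool:
    """Return True if the installed build is at or above the minimum."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

print(meets_minimum("146.0.7680.174"))  # True: exactly the minimum build
print(meets_minimum("145.0.7600.99"))   # False: one major version behind
```

Tuple comparison handles the lexicographic-by-component logic for free, which is why the version string is split into integers first.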

AI Mode itself is still in Google Labs, meaning you need to have opted into the Labs experiment to access it. If you haven’t done that yet, navigate to labs.google.com in Chrome and enable the AI Mode experiment. Once active, the new split-screen behavior kicks in automatically when you click sources; there’s no additional toggle to flip.

The multi-tab search feature works via the new “plus” icon in either the New Tab page search bar or within AI Mode itself. On desktop, this includes the ability to add tabs, images, and PDFs. Mobile (Android and iOS) gets the tab-selection feature, but the split-screen page view is currently desktop-only. Google has not yet given a firm timeline for the mobile split-screen rollout; Google I/O in May is the likely venue for that announcement.

What Comes Next

Google has confirmed a global rollout is coming “soon,” likely a staged expansion by country and language over the coming weeks. Google I/O 2026, typically held in May, is almost certainly where Google will show the next phase of this roadmap. Based on the trajectory of AI Mode additions over the past year, expect deeper integrations: more tool types accessible via the plus menu, better context retention across sessions, and possibly AI Mode becoming the default start page experience rather than an opt-in Labs feature.

The broader trend is unmistakable. Google is no longer just building a better search engine. It’s building a persistent AI layer that wraps around everything you do in the browser. For everyday users, that’s genuinely useful: once you’ve experienced research without tab-hopping, going back feels clunky. For the ecosystem of publishers, advertisers, and developers who depend on how people interact with the web, the implications are still unfolding.

What’s clear today is that Google moved fast on a real pain point, shipped it clean, and tied it directly to the browser itself. That’s a combination that’s hard to ignore, and harder to compete with for anyone not named Microsoft. The next few months will show whether the productivity gains for users come at a cost to the open web that built them. Keep an eye on this one.

And while Google is reshaping search, it’s worth noting this is part of a broader pattern: as we recently covered, OpenAI is also building a desktop superapp designed to consolidate AI tools into a single persistent interface, a different path to the same destination Google is now racing toward inside Chrome.