Google Tests AI-Generated Audio Overviews for Search Queries


Google is now testing a new feature called Audio Overviews in its search interface, allowing users to listen to AI-generated summaries of their queries. The move is part of a broader push by the technology giant to integrate its powerful Gemini artificial intelligence across its products, transforming the traditional search experience into a multimodal, voice-driven one.
In a blog post on Friday, Google said the feature, now available in Google Labs, will generate quick, conversational Audio Overviews for select queries using its Gemini models. The company explained that the new experience aims to help users "get a lay of the land" while multitasking or simply consuming information in an audio format.
The rollout follows the integration of similar audio features in NotebookLM, Google's AI-powered research and note-taking assistant, and more recently in the Gemini mobile app, where users can generate podcast-style summaries of their uploaded documents.
The new audio option comes just weeks after the Wall Street Journal reported that Google's AI Overviews, the text-based summaries introduced earlier this year, are significantly reducing traffic to websites, particularly for publishers and content creators.
According to the newspaper, several media outlets have seen sharp drops in search traffic since the AI-generated summaries began appearing at the top of search results. The feature often answers user queries directly, summarizing content from multiple sources without requiring users to click through to the original websites. Industry experts and publishers have voiced concern that this cannibalizes web traffic, undercuts advertising revenue, and poses an existential threat to journalism.
Audio Overviews could worsen this situation. Instead of skimming summaries or clicking through for more, users can now passively listen to AI narrations, a format that further reduces the incentive to visit the original sources. Although Google says it includes links to source material within the audio player, there are fears that most listeners will ignore these references.

Google maintains that the goal is to improve accessibility and enhance the user experience. In its announcement, it said the feature will appear only for certain queries and will be refined based on user feedback. The audio player comes with controls for playback speed, volume, and pausing, along with visible source links that users can explore for more in-depth information.
But publishers argue that delivering information in this kind of prepackaged audio format, even when it cites original sources, discourages engagement with the original content.
The development is the latest in a series of AI-driven disruptions to the digital publishing ecosystem. Google, along with competitors like OpenAI and Meta, has come under scrutiny for using publishers' content to train AI models without proper compensation or credit. Several newsrooms are already pursuing legal and policy remedies to defend their content and secure a fairer digital economy.

For now, Audio Overviews remain experimental and available only to a limited group of users enrolled in Google Labs. But if the feature rolls out widely, it could become a central element of Google's vision for a more AI-integrated search, and another blow to news publishers still struggling to adapt to the shifting dynamics of the web.