Latest news with #GoogleLabs


Express Tribune
2 days ago
- Express Tribune
Google Search Live feature: What is it, where is it available, plus full details
Google has launched a new voice-activated feature, Search Live, for its Google app on iOS and Android, giving users in the United States a more natural, conversational way to search the web. The rollout is part of the company's AI Mode initiative, currently being trialled in Google Labs.

Search Live allows users to speak directly to Google Search, pose follow-up questions, and receive responses without needing to type. The feature is aimed at people searching while multitasking or on the move, offering a hands-free experience similar to Google's existing Gemini Live tool.

By tapping the newly introduced Live icon in the app, users can initiate voice conversations with the search engine. The assistant responds with spoken answers and links to relevant webpages. Users can continue chatting with the AI even while switching between apps, making multitasking smoother. A Transcript feature enables mid-conversation switching between voice and text, while a searchable AI Mode history allows users to return to previous queries and resume where they left off.

"Introducing Search Live with voice in AI Mode, which lets you have free-flowing conversations with Search on the go. Talk with and listen to Search hands-free, get AI-generated audio responses, and learn more with links," Google posted on X (@Google) on June 18, 2025.

Although innovative, Search Live's similarity to Gemini Live has raised questions about Google's strategy. Both services offer overlapping functionality, including camera-based input, a feature Google says is coming to Search Live in future updates despite already being present in Gemini Live. Google has stated that Search Live runs on a customised version of its Gemini AI model with enhanced voice interaction capabilities. However, the introduction of parallel apps with nearly identical tools has led to confusion among users and industry observers.

The new feature is only available in the US at present, with no announcements yet regarding an international rollout.


Hans India
2 days ago
- Hans India
Google Search Now Lets You Chat with AI Using Voice in New 'Search Live' Mode
Google is ushering in a new era of voice-enabled search with its latest feature, Search Live in AI Mode, now rolling out in the Google app for Android and iOS. Previewed at the recent Google I/O conference, this experimental feature is currently available to users in the United States who have opted into AI Mode through Google Labs.

The voice interaction capability is powered by a customized version of Google's Gemini AI model, tailored specifically for natural and dynamic conversations. The system draws on Google's search infrastructure to deliver real-time, high-quality responses that adapt to users' spoken queries. One of the core technologies behind the feature is query fan-out, which broadens the scope of web content shown to users, so people receive not just a direct answer but also a diverse range of sources to explore further.

The new voice feature is designed with mobility and multitasking in mind. Users can tap the 'Live' icon in the Google app and ask something like, "What are some tips for preventing a linen dress from wrinkling in a suitcase?" The AI responds aloud, making it easier to get help while your hands are full, whether you're packing or cooking. Google highlights that users can continue the conversation naturally with follow-ups like, "What should I do if it still wrinkles?", all while viewing relevant web links on screen that provide deeper context without disrupting the flow of conversation.

When activated, the interface shows a sparkle-badged waveform icon beneath the search bar, the same one used for Gemini Live. Alternatively, users can access it via a new button next to the search field. The full-screen view supports both light and dark themes, featuring a gradient 'G' in the top-left corner and an arc-shaped waveform in AI Mode colors. Additional controls include a pill-shaped Mute/Unmute button and a 'Transcript' toggle for switching to text-based interaction.

The feature also supports background operation, so a voice session continues even if the user locks the screen or opens another app. To end the session, users tap the 'X' in the corner. The overflow menu provides access to Search history and Voice settings, including four distinct voices: Cassini, Cosmo, Neso, and Terra, each offering a different interaction style or personality.

As voice-based AI becomes more central to how we engage with technology, Google's latest feature is a major step toward making search more conversational, personalized, and hands-free.
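Google has not published the mechanics of query fan-out; the sketch below only illustrates the idea as described here, where one spoken question is expanded into several related sub-queries, each is searched independently, and the merged results surface a wider range of sources. Every helper name (generate_subqueries, run_search, fan_out) is a hypothetical placeholder, not part of any Google API.

```python
# Illustrative sketch of a "query fan-out" step, assuming the public
# description: expand one question into related sub-queries, search each
# in parallel, then merge and de-duplicate the results. All functions
# here are hypothetical stand-ins.

from concurrent.futures import ThreadPoolExecutor


def generate_subqueries(question: str) -> list[str]:
    # A production system would use a language model here; this stub just
    # fabricates a few related angles on the same question.
    return [
        question,
        f"best practices: {question}",
        f"common mistakes: {question}",
    ]


def run_search(query: str) -> list[dict]:
    # Placeholder for a real search backend call.
    return [{"query": query, "url": "https://example.com/?q=" + query.replace(" ", "+")}]


def fan_out(question: str) -> list[dict]:
    subqueries = generate_subqueries(question)
    # Issue the sub-queries in parallel, then flatten and de-duplicate by URL.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(run_search, subqueries))
    seen, merged = set(), []
    for results in result_lists:
        for item in results:
            if item["url"] not in seen:
                seen.add(item["url"])
                merged.append(item)
    return merged


if __name__ == "__main__":
    for hit in fan_out("how to keep a linen dress from wrinkling in a suitcase"):
        print(hit["query"], "->", hit["url"])
```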

The Hindu
2 days ago
- The Hindu
Google is testing voice feature in search with AI Mode
Google has said it is testing Search Live with voice input for AI Mode in the Google app. The feature is available to Android and iOS users in the U.S. who have opted in through Google Labs.

"This is perfect for when you're on the go or multitasking, like if you're packing for a trip. Simply open the Google app, tap the new 'Live' icon and verbally ask your question, like, 'What are some tips for preventing a linen dress from wrinkling in a suitcase?' You'll hear a helpful AI-generated audio response and you can easily follow up with another question, like, 'What should I do if it still wrinkles?' You'll also find easy-to-access links right on your screen so you can dig deeper with content from the web," the company said in a blog post.

Users can continue chatting in Search Live even while using other apps, and can tap the 'transcript' button to see a text version of the response. They can also return to a Search Live response via AI Mode history.

Search Live is based on a custom version of the Gemini model with advanced voice capabilities, and it uses a query fan-out technique to display a wider range of results. Google will be rolling out more capabilities to Search Live in AI Mode, including camera input so it can see what the user is looking at in real time. The Search Live option appears as a new icon directly under the search bar once inside AI Mode in Labs.


Winnipeg Free Press
4 days ago
- Politics
- Winnipeg Free Press
The deepfake era has only just started
Opinion: Last month, Google released its newest content tool, Veo 3, powered by artificial intelligence (AI). "We're entering a new era of creation with combined audio and video generation that's incredibly realistic," declared Josh Woodward, the vice-president of Google Labs, the tech company's experimental division.

And Google isn't alone. Synthetic media tools have existed for years, and with each iteration the technology unleashes new innovations and commercial possibilities. South Korean broadcasters use digital news anchors to deliver breaking stories more quickly. Hollywood uses AI for 'de-aging' older actors to play younger characters onscreen. Digital avatars allow customers to try on clothing virtually. British software firm Synthesia has helped thousands of multinational companies develop audiovisual training programs and communications materials reflecting the languages and ethnicities of workers across their supply chains, or of clients in different global regions.

But AI deepfakes, digital forgeries created by machine learning models, are empowering bad actors too. Whether democratic societies are equipped to deal with the consequences remains an open question. Indeed, many are currently reeling from the corrosive effects of far cruder forms of disinformation. The only certainty going forward is that deepfake tools will become more sophisticated and easier to use.

Commonly available generative AI apps can already make real people appear to say or do things they never did, or render fake characters uncannily persuasive. To demonstrate, CBC News used Google's Veo 3 to create a hyper-realistic news segment about wildfires spreading in Alberta after entering just a one-sentence prompt.

Deepfake scams are surging as well. Altered images and recordings of real people, often created using their own content uploaded to social media, are being used to dupe others into fake online romances or bogus investment deals. It takes only a 30-second clip of someone's speech fed into generative AI to clone their voice.

The political dangers and possibilities are frightening. In early October 2023, Michal Šimečka, a progressive leader vying to be Slovakia's prime minister, lost out to his pro-Kremlin opponent after a fake audio clip emerged online days before the ballot. In it, Šimečka supposedly suggests to a journalist that he'd consider buying votes to seal a victory. In Canada, a network of more than two dozen fake Facebook accounts tried to smear Prime Minister Mark Carney to users outside the country by running deepfake ads featuring Carney announcing dramatic new regulations shortly after winning election.

In his latest book, Nexus, historian Yuval Noah Harari argues that all large democracies owe their successes to "self-correction mechanisms": civil society, the media, the courts, opposition parties and institutional experts, among others. The caveat is that each of these relies on modern information technologies, and to function, their actions must be based on information grounded in truth. The problem is that today's tech giants have instead obsessed over capturing greater market share in the attention economy, prioritizing user engagement above all else. "Instead of investing in self-correcting mechanisms that would reward truth telling, the social media giants actually developed unprecedented error-enhancing mechanisms that reward lies and fiction," Harari writes. This pattern is now being repeated with AI.

For example, just as Google released Veo 3, the founder of Telegram forged a new partnership with Elon Musk's AI company to integrate its Grok chatbot into Telegram's platform. Telegram, however, is notoriously hands-off with moderation; it is a haven for extremists, grifters and nihilists. "If Grok allows Telegram (users) to create more persuasive memes and other forms of propaganda at scale, that could make it an even more powerful tool for spreading toxicity, from disinformation to hate speech to other odious content," warns Bloomberg tech columnist Parmy Olson.

This is being further aggravated by partisan agendas in Washington. Republican lawmakers have inserted a stealth clause into their tax bill winding through Congress that, if passed, would ban states, including California, which has authority over Silicon Valley, from regulating AI for 10 years.

Social polarization, foreign interference, fraud and personal revenge schemes will likely all worsen as deepfakes become indiscernible from reality, tearing at the fabric of liberal democracy. There is also another grim possibility. Rather than stoke outrage, tribalism and conspiratorial thinking among voters, these new digital tools might soon breed something arguably much worse: apathy. Put off by civic life becoming awash with misinformation and deepfakes, an even larger portion of the electorate may eventually choose to avoid politics altogether. For them, the time, stress and confusion involved in discerning fact from fiction won't be worth it, especially not when AI elsewhere delivers instant, endless entertainment and escapism on demand, genuine or not.

Kyle Hiebert is a Montreal-based political risk analyst and former deputy editor of the Africa Conflict Monitor.

Business Standard
5 days ago
- Business Standard
Google expands 'AI Audio Overviews' feature to Search: Here's how it works
Google is expanding its Audio Overviews feature, previously available in NotebookLM and the Gemini app, to Search through a new experiment in Google Labs. The tool turns complex written content into brief, conversational audio summaries, designed to make information more accessible.

The AI-generated overviews go beyond user-inputted queries: Google supplements results with additional sources to create a more comprehensive summary and includes links to the original content for users to explore further. In a blog post announcing the update, Google said: "We're launching a new Search experiment in Labs – Audio Overviews, which uses our latest Gemini models to generate quick, conversational audio overviews for certain search queries."

How Google's AI Audio Overviews work in Search

The feature is currently live for select users in Google Labs, the company's platform for testing experimental tools. When available, users will see an option to generate an audio overview for certain search queries, particularly those where a summary may be helpful. Once initiated, a basic audio player launches with the following features:

- Play/pause, volume control, and playback speed
- Source links to view the original content used in the overview
- Feedback options with thumbs up/down to rate the summary or the overall feature

This helps streamline the search experience by offering a podcast-like summary of complex topics, allowing users to consume information hands-free.

In related news, Google has started rolling out the Android 16 update to eligible Pixel smartphones. First previewed last month at its inaugural Android Show: I/O Edition, the update introduces several new features including live updates, the Pixel VIPs widget, deeper Gemini integration, and more. According to Google, Android 16 sets the stage for the broader adoption of its new Material 3 Expressive design language. However, most of the visual changes tied to this new design approach are not yet live in this release.
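Google has not detailed the pipeline behind Audio Overviews, but the flow it describes (gather the written sources for a query, condense them into a short conversational script, then synthesise speech and return source links) can be sketched roughly as below. Every name here (Source, summarise_as_dialogue, synthesise, audio_overview) is a hypothetical stand-in, not a Gemini or Search API.

```python
# Rough sketch, under stated assumptions, of the Audio Overviews flow as
# described publicly: written sources for a query are condensed into a short
# two-voice script, which is then turned into audio, with links back to the
# originals. All helpers are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class Source:
    title: str
    url: str
    text: str


def summarise_as_dialogue(query: str, sources: list[Source]) -> list[tuple[str, str]]:
    # Stand-in for a language-model call that turns the source text into a
    # brief back-and-forth between two speakers.
    return [
        ("Host A", f"Here's a quick overview of '{query}'."),
        ("Host B", f"We pulled this together from {len(sources)} sources."),
    ]


def synthesise(dialogue: list[tuple[str, str]]) -> bytes:
    # Stand-in for a text-to-speech step; a real system would return audio data.
    return "\n".join(f"{speaker}: {line}" for speaker, line in dialogue).encode()


def audio_overview(query: str, sources: list[Source]) -> dict:
    dialogue = summarise_as_dialogue(query, sources)
    return {
        "audio": synthesise(dialogue),          # playable summary
        "sources": [s.url for s in sources],    # links to the original content
    }


if __name__ == "__main__":
    docs = [Source("Linen care", "https://example.com/linen", "Roll, don't fold...")]
    overview = audio_overview("how to pack a linen dress", docs)
    print(len(overview["audio"]), "bytes of audio;", overview["sources"])
```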