
Google Search gets smarter: No typing. Just keep talking for results
Google recently announced AI Mode for Search, which brings advanced, hassle-free search capabilities to users. Now, similar to ChatGPT's voice mode and Gemini Live, Google has announced 'Search Live' within AI Mode, allowing users to make voice-based queries. With voice-based AI search, users can hold back-and-forth conversations to gather the information they need. Search Live resembles Gemini Live; however, the two AI voice models offer different features and services. The new feature will not only resolve queries but also provide users with easy-to-access links for further reading. Here is how the Search Live feature in AI Mode works.
Also read: Google pauses 'Ask Photos' AI Feature to address performance issues
Search Live is a conversational AI search feature rolling out to the Google app, allowing users to have a voice-based conversation with Search instead of having to type queries. The feature is currently available to people enrolled in the AI Mode experiment in Labs. To access Search Live on Android or iOS, users simply need to open the Google app and tap the new 'Live' icon below the search bar to start a conversation.
Google says that Search Live can also run in the background, so users can continue a back-and-forth conversation while using another app. This makes multitasking with Google Search easier and hassle-free. Search Live also includes a 'transcript' button to convert voice responses into text. Reportedly, the feature utilises a custom version of Gemini with advanced voice capabilities for accurate information. Additionally, all past voice-based queries can be accessed via the AI Mode history, allowing users to revisit specific responses.
Also read: Google I/O 2025: Gemini Live with camera now free for everyone, Veo 3 for AI Ultra and other reveals
Google also revealed that Search Live in AI Mode will soon gain camera capabilities, allowing users to ask real-time questions about any object, place, or location shown through the camera. Notably, the feature is being tested before a global rollout; for now, it is available only in the US, with a stable release expected soon. These upcoming Google Search features could pose tough competition to AI chatbots, including OpenAI's SearchGPT, which recently started trending. They also offer a glimpse of how users will come to leverage AI for even the smallest tasks and queries.

Related Articles


Time of India
5 hours ago
Algebra, philosophy and…: These AI chatbot queries cause the most environmental harm, study claims
Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions.

The study, conducted by researchers at Germany's Hochschule München University of Applied Sciences and published in the journal Frontiers (seen by The Independent), found that the energy consumption and resulting carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, the user, and the subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers advise frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause higher carbon emissions?

Study author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes such as converting user queries into numerical data.
On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only about 40. 'A higher token footprint always means higher CO2 emissions,' the study adds.

The study highlights that Cogito, one of the most accurate models at around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

The researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times as many questions with similar accuracy while producing the same emissions. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
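The token figures above allow a rough back-of-the-envelope comparison. The sketch below uses the study's reported averages (543.5 vs roughly 40 tokens per answer); the assumption that emissions scale linearly with tokens generated is ours, inferred from the study's remark that 'a higher token footprint always means higher CO2 emissions', not its exact methodology.

```python
# Illustrative comparison of token footprints between reasoning and
# concise models, using averages reported in the study.
REASONING_TOKENS_PER_Q = 543.5  # avg tokens per answer, reasoning models
CONCISE_TOKENS_PER_Q = 40.0     # avg tokens per answer, concise models

def relative_footprint(tokens_a: float, tokens_b: float) -> float:
    """Ratio of token footprints: a rough proxy for relative CO2
    emissions, assuming emissions scale linearly with tokens."""
    return tokens_a / tokens_b

ratio = relative_footprint(REASONING_TOKENS_PER_Q, CONCISE_TOKENS_PER_Q)
print(f"Reasoning models generate roughly {ratio:.1f}x the tokens per answer")
```

On these figures the ratio comes out to about 13.6x, which is consistent with the study's finding that reasoning-heavy queries carry a much larger footprint even before accounting for model size.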


Deccan Herald
5 hours ago
Samsung to launch Galaxy M36 5G next week in India
Thanks to a deeper collaboration between Google and Samsung, the Galaxy M36 will support advanced versions of Gemini AI features. It will be priced under Rs 20,000 in India.

First Post
6 hours ago
16 billion passwords compromised, says report; have you changed yours?
A massive breach has exposed over 16 billion usernames and passwords from platforms like Google, Apple, Facebook, and more. The leak raises serious cybersecurity concerns, prompting urgent calls for stronger passwords, two-factor authentication, and regular dark web exposure checks.

A staggering 16 billion usernames and passwords have been exposed in what experts are calling the largest-ever database of stolen credentials. The trove of compromised data includes login details from major platforms such as Apple, Google, Facebook, Telegram, GitHub and even government services, raising alarms over the global state of digital security.

Cybersecurity researchers say the breach stems from a collection of 30 massive datasets, each holding from tens of millions to over 3.5 billion records. The information, mostly acquired through infostealing malware, appears to be freshly leaked, with nearly all of the datasets previously unreported except for one earlier disclosure of 184 million passwords by researcher Jeremiah Fowler, according to a new investigation by Cybernews.

'Most of these credentials are structured as URLs followed by usernames and passwords, and they cover virtually every type of online service imaginable,' said Vilius Petkauskas, a Cybernews analyst who has been investigating the leak since the beginning of the year.

The scale of this breach surpasses previous incidents, including last year's so-called 'Mother of All Breaches', which exposed 26 billion records. While it is unclear whether some of the leaked data was repackaged from earlier incidents, researchers insist that the leak is largely new. Lawrence Pingree, vice president at cybersecurity firm Dispersive, explained that such datasets are often circulated and resold on the dark web, sometimes bundled with other leaks, sometimes offered piecemeal. 'Whether it's a repackaged leak or not, 16 billion records is a huge number,' Pingree said.
'This kind of data is valuable precisely because it is so often misused.'

The breach underscores how widespread the threat of credential theft has become, with attackers targeting social media platforms, corporate portals, developer tools, and VPN services alike. In response, experts urge users to adopt better security hygiene. Basic protections include running antivirus scans to detect infostealers, checking dark web exposure via tools like Google One's 'Dark Web Report', and, crucially, using strong and unique passwords for every service.
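The 'strong and unique password per service' advice above can be followed without a commercial password manager. Below is a minimal sketch using Python's standard-library `secrets` module, which draws from a cryptographically secure random source; the 20-character length and character set are our illustrative choices, not recommendations from the report.

```python
import secrets
import string

# Letters, digits, and punctuation give a large search space per character.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a random password using a cryptographically secure RNG.

    secrets.choice avoids the predictability of the random module,
    which must never be used for credentials.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Generate a distinct password for each service.
for service in ("email", "banking", "social"):
    print(service, generate_password())
```

A generated password like this should be stored in a password manager rather than reused; uniqueness per service is what limits the damage when any one provider's credentials leak.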