Google is banking on AI agents, smart glasses to defend its search crown
Google (GOOG, GOOGL) took its first steps beyond its traditional Search product at its I/O conference on Tuesday, debuting a host of new technologies that won't necessarily supplant the 10 blue links that dominate the world of search, but that pave the way for a future where they're far less necessary to our everyday lives.
From its ChatGPT-style AI Mode built directly into Search and an agentic AI feature that will help you shop for products, to its renewed push into smart glasses, Google provided early hints at how it's working to evolve its services at a time when the company is under threat from both AI upstarts and government antitrust enforcers.
Google built its advertising empire on the back of its Search platform, and it's still its most important business. But companies like OpenAI (OPAI.PVT) and Perplexity (PEAI.PVT) have developed their own competing generative AI search products.
Google's efforts to fend off its newest foes came into stark relief during one of its recent antitrust hearings when Apple's (AAPL) senior vice president of services Eddy Cue revealed that searches made via that company's Safari browser fell for the first time ever in April. Google is the default search engine for Safari, a part of a $20-billion-a-year deal between the two companies that the Department of Justice is seeking to break up via its antitrust case.
Cue attributed the decline to customers opting to use generative AI services like ChatGPT, but Google pushed back in a statement saying that it continues to see overall query growth in Search.
But the report sent shockwaves through Wall Street, with shares falling as much as 7.5% when the news broke on May 7.
Google has been on its back foot since OpenAI and its partner Microsoft (MSFT) raised the specter of a potential threat to Google's search crown in late 2022. And now the company is pulling out its big guns to prove to its customers and Wall Street that it should remain the search king.
One of the biggest changes Google is making to its Search platform is the addition of what it calls AI Mode. Previously only available via the company's Labs testing program, AI Mode allows users to have a back-and-forth conversation with Google's AI, similar to the kind of interactions you'd have with ChatGPT, Bing, or Perplexity.
Available first to users in the US, AI Mode is Google's way of competing in the chatbot space without having to ditch its traditional search product. Rather than replacing Search, AI Mode is available as a tab in Search, similar to items like Images, News, and Videos.
AI Mode uses Google's frontier models and takes advantage of what the company calls its "query fan-out" technique. The method, Google says, breaks down your queries into smaller subtopics, running a number of separate searches at the same time. That, Google explains, allows AI Mode to perform deeper searches than traditional Search.
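Google hasn't published implementation details for query fan-out, but the general pattern it describes can be sketched as follows. This is a minimal, hypothetical illustration: the `decompose` and `search` functions are stand-ins for a model-driven query splitter and a real search backend, which are assumptions, not Google's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(query):
    # Hypothetical: in a real system, a language model would split
    # the query into subtopics. Here we fake it for illustration.
    return [f"{query} overview", f"{query} reviews", f"{query} pricing"]

def search(subquery):
    # Stand-in for a call to a real search backend.
    return [f"result for '{subquery}'"]

def fan_out(query):
    subqueries = decompose(query)
    # Run the sub-searches concurrently, as the fan-out idea suggests.
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, subqueries)
    # Merge the results from every sub-search into one pool for ranking.
    return [r for results in result_lists for r in results]

print(fan_out("mirrorless camera"))
```

The key idea is simply that one user question becomes several narrower searches executed in parallel, with their results merged afterward, letting the system cover more ground than a single query would.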
Google Search's AI Overviews are also getting an update, with some search results pulling information from AI Mode's latest AI models, providing a kind of bridge between the two search options.
Google says it's also bringing agentic AI functionality to AI Mode, allowing the software to do things like keep tabs on products you're shopping for and run through the entire checkout flow without you having to lift a finger until it's time to make the purchase.
AI Mode also adds a new try-on feature that lets you upload an image of yourself and see what clothes look like on you. It's clear based on those announcements alone that Google is putting a heavy emphasis on AI Mode, setting it up as a potential successor to the traditional Search product.
But the company isn't just focusing on improving its odds against newer AI firms; it's also working to combat the emerging threat of smart glasses. Google's biggest advertising rival, Meta, already offers its Ray-Ban Meta smart glasses in hopes that the eyewear will ring in a consumer tech revolution.
Meta is already building its Meta AI to perform search functions for users, and if smart glasses continue to improve and drive users to abandon their smartphones, or at least use them less in favor of searching on their eyewear, Google could find itself in serious trouble.
To that end, the company announced it's working with Samsung, Qualcomm (QCOM), Warby Parker (WRBY), and Gentle Monster to develop attractive smart glasses of its own.
There's no guarantee that smart glasses will become the go-to tech for consumers around the world like smartphones have throughout the years. But with the threats mounting to its Search business, Google can't afford to let the opportunity slip away.
Email Daniel Howley at dhowley@yahoofinance.com. Follow him on X/Twitter at @DanielHowley.
Related Articles
Yahoo
AI chatbots and TikTok reshape how young people get their daily news
Artificial intelligence is changing the way people get their news, with more readers turning to chatbots like ChatGPT to stay up to date. At the same time, nearly half of young adults now rely on platforms such as TikTok as their main source of news.

The findings come from the Reuters Institute's annual Digital News Report, released this week. The Oxford University-affiliated study surveyed nearly 97,000 people across 48 countries to track how global news habits are shifting.

The study found that a notable number of people are using AI chatbots to read headlines and get news updates – a shift described by the institute's director, Mitali Mukherjee, as a 'new chapter' in the way audiences consume information. While only 7 percent overall say they use AI chatbots to find news, that number rises among younger audiences: 12 percent of under-35s and 15 percent of under-25s now rely on tools such as OpenAI's ChatGPT, Google's Gemini, or Meta's Llama for their news.

'Personalised, bite-sized and quick – that's how younger audiences want their news, and AI tools are stepping in to deliver exactly that,' Mukherjee noted.

Beyond reading headlines, many readers are turning to AI for more complex tasks: 27 percent use it to summarise news articles, 24 percent for translations, and 21 percent for recommendations on what to read next. Nearly one in five have quizzed AI directly about current events. (with newswires)


Android Authority
I tested Gemini's latest image generator and here are the results
Back in November, I tested the image generation capabilities within Google's Gemini, which was powered by the Imagen 3 model. While I liked it, I ran into its limitations pretty quickly. Google recently rolled out its successor, Imagen 4, and I've been putting it through its paces over the last couple of weeks. I think the new version is definitely an improvement, as some of the issues I had with Imagen 3 are now thankfully gone. But some frustrations remain, meaning the new version isn't quite as good as I'd like.

So, what has improved?

The quality of the images produced has generally improved, though the improvement isn't massive. Imagen 3 was already generally good at creating images of people, animals, and scenery, but the new version consistently produces sharper, more detailed images.

When it comes to generating images of people, which is only possible with Gemini Advanced, I had persistent issues with Imagen 3 where it would create cartoonish-looking photos, even when I wasn't asking for that specific style. Prompting it to change the image to something more realistic was often a losing battle. I haven't experienced any of that with Imagen 4. All the images of people it generates look very professional, perhaps a bit too professional, which is something we'll touch on later.

One of my biggest frustrations with the older model was the limited control over aspect ratios. I often felt stuck with 1:1 square images, which severely limited their use. I couldn't use them for online publications, and printing them for a standard photo frame was out of the question. While Imagen 4 still defaults to a 1:1 ratio, I can now simply prompt it to use a different one, like 16:9, 9:16, or 4:3. This is the feature I've been waiting for, as it makes the images far more versatile and usable.
Imagen 4 also works a lot more smoothly. While I haven't found it to be noticeably faster, although a faster model is reportedly in the works, there are far fewer errors. With the previous version, Gemini would sometimes show an error message saying it couldn't produce an image for an unknown reason. I have received none of those with Imagen 4. It just works.

Still looks a bit too retouched

While Imagen 4 produces better images, is more reliable, and allows for different aspect ratios, some of the issues I encountered when testing its predecessor are still present. My main problem is that the images often aren't as realistic as I'd like, especially when creating close-ups of people and animals. Images tend to come out quite saturated, and many feature a prominent bokeh effect that professionally blurs the background. They all look like they were taken by a photographer with 15 years of experience instead of by me, just pointing a camera at my cat and pressing the shutter. Sure, they look nice, but a 'casual mode' would be a fantastic addition: something more realistic, where the lighting isn't perfect and the subject isn't posing like a model.

I prompted Gemini to make an image more realistic by removing the bokeh effect and generally making it less perfect. The AI did try, but after prompting it three or four times on the same image, it seemed to reach its limit and said it couldn't do any better. Each new image it produced was a bit more casual, but it was still quite polished, clearly hinting that it was AI-generated. You can see that in the images above, going from left to right. The first one includes a strong bokeh effect, and the man has very clear skin, while the other two progress to the man looking older and more tired. He even started balding a bit in the last image. It's not what I meant when prompting Gemini to make the image more realistic, although it does come out more casual.
Imagen 4 does a much better job with distant subjects like landscapes and city skylines. These images, taken from afar, don't include as many close-up details, so they look more genuine. Still, it can be hit or miss. An image of the Sydney Opera House looks great, although the saturation is bumped up quite a bit: the grass is extra green, and the water is a picture-perfect blue. But when I asked for a picture of the Grand Canyon, it came out looking completely artificial and wouldn't fool anyone into thinking it was a real photo. It did perform better after a few retries, though.

Editing is better, but not quite there

One of my gripes with the previous version was its clumsy editing. When asked to change something minor, like the color of a hat, the AI would do it, but it would also generate a brand new, completely different image. The ideal scenario would be to create an image and then be allowed to edit every detail precisely, such as changing a piece of clothing, adding a specific item, or altering the weather conditions while leaving everything else exactly as is.

Imagen 4 is better in this regard, but not by much. When I prompted it to change the color of a jacket to blue, it created a new image. However, by specifically asking it to keep all other details the same, it managed to maintain a lot of the scenery and subject from the original. That's what happened in the examples above. The woman in the third image was the same, and she appeared to be in a similar room, but her pose and the camera angle were different, making it more of a re-shoot than an edit.

Here's another example of a cat eating a popsicle. I prompted Gemini to change the color of the popsicle, and it did, and it kept a lot of the details. The cat's the same, and so is most of the background. But the cat's ears are now sticking out, and the hat is a bit different. Still, a good try.
Despite its shortcomings, Imagen 4 is a great tool

Even with its issues and a long wishlist of missing functionality, Imagen 4 is still among the best AI image generators available. Most of the problems I've mentioned are also present in other AI image-generation software, so it's not as if Gemini is behind the competition. It seems there are significant technical hurdles to overcome before these types of tools can reach the next level of precision and realism.

Other limitations are still in place, such as the inability to create images of famous people or generate content that violates Google's safety guidelines. Whether that's a good or a bad thing is a matter of opinion. For users seeking fewer restrictions, there are alternatives like Grok.

Have you tried out the latest image generation in Gemini? Let me know your thoughts in the comments.