
The 60% Problem — How AI Search Is Draining Your Traffic
The online marketing landscape is undergoing its biggest change since Google's inception nearly three decades ago. As AI-powered search assistants like ChatGPT, Perplexity and Google's Search Generative Experience gain popularity at lightning speed, businesses are witnessing unprecedented dips in organic traffic – but things aren't quite as straightforward as the data makes them out to be.
Recent research has shown that AI Overviews can cause a whopping 15-64% decline in organic traffic, depending on industry and search type. This radical change is forcing marketers to reconsider their entire approach to digital visibility.
Roughly 60% of searches now yield no clicks at all, as AI-generated answers satisfy them directly on the search results page. In addition, Google's AI Overviews push top-ranked organic links down by as much as 1,500 pixels – about two full screen scrolls on a desktop and three on a mobile device – significantly lowering click-through rates even for highly ranked pages.
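To put the 1,500-pixel figure in perspective, here is a back-of-the-envelope calculation in Python. The viewport heights are illustrative assumptions (real devices vary), not numbers from the research:

```python
# Rough scroll-depth arithmetic for a 1,500 px push-down.
# Viewport heights below are assumed, typical values, not measured data.
PUSH_DOWN_PX = 1500

viewports = {
    "desktop": 800,  # assumed laptop viewport height in px
    "mobile": 550,   # assumed phone viewport height in px
}

for device, height in viewports.items():
    scrolls = PUSH_DOWN_PX / height
    print(f"{device}: ~{scrolls:.1f} full-screen scrolls before the first organic link")

# Output:
# desktop: ~1.9 full-screen scrolls before the first organic link
# mobile: ~2.7 full-screen scrolls before the first organic link
```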
"Using search engines like Google is like walking into a library and getting a list of book titles that might help. You then have to go pull the books off the shelves, skim through them, and try to piece together the answer on your own," explains Erik Wikander, Co-founder and CEO of Wilgot.ai, a company specializing in AI-powered SEO strategies.
"AI search flips that on its head," Wikander continued in an email response. "It's like having a super smart librarian who reads every book for you and then explains the answer, in your language, tailored to your context. And if you don't quite get it? You just ask another question, and they clarify. It's fluid, interactive and helpful in a way that traditional search simply isn't."
If you're not sure what AI Overviews are, you've probably seen them without realizing what they were or how they work.
Ask Google a question such as 'What's the significance of Passover' or 'Who attended the Last Supper', and the first entry at the top of the screen will be a Google AI Overview: a curated, summarized response with footnotes. For roughly six out of ten people who see that response, the summary is all they need, so they never click through from the search screen to another website.
While that's fine for Google and other AI search engines, the websites on the other end are seeing a significant decrease in click-through rates. Their keyword advertising, onsite marketing content and search engine optimization efforts go to waste unless they're among the handful of pages cited in the AI Overview.
According to Wikander, resources built on informational content like guides and how-to tutorials are hit hardest. With AI Overviews providing in-depth answers within search results, the need to click through to these resources has significantly declined. This shift is particularly damaging for content marketers who have built strategies on top-of-funnel informational content to create awareness.
And the race for prominence has gotten fiercer, with AI Overviews usually drawing on a few of the highest-ranked websites – typically those in positions one and two. This is a winner-takes-all dynamic, leaving smaller firms and newer websites with little to no chance of prominence.
In spite of these obstacles, there is a silver lining for companies that are able to move quickly. Wikander notes that even though total traffic drops, the quality of the visitors who do click through is far greater.
"While AI search engines may drive less traffic overall, the value of that traffic is significantly higher than traditional search. Visitors arriving via AI search are often much further along in their buyer journey — ready to take action. This is especially evident in high-consideration categories like software, where customers typically do extensive research before converting. In fact, many of the companies we speak with report that up to 10% of their conversions now come from AI-driven search," he wrote.
For businesses struggling with these changes, Wikander recommends a complete overhaul of how they approach search engine optimization.
"The first step is to reframe SEO as a content-first strategy. The technical SEO playbook still matters, but it's no longer the primary driver of visibility," he advised. "To stand out in AI search, the content needs to answer real questions, not just repeat popular keywords. It should anticipate follow-up questions, be written in a natural, conversational tone, and offer something truly useful, whether that's an expert opinion, proprietary data or unique context."
This means going beyond conventional keyword optimization, which Wikander describes as "not dead, but no longer the winning move on its own." Instead, businesses must focus on developing content that reads naturally, is substantive and delivers concrete value to the reader.
As AI transforms the search landscape, companies need to evolve or risk not being seen. The old SEO playbook — built on keywords, backlinks and technical adjustments — is quickly becoming irrelevant. Instead, a new playbook built on quality, relevance and authority is taking its place.
"Content is becoming the new performance marketing," Wikander emphasizes. "As paid channels get more expensive and traditional search fragments, your best lever for growth is content. Not just any content, but content that AI sees as helpful, credible and worth including."
For marketers who are ready to adapt to this new reality, the dividends are significant. By producing excellent, authoritative content that AI software identifies as worthy, brands can position themselves as trusted sources of information — and secure their place in the AI-driven search future.
"Success today means becoming the best answer — in every language, every market and every moment that matters," Wikander concludes.
Related Articles


Gizmodo
The $14 Billion AI Google Killer
A new AI darling is making waves in Silicon Valley. It's called Perplexity, and according to reports, both Meta and Apple have quietly explored acquiring it. Valued at a staggering $14 billion following a May funding round, the startup is being hailed as a revolutionary threat to Google's search dominance. But here's the thing: it mostly just summarizes web results and sends you links. So why the frenzy?

Perplexity bills itself as an 'answer engine.' You ask a question, and it uses large language models to spit out a human-sounding summary, complete with footnotes. It's essentially ChatGPT with a bibliography. You might ask for the best books about the French Revolution or a breakdown of the Genius Act. In seconds, it generates a paragraph with links to Wikipedia, news outlets, or Reddit threads. Its pitch is a cleaner, ad-free, chatbot-driven search experience. No SEO junk, no scrolling. But critics say it's little more than a glorified wrapper around Google and OpenAI's APIs, with minimal proprietary tech and lots of smoke. It's fast, clean, and slick. But, they argue, at its core, it's mostly just reorganizing the internet.

Big Tech's Obsession

That hasn't stopped the hype. In May 2025, the San Francisco-based company closed another $500 million funding round, pushing its valuation to $14 billion, a sharp increase from its $9 billion valuation in December 2024. Jeff Bezos, via the Jeff Bezos Family Fund, and Nvidia are among its notable backers.

And now, tech giants are circling. According to Bloomberg, Apple has held talks about acquiring Perplexity. Meta has also reportedly considered the move, though no formal offers have been confirmed. The logic is clear. Perplexity is fast-growing and increasingly seen as a 'Google killer,' especially among tech influencers and X power users. Traffic to its site has exploded in recent months. The company now offers a Chrome extension, mobile app, and a Pro version that gives users access to top-tier AI models like GPT-4 and Claude. Still, it's unclear what exactly makes Perplexity worth $14 billion, other than the fact that it's riding the AI wave.

Why AI Skeptics Are Rolling Their Eyes

For AI skeptics, Perplexity's rise is yet another example of hype outpacing substance. The site doesn't train its own models. It's not building new infrastructure. It's not revolutionizing search. It's just offering a polished interface to ask questions and get AI-generated summaries pulled from public websites.

There are also growing concerns about how Perplexity sources its information. A number of news organizations, including The New York Times, Forbes, and Wired, have accused the company of plagiarizing and scraping content without permission or proper attribution. Journalists and publishers warn that this kind of AI-powered search experience threatens to cannibalize news traffic while giving little back to content creators. On June 20, the BBC became the latest outlet to threaten legal action against Perplexity AI, alleging that the company is using BBC content to train its 'default AI model,' according to the Financial Times.

Perplexity CEO Aravind Srinivas has defended the company as an 'aggregator of information.' In July 2024, the startup launched a revenue-sharing program to address the backlash. 'We have always believed that we can build a system where the whole Internet wins,' Srinivas said at the time.

So Why the Gold Rush?

Simple. Search is money.
Google earned $50.7 billion from search ads in the first quarter, a 9.8% increase year over year. If Perplexity can convince even a small share of users to switch, and then monetize that experience, it becomes a real threat. Apple and Meta, both increasingly wary of relying on Google, see Perplexity as a fast track into the AI search race. But the stakes go even deeper. Whoever controls the next search interface controls the user. Just as Google replaced Yahoo, Perplexity could theoretically replace Google. That's why Big Tech wants in, even if it's not entirely clear what they're buying.


Android Authority
I tested Gemini's latest image generator and here are the results
Back in November, I tested the image generation capabilities within Google's Gemini, which was powered by the Imagen 3 model. While I liked it, I ran into its limitations pretty quickly. Google recently rolled out its successor — Imagen 4 — and I've been putting it through its paces over the last couple of weeks. I think the new version is definitely an improvement, as some of the issues I had with Imagen 3 are now thankfully gone. But some frustrations remain, meaning the new version isn't quite as good as I'd like.

So, what has improved?

The quality of the images produced has generally improved, though the improvement isn't massive. Imagen 3 was already generally good at creating images of people, animals, and scenery, but the new version consistently produces sharper, more detailed images. When it comes to generating images of people — which is only possible with Gemini Advanced — I had persistent issues with Imagen 3 where it would create cartoonish-looking photos, even when I wasn't asking for that specific style. Prompting it to change the image to something more realistic was often a losing battle. I haven't experienced any of that with Imagen 4. All the images of people it generates look very professional — perhaps a bit too much, which is something we'll touch on later.

One of my biggest frustrations with the older model was the limited control over aspect ratios. I often felt stuck with 1:1 square images, which severely limited their use case. I couldn't use them for online publications, and printing them for a standard photo frame was out of the question. While Imagen 4 still defaults to a 1:1 ratio, I can now simply prompt it to use a different one, like 16:9, 9:16, or 4:3. This is the feature I've been waiting for, as it makes the images created far more versatile and usable (a scripted equivalent appears at the end of this article).

Imagen 4 also works a lot more smoothly. While I haven't found it to be noticeably faster — although a faster model is reportedly in the works — there are far fewer errors. With the previous version, Gemini would sometimes show an error message saying it couldn't produce an image for an unknown reason. I have received none of those with Imagen 4. It just works.

Still looks a bit too retouched

While Imagen 4 produces better images, is more reliable, and allows for different aspect ratios, some of the issues I encountered when testing its predecessor are still present. My main problem is that the images often aren't as realistic as I'd like, especially when creating close-ups of people and animals. Images tend to come out quite saturated, and many feature a prominent bokeh effect that professionally blurs the background. They all look like they were taken by a photographer with 15 years of experience instead of by me, just pointing a camera at my cat and pressing the shutter. Sure, they look nice, but a 'casual mode' would be a fantastic addition — something more realistic, where the lighting isn't perfect and the subject isn't posing like a model.

I prompted Gemini to make an image more realistic by removing the bokeh effect and generally making it less perfect. The AI did try, but after prompting it three or four times on the same image, it seemed to reach its limit and said it couldn't do any better. Each new image it produced was a bit more casual, but it was still quite polished, clearly hinting that it was AI-generated.
You can see that in the images above, going from left to right. The first one includes a strong bokeh effect, and the man has very clear skin, while the other two progress to the man looking older and more tired. He even started balding a bit in the last image. It's not really what I meant when prompting Gemini to make the image more realistic, although it does come out more casual.

Imagen 4 does a much better job with random images like landscapes and city skylines. These images, taken from afar, don't include as many close-up details, so they look more genuine. Still, it can be hit or miss. An image of the Sydney Opera House looks great, although the saturation is bumped up quite a bit — the grass is extra green, and the water is a picture-perfect blue. But when I asked for a picture of the Grand Canyon, it came out looking completely artificial and wouldn't fool anyone into thinking it was a real photo. It did perform better after a few retries, though.

Editing is better, but not quite there

One of my gripes with the previous version was its clumsy editing. When asked to change something minor — like the color of a hat — the AI would do it, but it would also generate a brand new, completely different image. The ideal scenario would be to create an image and then be able to edit every detail precisely, such as changing a piece of clothing, adding a specific item, or altering the weather conditions while leaving everything else exactly as is.

Imagen 4 is better in this regard, but not by much. When I prompted it to change the color of a jacket to blue, it created a new image. However, by specifically asking it to keep all other details the same, it managed to maintain a lot of the scenery and subject from the original. That's what happened in the examples above. The woman in the third image was the same, and she appeared to be in a similar room, but her pose and the camera angle were different, making it more of a re-shoot than an edit.

Here's another example of a cat eating a popsicle. I prompted Gemini to change the color of the popsicle, and it did, keeping a lot of the details. The cat's the same, and so is most of the background. But the cat's ears are now sticking out, and the hat is a bit different. Still, a good try.

Despite its shortcomings, Imagen 4 is a great tool

Even with its issues and a long wishlist of missing functionality, Imagen 4 is still among the best AI image generators available. Most of the problems I've mentioned are also present in other AI image-generation software, so it's not as if Gemini is behind the competition. It seems there are significant technical hurdles to overcome before these tools can reach the next level of precision and realism. Other limitations are still in place, such as the inability to create images of famous people or generate content that violates Google's safety guidelines. Whether that's a good or a bad thing is a matter of opinion. For users seeking fewer restrictions, there are alternatives like Grok. Have you tried out the latest image generation in Gemini? Let me know your thoughts in the comments.
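For readers who would rather script image generation than use the Gemini app, the sketch below uses Google's google-genai Python SDK to request a non-square aspect ratio. The model identifier and config fields are assumptions based on the SDK's public Imagen interface at the time of writing; check the current documentation before relying on them.

```python
# pip install google-genai
# Minimal sketch: generate an image at a non-square aspect ratio.
# Model id and config fields are assumptions; verify against current docs.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_images(
    model="imagen-4.0-generate-preview-06-06",  # assumed Imagen 4 model id
    prompt="A city skyline at dusk, natural lighting, no bokeh",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="16:9",  # e.g. "9:16" or "4:3" also accepted
    ),
)

# Write the first generated image to disk.
with open("skyline.png", "wb") as f:
    f.write(response.generated_images[0].image.image_bytes)
```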
Yahoo
AI tools collect, store your data – how to be aware of what you're revealing
Like it or not, artificial intelligence has become part of daily life. Many devices — including electric razors and toothbrushes — have become "AI-powered," using machine learning algorithms to track how a person uses the device, how the device is working in real time, and provide feedback. From asking questions to an AI assistant like ChatGPT or Microsoft Copilot to monitoring a daily fitness routine with a smartwatch, many people use an AI system or tool every day.

While AI tools and technologies can make life easier, they also raise important questions about data privacy. These systems often collect large amounts of data, sometimes without people even realizing their data is being collected. The information can then be used to identify personal habits and preferences, and even predict future behaviors by drawing inferences from the aggregated data.

As an assistant professor of cybersecurity at West Virginia University, I study how emerging technologies and various types of AI systems manage personal data and how we can build more secure, privacy-preserving systems for the future.

Generative AI software uses large amounts of training data to create new content such as text or images. Predictive AI uses data to forecast outcomes based on past behavior, such as how likely you are to hit your daily step goal, or what movies you may want to watch. Both types can be used to gather information about you.

Generative AI assistants such as ChatGPT and Google Gemini collect all the information users type into a chat box. Every question, response and prompt that users enter is recorded, stored and analyzed to improve the AI model. OpenAI's privacy policy informs users that "we may use content you provide us to improve our Services, for example to train the models that power ChatGPT." Even though OpenAI allows you to opt out of content use for model training, it still collects and retains your personal data. Although some companies promise that they anonymize this data, meaning they store it without naming the person who provided it, there is always a risk of data being reidentified.

Beyond generative AI assistants, social media platforms like Facebook, Instagram and TikTok continuously gather data on their users to train predictive AI models. Every post, photo, video, like, share and comment, including the amount of time people spend looking at each of these, is collected as data points that are used to build digital data profiles for each person who uses the service. The profiles can be used to refine the social media platform's AI recommender systems. They can also be sold to data brokers, who sell a person's data to other companies to, for instance, help develop targeted advertisements that align with that person's interests.

Many social media companies also track users across websites and applications by putting cookies and embedded tracking pixels on their computers. Cookies are small files that store information about who you are and what you clicked on while browsing a website. One of the most common uses of cookies is in digital shopping carts: When you place an item in your cart, leave the website and return later, the item will still be in your cart because the cookie stored that information. Tracking pixels are invisible images or snippets of code embedded in websites that notify companies of your activity when you visit their page. This helps them track your behavior across the internet.
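To make that mechanism concrete, here is a minimal sketch of a tracking-pixel endpoint in Python using Flask, a real and widely used web microframework. The route, cookie name and log format are illustrative assumptions, not any particular company's implementation:

```python
# pip install flask
# Minimal tracking-pixel sketch: serve a 1x1 transparent GIF, log who
# requested it from which page, and set a persistent visitor cookie.
import uuid

from flask import Flask, make_response, request

app = Flask(__name__)

# Smallest valid transparent GIF: this is the "invisible image".
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
             b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
             b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    # Reuse an existing visitor ID cookie, or mint a new one.
    visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
    # The Referer header reveals which page embedded the pixel.
    print(f"visitor={visitor_id} page={request.headers.get('Referer')}")
    resp = make_response(PIXEL_GIF)
    resp.headers["Content-Type"] = "image/gif"
    # A year-long cookie lets the tracker recognize this browser later.
    resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run(port=8000)
```

Any page that embeds an img tag pointing at /pixel.gif reports its visitors to this server, which is how one company can observe your activity across many unrelated sites.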
This is why users often see or hear advertisements that are related to their browsing and shopping habits on many of the unrelated websites they browse, and even when they are using different devices, including computers, phones and smart speakers. One study found that some websites can store over 300 tracking cookies on your computer or mobile phone.

Like generative AI platforms, social media platforms offer privacy settings and opt-outs, but these give people limited control over how their personal data is aggregated and monetized. As media theorist Douglas Rushkoff argued in 2011, if the service is free, you are the product.

Many tools that include AI don't require a person to take any direct action for the tool to collect data about that person. Smart devices such as home speakers, fitness trackers and watches continually gather information through biometric sensors, voice recognition and location tracking. Smart home speakers continually listen for the command to activate or "wake up" the device. As the device is listening for this word, it picks up all the conversations happening around it, even though it does not seem to be active.

Some companies claim that voice data is only stored when the wake word — what you say to wake up the device — is detected. However, people have raised concerns about accidental recordings, especially because these devices are often connected to cloud services, which allow voice data to be stored, synced and shared across multiple devices such as your phone, smart speaker and tablet. If the company allows, it's also possible for this data to be accessed by third parties, such as advertisers, data analytics firms or a law enforcement agency with a warrant.

This potential for third-party access also applies to smartwatches and fitness trackers, which monitor health metrics and user activity patterns. Companies that produce wearable fitness devices are not considered "covered entities" and so are not bound by the Health Insurance Portability and Accountability Act. This means that they are legally allowed to sell health- and location-related data collected from their users. Concerns about this kind of data arose in 2018, when Strava, a fitness company, released a global heat map of users' exercise routes. In doing so, it accidentally revealed sensitive military locations across the globe by highlighting the exercise routes of military personnel.

The Trump administration has tapped Palantir, a company that specializes in using AI for data analytics, to collate and analyze data about Americans. Meanwhile, Palantir has announced a partnership with a company that runs self-checkout systems. Such partnerships can expand corporate and government reach into everyday consumer behavior. This one could be used to create detailed personal profiles on Americans by linking their consumer habits with other personal data. This raises concerns about increased surveillance and loss of anonymity. It could allow citizens to be tracked and analyzed across multiple aspects of their lives without their knowledge or consent.

Some smart device companies are also rolling back privacy protections instead of strengthening them. Amazon recently announced that starting on March 28, 2025, all voice recordings from Amazon Echo devices would be sent to Amazon's cloud by default, and users would no longer have the option to turn this function off. This differs from previous settings, which allowed users to limit private data collection.
Changes like these raise concerns about how much control consumers have over their own data when using smart devices. Many privacy experts consider cloud storage of voice recordings a form of data collection, especially when used to improve algorithms or build user profiles, which has implications for data privacy laws designed to protect online privacy.

All of this brings up serious privacy concerns for people and governments about how AI tools collect, store, use and transmit data. The biggest concern is transparency. People don't know what data is being collected, how the data is being used, and who has access to that data.

Companies tend to use complicated privacy policies filled with technical jargon to make it difficult for people to understand the terms of a service that they agree to. People also tend not to read terms of service documents. One study found that people averaged 73 seconds reading a terms of service document that had an average read time of 29 to 32 minutes.

Data collected by AI tools may initially reside with a company that you trust, but can easily be sold and given to a company that you don't trust. AI tools, the companies in charge of them and the companies that have access to the data they collect can also be subject to cyberattacks and data breaches that can reveal sensitive personal information. These attacks can be carried out by cybercriminals who are in it for the money, or by so-called advanced persistent threats, which are typically nation-state-sponsored attackers who gain access to networks and systems and remain there undetected, collecting information and personal data to eventually cause disruption or harm.

While laws and regulations such as the General Data Protection Regulation in the European Union and the California Consumer Privacy Act aim to safeguard user data, AI development and use have often outpaced the legislative process. The laws are still catching up on AI and data privacy. For now, you should assume any AI-powered device or platform is collecting data on your inputs, behaviors and patterns.

Although AI tools collect people's data, and the way this accumulation of data affects people's data privacy is concerning, the tools can also be useful. AI-powered applications can streamline workflows, automate repetitive tasks and provide valuable insights. But it's crucial to approach these tools with awareness and caution.

When using a generative AI platform that gives you answers to questions you type in a prompt, don't include any personally identifiable information, including names, birth dates, Social Security numbers or home addresses. At the workplace, don't include trade secrets or classified information. In general, don't put anything into a prompt that you wouldn't feel comfortable revealing to the public or seeing on a billboard. Remember, once you hit enter on the prompt, you've lost control of that information.

Remember that devices which are turned on are always listening — even if they're asleep. If you use smart home or embedded devices, turn them off when you need to have a private conversation. A device that's asleep looks inactive, but it is still powered on and listening for a wake word or signal. Unplugging a device or removing its batteries is a good way of making sure the device is truly off.

Finally, be aware of the terms of service and data collection policies of the devices and platforms that you are using. You might be surprised by what you've already agreed to.
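As a practical complement to that prompt-hygiene advice, here is a small sketch of pre-prompt redaction in Python. The regular expressions are illustrative and catch only obvious US-style patterns, so treat it as a starting point rather than a guarantee:

```python
import re

# Illustrative patterns for obvious US-style identifiers only.
# Real PII detection is much harder; this is a minimal sketch.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely identifiers with placeholders before sending a prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

text = "My SSN is 123-45-6789 and my email is jane.doe@example.com."
print(redact(text))
# My SSN is [SSN REDACTED] and my email is [EMAIL REDACTED].
```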
Christopher Ramezan is an assistant professor of cybersecurity at West Virginia University. This article is republished from The Conversation under a Creative Commons license. This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it. This article originally appeared on Erie Times-News: AI devices collect your data, raise questions about privacy | Opinion