Majority of Americans have used AI models like ChatGPT: Survey

Yahoo · March 13, 2025

A majority of Americans have used ChatGPT-like artificial intelligence (AI) models, according to a new survey.
In the survey from Elon University's Imagining the Digital Future Center, 52 percent said they 'use artificial intelligence (AI) large language models,' a category that includes OpenAI's famous ChatGPT.
That 52 percent breaks down as follows: 5 percent of respondents said they use the models 'almost constantly,' 7 percent 'several times a day,' 5 percent 'about once a day,' 10 percent 'several times a week' and 25 percent 'less often.' Forty-seven percent said they use them 'not at all.'
'The rise of large language models has been historic. In less than two-and-a-half years, half the adults in America say they have used LLMs. Few, if any, communications and general technologies have seen this pace of growth across the entire population,' a report on the survey reads.
Despite Americans appearing to be more comfortable with AI, a recent poll found 55 percent disagree with the government using AI to make choices about eligibility for unemployment assistance, college tuition aid, research investments, food aid and small business loans.
Among the 500 users of large language models in the Imagining the Digital Future Center survey, 52 percent said they use the tools 'for work activities,' and 36 percent said they use them 'for schoolwork and homework activities.'
The Imagining the Digital Future Center survey of the 500 users of large language models took place from Jan. 21 to 23 and has a margin of error of 5.1 percentage points. A wider group of 939 respondents, including both users and non-users of large language models, has a margin of error of 3.2 percentage points.
The Hill has reached out to the Imagining the Digital Future Center about the survey dates for the wider group.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

AI chatbots and TikTok reshape how young people get their daily news

Yahoo · an hour ago

Artificial intelligence is changing the way people get their news, with more readers turning to chatbots like ChatGPT to stay up to date. At the same time, nearly half of young adults now rely on platforms such as TikTok as their main source of news.

The findings come from the Reuters Institute's annual Digital News Report, released this week. The Oxford University-affiliated study surveyed nearly 97,000 people across 48 countries to track how global news habits are shifting.

The study found that a notable number of people are using AI chatbots to read headlines and get news updates, a shift described by the institute's director Mitali Mukherjee as a 'new chapter' in the way audiences consume information. While only 7 percent overall say they use AI chatbots to find news, that number rises among younger audiences: 12 percent of under-35s and 15 percent of under-25s now rely on tools such as OpenAI's ChatGPT, Google's Gemini or Meta's Llama for their news.

'Personalised, bite-sized and quick – that's how younger audiences want their news, and AI tools are stepping in to deliver exactly that,' Mukherjee noted.

Beyond reading headlines, many readers are turning to AI for more complex tasks: 27 percent use it to summarise news articles, 24 percent for translations, and 21 percent for recommendations on what to read next. Nearly one in five have quizzed AI directly about current events.

(with newswires)

Is Using Tech To Make Your Own Sparkling Water Worthwhile?

Forbes · an hour ago

This portable system makes instant sparkling water.

Americans apparently love effervescence. According to Google's Gemini, the global sparkling water market was valued at about $42.62 billion last year. And it's projected to grow significantly, with estimates maxing it out at $108 billion by 2032. That's a lot of burps. So it shouldn't shock you that companies are flocking to get in on a piece of the action.

Flavored sparkling waters, including Kirkland's, are mainstays in our home. The labeling implies there's no sugar – just essentially water and CO2. So it's way better for you than carbonated soda. And to me, it's so much tastier than plain drinking water, with all the flavors available.

We used to have a Sodastream unit, in which we made our own seltzer water by carbonating ordinary tap water and adding flavor syrup. Somewhere along the way, it broke or stopped working. So we just went back to buying cans of the good stuff. Of course, this habit can get a little pricey. But more than anything, I really just don't like carrying the heavy cases of it in from the car once we get home from the store.

Then I heard about Aerflo, which brings portability to the category. It's a single drinking water bottle in which the top holds a refillable CO2 canister, making it a portable, zero-waste carbonation system. It's kind of an online sensation, I noticed, with reviewers posting how-to videos and hundreds of people joining in on the conversation.

For $74, the system includes the portable carbonator, a reusable bottle, and a set of refillable CO2 capsules that each make up to four bottles of sparkling water. It's compact enough to fit in your front-seat cup holder; is free of PFAS, BPA and microplastics; and is backed by a circular exchange model. Just drop used capsules in the mail using the prepaid return box, and Aerflo refills and recirculates them from its New Jersey facility. The company claims it's ideal for those who care about sustainability, simplicity and well-made gear. And it of course eliminates the need for counter-top appliances that carbonate water.

For two weeks, I've been trying Aerflo, along with friends and family. It's easy to use: You place the small CO2 canister in the lid, fill the water bottle, tighten the lid, press the lid in the marked spot three times or so, shake the container, and then repeat the last two steps three times. When the water has carbonated enough, it lets out a hiss of escaping air. Then you remove the lid and drink. The entire process takes maybe 30 seconds.

In my brief experience, it works fine, but the water does not get as carbonated as a can of LaCroix, no matter how much I've tried carbonating and even over-carbonating. Yet it generates a pleasing amount of bubbles that does the job. The company asks you not to add syrup or flavoring, but you can just pour the water into a separate glass with syrup if you want. I added a lime wedge to the Aerflo bottle, and that worked fine.

Also, I was only able to get two glasses of carbonation out of any canister, even once I started pressing the lid the minimum number of times per glass. So I'm not sure how much savings it's truly offering over just buying cans of sparkling water. But it's definitely better for the environment than throwing out can after can.

In an industry clearly growing exponentially, it's good that there are options. I expect there will be more products like this emerging as time goes on. And that makes me feel bubbly.

AI tools collect, store your data – how to be aware of what you're revealing

Yahoo · 2 hours ago

Like it or not, artificial intelligence has become part of daily life. Many devices, including electric razors and toothbrushes, have become "AI-powered," using machine learning algorithms to track how a person uses the device and how the device is working in real time, and to provide feedback. From asking questions to an AI assistant like ChatGPT or Microsoft Copilot to monitoring a daily fitness routine with a smartwatch, many people use an AI system or tool every day.

While AI tools and technologies can make life easier, they also raise important questions about data privacy. These systems often collect large amounts of data, sometimes without people even realizing their data is being collected. The information can then be used to identify personal habits and preferences, and even predict future behaviors by drawing inferences from the aggregated data.

As an assistant professor of cybersecurity at West Virginia University, I study how emerging technologies and various types of AI systems manage personal data and how we can build more secure, privacy-preserving systems for the future.

Generative AI software uses large amounts of training data to create new content such as text or images. Predictive AI uses data to forecast outcomes based on past behavior, such as how likely you are to hit your daily step goal, or what movies you may want to watch. Both types can be used to gather information about you.

Generative AI assistants such as ChatGPT and Google Gemini collect all the information users type into a chat box. Every question, response and prompt that users enter is recorded, stored and analyzed to improve the AI model. OpenAI's privacy policy informs users that "we may use content you provide us to improve our Services, for example to train the models that power ChatGPT." Even though OpenAI allows you to opt out of content use for model training, it still collects and retains your personal data. Although some companies promise that they anonymize this data, meaning they store it without naming the person who provided it, there is always a risk of the data being reidentified.

Beyond generative AI assistants, social media platforms like Facebook, Instagram and TikTok continuously gather data on their users to train predictive AI models. Every post, photo, video, like, share and comment, including the amount of time people spend looking at each of these, is collected as a data point used to build a digital profile for each person who uses the service. The profiles can be used to refine the social media platform's AI recommender systems. They can also be sold to data brokers, who sell a person's data to other companies to, for instance, help develop targeted advertisements that align with that person's interests.

Many social media companies also track users across websites and applications by putting cookies and embedded tracking pixels on their computers. Cookies are small files that store information about who you are and what you clicked on while browsing a website. One of the most common uses of cookies is in digital shopping carts: When you place an item in your cart, leave the website and return later, the item will still be in your cart because the cookie stored that information. Tracking pixels are invisible images or snippets of code embedded in websites that notify companies of your activity when you visit their page. This helps them track your behavior across the internet.
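To make the cookie and tracking-pixel mechanics above concrete, here is a minimal browser-side sketch in TypeScript. The cookie name ("cart"), the pixel URL and the function names are illustrative assumptions for this example, not any particular site's implementation.

```typescript
// A cookie is a small key=value string the browser stores and sends back
// with every request to the site that set it.
function saveCartItem(itemId: string): void {
  const maxAge = 7 * 24 * 60 * 60; // keep the cart for 7 days (in seconds)
  document.cookie = `cart=${encodeURIComponent(itemId)}; max-age=${maxAge}; path=/`;
}

function readCartItem(): string | null {
  // document.cookie returns every cookie as one "name=value; name2=value2" string.
  const entry = document.cookie
    .split("; ")
    .find((c) => c.startsWith("cart="));
  return entry ? decodeURIComponent(entry.slice("cart=".length)) : null;
}

// A tracking pixel is a tiny (often 1x1) image whose URL encodes what you did;
// merely requesting the image reports your activity to the tracker's server.
function firePixel(eventName: string): void {
  const img = new Image(1, 1);
  img.src =
    `https://tracker.example.com/pixel.gif` +
    `?event=${encodeURIComponent(eventName)}` +
    `&page=${encodeURIComponent(location.pathname)}`;
}

// Example: remember an item across visits, then report a page view.
saveCartItem("sku-12345");
firePixel("page_view");
```

The same request-an-invisible-image trick is what lets a third party observe your visits on sites it doesn't own, which is why the article describes pixels as cross-site tracking.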
This is why users often see or hear advertisements related to their browsing and shopping habits on many of the unrelated websites they visit, and even when they are using different devices, including computers, phones and smart speakers. One study found that some websites can store over 300 tracking cookies on your computer or mobile phone.

Like generative AI platforms, social media platforms offer privacy settings and opt-outs, but these give people limited control over how their personal data is aggregated and monetized. As media theorist Douglas Rushkoff argued in 2011, if the service is free, you are the product.

Many tools that include AI don't require a person to take any direct action for the tool to collect data about that person. Smart devices such as home speakers, fitness trackers and watches continually gather information through biometric sensors, voice recognition and location tracking. Smart home speakers continually listen for the command to activate or "wake up" the device. As the device listens for this word, it picks up all the conversations happening around it, even though it does not seem to be active.

Some companies claim that voice data is only stored when the wake word (what you say to wake up the device) is detected. However, people have raised concerns about accidental recordings, especially because these devices are often connected to cloud services, which allow voice data to be stored, synced and shared across multiple devices such as your phone, smart speaker and tablet. If the company allows, it's also possible for this data to be accessed by third parties, such as advertisers, data analytics firms or a law enforcement agency with a warrant.

This potential for third-party access also applies to smartwatches and fitness trackers, which monitor health metrics and user activity patterns. Companies that produce wearable fitness devices are not considered "covered entities" and so are not bound by the Health Insurance Portability and Accountability Act (HIPAA). This means that they are legally allowed to sell health- and location-related data collected from their users.

Those concerns were on display in 2018, when Strava, a fitness company, released a global heat map of users' exercise routes. In doing so, it accidentally revealed sensitive military locations across the globe by highlighting the exercise routes of military personnel.

The Trump administration has tapped Palantir, a company that specializes in using AI for data analytics, to collate and analyze data about Americans. Meanwhile, Palantir has announced a partnership with a company that runs self-checkout systems. Such partnerships can expand corporate and government reach into everyday consumer behavior. This one could be used to create detailed personal profiles on Americans by linking their consumer habits with other personal data. That raises concerns about increased surveillance and loss of anonymity: it could allow citizens to be tracked and analyzed across multiple aspects of their lives without their knowledge or consent.

Some smart device companies are also rolling back privacy protections instead of strengthening them. Amazon recently announced that starting on March 28, 2025, all voice recordings from Amazon Echo devices would be sent to Amazon's cloud by default, and users would no longer have the option to turn this function off. This is different from previous settings, which allowed users to limit private data collection.
Changes like these raise concerns about how much control consumers have over their own data when using smart devices. Many privacy experts consider cloud storage of voice recordings a form of data collection, especially when used to improve algorithms or build user profiles, which has implications for data privacy laws designed to protect online privacy.

All of this brings up serious privacy concerns for people and governments about how AI tools collect, store, use and transmit data. The biggest concern is transparency: people don't know what data is being collected, how the data is being used, and who has access to that data. Companies tend to use complicated privacy policies filled with technical jargon to make it difficult for people to understand the terms of a service they agree to. People also tend not to read terms of service documents. One study found that people averaged 73 seconds reading a terms of service document that had an average read time of 29 to 32 minutes.

Data collected by AI tools may initially reside with a company that you trust, but it can easily be sold or given to a company that you don't trust. AI tools, the companies in charge of them and the companies that have access to the data they collect can also be subject to cyberattacks and data breaches that can reveal sensitive personal information. These attacks can be carried out by cybercriminals who are in it for the money, or by so-called advanced persistent threats, which are typically nation-state-sponsored attackers who gain access to networks and systems and remain there undetected, collecting information and personal data to eventually cause disruption or harm.

While laws and regulations such as the General Data Protection Regulation in the European Union and the California Consumer Privacy Act aim to safeguard user data, AI development and use have often outpaced the legislative process. The laws are still catching up on AI and data privacy. For now, you should assume any AI-powered device or platform is collecting data on your inputs, behaviors and patterns.

Although AI tools collect people's data, and the way this accumulation of data affects people's data privacy is concerning, the tools can also be useful. AI-powered applications can streamline workflows, automate repetitive tasks and provide valuable insights. But it's crucial to approach these tools with awareness and caution.

When using a generative AI platform that gives you answers to questions you type in a prompt, don't include any personally identifiable information, including names, birth dates, Social Security numbers or home addresses. At the workplace, don't include trade secrets or classified information. In general, don't put anything into a prompt that you wouldn't feel comfortable revealing to the public or seeing on a billboard. Remember, once you hit enter on the prompt, you've lost control of that information (a rough sketch of scrubbing a prompt this way appears below).

Remember that devices that are turned on are always listening, even if they're asleep. If you use smart home or embedded devices, turn them off when you need to have a private conversation. A device that's asleep looks inactive, but it is still powered on, listening for a wake word or signal. Unplugging a device or removing its batteries is a good way of making sure the device is truly off.

Finally, be aware of the terms of service and data collection policies of the devices and platforms that you are using. You might be surprised by what you've already agreed to.
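As a concrete illustration of the "scrub before you prompt" advice, here is a hypothetical TypeScript helper. The function name and the patterns are this example's own inventions, not anything from the article, and real PII detection is far harder than a few regular expressions; treat this as a sketch, not a safeguard.

```typescript
// Replace obvious PII patterns with placeholders before text leaves your machine.
function redactPii(prompt: string): string {
  return prompt
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")           // US Social Security numbers
    .replace(/\b\d{1,2}\/\d{1,2}\/\d{2,4}\b/g, "[DATE]")  // dates such as 4/15/1990
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]");  // email addresses
}

console.log(redactPii("My SSN is 123-45-6789; email jane@example.com."));
// -> "My SSN is [SSN]; email [EMAIL]."
```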
Christopher Ramezan is an assistant professor of cybersecurity at West Virginia University. This article is republished from The Conversation under a Creative Commons license. This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it. This article originally appeared on Erie Times-News: AI devices collect your data, raise questions about privacy | Opinion
