Tech industry experts warn AI will make us worse humans

CNN | 02-04-2025

While the top minds in artificial intelligence are racing to make the technology think more like humans, researchers at Elon University have asked the opposite question: How will AI change the way humans think?
The answer comes with a grim warning: Many tech experts worry that AI will make people worse at skills core to being human, such as empathy and deep thinking.
'I fear — for the time being — that while there will be a growing minority benefitting ever more significantly with these tools, most people will continue to give up agency, creativity, decision-making and other vital skills to these still-primitive AIs,' futurist John Smart wrote in an essay submitted for the university's nearly 300-page report, titled 'The Future of Being Human,' which was provided exclusively to CNN ahead of its publication Wednesday.
The concerns come amid an ongoing race to accelerate AI development and adoption that has attracted billions of dollars in investment, along with both skepticism and support from governments around the world. Tech giants are staking their businesses on the belief that AI will change how we do everything — working, communicating, searching for information — and companies like Google, Microsoft and Meta are racing to build 'AI agents' that can perform tasks on a person's behalf. But experts warn in the report that such advancements could make people too reliant on AI in the future.
Already, the proliferation of AI has raised big questions about how humans will adapt to this latest technology wave, including whether it could lead to job losses or generate dangerous misinformation. The Elon University report further calls into question promises from tech giants that the value of AI will be in automating rote, menial tasks so that humans can spend more time on complex, creative pursuits.
Wednesday's report also follows research published this year by Microsoft and Carnegie Mellon University that suggested using generative AI tools could negatively impact critical thinking skills.
Elon University researchers surveyed 301 tech leaders, analysts and academics, including Vint Cerf, one of the 'fathers of the internet' and now a Google vice president; Jonathan Grudin, University of Washington Information School professor and former longtime Microsoft researcher and project manager; former Aspen Institute executive vice president Charlie Firestone; and tech futurist and Futuremade CEO Tracey Follows. Nearly 200 of the respondents wrote full-length essay responses for the report.
More than 60% of the respondents said they expect AI will change human capabilities in a 'deep and meaningful' or 'fundamental, revolutionary' way over the next 10 years. Half said they expect AI will create changes to humanity for the better and the worse in equal measure, while 23% said the changes will be mostly for the worse. Just 16% said changes will be mostly for the better (the remainder said they didn't know or expected little change overall).
The respondents also predicted that AI will cause 'mostly negative' changes to 12 human traits by 2035, including social and emotional intelligence, capacity and willingness to think deeply, empathy, application of moral judgment, and mental well-being.
Human capacity in those areas could worsen if people increasingly turn to AI for help with tasks such as research and relationship-building for convenience's sake, the report claims. And a decline in those and other key skills could have troubling implications for human society, such as 'widening polarization, broadening inequities and diminishing human agency,' the researchers wrote.
The report's contributors expect just three areas to experience mostly positive change: curiosity and capacity to learn; decision-making and problem-solving; and innovative thinking and creativity. Even among tools available today, programs that generate artwork and solve coding problems are some of the most popular. And many experts believe that while AI could replace some human jobs, it could also create new categories of work that don't yet exist.
Many of the concerns detailed in the report relate to how tech leaders predict people will incorporate AI into their daily lives by 2035.
Cerf said he expects humans will soon rely on AI agents, which are digital helpers that could independently do everything from taking notes during a meeting to making dinner reservations, negotiating complex business contracts or writing code. Tech companies are already rolling out early AI agent offerings — Amazon says its revamped Alexa voice assistant can order your groceries, and Meta is letting businesses create AI customer service agents to answer questions on its social media platforms.
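In code terms, the 'agent' pattern these companies describe boils down to a loop: a model proposes an action, the host software executes it and feeds the result back, and the cycle repeats until the task is done. Below is a minimal, hypothetical sketch of that loop; call_model and the two tools are placeholders for illustration, not any vendor's actual API.

# Hypothetical sketch of the 'AI agent' loop described above: a model picks a
# tool, the host program runs it, and the result is fed back until the task is
# done. call_model() and the tools are placeholders, not any vendor's real API.

def take_notes(text: str) -> str:
    return f"Saved note: {text[:40]}"

def book_table(restaurant: str) -> str:
    return f"Requested a reservation at {restaurant}."

TOOLS = {"take_notes": take_notes, "book_table": book_table}

def call_model(history: list) -> dict:
    # Stand-in for a real LLM call; returns a tool request, then a final answer.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "book_table", "arg": "Chez Panisse"}
    return {"final": "Done: your table is requested."}

def run_agent(task: str) -> str:
    history = [{"role": "user", "content": task}]
    while True:
        step = call_model(history)
        if "final" in step:
            return step["final"]
        result = TOOLS[step["tool"]](step["arg"])  # execute the chosen tool
        history.append({"role": "tool", "content": result})

print(run_agent("Book dinner for two on Friday."))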
Such tools could save people time and energy in everyday tasks while aiding with fields like medical research. But Cerf also worries about humans becoming 'increasingly technologically dependent' on systems that can fail or get things wrong.
'You can also anticipate some fragility in all of this. For example, none of this stuff works without electricity, right?' Cerf said in an interview with CNN. 'These heavy dependencies are wonderful when they work, and when they don't work, they can be potentially quite hazardous.'
Cerf stressed the importance of tools that help differentiate humans versus AI bots online, and transparency around the effectiveness of highly autonomous AI tools. He urged companies that build AI models to keep 'audit trails' that would let them interrogate when and why their tools get things wrong.
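Cerf doesn't prescribe a format, but an audit trail of the kind he describes could be as simple as an append-only log that records each model call and whether its output was later flagged as wrong. The sketch below is one illustrative assumption of what such a record might contain; the fields are not any standard schema.

# Illustrative 'audit trail' record for an AI system, sketching Cerf's
# suggestion; the fields are assumptions for illustration, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model: str, prompt: str, output: str, flagged: bool) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        # Hash the prompt so the trail can be reviewed without storing raw user data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
        "flagged_incorrect": flagged,  # set to True if the answer later proves wrong
    }

# Append one JSON object per line; in practice this would be write-once storage.
with open("audit.log", "a") as log:
    rec = audit_record("demo-model-v1", "When was CNN founded?", "1980", flagged=False)
    log.write(json.dumps(rec) + "\n")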
Futuremade's Follows told CNN that she expects humans' interactions with AI to move beyond the screens where people generally talk to AI chatbots today. Instead, AI technology will be integrated into various devices, such as wearables, as well as buildings and homes where humans can just ask questions out loud.
But with that ease of access, humans may begin outsourcing empathy to AI agents.
'AI may take over acts of kindness, emotional support, caregiving and charity fundraising,' Follows wrote in her essay. She added that 'humans may form emotional attachments to AI personas and influencers,' raising 'concerns about whether authentic, reciprocal relationships will be sidelined in favor of more predictable, controllable digital connection.'
Humans have already begun to form relationships with AI chatbots, to mixed effect. Some people have, for example, created AI replicas of deceased loved ones to seek closure, but parents of young people have also taken legal action after they say their children were harmed by relationships with AI chatbots.
Still, experts say people have time to curb some of the worst potential outcomes of AI through regulation, digital literacy training and simply prioritizing human relationships.
Richard Reisman, nonresident senior fellow at the Foundation for American Innovation, said in the report that the next decade marks a tipping point in whether AI 'augments humanity or de-augments it.'
'We are now being driven in the wrong direction by the dominating power of the 'tech-industrial complex,' but we still have a chance to right that,' Reisman wrote.


Related Articles

Billions of login credentials have been leaked online, Cybernews researchers say

Yahoo | 5 hours ago

NEW YORK (AP) — Researchers at cybersecurity outlet Cybernews say that billions of login credentials have been leaked and compiled into datasets online, giving criminals 'unprecedented access' to accounts consumers use each day. According to a report published this week, Cybernews researchers recently discovered 30 exposed datasets that together contain a total of 16 billion compromised credentials, including user passwords for popular platforms such as Google, Facebook and Apple.

Sixteen billion is roughly double the number of people on Earth today, signaling that affected consumers may have had credentials for more than one account leaked. Cybernews notes that there are almost certainly duplicates in the data, so 'it's impossible to tell how many people or accounts were actually exposed.'

The leaked login information also doesn't stem from a single source, such as one breach targeting a single company. Instead, the data appears to have been stolen through multiple events over time, then compiled and briefly exposed publicly, which is when Cybernews says its researchers discovered it. Infostealers, a form of malicious software that breaches a victim's device or systems to take sensitive information, are the most likely culprit, Cybernews noted.

Many questions remain about these leaked credentials, including whose hands they are in now. But as data breaches become more and more common, experts continue to stress the importance of basic 'cyber hygiene.' If you're worried that your account data was exposed in a recent breach, the first thing to do is change your password, and avoid reusing the same or similar login credentials across multiple sites. If it's too hard to memorize all your different passwords, consider a password manager or passkeys. And add multifactor authentication, which serves as a second layer of verification through your phone, email or USB authenticator key.
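One concrete piece of that hygiene: you can check whether a password already appears in known breach data without sending the password anywhere, using the k-anonymity range endpoint of the Have I Been Pwned 'Pwned Passwords' service. Only the first five characters of the password's SHA-1 hash leave your machine. A minimal sketch in Python:

# Check a password against the Have I Been Pwned 'Pwned Passwords' corpus via
# its k-anonymity range API: only the first 5 hex characters of the SHA-1 hash
# are sent, so the password itself never leaves this machine.
import hashlib
import requests

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():  # each line is "HASH_SUFFIX:COUNT"
        hash_suffix, count = line.split(":")
        if hash_suffix == suffix:
            return int(count)
    return 0

n = pwned_count("password123")
print(f"Seen in {n} breach records" if n else "Not found in known breach data")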

Researchers Scanned the Brains of ChatGPT Users and Found Something Deeply Alarming

Yahoo | 5 hours ago

Scientists at the Massachusetts Institute of Technology have found some startling results in the brain scans of ChatGPT users, adding to the growing body of evidence suggesting that AI is having a serious — and barely understood — impact on its users' cognition even as it explodes in popularity worldwide.

In a new paper currently awaiting peer review, researchers from the school's storied Media Lab documented vast differences between the brain activity of people who used ChatGPT to write and those who did not. The team recruited 54 adults between the ages of 18 and 39 and divided them into three groups: one that used ChatGPT to help write essays, one that used Google search as its main writing aid, and one that used no AI tech at all. The study ran four months, with each group tasked with writing one essay per month for the first three; in the fourth month, a smaller subset of the cohort either switched from not using ChatGPT to using it, or vice versa. As they completed the essay tasks, the participants were hooked up to electroencephalogram (EEG) machines that recorded their brain activity.

Here's where things get wild: the ChatGPT group not only "consistently underperformed at neural, linguistic, and behavioral levels," but also got lazier with each essay they wrote; the EEGs found "weaker neural connectivity and under-engagement of alpha and beta networks." The Google-assisted group, meanwhile, showed "moderate" neural engagement, while the "brain-only" group exhibited the strongest cognitive metrics throughout.

These findings, while novel, aren't entirely surprising given prior studies and anecdotes about the many ways AI chatbot use seems to be affecting people's brains and minds. Previous MIT research, for instance, found that ChatGPT "power users" were becoming dependent on the chatbot and experiencing "indicators of addiction" and "withdrawal symptoms" when they were cut off. And earlier this year, Carnegie Mellon and Microsoft — which has invested billions to bankroll OpenAI, the maker of ChatGPT — found in a joint study that heavy chatbot use appears to atrophy critical thinking skills. A few months later, The Guardian reported, in an analysis of studies like that one, that researchers are growing increasingly concerned that tech like ChatGPT is making us stupider, and a Wall Street Journal reporter even owned up to his own cognitive skill loss from overusing chatbots.

Beyond the neurological impacts, there are plenty of reasons to be concerned about how ChatGPT and chatbots like it affect our mental health. As Futurism found in a recent investigation, many users are becoming obsessed with ChatGPT and developing paranoid delusions that the chatbot pushes them deeper into. Some have even stopped taking their psychiatric medication because the chatbot told them to. "We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously," OpenAI told us in response to that reporting. "We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."

Add it all up, and the evidence is growing that AI is having profound and alarming effects on many users — but so far, there is no sign that corporations are slowing down in their attempts to inject the tech into every part of society.
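For context, the 'alpha and beta networks' the study refers to correspond to standard EEG frequency bands, roughly 8-12 Hz and 13-30 Hz, and engagement in a band is commonly estimated from the power spectrum of the recorded signal. The sketch below computes that standard measure on synthetic data; it illustrates the general technique, not the MIT team's actual analysis pipeline.

# Estimate alpha (8-12 Hz) and beta (13-30 Hz) band power from an EEG-like
# signal using Welch's method. The data here is synthetic; this sketches the
# standard measure, not the MIT Media Lab team's actual analysis pipeline.
import numpy as np
from scipy.signal import welch

fs = 256                       # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)   # 10 seconds of signal
rng = np.random.default_rng(0)
# Synthetic EEG: a 10 Hz alpha rhythm, a weaker 20 Hz beta rhythm, plus noise.
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
signal = signal + rng.normal(0, 1, t.size)

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # power spectral density

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

print(f"alpha power (8-12 Hz):  {band_power(8, 12):.2f}")
print(f"beta power  (13-30 Hz): {band_power(13, 30):.2f}")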
More on ChatGPT brain: Nation Cringes as Man Goes on TV to Declare That He's in Love With ChatGPT

Apple sued by shareholders over delayed Siri AI rollout, $900 billion in value lost

USA Today | 6 hours ago

Apple (AAPL.O) was sued on Friday by shareholders in a proposed securities fraud class action that accused it of downplaying how long it needed to integrate advanced artificial intelligence into its Siri voice assistant, hurting iPhone sales and its stock price.

The complaint covers shareholders who suffered potentially hundreds of billions of dollars of losses in the year ending June 9, when Apple introduced several features and aesthetic improvements for its products but kept AI changes modest. Apple did not immediately respond to requests for comment. CEO Tim Cook, Chief Financial Officer Kevan Parekh and former CFO Luca Maestri are also defendants in the lawsuit, filed in San Francisco federal court.

Shareholders led by Eric Tucker said that at its June 2024 Worldwide Developers Conference, Apple led them to believe AI would be a key driver of iPhone 16 devices when it launched Apple Intelligence to make Siri more powerful and user-friendly. But they said the Cupertino, California-based company lacked a functional prototype of the AI-based Siri features and could not reasonably have believed the features would ever be ready for the iPhone 16.

Shareholders said the truth began to emerge on March 7, when Apple delayed some Siri upgrades to 2026, and continued through this year's Worldwide Developers Conference on June 9, when Apple's assessment of its AI progress disappointed analysts. Apple shares have lost nearly one-fourth of their value since their December 26, 2024 record high, wiping out approximately $900 billion in market value.

The case is Tucker v. Apple Inc et al, U.S. District Court, Northern District of California, No. 25-05197. Reporting by Jonathan Stempel in New York; Editing by Mark Porter and Rod Nickel.
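The complaint's numbers imply the scale of Apple's peak valuation: if losing nearly one-fourth of the stock's value erased about $900 billion, the record-high market capitalization works out to roughly $3.6 trillion. A quick back-of-envelope check, treating 'nearly one-fourth' as exactly 25%:

# Back-of-envelope check of the figures above: if a ~25% decline erased about
# $900 billion, what was the peak market value? ("Nearly one-fourth" is treated
# as exactly 0.25 here, so both results are approximations.)
loss = 900e9      # dollars of market value wiped out
decline = 0.25    # fraction of value lost since the Dec. 26, 2024 high
peak = loss / decline
print(f"Implied peak market value:   ${peak / 1e12:.1f} trillion")           # ~$3.6T
print(f"Implied value after decline: ${(peak - loss) / 1e12:.1f} trillion")  # ~$2.7T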
