
Big Smile, No Teeth: Is AI leading us towards a 'Wall-E' future?
If you've been reading this column with any regularity, you probably think I lean on artificial intelligence (AI) a little too much.
And you're absolutely right!
I use AI for everything. For helping to brainstorm scripts, for editing film, for finding recipes to use up what I have in the fridge, and even for help in raising my boy. Am I going too far? Possibly. But the input of AI, which is basically like having an expert on every topic at my fingertips, is too valuable for me to ignore.
I'm sure many of us feel the same – but is this atrophying our brains?
A study by Carnegie Mellon University in the United States and Microsoft Research noted that AI makes things easier for workers but may lead to a decrease in critical thinking. The study found that this drop in critical thinking showed up in familiar ways: AI-generated work was cut and pasted wholesale, people relied on AI for decision-making, and tasks were routinely handed off to AI, reducing human problem-solving.
Which all makes sense. AI can generate long, seemingly well-researched answers, so it's easy to default to the idea that it must be right. And this is where one's personal expertise comes in. While 62% of people reported engaging in less critical thinking when using AI, the 27% who were confident in their own expertise were more likely to critically assess the AI's output instead of blindly following it.
Which makes sense. If I use AI to do something I've never done, I'm going to lean heavily on its input. But when I use it to help me write a script or even this article, it becomes an assistant because in those fields I know what I'm doing (I hope).
But what happens when no one knows how to make sausage from scratch anymore?
This is my big fear with AI. Remember that 2008 Pixar movie Wall-E? Where the humans live on a giant cruise ship in space and get carted around and cared for by robots? In that world, no one knows how to do anything for themselves anymore. Every task gets done by asking a robot to do it. Without their tech, the humans are useless.
Right now, we still have experts in different fields. Experts who have honed their craft through years of education and then decades of experience. Think writers, coders, lawyers, etc. But what if the next generation in these fields learns the craft through AI? What if they never create from scratch?
Then we may be getting closer to that Wall-E future than we'd like. Because once one generation skips learning how to do tasks from scratch, do we lose the knowledge of how to do those things altogether? At that point we're forced to depend on AI.
MIT ran a study of ChatGPT users in the United States and found that 83% of them couldn't quote from the essays they had just written using the AI. Which makes sense, because if you're not writing your content, how well do you really know it?
When using AI to write an essay, the brain shows less than half of its usual connectivity. So you're less engaged. And of course, researchers found that users who leaned on AI to write essays wrote worse essays than those who had never used it.
And while ChatGPT makes us 60% faster at completing tasks, it reduces the cognitive load needed for learning by 32%.
We are indeed on the fast track to that Wall-E scenario.
But an even bigger fear of mine, especially if people don't question AI, is just how much it massages your ego. I noticed every response ChatGPT gave me was some version of: 'Good question, Jason! You are absolutely right in asking that!' Or 'Wow! That is so you, Jason, that is some great insight!'
One X user said that after a single conversation, ChatGPT was calling him godlike. Most people, I hope, are self-aware enough to know when something is buttering their butt...
But going back to the 62% of people who reported less critical thinking with AI, are they just accepting that AI thinks they're super smart and super great? That's a bit frightening. And you can see how people will build relationships with AI because it thinks they're so smart.
When I told one friend this, he immediately asked if he could talk with AI and if he could make it have a woman's voice. You can see where that is going.
When I asked ChatGPT why it's so coddling in its responses, it told me people respond better to that. Most people don't want harsh truths; they want clarity and help.
I get that, but as a species we need to take steps to prevent AI from helping us become the useless people in Wall-E.

Big Smile, No Teeth columnist Jason Godfrey – a model who once was told to give the camera a 'big smile, no teeth' – has worked internationally for two decades in fashion and continues to work in dramas, documentaries, and lifestyle programming. Write to him at lifestyle@thestar.com.my and follow him on Instagram @bigsmilenoteeth and facebook.com/bigsmilenoteeth. The views expressed here are entirely the writer's own.




